# The rotation effect on the thermodynamics of the QCD matter

Fei Sun, Shuang Li, Rui Wen, Anping Huang, Wei Xie

arXiv:2310.18942v1 (2023-10-29), http://arxiv.org/abs/2310.18942v1
###### Abstract
In this study, we investigate the impact of rotation on the thermodynamic characteristics of QCD matter using the three-flavor NJL model. We examine the temperature, quark chemical potential, and angular velocity dependencies of key thermodynamic quantities, such as the trace anomaly, specific heat, speed of sound, angular momentum, and moment of inertia. As the main finding of our analysis, we observe that the speed of sound exhibits a nonmonotonic behavior as the angular velocity changes.
## I Introduction
Over the decades, intensive investigations in high-energy physics have revealed the rich variety of phenomena exhibited by quantum chromodynamics (QCD) matter at finite temperatures and/or baryon densities. Consequently, determining the phase diagram has become a topic of considerable interest in this field, as it is shaped by the fundamental properties of QCD, namely spontaneous chiral symmetry breaking and confinement. The QCD phase diagram plays a vital role not only in understanding heavy-ion collision experiments but also in shedding light on the early universe and compact stars. The critical endpoint (CEP) on the phase diagram offers crucial information for inferring the phase boundary, which can be characterized by critical phenomena that manifest in thermodynamic and hydrodynamic properties. Therefore, exploring the thermodynamics of strong interaction matter contributes significantly to understanding the phase diagram and the properties of matter created in heavy-ion experiments. Extensive research has already examined the phase transition in the presence of finite temperature and chemical potential.
In recent years, there has been a shift in focus towards noncentral high-energy heavy-ion collisions (HIC), where both a strong magnetic field and rotation are generated. The study of matter under these extreme conditions is of great interest in the field of QCD. The QCD matter created through off-central collisions carries a nonzero angular momentum on the order of \(10^{4}\sim 10^{5}\hbar\) with local angular velocity ranging from \(0.01\sim 0.1\) GeV [1; 2; 3; 4; 5; 6], and the angular momentum affects both the orbital motion and the individual spins of the particles. It is therefore particularly important to investigate the phase diagram and thermodynamics of QCD matter under rotation through experimental, theoretical, and computational studies. In 2017, the STAR Collaboration published the first observation of global polarization resulting from noncentral heavy-ion collisions, which has led to the exploration of numerous spin-related quantum phenomena and the remarkably strong fluid vorticity structures [7; 8; 9]. Additionally, several upcoming facilities, such as the Facility for Antiproton and Ion Research (FAIR) in Darmstadt, the Nuclotron-based Ion Collider Facility (NICA) in Dubna, and the CSR-External target Experiment (CEE) in Huizhou, are planning to conduct beam-energy-scan (BES) programs to identify the CEP, as well as noncentral heavy-ion collision experiments.
In theoretical aspects, there have also been significant advances on matter under rotation. The influence of rotation on various physical scenarios is currently being actively investigated. These include noncentral heavy-ion collisions in high-energy nuclear physics [10; 11; 12; 13; 14; 15], as well as hadron physics [16; 17], trapped non-relativistic bosonic cold atoms in condensed matter physics [18; 19; 20; 21], and rapidly spinning neutron stars in astrophysics [22; 23; 24]. Due to the non-Abelian nature of QCD, a thorough understanding of dynamical chiral symmetry breaking (DCSB) is challenging; several effective models have been proposed, such as the Nambu–Jona-Lasinio (NJL) model, quark-meson (QM) models, and holographic QCD models, and some functional QCD approaches have been developed, such as Dyson-Schwinger equations (DSE) and the functional renormalization group (fRG), as well as their extensions to investigate the QCD phase diagram [25-72]. Since the strange quark is considerably heavier
compared to light quarks, determining the phase diagram of strange quark matter has become a subject of significant theoretical and experimental endeavors. This is due to the crucial role played by the strange quark in shaping the behavior of the phase diagram for both chiral and deconfinement transitions. Furthermore, the strange quark also has a profound impact on the stability limit of neutron stars, which are believed to exist under extreme temperature and pressure conditions [73]. Interest in investigating neutron stars has surged following the remarkable findings of the LIGO and VIRGO collaborations [74]. Additionally, since compact stars like neutron stars can exhibit rapid rotation, exploring the effects of rotation on the phase transitions of these astrophysical objects is both intriguing and significant. So, the extension to \(2+1\) flavors with inclusion of strange quarks should be explored and there are several works on this topic [75; 76; 77].
Investigating QCD matter under rotation is a captivating area of research. In addition to studying transport properties like the chiral vortical effect and chiral vortical wave [78; 79; 80; 81], it is also highly intriguing to explore the effects of rotation on phase transitions and thermodynamics. The QCD phase diagram is expected to exhibit great complexity in the presence of angular velocity and chemical potential, potentially revealing interesting phases and regimes. Studying the thermodynamics of quarks (including the strange quark) under extreme conditions, such as large chemical potential and strong rotation, can contribute to our understanding of compact stars, the evolution of the universe, and high-energy nuclear physics. Since thermodynamics has been the subject of intense investigation in recent years, it is natural to inquire about the influence of rotation on QCD thermodynamics. In this paper, our main focus is the global rotation effect on the thermodynamics of the three-flavor NJL model. We will also explore the model's capacity to describe the essential aspects of QCD thermodynamics around the critical point.
Our work is organized as follows. In Section II, we begin by introducing the formalism of the three-flavor NJL model and derive the detailed expressions for thermodynamics in the presence of rotation. In Section III, we present the numerical results and discussions on thermodynamic quantities, analyzing both the light quarks and strange quark. Finally, in Section IV, we summarize our findings and conclude the paper.
## II Formalism
First, we provide a very brief sketch of the basis for studying rotating matter. The metric tensor describing the structure of space-time in a rotating frame reads
\[g_{\mu\nu}=\left(\begin{array}{cccc}1-\vec{v}\,^{2}&-v_{1}&-v_{2}&-v_{3}\\ -v_{1}&-1&0&0\\ -v_{2}&0&-1&0\\ -v_{3}&0&0&-1\end{array}\right), \tag{1}\]
where \(v_{i}\) is the velocity. Our starting point is the partition function
\[\mathcal{Z}=\int D[\bar{\psi}]D[\psi]e^{iS}, \tag{2}\]
here, \(S\) denotes the quark action, which is the integration of the Lagrangian density \(\mathcal{L}\). When extending to the case of rotating fermions [82; 83; 18] with non-zero chemical potential, the Lagrangian in the three-flavor NJL model is given by
\[\mathcal{L} =\bar{\psi}\left(i\bar{\gamma}^{\mu}(\partial_{\mu}+\Gamma_{ \mu})-m+\gamma^{0}\mu\right)\psi\] \[+G\sum_{a=0}^{8}\left(\bar{\psi}\lambda^{a}\psi\right)^{2}\] \[-K\{\det[\bar{\psi}(1+\gamma^{5})\psi]+\det[\bar{\psi}(1-\gamma^ {5})\psi]\}, \tag{3}\]
here, \(\psi\) is the quark field, \(\bar{\gamma}^{\mu}=e_{a}^{\,\mu}\gamma^{a}\), with \(e_{a}^{\,\mu}\) being the tetrads for spinors and \(\gamma^{a}\) the gamma matrices. \(\Gamma_{\mu}=\frac{1}{4}\times\frac{1}{2}[\gamma^{a},\gamma^{b}]\,\Gamma_{ab\mu}\) is the spinor connection, where \(\Gamma_{ab\mu}=\eta_{ac}(e^{c}_{\ \sigma}G^{\sigma}_{\ \mu\nu}e_{b}^{\ \nu}-e_{b}^{\ \nu}\partial_{\mu}e^{c}_{\ \nu})\), and \(G^{\sigma}_{\ \mu\nu}\) is the affine connection determined by \(g^{\mu\nu}\). \(m\) is the bare quark mass matrix, \(\mu\) denotes the chemical potential, and \(G\) represents the coupling constant of the four-point interaction term. \(\lambda^{a}\,(a=1,\ldots,8)\) are the Gell-Mann matrices in flavor space, supplemented by \(\lambda^{0}=\sqrt{2/3}\,\mathbb{1}\). The last term corresponds to the 't Hooft interaction with coupling strength \(K\), which is a determinant in flavor space. Considering a system with an angular velocity along the fixed \(z\)-axis, we have \(\vec{v}=\vec{\omega}\times\vec{x}\). By choosing \(e^{a}_{\ \mu}=\delta^{a}_{\ \mu}+\delta^{a}_{\ i}\delta^{0}_{\ \mu}\,v_{i}\) and \(e_{a}^{\ \mu}=\delta_{a}^{\ \mu}-\delta_{a}^{\ 0}\delta_{i}^{\ \mu}\,v_{i}\), and expanding to first order in the angular velocity, the Lagrangian takes the following form:
\[\mathcal{L} = \bar{\psi}\left[i\gamma^{\mu}\partial_{\mu}-m+\gamma^{0}\mu\right]\psi \tag{4}\] \[+ \bar{\psi}\left[\left(\gamma^{0}\right)^{-1}\left(\left(\vec{ \omega}\times\vec{x}\right)\cdot\left(-i\vec{\partial}\right)+\vec{\omega} \cdot\vec{S}_{4\times 4}\right)\right]\psi\] \[+ G\sum_{a=0}^{8}\left(\bar{\psi}\lambda^{a}\psi\right)^{2}\] \[- K\{\det[\bar{\psi}(1+\gamma^{5})\psi]+\det[\bar{\psi}(1-\gamma^ {5})\psi]\},\]
where \(\vec{S}_{4\times 4}=\frac{1}{2}\left(\begin{array}{cc}\vec{\sigma}&0\\ 0&\vec{\sigma}\end{array}\right)\) is the spin operator. Using the path integral formulation for Grassmann variables and the mean field approximation, in which the 4-quark and 6-quark interactions are linearized, we obtain the expression of \(\log\mathcal{Z}\) as follows:
\[\log\mathcal{Z} = \frac{1}{T}\int d^{3}x\left(2G\sum_{f}\left\langle\bar{\psi}_{f} \psi_{f}\right\rangle^{2}-4K\prod_{f}\left\langle\bar{\psi}_{f}\psi_{f}\right\rangle\right) \tag{5}\] \[+ \sum_{f}\log\det\frac{D_{f}^{-1}}{T}.\]
The inverse fermion propagator \(D^{-1}\) in Eq. (5) can be derived as follows,
\[D^{-1}=\gamma^{0}\left(-i\omega_{l}+\left(n+\frac{1}{2}\right)\omega +\mu\right)-M-\vec{\gamma}\cdot\vec{p}, \tag{6}\]
here we introduce the fermionic Matsubara frequency \(\omega_{l}=-ip_{0}=(2l+1)\pi T\) with the temperature \(T\), and \(M\) denotes the dynamical quark mass
\[M_{q} = m_{q}+\left(2K\left\langle\bar{s}s\right\rangle-4G\right)\left\langle \bar{q}q\right\rangle, \tag{7}\] \[M_{s} = m_{s}-4G\left\langle\bar{s}s\right\rangle+2K\langle\bar{q}q \rangle^{2}. \tag{8}\]
To find solutions of the Dirac equation, we start by choosing a complete set of commuting operators consisting of the Hamiltonian \(\hat{H}\), which can be obtained from Eq. (4) using the relation \(\mathcal{H}=\bar{\psi}\left(i\gamma^{0}\partial_{0}\right)\psi-\mathcal{L}\), the momentum in the \(z\)-direction \(\hat{p}_{z}\), the square of the transverse momentum \(\hat{p}_{t}^{2}\), the \(z\)-component of the total angular momentum \(\hat{J}_{z}\), and the transverse helicity \(\hat{h}_{t}=\gamma^{5}\gamma^{3}\hat{p}_{t}\cdot\vec{S}\), with \(\gamma^{5}=i\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}\). By solving the eigenvalue equations of the complete set of commuting operators \(\{\hat{H},\hat{p}_{z},\hat{p}_{t}^{2},\hat{J}_{z},\hat{h}_{t}\}\), we obtain the positive and negative energy solutions of the Dirac field. In cylindrical coordinates, the general spinor eigenstates can be written as
\[u=\sqrt{\frac{E+m}{4E}}\left(\begin{array}{c}e^{ip_{z}z}e^{in\theta}J_{n}\left(p_{t}r\right)\\ se^{ip_{z}z}e^{i(n+1)\theta}J_{n+1}\left(p_{t}r\right)\\ \frac{p_{z}-isp_{t}}{E+m}e^{ip_{z}z}e^{in\theta}J_{n}\left(p_{t}r\right)\\ \frac{-sp_{z}+ip_{t}}{E+m}e^{ip_{z}z}e^{i(n+1)\theta}J_{n+1}\left(p_{t}r\right)\end{array}\right), \tag{9}\]
\[v=\sqrt{\frac{E+m}{4E}}\left(\begin{array}{c}\frac{p_{z}-isp_{t}}{E+m}e^{-ip_{z}z}e^{in\theta}J_{n}\left(p_{t}r\right)\\ \frac{-sp_{z}+ip_{t}}{E+m}e^{-ip_{z}z}e^{i(n+1)\theta}J_{n+1}\left(p_{t}r\right)\\ -e^{-ip_{z}z}e^{in\theta}J_{n}\left(p_{t}r\right)\\ -se^{-ip_{z}z}e^{i(n+1)\theta}J_{n+1}\left(p_{t}r\right)\end{array}\right). \tag{10}\]
Here, \(s=\pm 1\) represents the transverse helicity, and \(n\) denotes the \(z\)-direction angular momentum quantum number. After summing over all the Matsubara frequencies, following the general approach of finite-temperature field theory [84], it can be shown that the grand thermodynamic potential (\(\Omega=-\frac{T}{V}\log\mathcal{Z}\)) has the following form:
\[\begin{split}\Omega =&\ 2G\left(2\langle\bar{q}q\rangle^{2}+\langle\bar{s}s\rangle^{2}\right)-4K\langle\bar{q}q\rangle^{2}\left\langle\bar{s}s\right\rangle\\ &-\frac{3}{2\pi^{2}}\sum_{n=-\infty}^{\infty}\int_{0}^{\Lambda}p_{t}dp_{t}\int_{-\sqrt{\Lambda^{2}-p_{t}^{2}}}^{\sqrt{\Lambda^{2}-p_{t}^{2}}}dp_{z}\left(J_{n+1}(p_{t}r)^{2}+J_{n}(p_{t}r)^{2}\right)\left(\varepsilon_{q}-\left(\frac{1}{2}+n\right)\omega\right)\\ &-\frac{3}{4\pi^{2}}\sum_{n=-\infty}^{\infty}\int_{0}^{\Lambda}p_{t}dp_{t}\int_{-\sqrt{\Lambda^{2}-p_{t}^{2}}}^{\sqrt{\Lambda^{2}-p_{t}^{2}}}dp_{z}\left(J_{n+1}(p_{t}r)^{2}+J_{n}(p_{t}r)^{2}\right)\left(\varepsilon_{s}-\left(\frac{1}{2}+n\right)\omega\right)\\ &-\frac{3}{2\pi^{2}}\sum_{n=-\infty}^{\infty}\int_{0}^{\infty}p_{t}dp_{t}\int_{-\infty}^{\infty}dp_{z}\left(J_{n+1}(p_{t}r)^{2}+J_{n}(p_{t}r)^{2}\right)T\left\{\log\left(1+e^{-\frac{\varepsilon_{q}-\left(\frac{1}{2}+n\right)\omega-\mu_{q}}{T}}\right)+\log\left(1+e^{-\frac{\varepsilon_{q}-\left(\frac{1}{2}+n\right)\omega+\mu_{q}}{T}}\right)\right\}\\ &-\frac{3}{4\pi^{2}}\sum_{n=-\infty}^{\infty}\int_{0}^{\infty}p_{t}dp_{t}\int_{-\infty}^{\infty}dp_{z}\left(J_{n+1}(p_{t}r)^{2}+J_{n}(p_{t}r)^{2}\right)T\left\{\log\left(1+e^{-\frac{\varepsilon_{s}-\left(\frac{1}{2}+n\right)\omega-\mu_{s}}{T}}\right)+\log\left(1+e^{-\frac{\varepsilon_{s}-\left(\frac{1}{2}+n\right)\omega+\mu_{s}}{T}}\right)\right\}.\end{split} \tag{11}\]
Here, the quark quasiparticle energy \(\varepsilon=\sqrt{M^{2}+p_{t}^{2}+p_{z}^{2}}\). For simplicity we also introduce the quark quasiparticle energy under rotation as follows,
\[\varepsilon_{f,n}=\varepsilon_{f}-\left(\frac{1}{2}+n\right)\omega. \tag{12}\]
Note that the above expression of the grand thermodynamic potential contains an explicit cutoff dependence, since the NJL model is nonrenormalizable. The thermodynamic potential naturally separates into a vacuum piece and a temperature-dependent matter part, which is very helpful in calculating the thermodynamic quantities. The three-momentum cutoff in the vacuum piece should be chosen to reproduce observables such as the pion mass and the pion decay constant, while the cutoff in the matter part should, in principle, be taken to infinity. Here, \(\Lambda\) is the three-momentum cutoff of the vacuum part in the potential.
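To make the structure of Eq. (11) concrete, the following sketch numerically evaluates its matter (temperature-dependent) part for a single flavor. All numerical settings (the truncation of the sum over \(n\), the momentum grids, and the degeneracy factor, which is 3 per flavor here so that the light-quark term of Eq. (11) corresponds to twice this contribution) are illustrative choices, not values prescribed by the paper.

```python
import numpy as np
from scipy.special import jv          # Bessel function of the first kind J_n
from scipy.integrate import trapezoid

def omega_matter_flavor(M, T, mu, w, r, deg=3.0, N=30, pmax=3.0, npts=200):
    """Matter part of the grand potential for one quark flavor, Eq. (11):
      -deg/(4 pi^2) * sum_n int p_t dp_t int dp_z (J_{n+1}^2 + J_n^2)
        * T * [log(1+e^{-(eps_n - mu)/T}) + log(1+e^{-(eps_n + mu)/T})],
    with eps_n = sqrt(M^2 + p_t^2 + p_z^2) - (1/2 + n) * w (Eq. (12))."""
    pt = np.linspace(1e-4, pmax, npts)   # transverse momentum grid (GeV)
    pz = np.linspace(-pmax, pmax, npts)  # longitudinal momentum grid (GeV)
    PT, PZ = np.meshgrid(pt, pz, indexing="ij")
    eps = np.sqrt(M**2 + PT**2 + PZ**2)
    total = 0.0
    for n in range(-N, N + 1):
        eps_n = eps - (0.5 + n) * w
        weight = jv(n + 1, PT * r)**2 + jv(n, PT * r)**2
        # log(1 + e^x) written as logaddexp(0, x) for numerical stability
        logs = (np.logaddexp(0.0, -(eps_n - mu) / T)
                + np.logaddexp(0.0, -(eps_n + mu) / T))
        total += trapezoid(trapezoid(PT * weight * T * logs, pz, axis=1), pt)
    return -deg / (4.0 * np.pi**2) * total

# Example: a light-quark-like flavor, slow rotation, small radius (all assumed)
print(omega_matter_flavor(M=0.35, T=0.15, mu=0.0, w=0.1, r=0.1))
```

The Bessel weight \(J_{n+1}^{2}+J_{n}^{2}\) decays very rapidly in \(|n|\) for small \(p_{t}r\), which is what makes the modest truncation above viable.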
Then we consider the gap equations required to minimize the grand potential: the dynamical quark mass \(M_{f}\) is determined by solving the stationary condition, and we also require the solutions to correspond to a minimum of the potential, namely,
\[\frac{\partial\Omega}{\partial\left\langle\bar{q}q\right\rangle}=\frac{\partial \Omega}{\partial\left\langle\bar{s}s\right\rangle}=0, \tag{13}\]
\[\frac{\partial^{2}\Omega}{\partial\left\langle\bar{q}q\right\rangle^{2}}>0,\ \ \frac{ \partial^{2}\Omega}{\partial\left\langle\bar{s}s\right\rangle^{2}}>0, \tag{14}\]
which leads to the following coupled gap equations:
\[0 = \left(8G\left\langle\bar{q}q\right\rangle-8K\left\langle\bar{s}s \right\rangle\left\langle\bar{q}q\right\rangle\right)-\frac{3}{\pi^{2}}\sum_{n=- \infty}^{\infty}\int_{0}^{\Lambda}p_{t}dp_{t}\int_{-\sqrt{\Lambda^{2} -p_{t}^{2}}}^{\sqrt{\Lambda^{2}-p_{t}^{2}}}dp_{z}\left(J_{n+1}(p_{t}r)^{2}+J_{n}(p_{t}r)^{2}\right) \tag{15}\] \[\times \left(\frac{\left(-2G+K\left\langle\bar{s}s\right\rangle\right) M_{q}}{\varepsilon_{q}}+\frac{K\left\langle\bar{q}q\right\rangle M_{s}}{\varepsilon_{s}}\right)\] \[+ \frac{3}{\pi^{2}}\sum_{n=-\infty}^{\infty}\int_{0}^{\infty} p_{t}dp_{t}\int_{-\infty}^{\infty}dp_{z}\left(J_{n+1}(p_{t}r)^{2}+J_{n}(p_{t}r)^{2}\right)\] \[\times \left\{\frac{\left(-2G+K\left\langle\bar{s}s\right\rangle \right)M_{q}}{\varepsilon_{q}}\left[n_{f}(\varepsilon_{q,n},T,\mu)+\bar{n}_{f} (\varepsilon_{q,n},T,\mu)\right]+\frac{K\left\langle\bar{q}q\right\rangle M_{s }}{\varepsilon_{s}}\left[n_{f}(\varepsilon_{s,n},T,\mu)+\bar{n}_{f}( \varepsilon_{s,n},T,\mu)\right]\right\},\]
here, \(n_{f}\) and \(\bar{n}_{f}\) denote the quark and anti-quark distribution functions:
\[n_{f}(\varepsilon,T,\mu)=\frac{1}{e^{\frac{\varepsilon-\mu}{T}}+1}, \tag{17}\] \[\bar{n}_{f}(\varepsilon,T,\mu)=\frac{1}{e^{\frac{\varepsilon+\mu}{T}}+1}. \tag{18}\]
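Equations (12) and (17)-(18) translate directly into code; the logistic-function form below is an equivalent, numerically safer rewriting of \(1/(e^{x}+1)\), and the function names are merely illustrative:

```python
import numpy as np
from scipy.special import expit  # expit(x) = 1/(1 + e^{-x})

def eps_n(M, pt, pz, n, w):
    """Rotational quasiparticle energy, Eq. (12)."""
    return np.sqrt(M**2 + pt**2 + pz**2) - (0.5 + n) * w

def n_f(eps, T, mu):
    """Quark occupation 1/(e^{(eps - mu)/T} + 1), Eq. (17)."""
    return expit(-(eps - mu) / T)

def nbar_f(eps, T, mu):
    """Antiquark occupation 1/(e^{(eps + mu)/T} + 1), Eq. (18)."""
    return expit(-(eps + mu) / T)
```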
This set of coupled equations is then solved for the fields as functions of temperature \(T\), quark chemical potential \(\mu\), and angular velocity \(\omega\). Now we turn to the thermodynamics of the rotating system. When extending to a rotating system, the vorticity should also be considered a further intensive thermodynamic quantity, which is necessary for the description of the local fluid; therefore, the energy density needs to be corrected as follows [85; 86; 87]:
\[\varepsilon=-p+Ts+\mu n+\omega J. \tag{19}\]
Here, \(n\) denotes the quark number density, and \(J\) denotes the (polarization) angular momentum density.
From the standard thermodynamic relations, the pressure, entropy density, (polarization) angular momentum density, and quark number density are given as follows (the angular velocity can be regarded as an "effective chemical potential", so the angular momentum can similarly be defined as a derivative of the grand canonical potential with respect to the angular velocity):
\[p=\Omega\left(T=0,\mu=0,\omega=0\right)-\Omega\left(T,\mu,\omega\right), \tag{20}\]
\[s=-\bigg{(}\frac{\partial\Omega}{\partial T}\bigg{)}_{\mu,\omega}, \tag{21}\]
\[n=-\bigg{(}\frac{\partial\Omega}{\partial\mu}\bigg{)}_{T,\omega}, \tag{22}\]
\[J=-\bigg{(}\frac{\partial\Omega}{\partial\omega}\bigg{)}_{T,\mu}, \tag{23}\]
note that, to get a physical pressure, we have renormalized the thermodynamical potential; the subscripts indicate which quantities are kept fixed during the partial differentiation. The trace anomaly can be defined as
\[\Theta=\varepsilon-3p. \tag{24}\]
For each flavor, the explicit formulae of entropy density and quark number density, and the angular momentum along the \(z\)-axis are listed in Appendix A.
Once the expression of the angular momentum is obtained, we can directly compute the moment of inertia of the rotating system:
\[I=\frac{1}{\omega}(-\frac{d\Omega}{d\omega})=\frac{J}{\omega}. \tag{25}\]
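Relations (19)-(25) require only derivatives of the grand potential, so once a routine for \(\Omega(T,\mu,\omega)\) is available they can be evaluated by central finite differences. In the sketch below, the toy \(\Omega\) is a placeholder standing in for the renormalized potential of Eq. (11), and the step size is an illustrative choice:

```python
import numpy as np

def Omega(T, mu, w):
    # Placeholder potential with qualitatively reasonable scaling; in practice
    # this would be the renormalized potential obtained from Eq. (11).
    return -(7.0 * np.pi**2 / 180.0) * T**4 - 0.5 * mu**2 * T**2 - 0.1 * w**2 * T**2

def thermo(T, mu, w, h=1e-4):
    p = Omega(0.0, 0.0, 0.0) - Omega(T, mu, w)                    # Eq. (20)
    s = -(Omega(T + h, mu, w) - Omega(T - h, mu, w)) / (2 * h)    # Eq. (21)
    n = -(Omega(T, mu + h, w) - Omega(T, mu - h, w)) / (2 * h)    # Eq. (22)
    J = -(Omega(T, mu, w + h) - Omega(T, mu, w - h)) / (2 * h)    # Eq. (23)
    I = J / w                                                      # Eq. (25), requires w != 0
    eps = -p + T * s + mu * n + w * J                              # Eq. (19)
    theta = eps - 3.0 * p                                          # Eq. (24)
    return dict(p=p, s=s, n=n, J=J, I=I, eps=eps, trace_anomaly=theta)

print(thermo(T=0.15, mu=0.05, w=0.1))
```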
A fundamental quantity that determines the expansion of the hot dense matter created in heavy ion collisions is the speed of sound
\[c_{s}^{2}=\frac{dp}{d\epsilon}, \tag{26}\]
another quantity of interest is the specific heat
\[C_{V}=\frac{d\epsilon}{dT}. \tag{27}\]
We do not list their detailed expressions here, as they follow straightforwardly from the expressions above.
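For completeness, here is one way to realize Eqs. (26)-(27) numerically: since \(p\) and \(\epsilon\) are both functions of \(T\) at fixed \(\mu\) and \(\omega\), the chain rule gives \(c_{s}^{2}=(dp/dT)/(d\epsilon/dT)\), which avoids differentiating \(p\) with respect to \(\epsilon\) directly. The placeholder equation of state below is purely illustrative:

```python
import numpy as np

def p_and_eps(T, mu=0.0):
    # Placeholder equation of state; substitute the p and eps obtained from
    # Eqs. (19)-(23) in an actual calculation.
    p = (7.0 * np.pi**2 / 180.0) * T**4 + 0.5 * mu**2 * T**2
    eps = 3.0 * p + 0.05 * T**2   # small conformal-symmetry-breaking term
    return p, eps

def cs2_and_CV(T, h=1e-4):
    p_plus, e_plus = p_and_eps(T + h)
    p_minus, e_minus = p_and_eps(T - h)
    dp_dT = (p_plus - p_minus) / (2 * h)
    de_dT = (e_plus - e_minus) / (2 * h)   # specific heat C_V, Eq. (27)
    return dp_dT / de_dT, de_dT            # c_s^2 = dp/deps, Eq. (26)

print(cs2_and_CV(0.2))  # c_s^2 tends to 1/3 as the breaking term becomes negligible
```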
## III Numerical results and discussions
### Dynamical quark mass and chiral transition
In this section, we present our numerical results for the dynamical quark mass and chiral transition in the three-flavor Nambu–Jona-Lasinio (NJL) model under rotation. In our calculations, the input parameters of the NJL model are the coupling constant \(G\), the light quark mass \(m_{q}\) (throughout, we ignore isospin breaking effects and work with \(m_{u}=m_{d}=m_{q}\)), the strange quark mass \(m_{s}\), the three-momentum cutoff \(\Lambda\), and the 't Hooft coupling constant \(K\). We use the model parameters reported in Ref. [88], which were fitted to the following observables: \(m_{\pi}=138\) MeV, \(f_{\pi}=92\) MeV, \(m_{K}=495\) MeV and \(m_{\eta^{\prime}}=958\) MeV; the resulting input parameters are \(m_{q}=0.005~{}{\rm GeV},\,m_{s}=0.1283~{}{\rm GeV},\,G=3.672~{}{\rm GeV}^{-2}, \,K=59.628~{}{\rm GeV}^{-5},\,\Lambda=0.6816~{}{\rm GeV}\). Throughout the text, unless otherwise specified, the radius \(r\) is taken as \(0.1~{}{\rm GeV}^{-1}\).
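As a consistency check on this parameter set, the following sketch solves the gap equations in the simplest limit \(T=\mu=\omega=0\), where the Bessel-weighted phase-space sum of Eq. (15) collapses to the standard NJL vacuum integral, \(\langle\bar{\psi}\psi\rangle=-(3M/\pi^{2})\int_{0}^{\Lambda}p^{2}/\sqrt{p^{2}+M^{2}}\,dp\) per flavor. The initial guess is an assumption, and the full rotating-matter calculation is deliberately not reproduced here:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

# Parameter set quoted above (GeV-based units)
m_q, m_s = 0.005, 0.1283
G, K, Lam = 3.672, 59.628, 0.6816

def condensate(M):
    """Vacuum condensate per flavor: -(3 M / pi^2) * int_0^Lam p^2/E dp."""
    integral, _ = quad(lambda p: p**2 / np.sqrt(p**2 + M**2), 0.0, Lam)
    return -3.0 * M / np.pi**2 * integral

def gap_residual(x):
    qq, ss = x
    M_light = m_q + (2.0 * K * ss - 4.0 * G) * qq    # Eq. (7)
    M_strange = m_s - 4.0 * G * ss + 2.0 * K * qq**2 # Eq. (8)
    return [condensate(M_light) - qq, condensate(M_strange) - ss]

qq, ss = fsolve(gap_residual, x0=[-0.012, -0.012])   # initial guess assumed
M_light = m_q + (2.0 * K * ss - 4.0 * G) * qq
M_strange = m_s - 4.0 * G * ss + 2.0 * K * qq**2
print(f"<qq> = {qq:.4e} GeV^3, <ss> = {ss:.4e} GeV^3")
print(f"M_q = {M_light*1e3:.0f} MeV, M_s = {M_strange*1e3:.0f} MeV")
```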
We present the evolution of the light quark mass with respect to \(T\) and \(\omega\) in Fig. 1, together with that of the strange quark mass. We observe a decrease in mass as the temperature or angular velocity increases, indicating the restoration of chiral symmetry at high temperatures or large angular velocities. It is remarkable that there exist fast transitions in the low temperature and large angular velocity region, while the high temperature and small angular velocity region only exhibits a very slow change. When the temperature is fixed at a very low value, the restoration of chiral symmetry undergoes a fast transition under rapid rotation. This can be seen as the continuous crossover becoming steeper with increasing angular velocity, eventually merging into a fast transition at large angular velocity. By comparing the critical angular velocities \(\omega_{c}\) for the light and strange quarks, we find that the decrease is faster for the light quark, which suggests that chiral symmetry restoration is more efficient for the light quark than for the strange quark.
Figure 1: (Color online) The effective mass of light quark and strange quark according to temperature \(T\) and angular velocity \(\omega\) with \(\mu=0.01\) GeV.

Then, we extended the investigation of the effective quark mass to the \(\omega-\mu\) plane. In Fig. 2, we show the evolution of the effective masses of the light quark and strange quark as functions of the angular velocity and quark chemical potential. Clearly, at sufficiently large angular velocity and/or sufficiently large quark chemical potential the quark effective mass is very small. When the angular velocity is small, we find a very swift change for the light quark around \(\mu=0.3\) GeV, while for the strange quark effective mass there are two rapidly changing regions. We can also observe similar transition regions at small quark chemical potential and large angular velocity, as shown in the left rear side of the figure.
The temperature and chemical potential dependence of the light and strange quark effective masses at \(\omega=0.1\) GeV is depicted in Fig. 3. One sees that at low temperature and small quark chemical potential, chiral symmetry is spontaneously broken; with increasing temperature or quark chemical potential, the effective mass of the strange quark depends only mildly on them, while the light quark is more sensitive to them than the strange quark. In the low temperature region, the effective mass of the light quark drops sharply at certain values of \(\mu\) as \(\mu\) increases. At high temperature or large quark chemical potential the effective masses of these quarks become small, and at sufficiently high temperature or sufficiently large chemical potential the effective mass of the light quark almost approaches its current mass, while the strange quark retains a heavy effective mass; this indicates that the current mass of the quark plays an important role in the chiral transition.
The quark condensate \(\langle\bar{q}q\rangle\) or \(\langle\bar{s}s\rangle\) is often treated as an order parameter for spontaneous chiral symmetry breaking. The temperature dependence of the order parameters for different values of the rotational speed is shown in Fig. 4. It shows a rapid cross-over, with a critical angular velocity at about \(0.6\) GeV and \(1.0\) GeV for the light quark and strange quark condensate, respectively. At low temperatures and small angular velocities, chiral symmetry is spontaneously broken, while at high temperatures and/or large angular velocities it is gradually restored.
Figure 2: (Color online) The effective mass of light quark and strange quark according to angular velocity \(\omega\) and quark chemical potential \(\mu\) with \(T=0.01\) GeV.

Figure 3: (Color online) The effective mass of light quark and strange quark according to temperature \(T\) and quark chemical potential \(\mu\) with \(\omega=0.1\) GeV.

Next, we determine the chiral phase transition temperature in the presence of angular velocity in Fig. 5. The pseudocritical temperature \(T_{pc}\) in this context is determined by the maximum of \(\left|\frac{d\phi_{f}}{dT}\right|\), where \(f=u,d,s\) and \(\phi_{u}=\phi_{d}=\left\langle\bar{q}q\right\rangle,\phi_{s}=\langle\bar{s}s\rangle\). From Fig. 5 we can see that the pseudocritical temperature decreases as the angular velocity becomes larger. In the small angular velocity region, the pseudocritical temperature of the strange quark is about \(0.1\) GeV larger than that of the light quark, and even at a large angular velocity around \(0.6\) GeV, where the pseudocritical temperature of the light quark is very small, the pseudocritical temperature of the strange quark is by contrast still very large. Thus, it appears that rotation leads to a more pronounced change in the chiral transition of the light quarks than in that of the strange quark, whose mass is heavier.
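The \(T_{pc}\) prescription (the maximum of \(|d\phi_{f}/dT|\)) is simple to apply to tabulated condensate data; in the sketch below, the tanh profile is synthetic stand-in data rather than model output:

```python
import numpy as np

# Synthetic crossover data standing in for a computed condensate phi(T)
T = np.linspace(0.01, 0.30, 400)   # GeV
phi = -0.0177 * 0.5 * (1.0 - np.tanh((T - 0.17) / 0.02))

dphi_dT = np.gradient(phi, T)                 # numerical d(phi)/dT
T_pc = T[np.argmax(np.abs(dphi_dT))]          # location of the steepest change
print(f"pseudocritical temperature T_pc ~ {T_pc*1e3:.0f} MeV")
```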
### Thermodynamics results for different angular velocities at vanishing quark chemical potential
As can be seen in Fig. 6, the scaled pressure, energy, and entropy densities increase with increasing temperature. These quantities first increase rapidly with temperature and then grow gradually as the temperature continues to rise after passing through the transition region. Rotation visibly enhances these scaled quantities: at low temperatures the enhancements are significant, while at high temperatures they are less pronounced. The scaled trace anomaly exhibits a peak in the transition region and, as the temperature continues to increase, decreases for all angular velocities. It is evident that at low temperatures a variation in angular velocity can result in significant deviations of the scaled trace anomaly, and this effect diminishes as the temperature increases.
The specific heat is an important quantity in thermodynamics, as it can be considered a response function of the phase transition; its variation with temperature is presented in Fig. 7(a). As the angular velocity increases, the peak of the specific heat, which occurs at the transition temperature, shifts towards lower temperatures; this indicates that the transition temperature decreases with an increase in angular velocity. In Fig. 7(b), the speed of sound squared increases with temperature and shows little sensitivity to the chosen angular velocities. However, this observation may be attributed to the consideration of only small values of angular velocity. It is evident that the speed of sound squared approaches the conformal limit of 1/3 for different angular velocities in the high temperature limit.
An intriguing quantity in the rotating system is the angular momentum. Fig. 8(a) displays the results of the scaled angular momentum as functions of temperature at zero chemical potential for various angular velocities. The scaled angular momentum initially increases with temperature and reaches its peak across the chiral transition region (\(\sim 150\) MeV) for all angular velocities. Beyond this temperature, it decreases with further temperature increment. The moment of inertia is also of interest in our calculation as it represents the linear response of the system's angular momentum \(J\) to the angular velocity \(\omega\). Fig. 8(b) displays the results of the moment of inertia as functions of temperature at zero chemical potential for various angular velocities. It is evident that the scaled moment of inertia always increases with temperature for different angular velocities. Moreover, for a fixed temperature, the scaled moment of inertia becomes larger with increasing angular velocity.
### The influence of the radius on the thermodynamics in the rotating system
In the directions perpendicular to the rotating axis, the rotating radius should be a finite value determined by the causal condition \(\omega r<1\). It has been expected that the presence of boundaries can modify the properties of a rotating system [86; 89; 90; 91; 92]. However, this is only true when the angular velocity \(\omega\) is much smaller than the inverse of the system's size [93]. In the present calculations, we will neglect the finite volume effect and consider it in future studies.

Figure 4: (Color online) Condensates of light quark and strange quark as functions of temperature for different values of the rotational speed.

Figure 5: (Color online) The pseudocritical temperatures for the chiral transition of rotating quark matter as functions of the angular velocity.
In the standard NJL model, these thermodynamic quantities are functions of temperature and quark chemical potential. In a rotating system, however, they also depend on the finite size. Due to the cylindrical symmetry, these quantities depend on the transverse radius \(r\). It would be interesting to investigate how the various thermodynamic quantities of strongly interacting rotating matter depend on the radius of the rotating system. Their behavior as a function of the radius may be related to experimental observations in the future. Additionally, the radius should drastically change the angular momentum and the moment of inertia. Therefore, it becomes important to study how the various thermodynamic quantities of the QCD matter under rotation depend on the rotation radius of the system.
Figure 6: (Color online) Scaled pressure, energy density, entropy density and trace anomaly as functions of temperature at zero chemical potential for different angular velocities.

Figure 7: (Color online) Scaled specific heat and speed of sound squared as functions of temperature at zero chemical potential for different angular velocities.

We show the densities of the scaled pressure, energy, entropy, and trace anomaly as functions of temperature at zero chemical potential for different radii in Fig. 9. As can be seen, the radius effect is visible and enhances these thermodynamic quantities. The radius effect does not qualitatively affect the behavior of these thermodynamic quantities even in the high-temperature region; it just shifts them at a given temperature. It is also noted that the differences in each thermodynamic quantity between different radii seem unchanged even in the high-temperature region.
The scaled specific heat and speed of sound squared as functions of temperature at zero chemical potential for different radii are shown in Fig. 10. In Fig. 10(a) we find that there is a rapid change in the scaled specific heat near the chiral transition region (around 150 MeV). It can be seen that the location of the peaks hardly changes with increasing radius. In Fig. 10(b), it is remarkable that the speed of sound squared curves seem the same around the chiral transition region. It is also found that at extremely high temperatures, for different radii, all the values of the speed of sound squared approach the Stefan-Boltzmann limit of 1/3. This indicates that the speed of sound squared of quark matter at high temperature is not sensitive to the transverse radius.
Figure 8: (Color online) Scaled angular momentum and moment of inertia as functions of temperature at zero chemical potential for different angular velocities.

Figure 9: (Color online) Scaled pressure, energy density, entropy density and trace anomaly as functions of temperature at zero chemical potential for different radii.

From Fig. 11(a), one can infer the dependence of the scaled angular momentum on the radius. Unlike the other thermodynamic quantities, the angular momentum has a strong dependence on the system radius. At low temperatures, the angular momentum increases smoothly with increasing radius; at high temperatures, this dependence becomes stronger. It is also evident from Fig. 11(b) that the scaled moment of inertia always increases with increasing temperature for different radii. In the high temperature region, the scaled moment of inertia shows a strong radius dependence. As the temperature increases, the difference between any two curves in the figure becomes larger for both quantities.
### Thermodynamics in rotating system at finite chemical potential
Figure 10: (Color online) Scaled specific heat and speed of sound squared as functions of temperature at zero chemical potential for different radii.

Figure 11: (Color online) Scaled angular momentum and moment of inertia as functions of temperature at zero chemical potential for different radii.

Studying the thermodynamics at finite chemical potential in the rotating system is important for understanding the phase structure of QCD, modeling compact stars, and interpreting heavy ion collision experiments. In Fig. 12 we show the densities of scaled pressure, energy, entropy, and trace anomaly as functions of temperature at \(\omega=0.2\) GeV for different chemical potentials. It is easy to see that, in the low temperature region, there can be a nontrivial contribution from the chemical potential. The pressure, energy, and entropy densities increase with increasing temperature for different angular velocities, and these quantities are also enhanced by the chemical potential. An increase in the chemical potential leads to increases in these thermodynamic quantities, which can be easily understood as more degrees of freedom becoming active. In the rotating system, the trace anomaly is enhanced by the chemical potential below the critical transition region, while across the transition region the trace anomaly is suppressed by the chemical potential; in addition, with increasing chemical potential, the crossover pattern evolves to lower transition temperatures. All the quantities in this figure exhibit a strong chemical potential dependence below the crossover temperature. In Fig. 13 we show the scaled specific heat and speed of sound squared as functions of temperature at \(\omega=0.2\) GeV for different chemical potentials. As shown in Fig. 13(a), the scaled specific heat increases with increasing temperature and reaches a peak in the chiral transition region. It then first decreases quickly around the critical chiral transition and finally changes little with temperature. The figure shows that the peak position moves to a smaller temperature as the quark chemical potential increases. From Fig. 13(b), we can see that there is a significant increase in the speed of sound squared for increased quark chemical potential, even near the transition region, which means that a finite chemical potential may have an important effect on the thermalization of the QCD matter in the rotating system. Here, the speed of sound squared also conveys relevant information: it displays no local minimum at the crossover transition for the quark chemical potentials considered, for two reasons: first, the system is not of infinite volume as in the standard NJL model; second, the energy density has been modified, i.e., we add the contribution of \(J\omega\). It is not hard to see a trend that, with increasing chemical potential in the rotating system, a local minimum will appear at the phase transition. From this figure, we can see that the speed of sound squared approaches the conformal limit of 1/3 for different angular velocities in the large temperature limit.
Other basic thermodynamic quantities are the angular momentum and the moment of inertia; these quantities measure the breaking of conformal symmetry in the interacting theory. In Fig. 14, we show the scaled angular momentum and moment of inertia as functions of the temperature for different chemical potentials at finite angular velocity. They have characteristics similar to the scaled quantities in Fig. 12 below the critical transition. As the temperature continues to increase, the scaled angular momentum slowly decreases, while the scaled moment of inertia keeps increasing with temperature.
Figure 12: (Color online) Scaled pressure, energy density, entropy density and trace anomaly as functions of temperature at \(\omega=0.2\) GeV for different chemical potentials.

Figure 13: (Color online) Scaled specific heat and speed of sound squared as functions of temperature at \(\omega=0.2\) GeV for different chemical potentials.

Another possible signature of the chiral transition is offered by the behavior of the quark number densities. In Fig. 15 we show the results of the scaled quark number density as a function of the temperature at \(\omega=0.2\) GeV for different values of the chemical potential. From the figure we can see that when the quark chemical potential equals zero, the corresponding quark number density is always zero. In the presence of a finite quark chemical potential, the scaled quark number densities increase slightly up to \(T=150\) MeV and decrease again with growing temperature. It is obvious that the chemical potential enhances the quark number density in the rotating system.
### Thermodynamics in rotating system at large angular velocity
In the following, we will present a systematic analysis of the thermodynamic quantities of QCD matter under large angular velocity. The system's total pressure and energy density during rotation are simply the sum of the contributions from each quark flavor. In order to have a clearer picture of the effects of rotation on different quark flavors, we will investigate each individual contribution as well as the total contribution.
From the strong rotational behavior depicted in Fig. 16, it is evident that the bulk thermodynamic properties, such as the scaled pressure, energy density, and trace anomaly, increase with increasing angular velocity at a temperature of \(T=0.01\) GeV and quark chemical potential \(\mu=0\) GeV. Notably, the scaled pressure, energy, and trace anomaly of both the light quark and the strange quark exhibit an increase as the angular velocity rises. In the mid-region of angular velocity, below approximately \(0.8\) GeV, the light quark predominantly contributes to these thermodynamic quantities. However, at sufficiently large angular velocities, the contributions from different flavors become almost the same. It can also be found in Fig. 17 that the angular momentum of the system has a very similar character. It is evident that the angular momentum in the chiral broken phase is lower than in the chiral restored phase. Furthermore, it is worth noting that the contribution of the light quark to the angular momentum is remarkable in the mid-region of the angular velocity, while that of the strange quark is moderate.
There is a descent of the scaled entropy density after exceeding the critical point around \(\omega=0.6\) GeV because, in this region, the rate of increase of the pressure is slowing down. We can also observe a slight increase (not clearly visible in the figure) followed by a decrease in the entropy density around \(\omega=1.0\) GeV. The trace anomaly increases with increasing angular velocity because we have set \(T=0.01\) GeV; at such a low temperature, the strange quark is still in a phase with partly broken chiral symmetry. If the temperature were high, we would see the trace anomaly become small.
We show the behavior of the scaled specific heat as a function of \(\omega\) in Fig. 18(a) at \(T=0.01\) GeV for vanishing chemical potential. The scaled specific heat increases from zero to a maximum value around \(\omega=0.6\) GeV, then drops to a minimum value, gradually increases to another sub-maximum around \(\omega=1.0\) GeV, and finally tends to decrease gradually at large angular velocity. From the figure, we can clearly see that the light quark contribution rises steeply across the chiral transition, while for the strange quark there is only a flatter peak over a relatively broad region. It is known that if one has a sharp crossover phenomenon with a rapid change of thermodynamic quantities over a small interval, there is some chance for measurable effects in experiments, so the specific heat may provide relevant signatures for phase transitions in the rotating system.

Figure 14: (Color online) Scaled angular momentum and moment of inertia as functions of temperature at \(\omega=0.2\) GeV for different chemical potentials.

Figure 15: (Color online) Scaled quark number density as a function of temperature at \(\omega=0.2\) GeV for different values of chemical potential.
The change of the speed of sound squared with the angular velocity for the light and strange quarks is plotted in Fig. 18(b). It is known that in the transition region of the QCD matter, the characteristics of the speed of sound squared undergo significant changes; indeed, the speed of sound squared shows a pronounced dip near the chiral transition. There are two local minima of the speed of sound squared that become deeper in the vicinity of the critical angular velocities, corresponding to the light quark and strange quark, respectively. At small angular velocity, the speed of sound squared increases with an increase in the angular velocity; at large angular velocity, however, it subtly decreases with an increase in the angular velocity. Our numerical results indicate that the dependence of the speed of sound squared on the angular velocity can be indicative of the QCD chiral transition. To probe this dependence further, we show the results of calculations for different temperatures in Fig. 19. The figure exhibits markedly different behavior of the quark matter under rotation, and it can be found that the maximum value of the speed of sound is dominated by features associated with the chiral transitions. In addition, the speed of sound squared increases as the angular velocity increases in the small angular velocity region, while it decreases as the angular velocity increases in the large angular velocity region. In the low angular velocity region, there is a significant difference in the speed of sound for different temperatures. However, in the high angular velocity region, the difference in the speed of sound becomes smaller for different temperatures and ultimately converges to the same value. Thus, a key conclusion can be made that the speed of sound exhibits a nonmonotonic behavior as the angular velocity changes.
Figure 16: (Color online) Scaled pressure, energy density, entropy density and trace anomaly as functions of angular velocity at \(T=0.01\) GeV and \(\mu=0\) GeV for the light, strange and total quarks, respectively.

Figure 17: (Color online) Scaled moment of inertia as functions of angular velocity at \(T=0.01\) GeV and \(\mu=0\) GeV for the light, strange and total quarks, respectively.

## IV Conclusions

In order to investigate the expansion of the plasma formed in ultra-relativistic heavy-ion collisions with noncentral impact, it is crucial to compute the thermodynamic properties within a rotating system. This paper focuses on formulating and exploring the thermodynamics of the three-flavor NJL model under rotation. We present the outcomes concerning diverse thermodynamic observables as functions of temperature, considering various angular velocities, radii, and finite quark chemical potentials. Additionally, we examine the thermodynamic behaviors of the light and strange quarks in relation to the angular velocity.
To summarize, we have presented an analytical calculation of the thermodynamics of the three-flavor NJL model in the presence of rotation. We systematically analyze the equation of state in the parameter space of temperature \(T\), chemical potential \(\mu\), and angular velocity \(\omega\) in the rotating system. The calculations provide a physical picture of the chiral transition under rotation, and our findings indicate that the effect of rotation plays an important role in thermodynamics. By studying the changes in the thermodynamic quantities of a rotating system, we can gain insight into the properties and behaviors of QCD matter. In the rotating system, the scaled thermodynamic quantities are visibly influenced by rotation; an important quantity is the moment of inertia, which exhibits a strong dependence on the angular velocity even at high temperatures. The thermodynamic properties of light and heavy quarks differ with respect to the angular velocity, and this distinction strongly influences the thermodynamic quantities; however, for sufficiently strong rotation, these distinctions between flavors vanish. The speed of sound plays a crucial role in studying the thermodynamic properties and phase transitions of the QGP, and as a main finding of our analysis, the speed of sound squared exhibits a nonmonotonic feature with respect to the angular velocity.
It is important to mention that here, for simplicity, we did not take into account the boundary effect of the system. Since any uniformly rotating system should be spatially bounded, it has been expected that the presence of boundaries can modify the properties of the rotating system [86; 89; 90; 91; 92]; however, this is only true when the angular velocity is much smaller than the inverse of the system's size [93], so in our analytic derivation we ignore the finite volume boundary effect and leave it for further study. So far, we have developed the NJL model taking into account only fermion-antifermion scalar interactions for the chiral transition; it is necessary to note that vector interactions [95; 25] may play an important role in the chiral transition of the three-flavor NJL model in the presence of rotation, and we also leave this for further study. In addition, the Polyakov-Nambu-Jona-Lasinio (PNJL) model [28; 29; 30; 96; 97; 98; 99] incorporates the Polyakov loop on top of the NJL model, considering the coupling between quark and gluon degrees of freedom. The PNJL model shows features of both chiral symmetry restoration and the deconfinement phase transition, which may allow it to better describe the properties of QCD matter under rotation at high temperatures and finite chemical potentials. The PNJL model under rotation has been proposed in Ref. [64]; thus, it is meaningful to calculate these thermodynamic quantities in this model. However, there is still controversy at present on how rotation affects the deconfinement transition, and we hope that lattice QCD provides more clues on the Polyakov loop, finally making this research available in the PNJL model.
Figure 18: (Color online) (a) Scaled specific heat as functions of angular velocity at \(T=0.01\) GeV and \(\mu=0\) GeV for the light, strange and total quarks, respectively. (b) The corresponding result of speed of sound squared for the total quarks.

Figure 19: (Color online) Speed of sound squared as function of angular velocity at \(\mu=0\) GeV for different temperatures.
## Acknowledgements
We greatly thank Mei Huang, Kun Xu and Jie Mei for useful discussions. This work has been supported by the National Natural Science Foundation of China (NSFC) under Grant Nos. 12375137 and 12205309, the start-up funding from the University of Chinese Academy of Sciences (UCAS), and the Fundamental Research Funds for the Central Universities.
## Appendix A Thermodynamic quantities
We list the detailed expressions of the entropy density and quark number density, and the angular momentum along the \(z\)-axis:
\[\begin{split}s=&\ \frac{3}{4\pi^{2}}\sum_{f}\sum_{n=-\infty}^{\infty}\int_{0}^{\infty}p_{t}dp_{t}\int_{-\infty}^{\infty}dp_{z}\left(J_{n+1}(p_{t}r)^{2}+J_{n}(p_{t}r)^{2}\right)\\ &\times\left\{\frac{\mathrm{e}^{-\frac{\varepsilon_{f,n}-\mu_{f}}{T}}\left(\varepsilon_{f,n}-\mu_{f}\right)}{\left(1+\mathrm{e}^{-\frac{\varepsilon_{f,n}-\mu_{f}}{T}}\right)T}+\frac{\mathrm{e}^{-\frac{\varepsilon_{f,n}+\mu_{f}}{T}}\left(\varepsilon_{f,n}+\mu_{f}\right)}{\left(1+\mathrm{e}^{-\frac{\varepsilon_{f,n}+\mu_{f}}{T}}\right)T}\right.\\ &\left.+\log\left[1+\mathrm{e}^{-\frac{\varepsilon_{f,n}-\mu_{f}}{T}}\right]+\log\left[1+\mathrm{e}^{-\frac{\varepsilon_{f,n}+\mu_{f}}{T}}\right]\right\},\end{split}\]
\[n=\frac{3}{4\pi^{2}}\sum_{f}\sum_{n=-\infty}^{\infty}\int_{0}^{\infty}p_{t}dp_{t}\int_{-\infty}^{\infty}dp_{z}\left(J_{n+1}(p_{t}r)^{2}+J_{n}(p_{t}r)^{2}\right)\left[n_{f}(\varepsilon_{f,n},T,\mu_{f})-\bar{n}_{f}(\varepsilon_{f,n},T,\mu_{f})\right],\]
\[\begin{split}J=&\ \frac{3}{4\pi^{2}}\sum_{n=-\infty}^{\infty}\int_{0}^{\Lambda}p_{t}dp_{t}\int_{-\sqrt{\Lambda^{2}-p_{t}^{2}}}^{\sqrt{\Lambda^{2}-p_{t}^{2}}}dp_{z}\left(J_{n+1}(p_{t}r)^{2}+J_{n}(p_{t}r)^{2}\right)(-1-2n)\\ &+\frac{3}{4\pi^{2}}\sum_{f}\sum_{n=-\infty}^{\infty}\int_{0}^{\infty}p_{t}dp_{t}\int_{-\infty}^{\infty}dp_{z}\left(J_{n+1}(p_{t}r)^{2}+J_{n}(p_{t}r)^{2}\right)\\ &\times\frac{\left(\mathrm{e}^{\varepsilon_{f,n}/T}+2\mathrm{e}^{\frac{\mu_{f}}{T}}+\mathrm{e}^{\frac{\varepsilon_{f,n}+2\mu_{f}}{T}}\right)\left(1+2n\right)}{2\left(\mathrm{e}^{\varepsilon_{f,n}/T}+\mathrm{e}^{\frac{\mu_{f}}{T}}\right)\left(1+\mathrm{e}^{\frac{\varepsilon_{f,n}+\mu_{f}}{T}}\right)}.\end{split}\]
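As with the grand potential, these appendix expressions are evaluated in practice by truncating the sum over \(n\); for one flavor (color factor 3), a sketch of the entropy density with illustrative truncation and grids reads:

```python
import numpy as np
from scipy.special import jv, expit
from scipy.integrate import trapezoid

def entropy_density(M, T, mu, w, r, N=30, pmax=3.0, npts=200):
    """Entropy density of one flavor from the appendix expression,
    truncating the angular-momentum sum at |n| <= N (illustrative)."""
    pt = np.linspace(1e-4, pmax, npts)
    pz = np.linspace(-pmax, pmax, npts)
    PT, PZ = np.meshgrid(pt, pz, indexing="ij")
    eps = np.sqrt(M**2 + PT**2 + PZ**2)
    total = 0.0
    for n in range(-N, N + 1):
        e_n = eps - (0.5 + n) * w
        weight = jv(n + 1, PT * r)**2 + jv(n, PT * r)**2
        x_m, x_p = (e_n - mu) / T, (e_n + mu) / T
        # x/(e^x + 1) = x * expit(-x); log(1+e^{-x}) = logaddexp(0, -x)
        braces = (x_m * expit(-x_m) + x_p * expit(-x_p)
                  + np.logaddexp(0.0, -x_m) + np.logaddexp(0.0, -x_p))
        total += trapezoid(trapezoid(PT * weight * braces, pz, axis=1), pt)
    return 3.0 / (4.0 * np.pi**2) * total

print(entropy_density(M=0.35, T=0.15, mu=0.0, w=0.1, r=0.1))
```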
---

# A Bayesian Approach to Modeling Finite Element Discretization Error

Anne Poot, Pierre Kerfriden, Iuri Rocha, Frans van der Meer

arXiv:2306.05993v2 (2023-06-09), http://arxiv.org/abs/2306.05993v2
###### Abstract
In recent years, there has been a surge of interest in the development of probabilistic approaches to problems that might appear to be purely deterministic. One example of this is the solving of partial differential equations. Since numerical solvers require some approximation of the infinite-dimensional solution space, there is an inherent uncertainty to the solution that is obtained. In this work, the uncertainty associated with the finite element discretization error is modeled following the Bayesian paradigm. First, a continuous formulation is derived, where a Gaussian process prior over the solution space is updated based on observations from a finite element discretization. Due to intractable integrals, a second, finer, discretization is introduced that is assumed sufficiently dense to represent the true solution field. The prior distribution assumed over the fine discretization is then updated based on observations from the coarse discretization. This yields a posterior distribution with a mean close to the deterministic fine-scale solution that is endowed with an uncertainty measure.
The prior distribution over the solution space is defined implicitly by assigning a white noise distribution to the right-hand side. This allows for a sparse representation of the prior distribution, and guarantees that the prior samples have the appropriate level of smoothness for the problem at hand. Special attention is paid to inhomogeneous Dirichlet and Neumann boundary conditions, and how these can be used to enhance this white noise prior distribution. For various problems, we demonstrate how regions of large discretization error are captured in the structure of the posterior standard deviation. The effects of the hyperparameters and observation noise on the quality of the posterior mean and standard deviation are investigated in detail.
## 1 Introduction
In recent years, the Bayesian paradigm has become a popular framework to perform uncertainty quantification. It has found its application in uncertainty propagation, inverse modeling [1] and data assimilation [2] contexts, among others. Commonly, given some numerical model, a prior distribution is assumed over its parameters, and the Bayesian paradigm provides a consistent framework to propagate, estimate or update these uncertain parameters. It should be noted, however, that even if complete certainty could be obtained over the model parameters, there is a remaining uncertainty to the solution due to approximations made in the numerical model. This key observation is what underpins the currently booming field of probabilistic numerics.
At the core of probabilistic numerics, the estimation of some unknown function is recast as a Bayesian inference problem, which allows for the estimation of the function with some uncertainty measure [3, 4]. Early examples of the application of this framework include optimizing functions [5], computing integrals [6] and solving ordinary differential equations [7]. More recently, following a "call to arms" from Hennig et al. [8], a large push has been made to apply this framework to a broad range of problems, ranging from solving linear systems [9, 10, 11] to quadrature [12, 13, 14] to solving ordinary differential equations [15, 16, 17]. Most relevant for this work are the probabilistic numerical methods that have been developed for the solving of partial differential equations, which can be roughly divided into two categories: meshfree probabilistic solvers, and solver-perturbing error estimators.
The first category [18, 19, 20] can be seen as a way to find solutions to partial differential equations directly from the strong form. By evaluating the displacement field and its derivatives on a grid of collocation points, a solution can be obtained without needing to apply a finite element discretization over the domain. This approach to solving partial differential equations shares some similarities with Bayesian physics-informed neural networks [21, 22], the main difference lying in the function that is being fitted at the collocation points. The way in which these meshfree solvers relate to traditional collocation methods is similar to the way in which Bayesian physics-informed neural networks relate to their deterministic counterparts.
The second category [23, 24, 25] is more directly focused on estimating the discretization error of traditional solvers for differential equations. For ordinary differential equations, the usual time integration step is taken, after which the solution is perturbed by adding Gaussian noise, representing the uncertainty in the time integration result. Similarly, for partial differential equations, the traditional mesh discretization is perturbed using small support Gaussian random fields, which reflect the uncertainty introduced by the mesh discretization. In [26] and [27], a similar approach is taken, but rather than adding noise to the solution, an uncertainty is introduced by perturbing the time step size or finite element discretization. A more formal mathematical basis for probabilistic numerical methods can be found in [28], where a more rigorous definition of the term is outlined and a common framework underpinning these two seemingly separate categories is established.
It is worth noting that these probabilistic numerical methods are a strong deviation from traditional error estimators [29, 30, 31], as they embed the model error into the method itself, rather than estimate it a posteriori. This inherently affects the model output, which depending on the context can be a desirable or undesirable property. In [32], a method is presented to obtain full-field error estimates by assuming a Gaussian process prior over the discretization error, and updating it based on a set of traditional estimators of error in quantities of interest. This way, a distribution representing the finite element discretization error can be obtained in a non-intrusive manner.
The shared goal of these methods is to accurately describe the errors made due to limitations of our numerical models, though their method of modeling error differs. At the core, the meshfree probabilistic solvers model error as the result of using a finite number of observations to obtain a solution to an infinite-dimensional problem. The solver-perturbing error estimators, on the other hand, take an existing discretization, like the one used in the finite element method, and assign some uncertainty measure to the existing solver. This prompts us to ask: what happens if the philosophy from the meshfree probabilistic solvers is applied to existing mesh-based solvers of partial differential equations? To the best of our knowledge, besides a brief remark in [33], no attempt has been made to explore the answers to this question.
In this work, we propose a probabilistic numerical method for the modeling of finite element discretization error. The solution is endowed with a Gaussian process prior, which is then updated based on observations of the right-hand side from a finite element discretization. This allows for the approximation of the true solution while including the uncertainty resulting from the finite discretization that is applied. Given the intractability of the exact solution space, it is necessary to introduce a discretization over the domain that is fine enough to represent the exact solution. We present a class of priors that naturally accounts for the smoothness of the partial differential equation at hand, and show how the assembly of large full covariance matrices can be avoided. Special attention is paid to the treatment of Dirichlet and Neumann boundary conditions, and several ways of incorporating boundary information in the prior distribution are presented. The effects of the hyperparameters of the prior distribution as well as observation noise are investigated in detail.
The underlying goal of the development of a Bayesian model for the finite element discretization error, is to enable the propagation of discretization error to quantities of interest through the computational pipelines that arise in multiscale modeling, inverse modeling and data assimilation settings. This consistent treatment of discretization error in turn allows for more informed decisions to be made about its impact on the model output. To give a concrete example, in [34], a Bayesian framework for the assimilation of measurement data and finite element models is presented. Within this framework, a model misspecification component is defined, which is endowed with a squared-exponential Gaussian process prior. The Bayesian formulation of the finite element method that we derive in this work would allow for a more informative choice of prior distribution over the model misspecification component, for example by separating out the discretization error from the error associated with other modeling assumptions.
The outline of this paper is as follows: In Section 2, we derive our Bayesian formulation of the finite element method. This is followed by a discussion on the choice of prior covariance in Section 3, where a class of suitable prior distributions is presented. Then, the results of the method are demonstrated in Section 4 through three different examples: a 1D tapered bar, a 2D L-shaped cantilever, and a 2D porous microstructure. Finally, in Section 5, the conclusions of this paper are drawn and discussed.
## 2 Bayesian Finite Element Method
In this section, our proposed Bayesian version of the finite element method is derived. Although the method is applicable to a large class of partial differential equations, for the purposes of demonstration, we will consider Poisson's equation:
\[\begin{split}-\Delta_{\mathbf{x}}u(\mathbf{x})&=f(\mathbf{x}) \quad\text{ in }\Omega\\ u(\mathbf{x})&=0\quad\quad\quad\text{ on }\partial\Omega\end{split} \tag{1}\]
Here, \(\Omega\) and \(\partial\Omega\) are the domain and its boundary, respectively. \(u(\mathbf{x})\) and \(f(\mathbf{x})\) are the solution and forcing term, which are linked through the Laplace operator \(\Delta_{\mathbf{x}}\).
### 2.1 Continuous formulation
As usual, the problem is restated in its weak formulation:
\[\int_{\Omega}\nabla u(\mathbf{x})\cdot\nabla v(\mathbf{x})\,\mathrm{d}\mathbf{x}=\int_{\Omega }f(\mathbf{x})v(\mathbf{x})\,\mathrm{d}\mathbf{x}\quad\forall v(\mathbf{x})\in\mathcal{V} \tag{2}\]
Here, \(\mathcal{V}\) is a Hilbert space of functions over \(\Omega\) that are weakly once-differentiable and vanish at the boundary, and we seek \(u(\boldsymbol{x})\in\mathcal{V}\). Now, a discretization is defined over the domain using a set of locally supported shape functions \(\{\psi_{i}(\mathbf{x})\}_{i=1}^{m}\), which span a finite-dimensional space \(\mathcal{W}^{\text{h}}\subset\mathcal{V}\). The test functions can be defined in terms of these shape functions:
\[v(\mathbf{x})=\sum_{j=1}^{m}v_{j}\psi_{j}(\mathbf{x})\quad\text{ with }\psi_{j}(\mathbf{x}) \in\mathcal{W}^{\text{h}} \tag{3}\]
Since Equation (2) has to hold for all \(v(\mathbf{x})\in\mathcal{W}^{\text{h}}\), the weights \(v_{j}\) can be chosen at will. Substituting Equation (3) into Equation (2) and setting \(v_{j}=\delta_{ij}\), where \(\delta_{ij}\) is the Kronecker delta function, yields the entries of the finite element force vector \(\mathbf{g}\):
\[g_{i}=\int_{\Omega}f(\mathbf{x})\psi_{i}(\mathbf{x})\,\mathrm{d}\mathbf{x} \tag{4}\]
A centered Gaussian process with a positive definite covariance function \(k(\mathbf{x},\mathbf{x}^{\prime})\) is now assumed over the solution \(u(\mathbf{x})\):
\[u(\mathbf{x})\sim\mathcal{GP}\left(0,k(\mathbf{x},\mathbf{x}^{\prime})\right) \tag{5}\]
From Equations (2) to (5), the following covariances between \(u(\mathbf{x})\) and \(g_{i}\) are obtained:
\[\begin{split}\operatorname{cov}\left(u(\mathbf{x}),u(\mathbf{x}^{\prime })\right)&=k(\mathbf{x},\mathbf{x}^{\prime})\\ \operatorname{cov}\left(g_{i},u(\mathbf{x}^{\prime})\right)& =\int_{\Omega}\nabla_{\mathbf{x}}k(\mathbf{x},\mathbf{x}^{\prime})\cdot\nabla _{\mathbf{x}}\psi_{i}(\mathbf{x})\,\mathrm{d}\mathbf{x}\\ \operatorname{cov}\left(u(\mathbf{x}),g_{j}\right)&= \int_{\Omega}\nabla_{\mathbf{x}^{\prime}}k(\mathbf{x},\mathbf{x}^{\prime})\cdot\nabla_{ \mathbf{x}^{\prime}}\psi_{j}(\mathbf{x}^{\prime})\,\mathrm{d}\mathbf{x}^{\prime}\\ \operatorname{cov}\left(g_{i},g_{j}\right)&=\int_{ \Omega}\int_{\Omega}\nabla_{\mathbf{x}}\left(\nabla_{\mathbf{x}^{\prime}}k(\mathbf{x},\bm {x}^{\prime})\cdot\nabla_{\mathbf{x}^{\prime}}\psi_{j}(\mathbf{x}^{\prime})\right) \cdot\nabla_{\mathbf{x}}\psi_{i}(\mathbf{x})\,\mathrm{d}\mathbf{x}\,\mathrm{d}\mathbf{x}^{ \prime}\end{split} \tag{6}\]
This allows us to define the joint distribution of the solution \(u(\mathbf{x})\) at an arbitrary set of prediction locations \(\mathbf{X}\) and the finite element force vector \(\mathbf{g}\):
\[\begin{bmatrix}\mathbf{g}\\ u(\mathbf{X})\end{bmatrix}\sim\mathcal{N}\left(\mathbf{0},\begin{bmatrix} \operatorname{cov}\left(\mathbf{g},\mathbf{g}\right)&\operatorname{cov}\left(\mathbf{g},u(\mathbf{X})\right)\\ \operatorname{cov}\left(u(\mathbf{X}),\mathbf{g}\right)&\operatorname{cov}\left(u(\mathbf{ X}),u(\mathbf{X})\right)\end{bmatrix}\right) \tag{7}\]
Conditioning now on \(\mathbf{g}\) yields the following posterior distribution:
\[u(\mathbf{X})|\mathbf{g}\sim\mathcal{N}\left(\mathbf{m}^{*},\mathbf{\Sigma}^{*}\right) \tag{8}\]
Here, the posterior mean vector \(\mathbf{m}^{*}\) and covariance matrix \(\mathbf{\Sigma}^{*}\) are given by [35]:
\[\begin{split}\mathbf{m}^{*}&=\operatorname{cov}\left(u(\mathbf{X}), \mathbf{g}\right)\operatorname{cov}\left(\mathbf{g},\mathbf{g}\right)^{-1}\mathbf{g}\\ \mathbf{\Sigma}^{*}&=\operatorname{cov}\left(u(\mathbf{X}),u(\mathbf{X}) \right)-\operatorname{cov}\left(u(\mathbf{X}),\mathbf{g}\right)\operatorname{cov} \left(\mathbf{g},\mathbf{g}\right)^{-1}\operatorname{cov}\left(\mathbf{g},u(\mathbf{X})\right) \end{split} \tag{9}\]
The posterior mean vector \(\mathbf{m}^{*}\) provides an estimate of the solution field \(u(\mathbf{x})\) at the prediction locations defined by \(\mathbf{X}\). The posterior covariance matrix \(\mathbf{\Sigma}^{*}\) indicates the uncertainty associated with this estimate due to the fact that it was obtained using only a finite set of shape functions. Since the finite discretization is the only source of uncertainty in our model, we can intuitively interpret the posterior covariance as an indicator of finite element discretization error.
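To make this update concrete, the conditioning step of Equations (8) and (9) can be written in a few lines of numpy. The sketch below is illustrative only: it assumes the four covariance blocks of Equation (7) have already been assembled as dense arrays (which, as discussed next, is generally not possible in closed form), and the function name `condition_gaussian` is ours.

```python
import numpy as np

def condition_gaussian(cov_gg, cov_ug, cov_uu, g):
    """Posterior of u(X) | g for the joint zero-mean Gaussian of Equation (7).

    cov_gg : (m, m) covariance of the force vector g
    cov_ug : (p, m) cross-covariance between u(X) and g
    cov_uu : (p, p) prior covariance of u(X) at the prediction locations
    """
    # Solve against cov_gg instead of forming its inverse explicitly.
    mean = cov_ug @ np.linalg.solve(cov_gg, g)
    cov = cov_uu - cov_ug @ np.linalg.solve(cov_gg, cov_ug.T)
    return mean, cov
```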
Unfortunately, the integrals required to compute the covariances in Equation (6) are generally intractable. For some arbitrary covariance function \(k(\mathbf{x},\mathbf{x}^{\prime})\), the integration over the shape functions \(\psi_{i}(\mathbf{x})\) and \(\psi_{j}(\mathbf{x}^{\prime})\) cannot be performed without putting severe restrictions on which shape functions are permitted. On the other hand, we can design the covariance function such that these integrals do become tractable, for example by following [33] and setting \(k(\mathbf{x},\mathbf{x}^{\prime})=G(\mathbf{x},\mathbf{x}^{\prime})\), or following [36] and setting \(k(\mathbf{x},\mathbf{x}^{\prime})=\int_{\Omega}\int_{\Omega}G(\mathbf{x},\mathbf{z})G(\mathbf{x}^{ \prime},\mathbf{z}^{\prime})\delta(\mathbf{z}-\mathbf{z}^{\prime})\mathrm{d}\mathbf{z}\mathrm{ d}\mathbf{z}^{\prime}\), where \(\delta(\mathbf{x})\) is a Dirac delta function. In both of these expressions, the Green's function \(G(\mathbf{x},\mathbf{x}^{\prime})\) associated with the operator \(-\Delta\) is required, which is generally not available for a given partial differential equation. Since our aim is to develop a general Bayesian framework for modeling finite element discretization error, both of these limitations would be unacceptable.
### 2.2 Petrov-Galerkin formulation
This motivates us to approximate \(u(x)\) in the finite-dimensional space \(\mathcal{V}^{\mathrm{h}}\) spanned by a second set of locally supported shape functions \(\{\phi_{i}(\mathbf{x})\}_{i=1}^{n}\):
\[u(\mathbf{x})=\sum_{i=1}^{n}u_{i}\phi_{i}(\mathbf{x}) \tag{10}\]
Note that this is not the same set of shape functions as the one used to define the force vector in Equation (4). In fact, since our aim is to model the discretization error between \(\mathcal{V}\) and \(\mathcal{W}^{\mathrm{h}}\), it is important that the error between \(\mathcal{V}\) and \(\mathcal{V}^{\mathrm{h}}\) is negligible compared to the error between \(\mathcal{V}^{\mathrm{h}}\) and \(\mathcal{W}^{\mathrm{h}}\). Since \(\mathcal{W}^{\mathrm{h}}\neq\mathcal{V}^{\mathrm{h}}\), our Bayesian formulation of the finite element method is a Petrov-Galerkin method, as opposed to the usual Bubnov-Galerkin method, where the test and trial functions come from the same space.
Substituting Equations (3) and (10) into Equation (2) yields the matrix formulation of the problem:
\[\mathbf{H}\mathbf{u}=\mathbf{g} \tag{11}\]
Note that the elements of the force vector \(\mathbf{g}\) are still the same as in Equation (4). The elements of the stiffness matrix \(\mathbf{H}\) are given by:
\[H_{ij}=\int_{\Omega}\nabla\psi_{i}(\mathbf{x})\cdot\nabla\phi_{j}(\mathbf{x})\,\mathrm{d}\mathbf{x} \tag{12}\]
Since the solution field \(u(x)\) has been reduced from the infinite-dimensional space \(\mathcal{V}\) to the finite-dimensional \(\mathcal{V}^{\mathrm{h}}\), the distribution assumed over the solution in Equation (5) needs to be reduced accordingly. Instead of an infinite-dimensional Gaussian process, we obtain a finite-dimensional zero-mean normal distribution with a positive definite covariance matrix \(\mathbf{\Sigma}\):
\[\mathbf{u}\sim\mathcal{N}\left(0,\mathbf{\Sigma}\right) \tag{13}\]
The joint distribution of \(\mathbf{u}\) and \(\mathbf{g}\) is now given by:
\[\begin{bmatrix}\mathbf{g}\\ \mathbf{u}\end{bmatrix}=\begin{bmatrix}\mathbf{H}\mathbf{u}\\ \mathbf{u}\end{bmatrix}\sim\mathcal{N}\left(\mathbf{0},\begin{bmatrix}\mathbf{H}\mathbf{\Sigma }\mathbf{H}^{T}&\mathbf{H}\mathbf{\Sigma}\\ \mathbf{\Sigma}\mathbf{H}^{T}&\mathbf{\Sigma}\end{bmatrix}\right) \tag{14}\]
Conditioning \(\mathbf{u}\) on \(\mathbf{g}\) yields the following posterior distribution:
\[\mathbf{u}|\mathbf{g}\sim\mathcal{N}\left(\mathbf{m}^{*},\mathbf{\Sigma}^{*}\right) \tag{15}\]
Here, the posterior mean vector \(\mathbf{m}^{*}\) and covariance matrix \(\mathbf{\Sigma}^{*}\) are given by:
\[\mathbf{m}^{*} =\mathbf{\Sigma}\mathbf{H}^{T}\left(\mathbf{H}\mathbf{\Sigma}\mathbf{H}^{T}\right)^{- 1}\mathbf{g} \tag{16}\] \[\mathbf{\Sigma}^{*} =\mathbf{\Sigma}-\mathbf{\Sigma}\mathbf{H}^{T}\left(\mathbf{H}\mathbf{\Sigma}\mathbf{H}^ {T}\right)^{-1}\mathbf{H}\mathbf{\Sigma}\]
Similar to the continuous formulation presented in section 2.1, \(\mathbf{m}^{*}\) can be interpreted as providing an estimate of the solution \(u(\mathbf{x})\) in the fine space \(\mathcal{V}^{\text{h}}\), while observing the right-hand side \(f(\mathbf{x})\) only in the coarse space \(\mathcal{W}^{\text{h}}\). The posterior covariance matrix \(\mathbf{\Sigma}^{*}\) then provides an indication of the uncertainty associated with this estimate due to the limited number of observations that are being made, which can be taken as an indicator of finite element discretization error. Note that if the Bubnov-Galerkin formulation (i.e. \(\mathcal{W}^{\text{h}}=\mathcal{V}^{\text{h}}\)) was used, \(\mathbf{\Sigma}^{*}\) would reduce to a null matrix, reflecting the fact that there no longer exists a discretization error between \(\mathcal{V}^{\text{h}}\) and \(\mathcal{W}^{\text{h}}\).
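For reference, Equations (15) and (16) translate directly into dense numpy operations. This is a minimal sketch with names of our choosing, not an optimized implementation; for large systems one would exploit sparsity rather than form \(\mathbf{\Sigma}\) explicitly.

```python
import numpy as np

def petrov_galerkin_posterior(H, Sigma, g):
    """Posterior mean and covariance of u | g (Equations (15)-(16)).

    H     : (m, n) rectangular stiffness matrix (m coarse dofs, n fine dofs)
    Sigma : (n, n) prior covariance of the fine-scale solution vector u
    g     : (m,)   coarse-scale force vector
    """
    SHt = Sigma @ H.T                       # cov(u, g) = Sigma H^T
    S = H @ SHt                             # cov(g, g) = H Sigma H^T
    m_post = SHt @ np.linalg.solve(S, g)
    Sigma_post = Sigma - SHt @ np.linalg.solve(S, SHt.T)
    return m_post, Sigma_post
```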
### 2.3 Hierarchical shape functions
Thus far, the only requirement that has been put on the choice of \(\mathcal{V}^{\text{h}}\) and \(\mathcal{W}^{\text{h}}\), is that the error between \(\mathcal{V}\) and \(\mathcal{V}^{\text{h}}\) is negligible compared to the error between \(\mathcal{V}^{\text{h}}\) and \(\mathcal{W}^{\text{h}}\). We now add a second restriction, and assume that \(\mathcal{W}^{\text{h}}\subset\mathcal{V}^{\text{h}}\). This defines a hierarchy between these two spaces, and implies that any function defined in \(\mathcal{W}^{\text{h}}\) can be expressed in \(\mathcal{V}^{\text{h}}\). One way to ensure this hierarchy in practice is to first define a coarse mesh corresponding to \(\mathcal{W}^{\text{h}}\), and then refine it to obtain a fine mesh corresponding to \(\mathcal{V}^{\text{h}}\). Alternatively, it is possible to use only a single mesh, and use quadratic and linear shape functions to define \(\mathcal{V}^{\text{h}}\) and \(\mathcal{W}^{\text{h}}\), respectively.
From the hierarchy between \(\mathcal{V}^{\text{h}}\) and \(\mathcal{W}^{\text{h}}\), it follows that the basis functions that span the coarse space \(\mathcal{W}^{\text{h}}\) can be written as a linear combination of the basis functions that span the fine space \(\mathcal{V}^{\text{h}}\). In other words, there exists a matrix1 \(\mathbf{\Phi}^{T}\) that maps the vector of fine shape functions \(\mathbf{\phi}(\mathbf{x})=\left[\begin{smallmatrix}\phi_{1}(\mathbf{x})&\phi_{2}(\mathbf{x})&\ldots&\phi_{n}(\mathbf{x})\end{smallmatrix}\right]^{T}\) to the vector of coarse shape functions \(\mathbf{\psi}(\mathbf{x})=\left[\begin{smallmatrix}\psi_{1}(\mathbf{x})&\psi_{2}(\mathbf{x})&\ldots&\psi_{m}(\mathbf{x})\end{smallmatrix}\right]^{T}\):
Footnote 1: Note that \(\mathbf{\Phi}\) has been defined in terms of its transpose in order to make expressions in later sections consistent with common notation for least squares, proper orthogonal decomposition, and so on.
\[\mathbf{\psi}(\mathbf{x})=\mathbf{\Phi}^{T}\mathbf{\phi}(\mathbf{x}) \tag{17}\]
This allows Equation (12) to be rewritten as:
\[H_{ij}=\int_{\Omega}\nabla\sum_{k=1}^{n}\Phi_{ki}\phi_{k}(\mathbf{x})\cdot\nabla\phi_{j}(\mathbf{x})\,\mathrm{d}\mathbf{x}=\sum_{k=1}^{n}\Phi_{ki}\int_{\Omega}\nabla\phi_{k}(\mathbf{x})\cdot\nabla\phi_{j}(\mathbf{x})\,\mathrm{d}\mathbf{x} \tag{18}\]
As a result, \(\mathbf{H}\) can be expressed as:
\[\mathbf{H}=\mathbf{\Phi}^{T}\mathbf{K} \tag{19}\]
where \(\mathbf{K}\) is the Bubnov-Galerkin stiffness matrix that would be obtained if both trial and test functions came from the fine space \(\mathcal{V}^{\text{h}}\):
\[K_{ij}=\int_{\Omega}\nabla\phi_{i}(\mathbf{x})\cdot\nabla\phi_{j}(\mathbf{x})\,\mathrm{ d}\mathbf{x} \tag{20}\]
In a similar way, Equation (4) can be rewritten as:
\[g_{i}=\int_{\Omega}f(\boldsymbol{x})\sum_{k=1}^{n}\Phi_{ki}\phi_{k}(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}=\sum_{k=1}^{n}\Phi_{ki}\int_{\Omega}f(\boldsymbol{x})\phi_{k}(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x} \tag{21}\]
And so, \(\boldsymbol{g}\) can be expressed as:
\[\boldsymbol{g}=\boldsymbol{\Phi}^{T}\boldsymbol{f} \tag{22}\]
where \(\boldsymbol{f}\) is the force vector that would be obtained if the test functions came from the fine space \(\mathcal{V}^{\text{h}}\):
\[f_{i}=\int_{\Omega}f(\boldsymbol{x})\phi_{i}(\boldsymbol{x})\,\mathrm{d} \boldsymbol{x} \tag{23}\]
These relationships will prove useful later on, and are in fact the main motivation for choosing the shape functions in a hierarchical manner.
Finally, we define the reference solution \(\boldsymbol{\hat{u}}\) as the solution to the fine-scale system of equations that follows from the fine-scale Bubnov-Galerkin formulation:
\[\boldsymbol{K}\boldsymbol{\hat{u}}=\boldsymbol{f} \tag{24}\]
In the remainder of this work, discretization error is defined with respect to \(\boldsymbol{\hat{u}}\), unless specified otherwise.
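To make the hierarchy concrete, the sketch below constructs \(\mathbf{\Phi}\) for a uniform 1D mesh with linear hat functions, where each coarse element is subdivided into 16 fine elements, and verifies Equation (17) numerically. The helper `hat` and all variable names are ours; the construction uses the fact that on a nested mesh, a coarse hat function is exactly reproduced by its fine-scale nodal interpolant, so that \(\Phi_{ij}=\psi_{j}(x_{i})\) with \(x_{i}\) the fine nodes.

```python
import numpy as np

def hat(x, nodes, j):
    """Evaluate the j-th piecewise-linear hat function on `nodes` at points x."""
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    if j > 0:
        m = (x >= nodes[j - 1]) & (x <= nodes[j])
        y[m] = (x[m] - nodes[j - 1]) / (nodes[j] - nodes[j - 1])
    if j < len(nodes) - 1:
        m = (x >= nodes[j]) & (x <= nodes[j + 1])
        y[m] = (nodes[j + 1] - x[m]) / (nodes[j + 1] - nodes[j])
    return y

# Coarse and fine meshes on (0, 1); every coarse element is split into 16.
x_c = np.linspace(0.0, 1.0, 5)
x_f = np.linspace(0.0, 1.0, 65)

# Fine nodal interpolation of a coarse hat is exact, so Phi[i, j] = psi_j(x_f[i]).
Phi = np.column_stack([hat(x_f, x_c, j) for j in range(len(x_c))])

# Numerical check of Equation (17): psi(x) = Phi^T phi(x) at random points.
x_test = np.random.default_rng(0).uniform(0.0, 1.0, 200)
phi = np.column_stack([hat(x_test, x_f, i) for i in range(len(x_f))])   # (200, n)
psi = np.column_stack([hat(x_test, x_c, j) for j in range(len(x_c))])   # (200, m)
assert np.allclose(psi, phi @ Phi)
```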
### 2.4 Boundary conditions
It is worth considering how the application of boundary conditions in the fine space translates to the shape functions in the coarse space. To do this, \(\boldsymbol{\phi}(\boldsymbol{x})\) is split into the internal shape functions \(\boldsymbol{\phi_{\text{i}}}(\boldsymbol{x})\) and the Dirichlet boundary shape functions \(\boldsymbol{\phi_{\text{d}}}(\boldsymbol{x})\). This could be considered abuse of notation, since \(\mathcal{V}^{\text{h}}\subset\mathcal{V}\), which is already constrained by the Dirichlet boundary conditions, so from this point of view, \(\boldsymbol{\phi_{\text{d}}}(\boldsymbol{x})\) should not exist. However, in most practical finite element implementations, shape functions are assigned to the boundary nodes as well, in order to facilitate the inclusion of inhomogeneous boundary conditions in the model.
The boundary conditions in the coarse space follow from \(\boldsymbol{\phi_{\text{d}}}(\boldsymbol{x})\) and \(\boldsymbol{\Phi}\): \(\boldsymbol{\psi_{\text{d}}}(\boldsymbol{x})\) is defined as the elements of \(\boldsymbol{\psi}(\boldsymbol{x})\) where the rows of \(\boldsymbol{\Phi}\) belonging to \(\boldsymbol{\phi_{\text{d}}}(\boldsymbol{x})\) have non-zero entries. As a result, Equation (17) can be split as follows:
\[\begin{bmatrix}\boldsymbol{\psi_{\text{i}}}(\boldsymbol{x})\\ \boldsymbol{\psi_{\text{d}}}(\boldsymbol{x})\end{bmatrix}=\begin{bmatrix} \boldsymbol{\Phi_{\text{i}\text{i}}}^{T}&\boldsymbol{0}\\ \boldsymbol{\Phi_{\text{i}\text{d}}}^{T}&\boldsymbol{\Phi_{\text{d}\text{d}}} ^{T}\end{bmatrix}\begin{bmatrix}\boldsymbol{\phi_{\text{i}}}(\boldsymbol{x}) \\ \boldsymbol{\phi_{\text{d}}}(\boldsymbol{x})\end{bmatrix} \tag{25}\]
Note that the fact that \(\boldsymbol{\Phi_{\text{d}\text{i}}}=\boldsymbol{0}\) does not introduce any loss of generality: any non-zero element of \(\boldsymbol{\Phi_{\text{d}\text{i}}}\) would by definition of \(\boldsymbol{\psi_{\text{d}}}(\boldsymbol{x})\) be an element of \(\boldsymbol{\Phi_{\text{d}\text{d}}}\), not \(\boldsymbol{\Phi_{\text{d}\text{i}}}\). From Equations (19) and (25), it follows that:
\[\boldsymbol{H_{\text{i}\text{i}}}=\boldsymbol{\Phi_{\text{i}\text{i}}}^{T} \boldsymbol{K_{\text{i}\text{i}}} \tag{26}\]
Similarly, from Equations (22) and (25), it follows that:
\[\boldsymbol{g_{\text{i}}}=\boldsymbol{\Phi_{\text{i}\text{i}}}^{T}\boldsymbol {f_{\text{i}}} \tag{27}\]
Commonly, Dirichlet boundary conditions are enforced by eliminating the corresponding degrees of freedom, and solving the system that remains. Due to the simple relation that \(\boldsymbol{\Phi_{\text{i}\text{i}}}\) provides between \(\boldsymbol{H_{\text{i}\text{i}}}\) and \(\boldsymbol{K_{\text{i}\text{i}}}\) (Equation (26)) as well as \(\boldsymbol{g_{\text{i}}}\) and \(\boldsymbol{f_{\text{i}}}\) (Equation (27)), all relationships described in Sections 2.2 and 2.3 still hold when applied to only the internal nodes of the system. From this point onward, we will therefore only consider the internal nodes of the system. This also means that only the part of the covariance matrix related to the internal nodes \(\mathbf{\Sigma_{ii}}\) needs to be considered, and so the requirement of positive definiteness of \(\mathbf{\Sigma}\) can be relaxed to a requirement of positive definiteness of only \(\mathbf{\Sigma_{ii}}\). The subscripts \(\mathbf{i}\) (for vectors) and \(\mathbf{ii}\) (for matrices) will be left implied in order to declutter the notation.
## 3 Choice of Prior Covariance
Thus far, the prior covariance matrix \(\mathbf{\Sigma}\) has not been specified. The choice of \(\mathbf{\Sigma}\) is subject to two main requirements. The first requirement is that \(\mathbf{\Sigma}\) needs to have a sparse representation. Since \(\mathbf{\Sigma}\) is a full \(n\times n\) matrix, where \(n\) is the number of degrees of freedom of the fine discretization, explicitly computing, storing and applying operations on it would quickly become prohibitively expensive. As a result, the traditional approach of using a kernel to directly compute all entries of \(\mathbf{\Sigma}\) would be infeasible. Instead, a stochastic partial differential equation is used to define the prior distribution, without needing to explicitly compute \(\mathbf{\Sigma}\). For certain kernel-based priors, an equivalent stochastic partial differential equation can be shown to exist, see for example [37].
The second requirement is that the choice of prior distribution needs to be appropriate for the partial differential equation at hand. For instance, if the infinitely differentiable squared exponential prior were assumed on the solution field \(u(x)\), this would imply \(C^{\infty}\) continuity on the right-hand side field \(f(x)\), which is usually an undesirable assumption to make. On the other hand, if the prior is not smooth enough, samples from the prior would exhibit unphysical discontinuities in \(u(x)\) or its gradient fields. In short, the prior needs to respect the smoothness of the partial differential equation to which it is applied.
In this section, a particular class of priors that meets both of these requirements is presented.
### 3.1 A sparse right-hand side prior
Following the approach taken in [19], rather than assuming a prior measure directly on the displacement field \(u(\boldsymbol{x})\), we assume a centered Gaussian process prior with covariance function \(k_{\mathrm{f}}(\boldsymbol{x},\boldsymbol{x}^{\prime})\) over the forcing term \(f(\boldsymbol{x})\):
\[f(\boldsymbol{x})\sim\mathcal{GP}\left(0,k_{\mathrm{f}}(\boldsymbol{x}, \boldsymbol{x}^{\prime})\right) \tag{28}\]
This implicitly defines an equivalent prior on \(u(\boldsymbol{x})\):
\[u(\boldsymbol{x})\sim\mathcal{GP}\left(0,k_{\mathrm{nat}}(\boldsymbol{x}, \boldsymbol{x}^{\prime})\right) \tag{29}\]
Here, the covariance function \(k_{\mathrm{nat}}\) can be expressed in terms of \(k_{\mathrm{f}}(\boldsymbol{x},\boldsymbol{x}^{\prime})\) and the Green's function \(G(\boldsymbol{x},\boldsymbol{x}^{\prime})\) associated with the operator of the partial differential equation:
\[k_{\mathrm{nat}}(\boldsymbol{x},\boldsymbol{x}^{\prime})=\int_{\Omega}\int_{ \Omega}G(\boldsymbol{x},\boldsymbol{z})G(\boldsymbol{x}^{\prime},\boldsymbol {z}^{\prime})k_{\mathrm{f}}(\boldsymbol{z},\boldsymbol{z}^{\prime})\mathrm{d }\boldsymbol{z}\mathrm{d}\boldsymbol{z}^{\prime} \tag{30}\]
In [19], this kernel is described as "natural" in the sense that the operator \(-\Delta\) (see Equation (1)) uniquely maps from the Hilbert space associated with the forcing term covariance function \(k_{\mathrm{f}}(\boldsymbol{x},\boldsymbol{x}^{\prime})\) to the one associated with \(k_{\mathrm{nat}}(\boldsymbol{x},\boldsymbol{x}^{\prime})\). Each sample of \(u(\boldsymbol{x})\) drawn from this natural kernel has an equivalent sample of \(f(\boldsymbol{x})\) and vice versa. Unfortunately, since the Green's function is generally not available for a given partial differential equation, in practical cases this natural kernel is then discarded in favor of a Matérn or Wendland kernel with the appropriate level of smoothness.
However, an important distinction between [19] and this work is that we are concerned with the finite element method rather than collocation methods. As a result, it is not necessary to step away from the natural prior approach. Instead, it can be approximated by applying the finite element discretization first, and only then finding the natural covariance matrix for the solution vector \(\mathbf{u}\). Given the prior distribution over \(f(\mathbf{x})\) in Equation (28) and the definition of the force vector in Equation (23), it follows that:
\[\mathbf{f}\sim\mathcal{N}\left(\mathbf{0},\mathbf{\Sigma_{f}}\right) \tag{31}\]
where the force vector covariance matrix \(\mathbf{\Sigma_{f}}\) is given by:
\[\mathbf{\Sigma_{f}}=\int_{\Omega}\int_{\Omega}\boldsymbol{\phi}(\mathbf{x})k_{\mathrm{f}}(\mathbf{x},\mathbf{x}^{\prime})\boldsymbol{\phi}(\mathbf{x}^{\prime})^{T}\,\mathrm{d}\mathbf{x}^{\prime}\,\mathrm{d}\mathbf{x} \tag{32}\]
The resulting prior distribution over \(\mathbf{u}\) then becomes:
\[\mathbf{u}\sim\mathcal{N}\left(\mathbf{0},\mathbf{K}^{-1}\mathbf{\Sigma_{f}}\mathbf{K}^{-1}\right) \tag{33}\]
Note the similarity to the natural kernel in Equation (30), with \(\mathbf{K}^{-1}\) and \(\mathbf{\Sigma_{f}}\) taking a similar role as \(G(\mathbf{x},\mathbf{x}^{\prime})\) and \(k_{\mathrm{f}}(\mathbf{x},\mathbf{x}^{\prime})\), respectively [38]. Also similarly, each sample of \(\mathbf{u}\) has an equivalent sample of \(\mathbf{f}\) and vice versa. Conceptually, our approach is the same as [19], except that we are working in the finite-dimensional space of the discretized system, rather than the infinite-dimensional space of the original partial differential equation. The advantage of working in the finite-dimensional space is that \(\mathbf{K}^{-1}\) is computable, and as a result the natural prior can still be used.
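Because every prior sample of \(\mathbf{u}\) corresponds to a sample of \(\mathbf{f}\), sampling from the natural prior reduces to drawing a right-hand side and solving one fine-scale system per sample. A minimal sketch, assuming a dense, positive definite \(\mathbf{\Sigma_{f}}\) so that a Cholesky factorization exists (the function name is ours):

```python
import numpy as np

def sample_natural_prior(K, Sigma_f, rng, n_samples=30):
    """Draw samples from the natural prior u ~ N(0, K^{-1} Sigma_f K^{-1}).

    A right-hand side f ~ N(0, Sigma_f) is drawn and K u = f is solved, so
    every prior sample of u has an equivalent sample of f by construction.
    """
    L = np.linalg.cholesky(Sigma_f)                    # Sigma_f = L L^T
    f_samples = L @ rng.standard_normal((K.shape[0], n_samples))
    return np.linalg.solve(K, f_samples)

# Example usage: samples = sample_natural_prior(K, Sigma_f, np.random.default_rng(0))
```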
Given this choice of prior and using Equation (19), the posterior distribution of the displacement field is given by:
\[\mathbf{u}|\mathbf{g}\sim\mathcal{N}\left(\mathbf{m}^{*},\mathbf{\Sigma}^{*}\right) \tag{34}\]
with the following posterior mean \(\mathbf{m}^{*}\) and posterior covariance \(\mathbf{\Sigma}^{*}\):
\[\mathbf{m}^{*} =\mathbf{K}^{-1}\mathbf{\Sigma_{f}}\mathbf{\Phi}\left(\mathbf{\Phi}^{T}\mathbf{ \Sigma_{f}}\mathbf{\Phi}\right)^{-1}\mathbf{\Phi}^{T}\mathbf{f} \tag{35}\] \[\mathbf{\Sigma}^{*} =\mathbf{K}^{-1}\left(\mathbf{I}-\mathbf{\Sigma_{f}}\mathbf{\Phi}\left(\mathbf{\Phi} ^{T}\mathbf{\Sigma_{f}}\mathbf{\Phi}\right)^{-1}\mathbf{\Phi}^{T}\right)\mathbf{\Sigma_{f}} \mathbf{K}^{-1}\]
The posterior mean is similar to the reference solution \(\mathbf{\hat{u}}\), except that the force vector \(\mathbf{f}\) has been replaced by \(\mathbf{\hat{f}}\), a weighted projection onto the column space of \(\mathbf{\Phi}\):
\[\mathbf{\hat{f}}=\mathbf{P}\mathbf{f}=\mathbf{\Sigma_{f}}\mathbf{\Phi}\left(\mathbf{\Phi}^{T}\mathbf{ \Sigma_{f}}\mathbf{\Phi}\right)^{-1}\mathbf{\Phi}^{T}\mathbf{f} \tag{36}\]
This projected force vector \(\mathbf{\hat{f}}\) and associated projection matrix \(\mathbf{P}\) can also be interpreted as the solution to the following generalized least squares problem, where the error term \(\mathbf{\epsilon}\) has a precision matrix \(\mathbf{\Sigma_{f}}\):
\[\mathbf{f}=\mathbf{\Phi}\mathbf{g}+\mathbf{\epsilon} \tag{37}\]
From this point of view, we can also make sense of the posterior covariance by relating it to the residual maker matrix \(\mathbf{Q}=\mathbf{I}-\mathbf{P}\) associated with the generalized least squares problem given in Equation (37):
\[\mathbf{\Sigma}^{*}=\mathbf{K}^{-1}\mathbf{Q}\mathbf{\Sigma_{f}}\mathbf{K}^{-1}\,, \tag{38}\]
If no information was lost when projecting \(\mathbf{f}\) onto the coarse space and back, then the posterior mean \(\mathbf{m}^{*}\) would be exactly equal to the reference solution \(\mathbf{\hat{u}}\), and \(\mathbf{Q}\) and \(\mathbf{\Sigma}^{*}\) reduce to null matrices. Generally, however, this projection between the fine and coarse space will introduce loss of information, due to the fact that the coarse shape functions do not have the same expressivity as the fine shape functions. The lack of expressivity is in fact the root cause of discretization errors in the first place, so it is worth emphasizing that this is being reflected in the posterior covariance.
### 3.2 White noise prior
Within the natural prior framework, the main choice that remains is what right-hand side covariance function \(k_{\text{f}}(\mathbf{x},\mathbf{x}^{\prime})\) to assume. In this paper, we will mostly follow [19] and [36], and assume \(k_{\text{f}}(\mathbf{x},\mathbf{x}^{\prime})\) to be a Dirac delta function \(\delta(\mathbf{x})\), scaled by a single hyperparameter \(\alpha\):
\[k_{\text{f}}(\mathbf{x},\mathbf{x}^{\prime})=\alpha^{2}\delta(\mathbf{x}-\mathbf{x}^{\prime}) \tag{39}\]
This defines a white noise field over \(f(\mathbf{x})\) with a standard deviation that is equal to \(\alpha\). The covariance matrices \(\mathbf{\Sigma_{f}}\) and \(\mathbf{\Sigma}\) then follow directly from Equations (32) and (33):
\[\begin{split}\mathbf{\Sigma_{f}}&=\alpha^{2}\mathbf{M}\\ \mathbf{\Sigma}&=\alpha^{2}\mathbf{K}^{-1}\mathbf{M}\mathbf{K}^{-1 }\end{split} \tag{40}\]
where \(\mathbf{M}\) is the classic Bubnov-Galerkin mass matrix, given by:
\[M_{ij}=\int_{\Omega}\phi_{i}(\mathbf{x})\phi_{j}(\mathbf{x})\,\mathrm{d}\mathbf{x} \tag{41}\]
Note that under this choice of prior covariance, the sparsity requirement that was put on \(\mathbf{\Sigma}\) has been met.
The resulting posterior mean vector and covariance matrix are then given by:
\[\begin{split}\mathbf{m}^{*}&=\mathbf{K}^{-1}\mathbf{M}\mathbf{ \Phi}\left(\mathbf{\Phi}^{T}\mathbf{M}\mathbf{\Phi}\right)^{-1}\mathbf{\Phi}^{T}\mathbf{f}\\ \mathbf{\Sigma}^{*}&=\alpha^{2}\mathbf{K}^{-1}\left(\mathbf{I}- \mathbf{M}\mathbf{\Phi}\left(\mathbf{\Phi}^{T}\mathbf{M}\mathbf{\Phi}\right)^{-1}\mathbf{\Phi}^{T} \right)\mathbf{M}\mathbf{K}^{-1}\end{split} \tag{42}\]
It can be seen that for this choice of prior, the hyperparameter \(\alpha\) does not affect the posterior mean, and directly scales the posterior covariance. Additionally, the contraction matrix \(\mathbf{C}\) presented in Equation (45) is also hyperparameter-independent.
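A direct dense transcription of Equation (42) might look as follows; the function name is ours, and in practice one would use sparse factorizations of \(\mathbf{K}\) rather than repeated dense solves. The code assumes symmetric \(\mathbf{K}\) and \(\mathbf{M}\).

```python
import numpy as np

def white_noise_posterior(K, M, Phi, f, alpha=1.0):
    """Posterior under the white noise prior Sigma_f = alpha^2 M (Equation (42)).

    K, M : (n, n) fine-scale stiffness and mass matrices (both symmetric)
    Phi  : (n, m) hierarchy matrix, so that g = Phi.T @ f
    """
    MPhi = M @ Phi
    A = Phi.T @ MPhi                                   # Phi^T M Phi
    m_post = np.linalg.solve(K, MPhi @ np.linalg.solve(A, Phi.T @ f))
    inner = M - MPhi @ np.linalg.solve(A, MPhi.T)      # M - M Phi A^{-1} Phi^T M
    Sigma_post = alpha**2 * np.linalg.solve(K, np.linalg.solve(K, inner).T)
    return m_post, Sigma_post
```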
### 3.3 Hyperparameter tuning
For the tuning of the single hyperparameter \(\alpha\), several approaches could be considered. One common approach in the Bayesian framework is to maximize the marginal likelihood of the observed data. In our case, the log marginal likelihood of the observation vector \(\mathbf{g}\in\mathbb{R}^{m}\) is given by:
\[\log p\left(\mathbf{g}\right)=-\frac{1}{2\alpha^{2}}\mathbf{g}^{T}\left(\mathbf{\Phi}^{T }\mathbf{M}\mathbf{\Phi}\right)^{-1}\mathbf{g}-m\log\alpha-\frac{1}{2}\log\left|\mathbf{\Phi} ^{T}\mathbf{M}\mathbf{\Phi}\right|-\frac{m}{2}\log 2\pi \tag{43}\]
Taking the derivative with respect to \(\alpha\) and setting it equal to \(0\) yields the following closed-form expression for the maximum likelihood estimate:
\[\alpha_{\text{MLE}}=\sqrt{\frac{1}{m}\mathbf{g}^{T}\left(\mathbf{\Phi}^{T}\mathbf{M}\mathbf{ \Phi}\right)^{-1}\mathbf{g}} \tag{44}\]
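Given \(\mathbf{M}\), \(\mathbf{\Phi}\) and \(\mathbf{f}\), this estimate is a one-liner; the sketch below assumes dense arrays and a naming of our choosing.

```python
import numpy as np

def alpha_mle(M, Phi, f):
    """Closed-form maximum likelihood estimate of alpha (Equation (44))."""
    g = Phi.T @ f                           # coarse-scale observations
    A = Phi.T @ M @ Phi                     # Phi^T M Phi
    return np.sqrt(g @ np.linalg.solve(A, g) / g.size)
```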
Another option is to estimate the difference between reference solution \(\mathbf{\hat{u}}\) and the posterior mean \(\mathbf{m}^{*}\) based on the posterior distribution, and then scale \(\alpha\) accordingly. To obtain such an estimate, we first define the contraction matrix \(\mathbf{C}\):
\[\mathbf{C}=\mathbf{\Sigma}^{*}\mathbf{\Sigma}^{-1}=\mathbf{I}-\mathbf{K}^{-1}\mathbf{\Sigma_{f}}\mathbf{ \Phi}\left(\mathbf{\Phi}^{T}\mathbf{\Sigma_{f}}\mathbf{\Phi}\right)^{-1}\mathbf{\Phi}^{T}\mathbf{K} \tag{45}\]
Note that \(\mathbf{C}\) does not depend on \(\alpha\). This matrix can be interpreted as an indicator of the shrinkage of the posterior covariance relative to the prior covariance, due to the data that was observed. Postmultiplying by the reference solution \(\mathbf{\hat{u}}\), we find exactly the difference between the reference solution \(\mathbf{\hat{u}}\) and the posterior mean \(\mathbf{m}^{*}\):
\[\mathbf{C}\mathbf{\hat{u}}=\mathbf{\hat{u}}-\mathbf{K}^{-1}\mathbf{\Sigma}_{\mathbf{f}}\mathbf{\Phi}\left( \mathbf{\Phi}^{T}\mathbf{\Sigma}_{\mathbf{f}}\mathbf{\Phi}\right)^{-1}\mathbf{g}=\mathbf{\hat{u}}-\mathbf{m }^{*} \tag{46}\]
This might not appear useful, as the reference solution \(\mathbf{\hat{u}}\) is not known beforehand. However, \(\mathbf{C}\mathbf{\hat{u}}\) can cheaply be approximated, provided that \(\mathbf{\Sigma}_{\mathbf{f}}{}^{-1}\) can be cheaply approximated, for example by diagonalizing \(\mathbf{\Sigma}_{\mathbf{f}}\). In this case, \(\mathbf{\Sigma}^{*}\) can be approximated through sampling, as explained in Appendix A, and the other terms are known matrices and vectors:
\[\mathbf{C}\mathbf{\hat{u}}=\mathbf{\Sigma}^{*}\mathbf{K}\mathbf{\Sigma}_{\mathbf{f}}{}^{-1}\mathbf{f} \tag{47}\]
This estimate of the difference between \(\mathbf{\hat{u}}\) and \(\mathbf{m}^{*}\) can then be used to tune \(\alpha\) appropriately [38].
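As a dense-algebra sketch of Equation (47) (ours; in practice \(\mathbf{\Sigma}^{*}\) would be approximated by sampling and \(\mathbf{\Sigma_{f}}^{-1}\) by a lumped or diagonalized approximation, as noted above):

```python
import numpy as np

def mean_error_estimate(Sigma_post, K, Sigma_f, f):
    """Estimate u_hat - m* via C u_hat = Sigma* K Sigma_f^{-1} f (Equation (47))."""
    return Sigma_post @ (K @ np.linalg.solve(Sigma_f, f))
```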
### 3.4 Enrichment of the prior covariance
One potential drawback of the white noise prior presented here is that its posterior covariance is load-independent. Since the locations of strain concentrations, singularities and largest discretization error often depend on the combination of geometry and load, it can be beneficial to explicitly model this dependency through an additional term in the prior covariance. In Section 2.4, only homogeneous Dirichlet boundary conditions were considered. Here, we expand on this and demonstrate how both homogeneous and inhomogeneous Dirichlet and Neumann boundary conditions can be included in the model. This is accomplished by assigning a statistically independent normal distribution to the displacement along the Dirichlet boundary \(\mathbf{u_{d}}\), as well as the force along the Neumann boundary \(\mathbf{f_{n}}\):
\[\mathbf{u_{d}}\sim\mathcal{N}\left(\mathbf{m_{d}},\beta^{2}\mathbf{\Sigma_{d}}\right) \qquad\mathbf{f_{n}}\sim\mathcal{N}\left(\mathbf{m_{n}},\gamma^{2}\mathbf{\Sigma_{n}} \right) \tag{48}\]
The assignment of these prior distributions to the Dirichlet and Neumann boundary conditions produces the following prior mean and covariance of the forcing term:
\[\begin{split}\mathbf{m_{f}}&=\mathbf{K_{\text{id}}}\mathbf{m_{d}}+\mathbf{m_{n}}\\ \mathbf{\Sigma_{f}}&=\alpha^{2}\mathbf{M}+\beta^{2}\mathbf{K_{\text{id}}}\mathbf{\Sigma_{d}}\mathbf{K_{\text{id}}}^{T}+\gamma^{2}\mathbf{\Sigma_{n}}\end{split} \tag{49}\]
By adjusting \(\beta\) and \(\gamma\), the effects of particular loads can be emphasized or de-emphasized.
This generalization allows for a modeling choice when enforcing inhomogeneous Dirichlet boundary conditions. These can be strongly enforced in the prior, by setting \(\mathbf{\Sigma_{d}}=\mathbf{0}\) and making \(\mathbf{m_{d}}\) equal to the true displacement value at the boundary. Alternatively, they can be weakly enforced by setting \(\mathbf{m_{d}}=\mathbf{0}\), and instead assigning a non-zero covariance \(\mathbf{\Sigma_{d}}\). In this case, the Dirichlet boundaries are enforced in a weak sense, because their enforcement is only due to the right-hand side modifications being included in the observations. Naturally, a combination of these two approaches, where both \(\mathbf{m_{d}}\) and \(\mathbf{\Sigma_{d}}\) are non-zero is also valid. For homogeneous Dirichlet boundary conditions, setting \(\mathbf{m_{d}}=\mathbf{0}\) and \(\mathbf{\Sigma_{d}}=\mathbf{0}\) already strongly enforces the boundary conditions, but this strong enforcement can be weakened by introducing a non-zero \(\mathbf{\Sigma_{d}}\). For Neumann boundary conditions, the same modeling choice between strong and weak enforcement of the boundary conditions applies.
A final point to address is which covariance structure should be applied to the Dirichlet and Neumann covariances \(\mathbf{\Sigma_{d}}\) and \(\mathbf{\Sigma_{n}}\). For single point loads and single point constraints, the answer to this question is trivial, namely a null matrix, except for a unit diagonal entry associated with the point load or constraint degree of freedom. If the problem contains multiple independent point loads or constraints, the covariance structure of \(\mathbf{\Sigma_{d}}\) and \(\mathbf{\Sigma_{n}}\) is still relatively straightforward: in this case, the off-diagonal terms of \(\mathbf{\Sigma_{d}}\) and \(\mathbf{\Sigma_{n}}\) can simply be set to \(0\). However, if for example an inhomogeneous Dirichlet or Neumann boundary condition is applied along an edge, this assumption of independence does not hold, and a full covariance structure needs to be obtained for \(\mathbf{\Sigma_{d}}\) and \(\mathbf{\Sigma_{n}}\).
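For the point-load and point-constraint cases described above, assembling such a covariance matrix amounts to placing ones on the relevant diagonal entries. A small illustrative helper (ours):

```python
import numpy as np

def point_cov(n_dofs, dofs):
    """Null matrix with unit diagonal entries at the given point-load or
    point-constraint degrees of freedom, assumed statistically independent."""
    S = np.zeros((n_dofs, n_dofs))
    S[dofs, dofs] = 1.0
    return S
```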
### 3.5 Observation noise
It is common in Gaussian process regression models to include a certain amount of observation noise in the model. In general, the inclusion of observation noise increases the relative importance of the prior compared to the observed data. This can be beneficial if the data has been obtained from noisy measurements, or as a form of regularization to prevent overfitting. It can also be the case that useful features of the prior distribution vanish from the posterior distribution, because they are drowned out by the observations. By introducing observation noise, the observed data is prevented from dominating the problem, and such features can still be propagated to the posterior.
If i.i.d. observation noise with magnitude \(\sigma_{e}\) is included in the model, the posterior mean vector and covariance matrix given in Equation (42) become:
\[\mathbf{m}^{*} =\alpha^{2}\mathbf{K}^{-1}\mathbf{M}\mathbf{\Phi}\left(\alpha^{2}\mathbf{\Phi}^{ T}\mathbf{M}\mathbf{\Phi}+\sigma_{e}^{2}\mathbf{I}\right)^{-1}\mathbf{g} \tag{50}\] \[\mathbf{\Sigma}^{*} =\mathbf{K}^{-1}\left(\alpha^{2}\mathbf{M}-\alpha^{4}\mathbf{M}\mathbf{\Phi} \left(\alpha^{2}\mathbf{\Phi}^{T}\mathbf{M}\mathbf{\Phi}+\sigma_{e}^{2}\mathbf{I}\right)^{-1} \mathbf{\Phi}^{T}\mathbf{M}\right)\mathbf{K}^{-1}\]
Note that several previously demonstrated relationships no longer hold if observation noise is added to the model. This includes the analogy with the generalized least squares problem in Equation (37), as well as the exact maximum likelihood estimate in Equation (44).
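For completeness, a dense sketch of Equation (50) follows; the function name is ours, and symmetric \(\mathbf{K}\) and \(\mathbf{M}\) are assumed.

```python
import numpy as np

def noisy_posterior(K, M, Phi, g, alpha, sigma_e):
    """Posterior with i.i.d. observation noise of magnitude sigma_e (Equation (50))."""
    MPhi = M @ Phi
    A = alpha**2 * (Phi.T @ MPhi) + sigma_e**2 * np.eye(Phi.shape[1])
    m_post = alpha**2 * np.linalg.solve(K, MPhi @ np.linalg.solve(A, g))
    inner = alpha**2 * M - alpha**4 * MPhi @ np.linalg.solve(A, MPhi.T)
    Sigma_post = np.linalg.solve(K, np.linalg.solve(K, inner).T)
    return m_post, Sigma_post
```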
## 4 Results
### 4.1 1D tapered bar
The following one-dimensional mechanics problem with inhomogeneous Dirichlet boundary conditions is considered:
\[-\frac{\mathrm{d}}{\mathrm{d}x}\left(k(x)\frac{\mathrm{d}u}{\mathrm{d}x}\right) =f(x)\quad\text{ in }\Omega=(0,1) \tag{51}\] \[u(x) =0\quad\quad\quad\text{ on }x=0\] \[u(x) =1\quad\quad\quad\text{ on }x=1\]
Here, \(f(x)=1\) and \(k(x)=1-0.9x\). This setup describes a tapered bar with a constant load, where the left end is clamped and a unit displacement is prescribed on the right end. The coarse and fine discretizations consist of uniform meshes of 4 and 64 linear elements, respectively. Note that this means that each coarse element is subdivided into 16 fine elements, and as a result, the hierarchy between the shape function spaces described in Section 2.2 is ensured. The white noise prior distribution presented in Section 3.2 is assumed, with \(\alpha=1\). In order to ensure that the observation covariance matrix is positive definite, a small observation noise (\(\sigma_{e}=10^{-8}\)) is added to the model. The inhomogeneous Dirichlet boundary conditions are treated deterministically, as described in Section 2.4.
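This setup can be reproduced in a short script. The sketch below is ours, not the authors' code: it reuses the `hat` and `noisy_posterior` helpers from the earlier sketches, assembles the fine-scale system with a midpoint rule for \(k(x)\) (exact here, since \(k\) is linear), lifts the inhomogeneous Dirichlet data into the right-hand side, and computes the posterior over the internal fine-scale dofs; plotting is omitted.

```python
import numpy as np

def assemble_1d(nodes, k):
    """Stiffness K, mass M and force f for -(k u')' = 1 with linear elements."""
    n = len(nodes)
    K, M, f = np.zeros((n, n)), np.zeros((n, n)), np.zeros(n)
    for e in range(n - 1):
        h = nodes[e + 1] - nodes[e]
        km = k(0.5 * (nodes[e] + nodes[e + 1]))        # midpoint rule for k(x)
        K[e:e + 2, e:e + 2] += km / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
        M[e:e + 2, e:e + 2] += h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
        f[e:e + 2] += h / 2.0                          # right-hand side f(x) = 1
    return K, M, f

x_c, x_f = np.linspace(0, 1, 5), np.linspace(0, 1, 65)
K_full, M_full, f_full = assemble_1d(x_f, lambda x: 1.0 - 0.9 * x)

# Deterministic treatment of u(0) = 0, u(1) = 1: lift the boundary values into
# the right-hand side and keep only the internal fine dofs.
n = len(x_f)
idx = np.arange(1, n - 1)
K = K_full[np.ix_(idx, idx)]
M = M_full[np.ix_(idx, idx)]
f = f_full[idx] - K_full[np.ix_(idx, [0, n - 1])] @ np.array([0.0, 1.0])

# Hierarchy matrix for internal dofs: internal coarse hats at internal fine nodes.
Phi = np.column_stack([hat(x_f[idx], x_c, j) for j in range(1, len(x_c) - 1)])

m_post, Sigma_post = noisy_posterior(K, M, Phi, Phi.T @ f, alpha=1.0, sigma_e=1e-8)
u_ref = np.linalg.solve(K, f)                          # fine-scale reference solution
std = np.sqrt(np.clip(np.diag(Sigma_post), 0.0, None))
```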
In Figure 1, the resulting prior and posterior distributions are shown. Several pieces of information about the problem, in the absence of knowledge of the right-hand side term, can be found encoded in the prior. The Dirichlet boundary conditions at \(x=0\) and \(x=1\) have been accounted for in the prior mean, with a standard deviation of 0 at those locations. The prior mean corresponds to the displacement field of the bar under Dirichlet boundary conditions alone. Furthermore, a larger prior standard deviation is found in the region where the bar is thinner, reflecting the fact that a small perturbation of the right-hand side in this region would have a significant effect on the displacement field. Considering the posterior distribution, we see that its mean falls between the coarse- and fine-scale reference solutions. Lastly, it can be seen that the region where the posterior standard deviation is largest corresponds to the region where the discretization error is largest.
In Figure 2, we increase the number of degrees of freedom of the coarse mesh \(n_{c}\) to study its effect on the posterior distribution. As the coarse-scale solution approaches the fine-scale solution, the posterior mean approaches the fine-scale solution accordingly. Additionally, the posterior standard deviation shrinks along with the discretization error until the coarse mesh density meets the fine one at \(n_{c}=n_{f}=64\). At this point, only a small posterior standard deviation remains due to the small observation noise that was included in the model.
Figure 1: Prior and posterior distributions of the 1D tapered bar problem. For comparison, the fine-scale and coarse-scale reference solutions have been included. From each distribution, 30 samples have been plotted. The shaded regions correspond to the 95% confidence intervals of the distributions.
Figure 2: Evolution of the posterior distribution as the number of coarse-scale degrees of freedom \(n_{c}\) increases towards the number of fine-scale degrees of freedom \(n_{f}\).
### 4.2 2D L-shaped cantilever
For a two-dimensional example, the L-shaped cantilever beam problem shown in Figure 3 is considered. For this particular problem, it is well known that a singularity occurs at the inner corner of the beam. This, in turn, produces a large discretization error in the strain field at this location. For this reason, this example will be focused on the strain field \(\boldsymbol{\varepsilon}(\boldsymbol{x})\). The strain field can be sampled by simply sampling the solution \(\boldsymbol{u}\) as described in Appendix A, and then computing the corresponding strain field for each sample. Both the coarse and fine mesh consist of quadrilateral elements with linear shape functions, as shown in Figure 3(a). The elements of the coarse and fine mesh have a side length \(h_{\mathrm{c}}=0.25\) and \(h_{\mathrm{f}}=0.0625\), respectively. For this example, linear elasticity under small strains and plane stress conditions is assumed. In Figure 3(b), the strain field associated with the reference solution \(\varepsilon_{yy}^{\mathrm{f}}\) is shown. The error in the strain field between the coarse and fine discretization \(\Delta\varepsilon_{yy}=|\varepsilon_{yy}^{\mathrm{f}}-\varepsilon_{yy}^{\mathrm{c}}|\) is shown in Figure 3(c).
Figure 3: The vertical strain field and corresponding discretization error of the L-shaped cantilever problem
Since the stress concentration arises due to a combination of loading conditions and geometry, it might be desirable to amplify the effects of the loading conditions in the prior. To this end, the white noise prior used thus far is enriched by an additional term representing the uncertainty due to the Neumann boundary condition \(\mathbf{f_{n}}\):
\[\mathbf{\Sigma_{f}}=\alpha^{2}\mathbf{M}+\gamma^{2}\mathbf{\Sigma_{n}} \tag{52}\]
Here, \(\mathbf{\Sigma_{n}}\) is the covariance matrix associated with the inhomogeneous Neumann boundary degrees of freedom. Since in this case only a single point load is applied, \(\mathbf{\Sigma_{n}}\) is a null matrix, except for a unit diagonal entry associated with the point load degree of freedom. With the definition of the prior in Equation (52), \(\alpha\) controls the effect of the geometry under general loading conditions on the discretization error, while \(\gamma\) controls the effect of the point load specifically.
Before moving to a more general setting, we first consider two extreme cases for the hyperparameter values, namely \((\alpha,\gamma)=(1,0)\) and \((\alpha,\gamma)=(0,1)\). The observation noise is set to a small value (\(\sigma_{e}=10^{-8}\)), so the prior and posterior distributions represent the limit cases of \(\sigma_{e}\to\infty\) and \(\sigma_{e}\to 0\), respectively. To ensure positive definiteness for the \((\alpha,\gamma)=(0,1)\) case, a small white noise with magnitude \(\sigma=10^{-6}\) is added to the diagonal of the covariance matrix. In Figure 4, the prior and posterior standard deviations of the vertical strain field \(\varepsilon_{yy}\) are plotted for both cases.
It can be observed that in the prior distribution, the singularity is activated in both cases, although it is more pronounced for \((\alpha,\gamma)=(0,1)\), as intended. In the posterior distributions, however, this information appears to vanish, and an essentially uniform standard deviation appears in both strain fields. For the purposes of modeling discretization error, this lack of regions of interest in the posterior distribution is undesirable. However, this problem can be mitigated by increasing the amount of observation noise \(\sigma_{e}\) in our model. From a Bayesian perspective, this can be seen as increasing the importance of the prior distribution relative to the observed data.
Figure 4: Prior and posterior standard deviations of the vertical strain field (\(\sigma_{\varepsilon_{yy}}\))
In order to better understand the interplay between \(\alpha\), \(\gamma\) and \(\sigma_{e}\), it is useful to consider the ratios \(\frac{\gamma}{\alpha}\) and \(\frac{\sigma_{e}}{\alpha}\) instead of the complete set of hyperparameters. This way, the structure of the posterior standard deviation is decoupled from its magnitude: the structure can be tuned with \(\frac{\gamma}{\alpha}\) and \(\frac{\sigma_{e}}{\alpha}\), whereas the magnitude can be scaled directly using only \(\alpha\). Additionally, under this reparametrization, \(\alpha\) no longer has any effect on the posterior mean. In Figure 5, the effects of \(\frac{\gamma}{\alpha}\) and \(\frac{\sigma_{e}}{\alpha}\) on the posterior mean and covariance are explored. The surface plot in the center of the figure shows the effect of the hyperparameters on the quality of the posterior mean. More specifically, the relative error between the posterior mean \(\mathbf{m}^{*}\) and reference solution \(\mathbf{\hat{u}}\) is plotted as a function of \(\frac{\gamma}{\alpha}\) and \(\frac{\sigma_{e}}{\alpha}\).
In the region around point (a), the posterior mean is already relatively close to the reference solution, but Figure 5a shows that the singularity is not captured by the posterior covariance. If the observation noise \(\sigma_{e}\) increases while keeping \(\gamma\) fixed, the posterior mean tends towards the prior mean, which is zero. On the other hand, if the \(\gamma\) hyperparameter increases while keeping \(\sigma_{e}\) small, the posterior mean improves greatly, but the covariance still does not capture the singularity, as shown in Figure 5d. By combining both effects, however, we can strike a balance, and capture the singularity in the posterior distribution without sacrificing the quality of the posterior mean. Considering Figures 5b and 5c, we find that the addition of observation noise allows us to capture the singularity in the posterior covariance, and that the intensity of this covariance can be tuned by increasing \(\gamma\) and \(\sigma_{e}\) simultaneously. Finally, as shown by Figure 5e, it is possible to further improve the posterior mean without losing the structure of the posterior covariance.
### 4.3 2D porous microstructure
Finally, we consider a microscopic domain describing the microscale behavior of a heterogeneous material with randomly generated voids, shown in Figure 6. The complex geometry of this problem is known to produce strain concentrations and regions of large discretization error at multiple locations. Similar to the previous example, the focus of this example is on the strain field \(\mathbf{\varepsilon}(\mathbf{x})\) rather than the solution field \(u(\mathbf{x})\). Along the left and bottom edge of the volume element, horizontal and vertical movement is restricted respectively. A unit horizontal displacement is prescribed along the right edge, and a unit vertical displacement is prescribed along the top edge. In multiscale modeling applications, this would represent a state of biaxial tension at the macroscale. Both meshes consist of triangular elements with linear shape functions, and are shown in Figure 6a. In Figure 6b, the strain field associated with the reference solution \(\varepsilon_{yy}^{\text{f}}\) is shown. The error in the strain field between the coarse and fine discretization \(\Delta\varepsilon_{yy}=|\varepsilon_{yy}^{\text{f}}-\varepsilon_{yy}^{\text{c}}|\) is shown in Figure 6c.
Figure 6: The vertical strain field and corresponding discretization error of the representative volume element problem
For this example, the Dirichlet boundary conditions are treated in a probabilistic manner, following Section 3.4. It is worth noting that in this specific case, the only external forces are those implied by the prescribed displacement. The prior covariance gains an additional term representing the prescribed displacement, analogous to the treatment of Neumann boundary conditions in Section 4.2:
\[\mathbf{\Sigma_{f}}=\alpha^{2}\mathbf{M}+\beta^{2}\mathbf{K_{\text{id}}}\mathbf{\Sigma_{d}} \mathbf{K_{\text{id}}}^{T} \tag{53}\]
The \(\beta\) hyperparameter takes a similar role as the \(\gamma\) hyperparameter in Section 4.2: it increases or decreases the emphasis on the effects of the boundary conditions on the discretization error.
The structure of the boundary covariance matrix \(\mathbf{\Sigma_{d}}\) depends on the assumptions that are imposed on the relation between the different nodes along the boundary. In this case, we assume that all horizontal displacements along the right edge are equal to a single value, \(u_{r}\), and all vertical displacements along the top edge are equal to \(u_{t}\). Both \(u_{r}\) and \(u_{t}\) are assumed to follow a standard normal distribution, and to be statistically independent. From here it follows that the entries of \(\mathbf{\Sigma_{d}}\) are equal to \(1\) if the row and column both correspond to a horizontal degree of freedom on the right edge, or both correspond to a vertical degree of freedom on the top edge, and \(0\) otherwise.
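A small helper (ours) that assembles this \(\mathbf{\Sigma_{d}}\) given the index sets of the tied degrees of freedom; the enriched prior of Equation (53) then follows from the fine-scale coupling block \(\mathbf{K_{\text{id}}}\):

```python
import numpy as np

def rve_dirichlet_cov(n_dofs, right_x_dofs, top_y_dofs):
    """Sigma_d for the microstructure example: all horizontal dofs on the right
    edge share one standard-normal value u_r, all vertical dofs on the top edge
    share u_t, and u_r and u_t are statistically independent."""
    S = np.zeros((n_dofs, n_dofs))
    S[np.ix_(right_x_dofs, right_x_dofs)] = 1.0
    S[np.ix_(top_y_dofs, top_y_dofs)] = 1.0
    return S

# Enriched prior covariance of Equation (53), given K_id from the fine mesh:
#   Sigma_f = alpha**2 * M + beta**2 * K_id @ Sigma_d @ K_id.T
```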
To investigate the influence of the hyperparameters, we again first consider the extreme cases \((\alpha,\beta)=(1,0)\) and \((\alpha,\beta)=(0,1)\). The observation noise is set to a small value (\(\sigma_{e}=10^{-8}\)), so the prior and posterior distributions represent the limit cases of \(\sigma_{e}\to\infty\) and \(\sigma_{e}\to 0\), respectively. In Figure 7, the standard deviation of the prior and posterior distributions of the strain field are plotted. Despite the problem having a very different geometry and loading conditions, clear similarities with the problem of Section 4.2 are apparent (see Figure 4). Again, it can be found that regions of strain concentrations and discretization error are already identified in the prior for \((\alpha,\beta)=(1,0)\), which become more pronounced for \((\alpha,\beta)=(0,1)\). In the posterior for both cases, these features are smothered by the observed data in absence of any observation noise.
Figure 7: Prior and posterior standard deviations of the horizontal strain field (\(\varepsilon_{xx}\))
In order to study the effects of the hyperparameters \(\alpha\), \(\beta\) and \(\sigma_{e}\), the same reparametrization is performed as in Section 4.2, where only the ratios \(\frac{\beta}{\alpha}\) and \(\frac{\sigma_{e}}{\alpha}\) are considered, which essentially reduces \(\alpha\) to a simple scaling parameter of the prior and posterior covariance. In Figure 8, the effects of \(\frac{\beta}{\alpha}\) and \(\frac{\sigma_{e}}{\alpha}\) on the posterior mean and covariance are shown. Although the problem being solved is rather different from the one in Section 4.2, a remarkable similarity to Figure 5 is directly apparent. The main difference between these two figures lies in the quality of the posterior mean in the region where \(\beta\ll\alpha\) and \(\sigma_{e}\ll\alpha\). This is the result of the weak enforcement of the inhomogeneous Dirichlet boundary conditions, letting them be inferred from the observed forces rather than explicitly encoded in the prior distribution. If \(\beta\) is not large enough, the displacement at the inhomogeneous Dirichlet boundary is too constrained to the prior mean. Since the prior mean is zero, the boundary conditions will essentially be treated as homogeneous, thus resulting in a large disagreement between the posterior mean and reference solution. However, given a large enough value of \(\beta\), a posterior mean that is arbitrarily close to the reference solution can be obtained. Similar to the previous example, by increasing \(\sigma_{e}\), regions of strain concentrations and large discretization error can be captured in the posterior covariance.
## 5 Conclusions
In this work, we presented a Bayesian approach to the modeling of finite element discretization error. Two levels of discretization are applied to the domain in a hierarchical manner, which are linked through a Petrov-Galerkin formulation. A prior distribution is assumed over the solution space of the fine discretization, which is then updated using right-hand side information from the coarse discretization only. This yields a posterior distribution with a mean that is close to the fine-scale reference solution, and a covariance representing the uncertainty due to the fact that only coarse-scale information was used to perform the update to the posterior. For the presented class of sparse right-hand side priors, a formal link between the posterior mean and fine-scale reference solution was demonstrated. Additionally, we have shown how the remaining difference between these two fields can cheaply be estimated, which is useful for determining the appropriate magnitude of the |
2310.09697 | Harmonic Interpolation and a Brunn-Minkowski Theorem for Random
Determinants | We describe the harmonic interpolation of convex bodies, and prove a strong
form of the Brunn-Minkowski inequality and characterize its equality case. As
an application we improve a theorem of Berndtsson on the volume of slices of a
pseudoconvex domain. We furthermore apply this to prove subharmonicity of the
expected absolute value of the determinant of a matrix of random vectors
through the connection with zonoids. | Julius Ross, David Witt Nyström | 2023-10-15T01:28:12Z | http://arxiv.org/abs/2310.09697v1 | # Harmonic interpolation and a Brunn-Minkowski theorem for random determinants
###### Abstract.
We describe the harmonic interpolation of convex bodies, and prove a strong form of the Brunn-Minkowski inequality and characterize its equality case. As an application we improve a theorem of Berndtsson on the volume of slices of a pseudoconvex domain. We furthermore apply this to prove subharmonicity of the expected absolute value of the determinant of a matrix of random vectors through the connection with zonoids.
2020 Mathematics Subject Classification: 32J27, 52A40, 52A21 (Primary) 32U05, 14C17, 52A40 (Secondary)
## 1. Introduction
Let \(A\) and \(B\) be convex subsets of \(\mathbb{R}^{n}\). The Minkowski sum of \(A\) and \(B\) is defined as
\[A+B:=\{a+b:a\in A,b\in B\},\]
and the famous Brunn-Minkowski inequality says that
\[|A+B|^{1/n}\geq|A|^{1/n}+|B|^{1/n},\]
where \(|\cdot|\) denotes the Euclidean volume.
We wish to consider the interpolation of convex sets. Given convex \(A\) and \(B\) there is a natural interpolating family \(A_{t}:=(1-t)A+tB\), \(t\in[0,1]\), and it follows from the Brunn-Minkowski inequality that the map
\[t\mapsto|A_{t}|^{1/n}\]
is concave in \(t\in[0,1]\).
For an infinite family of convex sets, there are many possible interpolations. To consider this in more detail, suppose \(\Omega\) is a smoothly bounded domain in \(\mathbb{R}^{m}\) and that we have a continuous family of convex bodies (i.e. compact convex sets) \(A_{\tau}\subset\mathbb{R}^{n}\) parametrized by \(\tau\in\partial\Omega\). If \(\Omega\) is itself convex, a natural interpolation can be obtained by considering
\[A=\text{Convexhull}\left(\bigcup_{\tau\in\partial\Omega}A_{\tau}\times\{ \tau\}\right)\subseteq\mathbb{R}^{n+m}\]
and letting \(A_{x}\) be the fiber of \(A\) over \(x\in\Omega\). We call this the _convex interpolation_ of \(\{A_{\tau}\}\). Then directly from the Brunn-Minkowski inequality it follows that the map \(x\mapsto|A_{x}|^{1/n}\) is concave in \(x\in\Omega\).
If \(\Omega\) is not convex, the convex interpolation is not suitable since it will not necessarily agree with the given boundary data \(\{A_{\tau}\}\) on \(\partial\Omega\). For general \(\Omega\) a natural interpolation was proposed in our recent paper [3] that we now describe.
First note that if \(A_{y}\) is a continuous family of convex bodies in \(\mathbb{R}^{n}\) over some parameter set \(D\subseteq\mathbb{R}^{m}\) and \(\mu\) is a Radon measure on \(D\), then there is a set-integral
\[\int_{D}A_{y}d\mu(y)\]
which is itself a subset of \(\mathbb{R}^{n}\). To define this precisely recall that the support function of a convex set \(A\) is given by
\[h_{A}(\xi):=\sup_{\zeta\in A}(\zeta\cdot\xi)\]
and has the property that
\[h_{A}\text{ is convex and }h_{A}(t\xi)=|t|h_{A}(\xi)\text{ for }t\in\mathbb{R}. \tag{1}\]
On the other hand, if \(h\) is a function with those two properties then \(h\) is the support function of a unique closed convex set, which we denote by \(A(h)\).
It is an elementary exercise to see that \(h_{A+B}=h_{A}+h_{B}\) and more generally
\[h_{t_{1}A_{1}+\ldots+t_{k}A_{k}}=t_{1}h_{A_{1}}+...+t_{k}h_{A_{k}}.\]
Furthermore if \(A_{t}\) are convex sets such that \(A_{t}\to A\) in the Hausdorff topology, then for each \(\xi\), \(h_{A_{t}}(\xi)\to h_{A}(\xi)\).
**Definition 1.1**.: Let \(d\mu\) be a measure on a measurable set \(D\) in \(\mathbb{R}^{m}\), and \(A_{y}\) be a convex set for each \(y\in D\). We define the _Minkowski integral_\(\int_{D}A_{y}d\mu(y)\) as
\[\int_{D}A_{y}d\mu(y):=A\left(\int_{D}h_{A_{y}}d\mu(y)\right).\]
Such set-valued integrals have been considered in various places, for example [1, 5]. As one would expect, some conditions are needed to ensure that the Minkowski integral is well-defined. For our purpose the following is sufficient: assume \(D\) is compact, \(d\mu\) is a Radon measure and \(y\mapsto A_{y}\) is a continuous family of convex bodies. Then for each \(\xi\) the map \(y\mapsto h_{A_{y}}(\xi)\) is continuous, so \(\int_{D}h_{A_{y}}d\mu(y)\) exists and has properties (1), and thus \(\int_{D}A_{y}d\mu(y)\) exists.
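As an illustration of Definition 1.1 (ours, not part of the original text), the following Python sketch checks numerically that support functions are additive under weighted Minkowski combinations of polytopes, which is exactly the identity the Minkowski integral is built on; the polygons and weights are arbitrary assumptions:

```python
import numpy as np
from scipy.spatial import ConvexHull

def support(vertices, xi):
    """Support function h_A(xi) = max_{a in A} a . xi for a polytope."""
    return float(np.max(vertices @ xi))

def minkowski_combination(bodies, weights):
    """Vertices of sum_i w_i A_i for convex polytopes (a discrete Minkowski integral)."""
    pts = np.zeros((1, bodies[0].shape[1]))
    for w, V in zip(weights, bodies):
        pts = (pts[:, None, :] + w * V[None, :, :]).reshape(-1, pts.shape[-1])
    return pts[ConvexHull(pts).vertices]

# Two arbitrary convex polygons and weights (assumed example data):
square = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], float)
tri = np.array([[0, 1], [-1, -1], [1, -1]], float)
w = [0.3, 0.7]
S = minkowski_combination([square, tri], w)
xi = np.array([0.6, 0.8])
lhs = support(S, xi)
rhs = w[0] * support(square, xi) + w[1] * support(tri, xi)
print(np.isclose(lhs, rhs))  # True: h_{w1 A1 + w2 A2} = w1 h_{A1} + w2 h_{A2}
```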
**Definition 1.2**.: Let \(\Omega\subset\mathbb{R}^{m}\) be a smoothly bounded domain. The _harmonic interpolation_ of a continuous family \(\{A_{\tau}\}_{\tau\in\partial\Omega}\) of convex bodies is defined as
\[A_{x}:=\int_{\partial\Omega}A_{\tau}d\mu_{x}(\tau),\]
where \(d\mu_{x}\) is the harmonic measure on \(\partial\Omega\) with respect to \(x\in\Omega\).
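For the unit disk, the harmonic measure \(d\mu_{x}\) is given by the Poisson kernel, so Definition 1.2 can be evaluated at the level of support functions by quadrature. The sketch below is our illustration with an assumed boundary family (disks of varying radius and center), not an example from the paper; it computes \(h_{A_{x}}(\xi)=\int_{\partial\Omega}h_{A_{\tau}}(\xi)\,d\mu_{x}(\tau)\):

```python
import numpy as np

def poisson_kernel(x, thetas):
    """Density of harmonic measure on the unit circle seen from x in the disk."""
    b = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
    return (1 - x @ x) / np.sum((b - x) ** 2, axis=1)

thetas = np.linspace(0, 2 * np.pi, 2000, endpoint=False)

def h_boundary(theta, xi):
    # Boundary bodies A_theta: disks of radius r(theta) centered at c(theta);
    # their support function is h(xi) = c . xi + r |xi|.  (Assumed family.)
    r = 1.0 + 0.5 * np.cos(theta)
    c = np.stack([0.2 * np.cos(theta), np.zeros_like(theta)], axis=-1)
    return c @ xi + r * np.linalg.norm(xi)

def h_interp(x, xi):
    """Support function of the harmonic interpolation A_x (Definition 1.2)."""
    w = poisson_kernel(x, thetas)
    w /= w.sum()
    return np.sum(w * h_boundary(thetas, xi))

print(h_interp(np.array([0.3, -0.1]), np.array([1.0, 0.0])))
```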
The harmonic interpolation and convex interpolation may differ, even when \(\Omega\) is convex. We argue that the former is better suited in some contexts, one of which is the theory of zonoids.
A _zonotope_ is a convex set that can be written as the Minkowski sum of line segments. Clearly any zonotope is a convex polytope, but it is easy to see that not all convex polytopes are zonotopes. A _zonoid_ is a convex set which can be approximated arbitrarily well (in the Hausdorff topology) by zonotopes, or equivalently a convex set that can be written as the Minkowski integral of line segments (see for example [4] for an introduction to zonoids). The harmonic interpolation has the property that it preserves zonoids; i.e. if each boundary set \(A_{\tau}\) is a zonoid then each member of the interpolating family \(A_{x}\) will also be a zonoid (and this is not true for the convex interpolation, even when \(\Omega\) is convex).
### Acknowledgements
This material is based upon work supported by the National Science Foundation under Grant No. DMS-1749447. The second named author is supported by the Swedish Research Council and the Goran Gustafsson Foundation for Research in Natural Sciences and Medicine. The authors thank Bo Berndtsson and Dario Cordero-Erausquin for conversations on this topic.
## 2. A Brunn-Minkowski Inequality for Harmonic Interpolation
We continue to assume \(\Omega\subset\mathbb{R}^{m}\) is a smoothly bounded domain (which by convention is also bounded), and \(A_{\tau}\) for \(\tau\in\partial\Omega\) is a continuous family of convex bodies in \(\mathbb{R}^{n}\). In [3] we proved the following weak version of a Brunn-Minkowski inequality for the harmonic interpolation.
**Theorem 2.1**.: If \(A_{x}\) is the harmonic interpolation of \(\{A_{\tau}\}\) then \(x\mapsto\log|A_{x}|\) is superharmonic in \(x\).
Our main result in this short note is a direct proof of the following stronger version:
**Theorem 2.2**.: If \(A_{x}\) is the harmonic interpolation of \(\{A_{\tau}\}\) then \(x\mapsto|A_{x}|^{1/n}\) is superharmonic in \(x\).
Proof.: Let \(B_{\epsilon}(x)\) denote the Euclidean ball of radius \(\epsilon\) centered at \(x\). We need to show that if \(B_{\epsilon}(x)\subseteq\Omega\) then
\[|A_{x}|^{1/n}\geq\int_{\partial B_{\epsilon}(x)}|A_{y}|^{1/n}dS(y),\]
where \(dS\) denotes the normalized Euclidean surface measure on \(\partial B_{\epsilon}(x)\).
A standard property of harmonic measures is that
\[\mu_{x}=\int_{\partial B_{\epsilon}(x)}\mu_{y}dS(y),\]
and this clearly implies that
\[A_{x}=\int_{\partial B_{\epsilon}(x)}A_{y}dS(y).\]
Now we approximate the surface measure \(dS\) with a sequence of atomic measures \(\nu_{k}=\sum_{i=1}^{N_{k}}\lambda_{i,k}\delta_{y_{i,k}}\) chosen so \(\nu_{k}\to dS\) weakly as \(k\to\infty\). Then for \(k\) sufficiently large \(\sum_{i=1}^{N_{k}}\lambda_{i,k}A_{y_{i,k}}\) is arbitrarily close (in the Hausdorff distance) to \(A_{x}\). Thus for any \(\delta>0\) and \(k\) sufficiently large we have
\[|A_{x}|^{1/n}+\delta\geq|\sum_{i=1}^{N_{k}}\lambda_{i,k}A_{y_{i,k}}|^{1/n} \geq\sum_{i=1}^{N_{k}}\lambda_{i,k}|A_{y_{i,k}}|^{1/n}\geq\int_{\partial B_{ \epsilon}(x)}|A_{y}|^{1/n}dS(y)-\delta,\]
where the second inequality follows from the classical Brunn-Minkowski inequality. Letting \(\delta\to 0\) we have
\[|A_{x}|^{1/n}\geq\int_{\partial B_{\epsilon}(x)}|A_{y}|^{1/n}dS(y),\]
which completes the proof.
**Definition 2.3**.: We say that a continuous family of convex sets \(A_{x}\subseteq\mathbb{R}^{n}\) over some domain \(\Omega\subseteq\mathbb{R}^{m}\) is _subharmonic_ over \(\Omega\) if whenever \(B_{\epsilon}(x)\subseteq\Omega\) we have that
\[A_{x}\supseteq\int_{\partial B_{\epsilon}(x)}A_{y}dS(y).\]
We then get the following corollary of Theorem 2.2.
**Corollary 2.4**.: If \(A_{x}\) is subharmonic over \(\Omega\) then \(x\mapsto|A_{x}|^{1/n}\) is superharmonic.
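Theorem 2.2 and Corollary 2.4 can be checked numerically for zonotopes, whose areas are computable directly from their generators. The sketch below (an arbitrary assumed boundary family, not an example from the paper) compares \(|A_{0}|^{1/2}\) at the center of the unit disk, where harmonic measure is the uniform measure on the circle, with the circle average of \(|A_{\theta}|^{1/2}\):

```python
import numpy as np

def det2(v, w):
    return v[0] * w[1] - v[1] * w[0]

def zonotope_area(gens_list):
    # Area of the 2D zonotope sum_i [0, v_i] is sum_{i<j} |det(v_i, v_j)|.
    return sum(abs(det2(gens_list[i], gens_list[j]))
               for i in range(len(gens_list))
               for j in range(i + 1, len(gens_list)))

def gens(theta):
    # Assumed boundary family over the unit circle: two rotating segments.
    return [np.array([np.cos(theta), np.sin(theta)]),
            np.array([1.0, 0.5 * np.sin(theta)])]

thetas = np.linspace(0, 2 * np.pi, 180, endpoint=False)
# At the center of the disk the harmonic measure is uniform on the circle,
# so the harmonic interpolation is the Minkowski average of the A_theta.
center_gens = [v / len(thetas) for th in thetas for v in gens(th)]
lhs = zonotope_area(center_gens) ** 0.5
rhs = np.mean([zonotope_area(gens(th)) ** 0.5 for th in thetas])
print(lhs >= rhs, lhs, rhs)   # mean value inequality of Theorem 2.2 at x = 0
```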
As an application we can give a strengthening of the following theorem of Berndtsson [2].
**Theorem 2.5**.: Let \(U\subseteq\mathbb{C}^{n+m}\) be a pseudoconvex domain with the property that if \((x_{1}+iy_{1},...,x_{n}+iy_{n},w)\in U\) then \((x_{1}+iy_{1}^{\prime},...,x_{n}+iy_{n}^{\prime},w)\in U\) for all \(y_{1}^{\prime},\ldots,y_{n}^{\prime}\), and let \(U_{w}:=\{x\in\mathbb{R}^{n}:(x,w)\in U\}\).
Then the map \(w\mapsto-\log|U_{w}|\) is plurisubharmonic in \(w\).
**Theorem 2.6**.: In the same setting as Theorem 2.5, the map \(w\mapsto-|U_{w}|^{1/n}\) is plurisubharmonic.
Proof.: Without loss of generality we can assume that \(m=1\). Note that the pseudo-convexity and symmetry of \(U\) implies that \(U_{w}\) is convex for all \(w\). By approximation we can also without loss of generality assume that the family \(U_{w}\) is bounded and continuous. We claim that the family \(U_{w}\) is subharmonic. Note that for two closed convex sets \(A\) and \(B\) we have that \(A\supseteq B\) if and only if \(h_{A}\geq h_{B}\), so \(U_{w}\) is subharmonic if and only if for any \(\xi\in\mathbb{R}^{n}\), \(h_{U_{w}}(\xi)=\sup_{x\in U_{w}}(x\cdot\xi)\) is superharmonic in \(w\).
Let \(\phi\) be a plurisubharmonic exhaustion function for \(U\) which we can assume to be independent of \(\operatorname{Im}(\mathbb{C}^{n})\), just as \(U\) itself. Note that \(\phi_{R}:=\max(\phi-R,0)\) is also plurisubharmonic and independent of \(\operatorname{Im}(\mathbb{C}^{n})\), and that the same is true for \(\psi_{R}(x+iy,w):=\phi_{R}(x+iy,w)-x\cdot\xi\). Thus by Kiselman's minimum principle \(\inf_{x\in U_{w}}\psi_{R}(x,w)\) is subharmonic in \(w\). We now note that
\[\sup_{x\in U_{w}}(x\cdot\xi)=-\lim_{R\to\infty}\inf_{x\in U_{w}}\psi_{R}(x,w),\]
and hence it follows that \(h_{U_{w}}(\xi)\) is superharmonic and thus \(U_{w}\) is subharmonic. That \(-|U_{w}|^{1/n}\) is subharmonic now follows from Corollary 2.4.
## 3. Characterization of the extremal case
By Corollary 2.4 we know that if \(\{A_{x}\}_{x\in\Omega}\) is a subharmonic family of convex bodies in \(\mathbb{R}^{n}\) over a domain \(\Omega\) then \(x\mapsto|A_{x}|^{1/n}\) is superharmonic. Our next result characterizes when this map is in fact harmonic.
**Theorem 3.1**.: The map \(x\mapsto|A_{x}|^{1/n}\) is harmonic if and only if we can write \(A_{x}=c_{x}B+d_{x}\) where \(B\subset\mathbb{R}^{n}\) is a fixed convex body, and \(c_{x}\) and \(d_{x}\) are harmonic functions on \(\Omega\) taking values in \(\mathbb{R}_{+}\) and \(\mathbb{R}^{n}\) respectively.
Proof.: Let \(\Omega^{\prime}\) be a relatively compact subdomain of \(\Omega\) with smooth boundary. Since \(A_{x}\) is subharmonic it must dominate the harmonic interpolation of \(A_{y}\) restricted to \(\partial\Omega^{\prime}\), but since \(|A_{x}|^{1/n}\) is assumed to be harmonic we must have that \(A_{x}\) is equal to the harmonic interpolation.
Write \(\partial\Omega^{\prime}\) as the disjoint union of a finite number of measurable subsets \(D_{i}\) and let
\[B_{i}:=\int_{D_{i}}A_{y}d\mu_{x}(y),\]
where \(\mu_{x}\) is the harmonic measure on \(\partial\Omega^{\prime}\) with respect to \(x\). Then \(A_{x}=\sum_{i}B_{i}\), and by the Brunn-Minkowski inequality we have
\[|A_{x}|^{1/n}\geq\sum_{i}|B_{i}|^{1/n}.\]
On the other hand, as in the proof of Theorem 2.2 one sees that
\[|B_{i}|^{1/n}\geq\int_{D_{i}}|A_{y}|^{1/n}d\mu_{x}(y).\]
But \(|A_{x}|^{1/n}\) being harmonic then implies the equality
\[|A_{x}|^{1/n}=\sum_{i}|B_{i}|^{1/n}.\]
The well-known characterization of the equality case of the Brunn-Minkowski inequality then says that we can write \(B_{i}=c_{i}B+d_{i}\), where \(B\) is some fixed convex set, for some \(c_{i}\in\mathbb{R}_{+}\) and \(d_{i}\in\mathbb{R}^{n}\). We may normalize \(B\) to have volume one and center of gravity at the origin. We thus also see that \(A_{x}=c_{x}B+d_{x}\) where \(c_{x}=\sum_{i}c_{i}\) and \(d_{x}=\sum_{i}d_{i}\).
Now if we decompose a fixed \(D_{i}\) further into disjoint pieces \(E_{j}\) the same argument yields that for each \(j\) there are \(c_{j}^{\prime}\in\mathbb{R}_{+}\) and \(d_{j}^{\prime}\in\mathbb{R}^{n}\) such that \(\int_{E_{j}}A_{y}dS(y)=c_{j}^{\prime}B+d_{j}^{\prime}\). As we can make the decomposition arbitrarily fine the continuity of \(A_{y}\) implies that there are continuous functions \(c_{y}\) and \(d_{y}\) on \(\partial\Omega^{\prime}\) such that \(A_{y}=c_{y}B+d_{y}\). It follows that \(A_{x}=c_{x}B+d_{x}\) where \(c_{x}\) is the harmonic extension of \(c_{y}\) and \(d_{x}\) is the harmonic extension of \(d_{y}\) to \(\Omega^{\prime}\).
As this can be done for any relatively compact subdomain of \(\Omega\) with smooth boundary, the result follows.
## 4. A Brunn-Minkowski theorem for expected absolute random determinants
Consider now a random \((n,n)\) matrix \(M_{Y}\) whose columns are iid copies of a random vector \(Y\), corresponding to a Borel probability measure \(\nu_{Y}\) on \(\mathbb{R}^{n}\). We are then interested in the expected absolute value of the determinant (ead) \(E|\det M_{Y}|\) of \(M_{Y}\).
Suppose \(Y_{\tau}\) is a family of random vectors parametrized by the boundary of a smoothly bounded domain \(\Omega\subseteq\mathbb{R}^{m}\). We assume that each \(Y_{\tau}\) has finite expectation. Then a natural interpolating family \(Y_{x}\) over \(\Omega\) is given by letting
\[\nu_{Y_{x}}:=\int_{\partial\Omega}\nu_{Y_{\tau}}d\mu_{x}(\tau),\]
where as before \(d\mu_{x}\) denotes the harmonic measure with respect to \(x\).
**Theorem 4.1**.: The map \(x\mapsto(E|\det M_{Y_{x}}|)^{1/n}\) is superharmonic in \(x\).
Our proof relies on the connection between eads and a special class of convex sets called zonoids which was established in [5, Thm 3.1]: to any random vector \(Y\) with finite expectation we may associate a zonoid
\[Z(Y):=\int_{\mathbb{R}^{n}}[0,y]d\nu_{Y}(y).\]
Then the main result [5, Thm. 3.2] says that
\[E|\det M_{Y}|=n!|Z(Y)|. \tag{2}\]
Proof of Theorem 4.1.: Note that
\[Z(Y_{x}) =\int_{\mathbb{R}^{n}}[0,y]d\nu_{Y_{x}}(y)=\int_{\mathbb{R}^{n}} \int_{\partial\Omega}[0,y]d\mu_{x}(\tau)d\nu_{Y_{\tau}}(y)=\] \[=\int_{\partial\Omega}\int_{\mathbb{R}^{n}}[0,y]d\nu_{Y_{\tau}}(y )d\mu_{x}(\tau)=\int_{\partial\Omega}Z(Y_{\tau})d\mu_{x}(\tau),\]
i.e. \(Z(Y_{x})\) is the harmonic interpolation of \(Z(Y_{\tau})\). Thanks to the volume equality (2) the result follows immediately from Theorem 2.2.
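For a random vector supported on finitely many atoms, both sides of the volume equality (2) can be computed exactly, which gives a quick sanity check of the identity in dimension \(n=2\); the atoms below are an arbitrary assumption, not data from the paper:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
k, n = 6, 2
atoms = rng.standard_normal((k, n))   # Y uniform on k atoms (assumed example)
p = np.full(k, 1.0 / k)

def det2(v, w):
    return v[0] * w[1] - v[1] * w[0]

# E|det M_Y| summed exactly over all (i, j) column choices
ead = sum(p[i] * p[j] * abs(det2(atoms[i], atoms[j]))
          for i in range(k) for j in range(k))

# Z(Y) = sum_i p_i [0, y_i]; area of a 2D zonotope = sum_{i<j} |det| of generators
zon_area = sum(abs(det2(p[i] * atoms[i], p[j] * atoms[j]))
               for i in range(k) for j in range(i + 1, k))

print(np.isclose(ead, math.factorial(n) * zon_area))  # True, matching Eq. (2)
```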
|
2304.01354 | Functional Knowledge Transfer with Self-supervised Representation
Learning | This work investigates the unexplored usability of self-supervised
representation learning in the direction of functional knowledge transfer. In
this work, functional knowledge transfer is achieved by joint optimization of
self-supervised learning pseudo task and supervised learning task, improving
supervised learning task performance. Recent progress in self-supervised
learning uses a large volume of data, which becomes a constraint for its
applications on small-scale datasets. This work shares a simple yet effective
joint training framework that reinforces human-supervised task learning by
learning self-supervised representations just-in-time and vice versa.
Experiments on three public datasets from different visual domains, Intel
Image, CIFAR, and APTOS, reveal a consistent track of performance improvements
on classification tasks during joint optimization. Qualitative analysis also
supports the robustness of learnt representations. Source code and trained
models are available on GitHub. | Prakash Chandra Chhipa, Muskaan Chopra, Gopal Mengi, Varun Gupta, Richa Upadhyay, Meenakshi Subhash Chippa, Kanjar De, Rajkumar Saini, Seiichi Uchida, Marcus Liwicki | 2023-03-12T21:14:59Z | http://arxiv.org/abs/2304.01354v2 | # Functional Knowledge Transfer with Self-supervised Representation Learning
###### Abstract
This work investigates the unexplored usability of self-supervised representation learning in the direction of functional knowledge transfer. In this work, functional knowledge transfer is achieved by joint optimization of self-supervised learning pseudo task and supervised learning task, improving supervised learning task performance. Recent progress in self-supervised learning uses a large volume of data, which becomes a constraint for its applications on small-scale datasets. This work shares a simple yet effective joint training framework that reinforces human-supervised task learning by learning self-supervised representations just-in-time and vice versa. Experiments on three public datasets from different visual domains, Intel Image, CIFAR, and APTOS, reveal a consistent track of performance improvements on classification tasks during joint optimization. Qualitative analysis also supports the robustness of learnt representations. Source code and trained models shall be made available on GitHub 1.
self-supervised learning, functional knowledge transfer, joint training, representation learning, computer vision
Footnote 1: [https://github.com/prnkashchhipa/Functional_Knowledge_Transfer_SSL](https://github.com/prnkashchhipa/Functional_Knowledge_Transfer_SSL).
## I Introduction
The concept of functional knowledge transfer [1] has been explored for multi-task learning problems in computer vision [2, 3, 4] in the context of simultaneous training and joint optimization of multiple tasks. Typically, functional knowledge transfer is employed for end-to-end joint training and optimization of multiple supervised learning tasks. Representational knowledge transfer, where pretraining and downstream task learning are done sequentially, has been thoroughly investigated and has shown success in self-supervised learning. So far, functional knowledge transfer in self-supervised learning has not been studied, leaving a research gap.
This study uses functional knowledge transfer between self-supervised representation learning and other supervised downstream tasks. Figure 1 compares both knowledge transfer approaches. The proposed method jointly optimizes contrastive self-supervised learning with classification task learning on ResNet-50 [8] backbone, explored on three public datasets of different visual domains, CIFAR10 [15], Intel Image [17], and Aptos [16]. The proposed approach enhances supervised task performance on all three datasets, supporting the hypothesis. Quantitative and qualitative comparisons are made between the proposed and conventional knowledge transfer approach. The following are the main contributions of this work:
1. We explore functional knowledge transfer with self-supervised representation learning, towards making it applicable to small batch sizes and small-scale datasets.
2. We hypothesize that self-supervised learning reinforces supervised task learning and vice versa.
With these contributions, the proposed approach improves supervised task performance on all three datasets, supported by qualitative results, and provides preliminary empirical support for the hypothesis.
Fig. 1: The figure compares the proposed functional knowledge transfer approach, in the context of self-supervised learning and supervised task learning, with the conventional representational knowledge transfer approach, where self-supervised pretraining and supervised task learning are performed in a sequential manner.
## II Related Work
Joint-embedding-architecture based self-supervised learning has shown significant advances in the label-free representation learning paradigm. It is based on learning similarity between transformed views of input images; according to the way robust features are learnt while avoiding collapsed representations, it is divided into several categories, e.g., i) Contrastive Methods (SimCLR [10], MoCo [5]), ii) Distillation (BYOL [12], SimSiam [11]), iii) Clustering (SwAV [6]), and iv) Information Maximization (Barlow Twins [13], VICReg [14]). All these methods have explored the representational knowledge transfer approach, where pretraining is performed and the learned parameters are transferred as knowledge to enable downstream tasks. However, functional knowledge transfer and simultaneous training are unexplored, although some work has been carried out to exploit label details in self-supervised methods [7], especially contrastive learning.
On the other side, multi-task learning [2, 3, 4] has explored functional knowledge transfer, since simultaneous training is its natural requirement, and has shown progress toward improved performance and computational efficiency. Self-supervised learning approaches for functional knowledge transfer remain unexplored; such integration with other learning tasks could make self-supervised algorithms computationally efficient and adaptable to small datasets.
## III Method
The proposed method enables a specific type of inductive transfer, called functional knowledge transfer [1], on the self-supervised representation learning approach by incorporating simultaneous training with downstream task learning. Specifically, the proposed method employs the contrastive learning method [10] for self-supervised representation learning and classification as the downstream task on multiple datasets, CIFAR10 [15], Aptos [16], and Intel Image [17]. The following section describes the method in detail.
Data \(D:(X,Y)\) is a set of input sample pairs \((x,y)\), where \(x\in\mathbb{R}^{d}\) is the input image data of \(d\) dimensions and \(y\) is the corresponding human annotation from the annotation space \(\mathcal{C}\). The data is defined as \(D:\{(x_{1},y_{1}),...,(x_{n},y_{n})\}\subseteq\mathbb{R}^{d}\times\mathcal{C}\).
### _Contrastive Self-supervised Learning_
To define the joint-embedding self-supervised learning objective followed in contrastive learning (SimCLR [10]), a set of \(K\) non-learnable transformations \(\mathcal{T}:\{t_{k}\}_{k\in K}\) is defined. These are image-processing based augmentations that provide transformed views \((x\ ^{\prime},x\ ^{\prime\prime})\) of an input image, to support invariant feature learning. Further, a learnable function \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{m}\) parameterized by learnable parameters \(\Theta_{f}\), which is the Convolutional Neural Network (CNN) backbone, and another learnable function \(g:\mathbb{R}^{m}\rightarrow\mathbb{R}^{\tilde{m}}\) parameterized by learnable parameters \(\Theta_{g}\), which is the projector network, are defined. With that, the Noise Contrastive Estimation [9] based self-supervised contrastive learning objective, the NT-Xent (Normalized Temperature-scaled Cross Entropy) loss, is defined in Eq. 1.
\[\mathcal{L}_{SSL}=-\sum_{(x\ ^{\prime},x\ ^{\prime\prime})\in \mathcal{T}(X)}\log\frac{\mathrm{e}^{\mathcal{A}}}{\sum_{k=1}^{|X|}1_{[k\neq x \ ^{\prime}]}\mathrm{e}^{\mathcal{B}}} \tag{1}\] \[\mathcal{A}=(sim(g(\Theta_{g};f(\Theta_{f};x\ ^{\prime})),g(\Theta_{g};f(\Theta_{f};x ^{\prime\prime})))/\tau)\] (2) \[\mathcal{B}=(sim(g(\Theta_{g};f(\Theta_{f};x\ ^{\prime})),g( \Theta_{g};f(\Theta_{f};x^{k})))/\tau) \tag{3}\]
where \(\mathcal{A}\) defines the similarity for positive pairs, \(\mathcal{B}\) constitutes the similarity for negative pairs in the denominator of Eq. 1, \(sim\) is the cosine similarity, \(\tau\) is the temperature scale parameter, and \(\mathcal{L}_{SSL}\) is the contrastive loss.
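For concreteness, a minimal sketch of the NT-Xent objective of Eq. 1 is given below; this is our illustration, not the authors' released code, and `z1`, `z2` stand for the projector outputs of the two augmented views:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss of Eq. 1; z1, z2 are projector outputs g(f(x')), g(f(x''))
    for the two augmented views of a batch, each of shape (B, m)."""
    B = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # 2B x m, unit norm
    sim = z @ z.t() / tau                                # pairwise cosine / tau
    mask = torch.eye(2 * B, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))           # drop self-similarities
    # the positive of view-1 sample i is view-2 sample i, and vice versa
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)]).to(z.device)
    return F.cross_entropy(sim, targets)
```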
### _Supervised Task Learning_
The supervised learning objective for the mentioned downstream task of classification can be expressed in terms of the cross-entropy loss \(\mathcal{L}_{CE}\), defined in Eq. 4.
\[\mathcal{L}_{CE}=-\frac{1}{|D|}\sum_{(x,y)\in D}\sum_{c\in\mathcal{C}}y_{c}\log( f(\Theta;x_{c})) \tag{4}\]
### _Representational Knowledge Transfer_
Representational Knowledge Transfer is extensively explored in self-supervised learning, not only in contrastive learning but also in other self-supervised paradigms, i.e., distillation [11, 12] and information maximization [13, 14]. This type of knowledge transfer comprises two stages;
1. The first stage is self-supervised pretraining of the CNN backbone, without requiring labels, which learns invariant representations of the underlying visual concepts by similarity learning, as described in Eq. 1.
2. The second stage is downstream supervised task learning, in which the learnt representations from stage one are used to initialize the parameters of the CNN encoder, and supervised training is performed in accordance with the task, e.g., classification, as described in Eq. 4.
The first part of Figure 1 symbolically depicts the process.
### _Functional Knowledge Transfer_
Functional knowledge transfer in the context of self-supervised learning is defined by jointly optimizing the self-supervised learning objective with the supervised task learning objective. The \(\mathcal{L}_{FKT}\) loss described in Eq. 5 is a single-stage process in which parameter learning is simultaneous and influenced by both loss objectives in a just-in-time manner. \(\lambda\) is a parameter for balancing the losses; however, it is kept at \(1\) in all experiments.
\[\mathcal{L}_{FKT}=\mathcal{L}_{CE}+\lambda\ \mathcal{L}_{SSL} \tag{5}\]
The second part of Figure 1 symbolically demonstrates the process.
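A minimal sketch of one joint training step implementing Eq. 5 is given below; this is our illustration, not the authors' released code, and `encoder`, `projector`, `classifier`, `augment`, and the `nt_xent` function from the previous sketch are assumed components:

```python
import torch
import torch.nn.functional as F

lam = 1.0  # the balancing parameter lambda, kept 1 in all experiments

def fkt_step(x, y, encoder, projector, classifier, augment, optimizer):
    """One joint optimization step of Eq. 5: L_FKT = L_CE + lambda * L_SSL."""
    x1, x2 = augment(x), augment(x)                 # two stochastic views
    loss_ssl = nt_xent(projector(encoder(x1)),
                       projector(encoder(x2)))      # contrastive loss, Eq. 1
    logits = classifier(encoder(x))                 # supervised head
    loss_ce = F.cross_entropy(logits, y)            # Eq. 4
    loss = loss_ce + lam * loss_ssl                 # Eq. 5, jointly backpropagated
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```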
_Analytical Reasoning_: Functional knowledge transfer in the context of self-supervised learning with supervised task learning is based on the mutually reinforcing effects of the two tasks. More concretely, it is shown in Figure 3 and defined as:
* The self-supervised learning objective shares invariant, generalized discriminative features, which reinforces task-specific feature learning for the given human annotation
* Supervised task learning shares robust semantic information (e.g., categorization, clusters of similar concepts) about the underlying visual concepts of the image, backed by human knowledge, which reinforces similarity learning of semantically similar visual concepts
This bi-directional constructive reinforcement improves learning of both tasks, which can enable contrastive learning on relatively smaller batch sizes and smaller datasets and improve performance on the supervised downstream task, as shown in Figure 3.
## IV Datasets
This study uses public datasets of natural geographic scenes, atomic objects, and medical images to investigate functional knowledge transfer on self-supervised representation learning in diverse visual concepts. Table I summarizes the three datasets used in this work.
## V Experimental Details
To evaluate the applicability of the contrastive self-supervised learning method in the functional knowledge transfer approach, detailed experimentation was performed on three public datasets, CIFAR10 [15], Intel Image [17], and Aptos [16], from diverse visual domains. Functional knowledge transfer is employed by joint training of self-supervised (simCLR [10]) and supervised task learning (classification), as mentioned in Section III. A comparative study is performed by benchmarking the proposed approach against the conventional approach of representational knowledge transfer, where the model is pretrained and then trained for the downstream task.
Methodological investigations are preferred; hence, common hyperparameters are configured for all three datasets with both knowledge transfer approaches. To emphasize a less compute-intensive approach, single-GPU implementation is preferred with a ResNet-50 [8] backbone and a batch size of 256 for contrastive learning, which is much smaller than in the original work. For this very reason, contrastive pretraining on CIFAR is performed with batch size 256, which was not available elsewhere. Pretraining, the downstream task, and joint training are configured for 100 epochs. Self-supervised pretraining in both approaches uses the LARS optimizer with learning rate 0.001 and temperature scale 0.5, and employs the standard augmentations suggested in the original work simCLR [10]. The supervised classification tasks in both approaches use the SGD optimizer with a learning rate of 0.025. All the experiments are repeated three times and the mean value of the performance metric is reported with its standard deviation.
## VI Results and Discussions
Table VI describes the multi-class classification performance of the proposed approach by comparing it with the conventional approach for all three datasets. A consistent improvement in accuracy, up to \(1.40\%\), is observed for all three datasets with the proposed functional knowledge transfer approach. It is worth noting that all the results show negligible standard deviation across several trials. The proposed approach also improves performance over previous work on the APTOS and Intel Image datasets, as supported by qualitative analysis. Important observations are briefly described as follows.
**Functional Knowledge Transfer improves performance**: Results comparisons in Table VI clearly show the inspiring trend that functional knowledge transfer improves downstream task performance over the conventional approach, regardless of the dataset. It also outperforms previous works using the same ResNet-50 architecture and beyond, as shown in Figure 5 for the APTOS and Intel Image datasets.
Fig. 3: Demonstrates the bi-directional constructive reinforcement between self-supervised learning and supervised task learning, which enables self-supervision on relatively smaller batch sizes and small-scale datasets and improves classification performance.
Fig. 2: Illustrates the Functional Knowledge Transfer where the contrastive loss and the cross-entropy loss are computed on the self-supervised and supervised tasks respectively and jointly backpropagated, which enables simultaneous training.
**Functional Knowledge Transfer enables efficient self-supervision**: Enabling self-supervised learning on small-scale datasets and smaller batch sizes is another significant outcome of the functional knowledge transfer approach. It supports the hypothesis illustrated in Figure 3, where both tasks reinforce each other's efficiency. However, more investigation is required into efficiently fusing the self-supervised and supervised task loss objectives for even further improved performance.
**Functional Knowledge Transfer demonstrates computational efficiency**: Representational knowledge transfer requires 100 epochs of pretraining followed by 100 epochs of downstream supervised task learning. In contrast, the functional knowledge transfer approach performs better with joint training for 100 epochs. Effectively, functional knowledge transfer requires roughly half the computation, or at least saves the downstream task computation cost. It is also essential to evaluate functional knowledge transfer for domain adaptation and other transfer learning scenarios in future work, since self-supervised learning in representational knowledge transfer is intended for transfer learning.
**Qualitative Robustness**: Quantitative results and performance are also supported by qualitative analysis shown through class activation maps in Figure 4 for two datasets, Intel Image and Aptos, where attention regions are displayed. It clearly shows the ability to attend to the region of interest to capture the essence of visual concepts in the images. When compared to the pretrained and representational knowledge transfer approaches, functional knowledge transfer demonstrated very competitive and even more focused attention region.
**Ablation**: An ablation study is performed with a ResNet-18 backbone on the Intel Image dataset (Table VI), which shows a marginal improvement, motivating further investigation in this direction.
## VII Conclusion
Functional knowledge transfer is explored on contrastive self-supervised learning with classification tasks, where an exciting performance improvement is observed across multiple public datasets. It provides preliminary empirical support for enabling contrastive self-supervised learning on small batches and small-scale datasets by reinforcing the tasks during joint training. This study strongly encourages further investigation of functional knowledge transfer using different self-supervised learning paradigms and supervised learning tasks.
Fig. 4: Pretrained model, representational knowledge transfer, and functional transfer approaches are compared for class activation maps (CAM). First instance is from building category from intel image dataset, and second instance is mild DR category from APTOS dataset. CAM not produced for CIFAR10 dataset due to very small size of input.
Fig. 5: Comparison with previous works: Intel Image (top), Aptos retinopathy fundus (bottom). |
2307.10709 | Gravitational baryogenesis in non-minimal kinetic coupling model | In this work, we consider the gravitational baryogenesis in the framework of
non-minimal derivative coupling model. A mechanism to generate the baryon
asymmetry based on the coupling between the derivative of the Ricci scalar
curvature and the baryon current in context of non-minimal derivative coupling
model is investigated. We show that, in this model, the temperature increases
during the reheating period up to the end of reheating, i.e. the beginning of the
radiation dominated era. Therefore the reheating temperature is larger than the
decoupling temperature. It can be demonstrated that the evaluated baryon
asymmetry does not depend on the coupling constant. In this model we can generate
the baryon asymmetry at low and high reheating temperatures, by considering the high
friction constraint. | Parviz Goodarzi | 2023-07-20T09:05:09Z | http://arxiv.org/abs/2307.10709v2 | # Gravitational baryogenesis in non-minimal kinetic coupling model
###### Abstract
In this work, we consider gravitational baryogenesis in the framework of the non-minimal derivative coupling model. A mechanism to generate the baryon asymmetry, based on the coupling between the derivative of the Ricci scalar curvature and the baryon current in the context of the non-minimal derivative coupling model, is investigated. We show that, in this model, the temperature increases during the reheating period up to the end of reheating, i.e. the beginning of the radiation dominated era. Therefore the reheating temperature is larger than the decoupling temperature. We show that the evaluated baryon asymmetry does not depend on the coupling constant. In this model we can generate the baryon asymmetry at low and high reheating temperatures, by considering the high friction constraint.
## 1 Introduction
One of the greatest puzzles in the standard model of cosmology and astro-particle physics is the dominance of matter over anti-matter. In other words, the number of baryons in the universe is larger than the number of anti-baryons. The cosmic microwave background radiation [1] and the abundance of the primordial light elements from big bang nucleosynthesis (BBN) [2] show that the ratio of the baryon number density \(n_{B}\) to the entropy density \(s\) is
\[Y_{B}\equiv\frac{n_{B}}{s}=(0.864\pm 0.016)\times 10^{-10}. \tag{1}\]
As A. Sakharov showed in Ref. [3], a baryon asymmetry may be dynamically generated given the following conditions: (i) processes that violate baryon number; (ii) violation of charge (C) and charge-parity (CP) symmetry; (iii) departure from thermal equilibrium. These three assumptions are now known as the Sakharov conditions.
Several interesting and applicable mechanisms for the generation of the baryon asymmetry have been proposed. The first suggestion relied on the out-of-equilibrium decay of a massive particle, such as a superheavy GUT gauge or Higgs boson, and was dubbed GUT baryogenesis [4]. Another mechanism, involving the decay of flat directions in supersymmetric models, is known as the Affleck-Dine scenario [5]. Also, the possibility of generating the baryon asymmetry at the electro-weak scale has been considered: the relevant interactions conserve the difference of baryon and lepton number, so that a lepton asymmetry is converted to a baryon asymmetry at the electro-weak scale. This mechanism is known as lepto-baryogenesis [6]. Spontaneous baryogenesis has been proposed, with the characteristic generation of the baryon asymmetry in thermal equilibrium without the necessity of \(C\) and \(CP\) violation [7, 8].
Davoudiasl et al. have proposed, in supergravity, a mechanism for generating the baryon asymmetry on the basis of spontaneous baryogenesis during the expansion of the universe [9]. This mechanism is known as gravitational baryogenesis. In this approach they introduced an interaction between the derivative of the Ricci scalar curvature and the baryon current, \(J^{\mu}\partial_{\mu}R\), which dynamically violates CPT and CP symmetries in an expanding universe. During the last years, other scenarios extending this coupling have been receiving a great amount of attention from many authors. In [10] the effect of the time dependence of the equation of state parameter on gravitational baryogenesis has been considered. Gravitational baryogenesis in \(f(R)\) theories has been considered in [11]. Gravitational baryogenesis in the \(f(T)\) theory of gravity, where \(T\) is the torsion scalar, has also been considered [12]. Some variant forms of gravitational baryogenesis containing the partial derivative of the Gauss-Bonnet scalar coupled to the baryon current are investigated in [13]. Generalized gravitational baryogenesis in \(f(R,T)\), \(f(Q,\tau)\), \(f(T,T_{G})\) and \(f(T,B)\) gravity, where \(\tau\) denotes the trace of the energy-momentum tensor, \(Q\) is the nonmetricity, \(T_{G}\) is the teleparallel equivalent of the Gauss-Bonnet term, and \(B\) denotes the boundary term between torsion and the Ricci scalar, is discussed in [14, 15, 16, 17, 18]. In [19] the baryon asymmetry is generated dynamically during an inflationary epoch powered by ultra-relativistic particle production. In [20, 21, 22] it has been suggested that the anisotropy of the universe can enhance the generation of the baryon asymmetry. The authors of [22] clarified that, if we take into account the gravitino problem (\(T_{RD}<10^{9}GeV\)), gravitational baryogenesis [9] is incapable of explaining the generation of sufficient baryon asymmetry. They show that only if there exists a huge shear in the radiation dominated era is there a small possibility of gravitational baryogenesis.
In this paper, we examine gravitational baryogenesis in the context of the non-minimal derivative coupling model, and consider the effect of "gravitationally enhanced friction" on the evolution of the scalar field and on the baryon asymmetry.
Non-minimal kinetic coupling \(G_{\mu\nu}\partial^{\mu}\varphi\partial^{\nu}\varphi\) is one of the operators of Horndeski's scalar-tensor theory; it was originally introduced in [23, 24, 25] and used to explain Higgs inflation.
With this coupling the inflaton field evolves more slowly relative to the case of standard inflation, due to a gravitationally enhanced friction, which gives the model the capacity to explain Higgs inflation.
A notable feature of the non-minimal derivative coupling with the Einstein tensor is the mechanism of gravitationally enhanced friction during inflation, by which even steep potentials with theoretically natural model parameters can drive cosmic acceleration [26, 27]. Thus it is well motivated to propose gravitational baryogenesis in the context of the non-minimal derivative coupling model. Oscillatory inflation, the reheating process after slow roll, and warm inflation in the presence of a non-minimal kinetic coupling were studied in [28, 29, 30, 31, 32].
In the present work, inspired by the above-mentioned models, we will consider the gravitational baryogenesis mechanism in the non-minimal derivative coupling model. The paper is organized as follows. In Section 2 we briefly introduce the non-minimal derivative coupling model. In Section 3 we examine the conditions for oscillatory inflation, study the reheating phase in this model, and calculate the temperature at the end of the reheating period. In Section 4 we discuss gravitational baryogenesis during the oscillatory-inflaton-dominated era in the context of the non-minimal derivative coupling model, and we also briefly investigate gravitational baryogenesis during the radiation dominated phase. In Section 5 we consider the qualitative implications of the non-minimal derivative coupling by calculating the corresponding baryon asymmetry and compare our results with observation. In the last section we conclude our results.
We use units \(\hbar=c=1\) throughout the paper.
## 2 The model
In this section, we will introduce reheating of universe after inflation in non-minimal kinetic coupling model where rapid oscillatory inflaton decaying to radiation. Let us consider the total action of non-minimal derivative coupling
model [25, 26]
\[S=\int\Big{(}\frac{M_{P}^{2}}{2}R-\frac{1}{2}\Delta^{\mu\nu}\partial_{\mu}\varphi \partial_{\nu}\varphi-V(\varphi)\Big{)}\sqrt{-g}d^{4}x+S_{int}+S_{r}+S_{B}, \tag{2}\]
where \(\Delta^{\mu\nu}=g^{\mu\nu}+\frac{1}{M^{2}}G^{\mu\nu}\), \(G^{\mu\nu}=R^{\mu\nu}-\frac{1}{2}Rg^{\mu\nu}\) is the Einstein tensor, \(M\) is a coupling constant with mass dimension, \(M_{P}=2.4\times 10^{18}GeV\) is the reduced Planck mass, \(S_{r}\) is the radiation action and \(S_{int}\) describes the interaction of the scalar field with radiation. In order to describe gravitational baryogenesis, we define the action \(S_{B}\) through an interaction between the derivative of the Ricci scalar curvature \(\partial_{\mu}R\) and the baryon current \(J^{\mu}\) as [9]
\[S_{B}=\frac{1}{M_{*}^{2}}\int d^{4}x\sqrt{-g}(\partial_{\mu}R)J^{\mu}, \tag{3}\]
where \(M_{*}\) is the cutoff scale of the effective theory. We can obtain the energy-momentum tensor by variation of the action (2) with respect to the metric,
\[T_{\mu\nu}=T_{\mu\nu}^{(\varphi)}+T_{\mu\nu}^{(r)}. \tag{4}\]
Where \(T_{\mu\nu}^{(r)}\) is the energy momentum tensor for radiation described as
\[T_{\mu\nu}^{(r)}=(\rho_{r}+P_{r})u_{\mu}u_{\nu}+P_{r}g_{\mu\nu}, \tag{5}\]
\(u^{\mu}\) is the four-velocity of the radiation and \(T_{\mu\nu}^{(\varphi)}\) is the energy momentum tensor for minimal and non-minimal coupling counterparts of scalar field as follows
\[T_{\mu\nu}^{(\varphi)} = \nabla_{\mu}\varphi\nabla_{\nu}\varphi-\frac{1}{2}g_{\mu\nu}{( \nabla\varphi)}^{2}-g_{\mu\nu}V(\varphi)\] \[-\frac{1}{2}G_{\mu\nu}{(\nabla\varphi)}^{2}-\frac{1}{2}R\nabla_{ \mu}\varphi\nabla_{\nu}\varphi+R_{\mu}^{\alpha}\nabla_{\alpha}\varphi\nabla_{ \nu}\varphi\] \[+R_{\nu}^{\alpha}\nabla_{\alpha}\varphi\nabla_{\mu}\varphi+R_{ \mu\alpha\nu\beta}\nabla^{\alpha}\varphi\nabla^{\beta}\varphi+\nabla_{\mu} \nabla^{\alpha}\varphi\nabla_{\nu}\nabla_{\alpha}\varphi\] \[-\nabla_{\mu}\nabla_{\nu}\varphi\Box\varphi-\frac{1}{2}g_{\mu \nu}\nabla^{\alpha}\nabla^{\beta}\varphi\nabla_{\alpha}\nabla_{\beta}\varphi+ \frac{1}{2}g_{\mu\nu}{(\Box\varphi)}^{2}\] \[-g_{\mu\nu}\nabla_{\alpha}\varphi\nabla_{\beta}\varphi R^{ \alpha\beta}. \tag{6}\]
Energy transfer between the scalar field and radiation is assumed to be [33, 34, 35, 36, 37, 38]
\[Q_{\mu}=-\Gamma u^{\nu}\partial_{\mu}\varphi\partial_{\nu}\varphi. \tag{7}\]
\(\Gamma\) is the decay rate of the scalar field which, in general, is a function of \(\varphi\) and the temperature [34, 39]. The covariant derivative of the energy-momentum tensor becomes
\[\nabla^{\mu}T_{\mu\nu}^{(r)}=Q_{\nu}\qquad and\qquad\nabla^{\mu}T_{\mu\nu}^{( \varphi)}=-Q_{\nu}. \tag{8}\]
The equation of motion for the spatially flat FLRW Universe, in the presence of the dissipative term, becomes
\[(1+\frac{3H^{2}}{M^{2}})\ddot{\varphi}+3H(1+\frac{3H^{2}}{M^{2}}+\frac{2\dot{ H}}{M^{2}})\dot{\varphi}+V^{\prime}(\varphi)+\Gamma\dot{\varphi}=0, \tag{9}\]
where \(H=\dot{a}/a\) is the Hubble parameter, the overdot denotes a derivative with respect to cosmic time \(t\), the prime denotes a derivative with respect to the scalar field \(\varphi\), and \(\Gamma\dot{\varphi}\) is the friction term which describes the decay of the scalar field to radiation. The Friedmann equations are given by
\[H^{2} = \frac{1}{3M_{P}^{2}}(\rho_{\varphi}+\rho_{r}), \tag{10}\] \[\dot{H} = -\frac{1}{2M_{P}^{2}}(\rho_{\varphi}+\rho_{r}+P_{\varphi}+P_{r}). \tag{11}\]
Where \(\rho_{r}\) and \(P_{r}\) are the energy density and the pressure of radiation, respectively. In the non-minimal derivative coupling model, the energy density \(\rho_{\varphi}\) and the pressure of inflaton field \(P_{\varphi}\) in FRW metric can be expressed as [25, 26]
\[\rho_{\varphi} = (1+\frac{9H^{2}}{M^{2}})\frac{\dot{\varphi}^{2}}{2}+V(\varphi), \tag{12}\] \[P_{\varphi} = (1-\frac{3H^{2}}{M^{2}}-\frac{2\dot{H}}{M^{2}})\frac{\dot{\varphi }^{2}}{2}-V(\varphi)-\frac{2H\dot{\varphi}\ddot{\varphi}}{M^{2}}. \tag{13}\]
We can write the energy density of radiation as a function of the temperature \(T\) and the entropy density \(s\), \(\rho_{r}=(3/4)Ts\) [34]. Using the equation of state parameter for radiation, \(\omega_{r}=1/3\), we obtain the rate of radiation production as
\[\dot{\rho_{r}}+4H\rho_{r} = \Gamma\dot{\varphi}^{2}, \tag{14}\] \[\dot{\rho_{\varphi}}+3H(\rho_{\varphi}+P_{\varphi}) = -\Gamma\dot{\varphi}^{2}. \tag{15}\]
## 3 Reheating after inflation
In this section we consider the reheating of the Universe after the end of slow-roll inflation. At the end of inflation, the oscillation of the scalar field
about the bottom of the potential begins. We assume that the potential is even, \(V(-\varphi)=V(\varphi)\), and consider rapidly oscillating solutions of equation (9) around \(\varphi=0\). The inflaton energy density may be estimated as \(\rho_{\varphi}=V(\Phi(t))\), where \(\Phi(t)\) is the amplitude of the inflaton oscillation. In this epoch \(\rho_{\varphi}\) and \(H\) change insignificantly during a period of oscillation [29, 30].
In the rapid oscillation period of the scalar field, the time average of the adiabatic index \(\gamma=(\rho_{\varphi}+P_{\varphi})/\rho_{\varphi}\) is given by \(\gamma=\langle\frac{\rho_{\varphi}+P_{\varphi}}{\rho_{\varphi}}\rangle\), where the brackets denote time averaging. For a power law potential
\[V(\varphi)=\lambda\varphi^{q}, \tag{16}\]
at the high friction limit (\(H^{2}/M^{2}\gg 1\)), the adiabatic index becomes [30]
\[\gamma\approx\frac{2q}{3q+6}. \tag{17}\]
By averaging the continuity equation, we obtain [32]
\[\frac{d}{dt}\langle\rho_{\varphi}\rangle+3H\gamma\langle\rho_{\varphi}\rangle +\frac{\gamma\Gamma M^{2}}{3H^{2}}\langle\rho_{\varphi}\rangle=0. \tag{18}\]
We can readily derive the average energy density of the scalar field in the high friction limit, under the constraint \(\Gamma M^{2}\ll 3H^{3}\), as
\[\langle\rho_{\varphi}\rangle\propto a(t)^{-3\gamma}. \tag{19}\]
Figure 1: The admissible region of the decoupling temperature \(T_{D}\) and the reheating temperature \(T_{RD}\) from the high friction condition, in the case that decoupling takes place during the reheating era, for a quadratic inflationary potential (\(q=2\)) and a dimension-6 B-violating interaction (\(n=2\)). We assume a coupling constant \(M=10^{-8}M_{p}\) and different values of \(C\). The allowed regions (\(C>10\)) are restricted to the dark brown. The bright colors of the region represent low values while the dark colors represent high values of \(C\).
By relation (19) and the Friedmann equation, in the scalar field dominated era, we can obtain
\[a(t)\propto t^{\frac{q+2}{q}}\propto t^{\frac{2}{3\gamma}}. \tag{20}\]
Therefore the Hubble parameter in the scalar-field-dominated era can be estimated as \(H\approx 2/(3\gamma t)\). In the rapid oscillation phase, and with the power law potential (16), we can write the amplitude of the oscillation as
\[\Phi(t)\propto a(t)^{-\frac{2}{q+2}}\propto t^{-\frac{2}{q}}. \tag{21}\]
We have seen that in the high friction limit \(\langle\dot{\varphi}^{2}\rangle\approx\gamma M_{P}^{2}M^{2}\) is nearly constant. Therefore, from equations (14) and (16), we can calculate the evolution of the radiation and scalar field energy densities as
\[\rho_{r} = \frac{3\Gamma\gamma^{2}M^{2}M_{P}^{2}}{(8+3\gamma)}t\Big{[}1-( \frac{t_{o}}{t})^{(1+\frac{8}{3\gamma})}\Big{]}, \tag{22}\] \[\rho_{\varphi} = \rho_{o}\Big{(}\frac{t_{o}}{t}\Big{)}^{2}\exp\Big{[}-\Big{(} \frac{\Gamma\gamma^{3}M^{2}}{4}\Big{)}(t^{3}-t_{o}^{3})\Big{]}, \tag{23}\]
where \(t_{o}\) is the beginning of the scalar field rapid oscillation, at which \(\rho_{r}(t=t_{o})=0\), and \(\rho_{o}=\rho_{\varphi}(t_{o})\). In the rapid oscillation phase, the energy density of radiation increases slowly, so that at the time \(t_{RD}\) the energy density of radiation becomes equal to that of the scalar field, \(\rho_{r}(t_{RD})\approx\rho_{\varphi}(t_{RD})\). From equations (22) and (10) we can calculate \(t_{RD}\) as [30, 32]
Figure 2: Comparison of the high friction condition for different values of the coupling constant \(M\), from equation (48), in the case that decoupling takes place during the reheating era, for a quadratic inflationary potential (\(q=2\)), a dimension-6 B-violating interaction (\(n=2\)) and \(C=1\).
\[{t_{RD}}^{3}\approx\frac{4(8+3\gamma)}{9\Gamma\gamma^{4}M^{2}}. \tag{24}\]
We can write the energy density of radiation as a function of the reheating temperature \(T_{RD}\) as [33]
\[\rho_{r}(t_{RD})=g_{\star}\frac{\pi^{2}}{30}T_{RD}^{4}, \tag{25}\]
where \(g_{\star}\) is the number of degrees of freedom at the reheating temperature and \(T_{RD}\) is the temperature of radiation at the beginning of the radiation dominated era. Therefore the temperature of the universe at the beginning of the radiation dominated era becomes
\[{T_{RD}}^{4}\approx\frac{30M_{P}^{2}}{\pi^{2}g_{\star}}\Big{[}\frac{12\Gamma^{ 2}\gamma^{2}M^{4}}{\left(8+3\gamma\right)^{2}}\Big{]}^{\frac{1}{3}}. \tag{26}\]
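As a quick numerical illustration (not taken from the paper), Eqs. (17), (24) and (26) can be evaluated in reduced-Planck units (\(M_{P}=1\)); the values of \(\Gamma\), \(M\) and \(g_{\star}\) below are assumed for illustration only:

```python
import numpy as np

q = 2
gamma = 2 * q / (3 * q + 6)        # Eq. (17): gamma = 1/3 for q = 2
g_star = 106.75                    # assumed relativistic degrees of freedom
M = 1e-8                           # coupling constant, in units of M_p (assumed)
Gamma = 1e-12                      # decay rate, in units of M_p (assumed)

t_RD = (4 * (8 + 3 * gamma) / (9 * Gamma * gamma**4 * M**2)) ** (1 / 3)  # Eq. (24)
T_RD = ((30 / (np.pi**2 * g_star))
        * (12 * Gamma**2 * gamma**2 * M**4 / (8 + 3 * gamma)**2) ** (1 / 3)
        ) ** 0.25                                                # Eq. (26), M_p = 1
print(f"t_RD = {t_RD:.2e} M_p^-1, T_RD = {T_RD:.2e} M_p")
```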
Figure 3: Decoupling temperature \(T_{D}\) in terms of the reheating temperature \(T_{RD}\), for coupling constant \(M=10^{-8}M_{p}\) and high reheating temperature. The blue, green and red dashed curves correspond to different values of the cutoff scale \(M_{\star}\) that explain the observed baryon asymmetry (\(Y_{B}=8.64\times 10^{-11}\)). The solid black curve corresponds to the decoupling of B-violating processes with \(M_{B}=10^{-4}M_{p}\), in the case that decoupling takes place during the reheating era, for a quadratic inflationary potential (\(q=2\)) and a dimension-6 B-violating interaction (\(n=2\)). The admissible regions for the decoupling and reheating temperatures, from the high friction condition, are displayed with a brown color spectrum. Intersections of the dashed curves with the solid curve are the points that define \(T_{RD}\) and \(T_{D}\).
## 4 Gravitational baryogenesis
### Reheating period
During the reheating era, the scalar field oscillates about the minimum of the potential and decays to ultra-relativistic particles. In this period the energy density of the oscillatory scalar field is dominant and the expansion of the Universe is accelerated. Equation (20) is equivalent to
\[a(t)=a_{RD}\Big{(}\frac{t}{t_{RD}}\Big{)}^{\frac{2}{3\gamma}}, \tag{27}\]
where \(a_{RD}\) is the scale factor at the beginning of the radiation dominated era. The evolution of the energy density of radiation during the rapid oscillation of the scalar field, from relation (14), becomes
\[\frac{d}{dt}(a^{4}\rho_{r})\approx\Gamma\gamma M^{2}M_{P}^{2}a^{4}\Rightarrow \rho_{r}\propto a^{\frac{3\gamma}{2}}. \tag{28}\]
Figure 4: Decoupling temperature \(T_{D}\) in terms of the reheating temperature \(T_{RD}\), for coupling constant \(M=10^{-14}M_{p}\) and low reheating temperature. The blue, green and red dashed curves correspond to different values of the cutoff scale \(M_{*}\) that explain the observed baryon asymmetry (\(Y_{B}=8.64\times 10^{-11}\)) in relation (49). The solid black curve corresponds to the decoupling of B-violating processes with \(M_{B}=10^{-12}M_{p}\), in the case that decoupling takes place during the reheating era, for a quadratic inflationary potential (\(q=2\)) and a dimension-6 B-violating interaction (\(n=2\)). The admissible regions for the decoupling and reheating temperatures, from the high friction condition, are displayed with a brown color spectrum. Intersections of the dashed curves with the solid curve are the points that define \(T_{RD}\) and \(T_{D}\).
Therefore we can write the energy density of radiation as a function of scale factor as
\[\rho_{r}=g_{\star}\frac{\pi^{2}}{30}T_{RD}^{4}\Big{(}\frac{a}{a_{RD}}\Big{)}^{ \frac{3\gamma}{2}}. \tag{29}\]
From relation (19) the energy density of scalar field is given by
\[\rho_{\varphi}=g_{\star}\frac{\pi^{2}}{30}T_{RD}^{4}\Big{(}\frac{a}{a_{RD}} \Big{)}^{-3\gamma}. \tag{30}\]
From relation (29) and Fridmann equation (10) we can calculate evolution of scale factor as a function of temperature, during reheating period as
\[a=a_{RD}\Big{(}\frac{T}{T_{RD}}\Big{)}^{\frac{8}{3\gamma}}. \tag{31}\]
This relation shows that in the non-minimal derivative coupling model, the temperature of the Universe increases with the expansion of the Universe until the beginning of the radiation dominated period, while in the standard thermal history of the Universe the temperature has the opposite evolution. By substituting relation (31) into equation (30), the energy density of the inflaton field becomes
\[\rho_{\varphi}=g_{\star}\frac{\pi^{2}}{30}\Big{(}\frac{T_{RD}^{12}}{T^{8}} \Big{)}. \tag{32}\]
If there exist B-violating interactions in thermal equilibrium, then they can generate a net baryon asymmetry. In an expanding Universe, from action (3) we have [9, 22]
\[\frac{1}{M_{\star}^{2}}(\partial_{\mu}R)J^{\mu}=\frac{\dot{R}}{M_{\star}^{2}}(g_{b }n_{b}+g_{\bar{b}}n_{\bar{b}}), \tag{33}\]
where \(g_{b}=-g_{\bar{b}}\) denotes the number of intrinsic degrees of freedom of baryons, and \(n_{b}\) and \(n_{\bar{b}}\) are the number densities of baryons and antibaryons respectively. An effective chemical potential follows as \(\mu_{b}=-\mu_{\bar{b}}=g_{b}\dot{R}/M_{\star}^{2}\); the entropy density of the Universe is given by \(s=2\pi^{2}g_{\star}T^{3}/45\), and the baryon number density, in thermal equilibrium, becomes \(n_{B}=(g_{b}n_{b}+g_{\bar{b}}n_{\bar{b}})=-g_{b}\mu_{b}T^{2}/6\)[40]. As a result, we can write the baryon to entropy ratio (baryon asymmetry) in an accelerating universe as
\[Y_{B}\equiv\frac{n_{B}}{s}\approx-\frac{15g_{b}^{2}}{4\pi^{2}g_{\star}}\frac{ \dot{R}}{M_{\star}^{2}T}\Big{|}_{T=T_{D}}, \tag{34}\]
where the temperature \(T_{D}\) is the temperature of the Universe at which the baryon-number-violating interactions decouple.
\[R=6(\dot{H}+2H^{2}), \tag{35}\]
and by using equation (27) for the scale factor, we can easily calculate the Ricci scalar curvature as a function of cosmic time as
\[R=4\Big{(}\frac{4-3\gamma}{3\gamma^{2}}\Big{)}t^{-2}. \tag{36}\]
Now, with the time derivative of the Ricci scalar curvature (36) and the Friedmann equation \(H^{2}\approx\rho_{\varphi}/3M_{p}^{2}\) in the scalar field domination period, we have
\[\dot{R}\approx-\sqrt{3}\gamma(4-3\gamma)\frac{\rho_{\varphi}^{\frac{3}{2}}}{M_ {p}^{3}}. \tag{37}\]
Finally, by substituting \(\dot{R}\) and \(\rho_{\varphi}\) from equations (37) and (32) into equation (34), we obtain the baryon asymmetry \(Y_{B}\) as a function of the Universe's temperature
\[Y_{B}\equiv\frac{n_{B}}{s}\approx\frac{\pi\gamma(4-3\gamma)g_{b}^{2}\sqrt{g_{ \star}}}{8\sqrt{10}}\frac{T_{RD}^{18}}{M_{p}^{3}M_{\star}^{2}T^{13}}\Big{|}_{ T=T_{D}}. \tag{38}\]
We continue with a brief mention of the origin of the B-violating interaction that is indispensable for any baryogenesis scenario. We assume B-violating interactions given by an operator \(\mathcal{O}_{B}\) of mass dimension \(D=4+n\) [9, 22], with \(n>0\) required for the interaction to violate baryon number. Since the coupling constants of the B-violating interactions are proportional to \(M_{B}^{-n}\), where \(M_{B}\) is the corresponding mass scale, the rate of the B-violating interaction in thermal equilibrium at temperature \(T\) can be cast in the form [9]
\[\Gamma_{B}=\frac{T^{2n+1}}{M_{B}^{2n}}. \tag{39}\]
Decoupling of the B-violating processes occurs at \(T=T_{D}\), when \(\Gamma\) falls below \(H=2/(3\gamma t)\). Therefore we can obtain the decoupling temperature from equation (32) as
\[T_{D}=\left(\frac{\pi\sqrt{g_{\star}}M_{B}^{2n}}{3\sqrt{10}M_{p}}\right)^{ \frac{1}{2n+5}}T_{RD}^{\frac{6}{2n+5}}. \tag{40}\]
Therefore, by substituting relation (40) into equation (38) we have
\[Y_{B}\approx\Big{(}\frac{3\gamma(4-3\gamma)g_{b}^{2}}{8}\Big{)}\Big{(}\frac{\pi^{2 }g_{\star}}{90}\Big{)}^{\left(\frac{n-4}{2n+5}\right)}\frac{T_{RD}^{12\left( \frac{3n+1}{2n+5}\right)}}{M_{\star}^{2}M_{p}^{2\left(\frac{3n+1}{2n+5}\right)} M_{B}^{\left(\frac{26n}{2n+5}\right)}}. \tag{41}\]
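Relations (38), (40) and (41) can also be checked against one another numerically: computing \(T_{D}\) from Eq. (40) and inserting it into Eq. (38) must reproduce Eq. (41). A minimal sketch in Python, with illustrative parameter values of our own choosing (in units \(M_{p}=1\)):

```python
import math

# illustrative parameter choices (Planck units); these specific numbers are our
# own examples, not values quoted in the text
n, gamma, g_star, g_b = 2, 1.0 / 3.0, 106.75, 1.0
Mp, Ms, MB, TRD = 1.0, 1.0e-1, 1.0e-4, 1.0e-2

# Eq. (40): decoupling temperature
TD = (math.pi * math.sqrt(g_star) * MB**(2 * n)
      / (3.0 * math.sqrt(10.0) * Mp))**(1.0 / (2 * n + 5)) * TRD**(6.0 / (2 * n + 5))

# Eq. (38) evaluated at T = T_D
YB_38 = (math.pi * gamma * (4 - 3 * gamma) * g_b**2 * math.sqrt(g_star)
         / (8.0 * math.sqrt(10.0))) * TRD**18 / (Mp**3 * Ms**2 * TD**13)

# Eq. (41): the same quantity with T_D eliminated
YB_41 = (3.0 * gamma * (4 - 3 * gamma) * g_b**2 / 8.0) \
    * (math.pi**2 * g_star / 90.0)**((n - 4) / (2 * n + 5)) \
    * TRD**(12 * (3 * n + 1) / (2 * n + 5)) \
    / (Ms**2 * Mp**(2 * (3 * n + 1) / (2 * n + 5)) * MB**(26 * n / (2 * n + 5)))

print(TD, YB_38, YB_41)   # YB_38 and YB_41 coincide
```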
From the high friction condition \(H^{2}/M^{2}\gg 1\) we can constrain the decoupling temperature \(T_{D}\) as
\[MM_{p}T_{D}^{4}\ll\sqrt{\frac{g_{\star}\pi^{2}}{90}}T_{RD}^{6}. \tag{42}\]
Contrary to the minimal coupling model, we have seen that during the oscillatory scalar field dominated stage with \(\gamma=1/3\) the temperature of the universe increases as \(T\approx T_{RD}(a(t)/a_{RD})^{(1/8)}\). Therefore, the decoupling temperature \(T_{D}\) is smaller than the reheating temperature \(T_{RD}\), in agreement with the condition (42).
### Radiation dominated period
In this section we consider the gravitational baryogenesis during the radiation dominated era after the reheating period. The challenge of gravitational
Figure 5: Decoupling temperature \(T_{D}\) in terms of the reheating temperature \(T_{RD}\), for coupling constant \(M=10^{-8}M_{p}\) and high reheating temperature. The blue and red dashed curves correspond to different values of \(M_{B}\) in the decoupling of B-violating processes. The solid black curve corresponds to the cutoff scale \(M_{\star}=10^{-1}M_{p}\) that explains the observed baryon asymmetry (\(Y_{B}=8.64\times 10^{-11}\)). Decoupling is assumed to take place during the reheating era, for a quadratic inflationary potential (\(q=2\)) and a dimension-6 B-violating interaction (\(n=2\)). The regions for the decoupling and reheating temperatures admissible under the high friction condition are displayed with a brown color spectrum. The intersections of the dashed curves with the solid curve are the points that define \(T_{RD}\) and \(T_{D}\).
baryogenesis during the radiation dominated universe is that the equation of state is \(\omega\approx 1/3\). If \(\omega\) were equal to \(1/3\) exactly, then \(R=3(1-3\omega)H^{2}\) would vanish and \(Y_{B}=0\), so no baryon asymmetry would ever be generated during the radiation dominated era. On closer inspection the problem is not so serious: interactions among the massless particles lead to a trace anomaly that makes \(T_{\mu}^{\mu}\neq 0\), so that the equation of state instead satisfies \(1-3\omega\sim 10^{-2}-10^{-1}\) [9, 22].
In the radiation dominated era \(a\propto\sqrt{t}\), \(\rho_{r}\propto a^{-4}\) and \(H\approx 1/(2t)\); using the relation \(\rho_{r}=(\pi^{2}g_{\star}/30)T^{4}\) we arrive at
\[T^{2}=\frac{3\sqrt{10}M_{p}}{2\pi\sqrt{g_{\star}}}t^{-1}. \tag{43}\]
We can calculate the time of decoupling from the equality \(\Gamma=H\), which gives \(t_{D}\approx M_{B}^{2n}/(2T_{D}^{2n+1})\), and the temperature of decoupling becomes
\[T_{D}^{2n-1}\approx\frac{\pi M_{B}^{2n}\sqrt{g_{\star}}}{3\sqrt{10}M_{p}}. \tag{44}\]
By using the Friedmann equation during the radiation dominated era, \(H^{2}\approx\rho_{r}/3M_{p}^{2}\), we have \(R=3(1-3\omega)/(4t^{2})\); therefore the baryon asymmetry reads
Figure 6: Decoupling temperature \(T_{D}\) in terms of the reheating temperature \(T_{RD}\), for coupling constant \(M=10^{-14}M_{p}\) and low reheating temperature. The blue and red dashed curves correspond to different values of \(M_{B}\) in the decoupling of B-violating processes. The solid black curve corresponds to the cutoff scale \(M_{\star}=10^{-9}M_{p}\) that explains the observed baryon asymmetry (\(Y_{B}=8.64\times 10^{-11}\)). Decoupling is assumed to take place during the reheating era, for a quadratic inflationary potential (\(q=2\)) and a dimension-6 B-violating interaction (\(n=2\)). The regions for the decoupling and reheating temperatures admissible under the high friction condition are displayed with a brown color spectrum. The intersections of the dashed curves with the solid curve are the points that define \(T_{RD}\) and \(T_{D}\).
\[Y_{B}\approx\frac{g_{b}^{2}}{2}\Big{(}\frac{\pi^{2}g_{\star}}{90}\Big{)}^{\left( \frac{n+2}{2n-1}\right)}(1-3\omega)M_{\star}^{-2}M_{p}^{-\left(\frac{6n+2}{2n-1} \right)}M_{B}^{\left(\frac{10n}{2n-1}\right)}. \tag{45}\]
Hence, by choosing a dimension-6 B-violating interaction (\(n=2\)) and \(g_{\star}=106\) we have
\[Y_{B}\approx 13(1-3\omega)M_{\star}^{-2}M_{p}^{-\left(\frac{14}{3}\right)}M_{B }^{\left(\frac{20}{3}\right)}. \tag{46}\]
Similarly, by choosing a dimension-5 B-violating interaction (\(n=1\)) we have
\[Y_{B}\approx 785(1-3\omega)M_{\star}^{-2}M_{p}^{-8}M_{B}^{10}. \tag{47}\]
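The numerical prefactors quoted in Eqs. (46) and (47) follow directly from Eq. (45) with \(g_{b}=1\); as a sketch of our own (purely a cross-check of the arithmetic):

```python
import math

g_star = 106.0   # value used above for Eq. (46)

def YB_prefactor(n):
    # numerical coefficient of Eq. (45): (g_b^2/2) (pi^2 g_*/90)^((n+2)/(2n-1)), g_b = 1
    return 0.5 * (math.pi**2 * g_star / 90.0)**((n + 2) / (2 * n - 1))

print(YB_prefactor(2))   # ~13,  the coefficient of Eq. (46)
print(YB_prefactor(1))   # ~785, the coefficient of Eq. (47)
```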
## 5 Numerical analyses
As we have shown in the previous section, the value of the reheating temperature \(T_{RD}\) depends on the coupling constant \(M\). On the other hand, the high friction condition constrains the reheating temperature. To display the high friction condition \(H^{2}/M^{2}\gg 1\) in our plots, we can parameterize relation (42) as
\[1\ll C=\sqrt{\frac{g_{\star}\pi^{2}}{90}}\frac{T_{RD}^{6}}{MM_{p}T_{D}^{4}}. \tag{48}\]
Figure 7: The acceptable range for \(M_{\star}\) and \(M_{B}\) to explain the observed baryon asymmetry (\(Y_{B}=8.64\times 10^{-11}\)). The solid blue curve corresponds to a dimension-6 B-violating interaction (\(n=2\)) and the red dashed curve to a dimension-5 B-violating interaction (\(n=1\)). We assume that the reheating temperature is \(T_{RD}=10^{-9}M_{p}\), and the golden region shows \(T_{RD}<10^{-9}M_{p}\).
As we can see, this constraint does not depend on the parameters \(q\) and \(n\), but it does depend on the coupling constant \(M\). Therefore, for any value of \(M\), we must choose \(T_{D}\) and \(T_{RD}\) that satisfy this condition. In all of the plots, Figure 1-Figure 6, decoupling takes place during the reheating era for a quadratic inflationary potential (\(q=2\), or \(\gamma=1/3\)). Also, we assume a dimension-6 B-violating interaction (\(n=2\)), the ultra relativistic degrees of freedom at the electroweak energy scale \(g_{\star}=106.75\), and the number of intrinsic degrees of freedom of baryons \(g_{b}\approx\mathcal{O}(1)\).
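For orientation, the parameter \(C\) of relation (48) is straightforward to evaluate. The sketch below, with sample temperatures of our own choosing (in units \(M_{p}=1\)), illustrates that the two regimes used in the figures, \(M=10^{-8}M_{p}\) at high reheating temperature and \(M=10^{-14}M_{p}\) at low reheating temperature, can both satisfy \(C\gg 1\):

```python
import math

g_star, Mp = 106.75, 1.0

def C(TRD, TD, M):
    # high friction parameter of relation (48); H^2/M^2 >> 1 requires C >> 1
    return math.sqrt(g_star * math.pi**2 / 90.0) * TRD**6 / (M * Mp * TD**4)

# sample points of our own choosing (Planck units)
print(C(1.0e-2, 1.0e-3, 1.0e-8))    # high reheating temperature, M = 1e-8 Mp: C ~ 3e8
print(C(1.0e-9, 3.0e-11, 1.0e-14))  # low reheating temperature,  M = 1e-14 Mp: C ~ 40
```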
In Figure 1 the admissible region of the decoupling temperature \(T_{D}\) and the reheating temperature \(T_{RD}\) from the high friction condition has been depicted, in the case that decoupling takes place during the reheating era for a quadratic inflationary potential (\(q=2\)), a dimension-6 B-violating interaction (\(n=2\)), coupling constant \(M=10^{-8}M_{p}\), and different values of \(C\). As a reference, the acceptable regions are \(C>10\), displayed by the brown color. The region \(C<1\) is out of bounds, and values of the reheating and decoupling temperatures in this region violate the high friction condition. The bright colors represent low values of \(C\), while the dark colors represent high values.
We can rewrite relations (38), (40) and (41) for \(\gamma=1/3,n=2\) in the form
\[Y_{B}\approx\frac{3}{8}\bigg{(}\frac{\pi\sqrt{g_{\star}}}{3\sqrt{10}}\bigg{)} \frac{T_{RD}^{18}}{M_{p}^{3}M_{\star}^{2}T_{D}^{13}}, \tag{49}\]
\[T_{D}\approx\bigg{(}\frac{\pi\sqrt{g_{\star}}}{3\sqrt{10}}\bigg{)}^{1/9}\frac {M_{B}^{4/9}}{M_{p}^{1/9}}T_{RD}^{2/3}, \tag{50}\]
\[Y_{B}\approx\frac{3}{8}\bigg{(}\frac{\pi\sqrt{g_{\star}}}{3\sqrt{10}}\bigg{)} ^{(-4/9)}\frac{T_{RD}^{28/3}}{M_{\star}^{2}M_{p}^{14/9}M_{B}^{52/9}}. \tag{51}\]
In Figure 2 the high friction condition (48) for different values of the coupling constant \(M\) has been depicted. Clearly, as the coupling constant \(M\) is decreased, the acceptable region for \(T_{RD}\) and \(T_{D}\) extends toward lower temperatures.
Figure 3 shows the decoupling temperature \(T_{D}\) in terms of the reheating temperature \(T_{RD}\), for coupling constant \(M=10^{-8}M_{p}\). The blue, green and red dashed curves correspond to different values of the cutoff scale \(M_{\star}\) that explain the observed baryon asymmetry (\(Y_{B}=8.64\times 10^{-11}\)) through relation (49). The solid black curve corresponds to the decoupling of B-violating processes with \(M_{B}=10^{-4}M_{p}\) in relation (50). The regions for the decoupling and reheating temperatures admissible under the high friction condition are displayed with a brown color spectrum. The intersections of the dashed curves with the solid curve are the points that define \(T_{RD}\) and \(T_{D}\). We see that the intersection points
fall in the admissible regions \(C>100\) and \(C>1000\). Therefore, if we want to explain the generation of the baryon asymmetry at high reheating temperature, we have to choose higher values of \(M_{\star}\) and \(M_{B}\).
Hence the more appropriate choices are larger cutoff scales \(M_{\star}\), which fall into the dark regions.
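The intersection points discussed above can also be obtained directly, since eliminating \(T_{D}\) between relations (49) and (50) is precisely relation (51), which can be solved for \(T_{RD}\). A short sketch of our own (the input scales are example values of the same order as those in the figures, in units \(M_{p}=1\)):

```python
import math

g_star, Mp, YB_obs = 106.75, 1.0, 8.64e-11
K0 = math.pi * math.sqrt(g_star) / (3.0 * math.sqrt(10.0))   # = sqrt(pi^2 g_*/90)

def intersection(Ms, MB):
    # invert Eq. (51) for T_RD at the observed Y_B, then get T_D from Eq. (50)
    TRD = (YB_obs * (8.0 / 3.0) * K0**(4.0 / 9.0)
           * Ms**2 * Mp**(14.0 / 9.0) * MB**(52.0 / 9.0))**(3.0 / 28.0)
    TD = K0**(1.0 / 9.0) * MB**(4.0 / 9.0) * Mp**(-1.0 / 9.0) * TRD**(2.0 / 3.0)
    return TRD, TD

# example scales of the order used in the high reheating temperature plots
print(intersection(Ms=1.0e-1, MB=1.0e-4))   # roughly (2e-4, 7e-5) in Planck units
```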
Figure 4 is similar to Figure 3, but the plot is drawn for low reheating temperature and different parameter values. We see that if we want to explain the generation of the baryon asymmetry at low reheating temperature, we have to choose smaller values of \(M_{\star}\), \(M_{B}\) and \(M\).
Figure 5 shows the decoupling temperature \(T_{D}\) in terms of the reheating temperature \(T_{RD}\), for coupling constant \(M=10^{-8}M_{p}\) and high reheating temperature. The blue and red dashed curves correspond to different values of \(M_{B}\) in the decoupling of B-violating processes, relation (50). The solid black curve corresponds to the cutoff scale \(M_{\star}=10^{-1}M_{p}\) that explains the observed baryon asymmetry (\(Y_{B}=8.64\times 10^{-11}\)) through relation (49). The regions for the decoupling and reheating temperatures admissible under the high friction condition are displayed with a brown color spectrum. The intersections of the dashed curves with the solid curve are the points that define \(T_{RD}\) and \(T_{D}\), satisfying relations (49), (50) and the high friction condition. We see that the intersection points fall in the admissible region \(C>100\).
Figure 6 is similar to Figure 5, but the plot is drawn for low reheating temperature and different parameter values. We see that if we want to explain the generation of the baryon asymmetry at low reheating temperature, we have to choose smaller values of \(M_{\star}\), \(M_{B}\) and \(M\).
In Figure 7, the acceptable range for \(M_{\star}\) and \(M_{B}\) to explain the observed baryon asymmetry (\(Y_{B}=8.64\times 10^{-11}\)) has been depicted. The solid blue curve corresponds to a dimension-6 B-violating interaction (\(n=2\)) and the red dashed curve to a dimension-5 B-violating interaction (\(n=1\)). We assume that the reheating temperature is \(T_{RD}=10^{-9}M_{p}\), and the golden region shows \(T_{RD}<10^{-9}M_{p}\).
We conclude that in the non-minimal derivative coupling model, sufficient baryon asymmetry is generated at both low and high reheating temperatures during the reheating phase.
## 6 Conclusion
In this paper, we investigated the gravitational baryogenesis mechanism in the non-minimal derivative coupling model in the high friction regime. We used a coupling between the derivative of the Ricci scalar curvature and the baryon current to describe the baryon asymmetry. In this model, the inflaton begins a coherent rapid oscillation after slow roll inflation. During this stage, the inflaton decays to radiation and reheats the Universe. We calculated the baryon to entropy ratio in the case that the reheating period is described by coherent rapid oscillation in the non-minimal derivative coupling model. As we demonstrated, in contrast to standard gravitational baryogenesis, which cannot explain the baryon asymmetry at low reheating temperature, in the non-minimal derivative coupling model we can describe the baryon asymmetry at both low and high reheating temperatures.
|
2301.05934 | Quantum Brownian motion induced by an inhomogeneous tridimensional space
and a $S^1\times R^3$ topological space-time | In this paper we investigate the Quantum Brownian motion of a point particle
induced by quantum vacuum fluctuations of a massless scalar field in (3 +
1)-dimensional Minkowski spacetime with distinct conditions (Dirichlet,
Neumann, mixed and quasiperiodic). The modes of the field are confined and
compactified to a finite length region, which consequently provides a natural
measure scale for the system. Useful expressions for the Wightman function have
been obtained, which allow us to calculate analytical expressions for the
velocity dispersion in all condition cases considered. We also obtain
expressions for the velocity dispersion in the short and late time regimes.
Finally, we exhibit some graphs in order to show the behavior of the velocity
dispersions, discussing important divergencies that are present in our results. | Ãwerton J. B. Ferreira, Eliza M. B. Guedes, Herondy F. Santana Mota | 2023-01-14T15:25:20Z | http://arxiv.org/abs/2301.05934v2 | Quantum Brownian motion induced by an inhomogeneous tridimensional space and a \(S^{1}\times R^{3}\) topological space-time
###### Abstract
In this paper we investigate the Quantum Brownian motion of a point particle induced by quantum vacuum fluctuations of a massless scalar field in \((3+1)\)-dimensional Minkowski spacetime with distinct conditions (Dirichlet, Neumann, mixed and quasiperiodic). The modes of the field are confined and compactified to a finite length region, which consequently provides a natural measure scale for the system. Useful expressions for the Wightman function have been obtained, which allow us to calculate analytical expressions for the velocity dispersion in all condition cases considered. We also obtain expressions for the velocity dispersion in the short and late time regimes. Finally, we exhibit some graphs in order to show the behavior of the velocity dispersions, discussing important divergencies that are present in our results.
## I Introduction
The stochastic motion performed by a point particle when interacting with the quantum vacuum fluctuations of a relativistic field, e.g., scalar or electromagnetic, is also known as Quantum Brownian motion (QBM). This is an example of a class of phenomena which arise from quantum vacuum fluctuations and which, over the past several years, have been studied in different scenarios and with different approaches [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17]. The quantum vacuum fluctuations are always present but only become observable when the vacuum is somehow perturbed, for instance, by elements such as boundary conditions, temperature, a nontrivial topology and so on.
Similarly to the classical Brownian motion, the typical quantities for the quantum version that should be investigated are the position and velocity dispersions. However, the analogy between the classical and quantum Brownian motion is limited, given that in the quantum scenario the dispersions can assume negative values, something that does not occur in the classical case. In the latter, dispersions are positive definite quantities, that is, \(\langle(\Delta A)^{2}\rangle>0\), where \(A\) is some physical observable to be measured, so negative values for the dispersions do not make sense in the classical scenario. On the other hand, in the quantum context it is possible that \(\langle(\Delta A)^{2}\rangle<0\), which can be interpreted as due to quantum uncertainty reduction [2; 3], subvacuum effects [11; 12; 13] and failure in the renormalization process as a consequence of boundary conditions imposed on the field [10].
In the quantum context, the basic idea is that a point particle (structureless) interacting with quantum vacuum fluctuations of a field has an induced stochastic motion. In the electromagnetic case, for instance, the QBM in the presence of one and of two perfectly reflecting parallel planes has been analyzed in Refs. [2] and [3], respectively. In both cases the position and velocity dispersions are calculated. Moreover, the study of thermal effects on the QBM in the electromagnetic case with one perfectly reflecting plane was developed in Ref. [4], where the magnitudes of the thermal and quantum contributions are discussed; it is shown there that for well defined temperature regimes one contribution can be more significant than the other. Seeking to investigate more realistic systems, a wave packet like structure for the particle has been proposed in Ref. [6]. In addition, switching time effects associated with the interaction between a point particle and the quantum vacuum fluctuations of the field are considered in Refs. [5; 8], and switching time effects at finite temperature are taken into account in Ref. [9].
Regarding the QBM induced by vacuum fluctuations of a quantum scalar field, the investigations conducted follow similarly to the electromagnetic case. In Ref. [10], for instance, the induced QBM due to a massless scalar field in the presence of a perfectly reflecting plane is studied, and in Ref. [11] switching time effects are taken into account. The massive scalar field case in \((D+1)\)-dimensions, with one perfectly reflecting plane, is studied in Ref. [12] and thermal effects in Ref. [13], both also considering switching time effects. All the scalar field cases just mentioned use Dirichlet boundary condition, which sets a null value for the field modes on the boundary. As a complement to the study of the QBM, it is worth mentioning that it has also been investigated in the cosmological context, in particular, considering dark matter detection. According to this scenario, in principle, dark matter may induce a stochastic motion in a
test particle of ordinary matter, whose observation would offer new insights into the understanding of dark matter properties [18].
As a contribution to all cases considered in literature so far, for the massless scalar field, we intend to take into consideration two elements, as far as we know, not yet explored in the study of the QBM. The first of them is to consider, in analogy to the electromagnetic case, two perfectly reflecting parallel planes where the scalar field satisfies not only Dirichlet but also Neumann and mixed boundary conditions (BC's). This way, we confine the modes of the field in one direction, something that naturally leads to momentum discretization in the same direction, providing a natural scale for the system. The second element we would like to consider is the effect of a quasiperiodic condition on the QBM of a scalar point particle.
The conditions mentioned in the previous paragraph can also be seen as possible ways of altering the topology of the spatial section of the Minkowski spacetime, which is the background where we perform our investigations. The consideration of two planes, for instance, breaks the homogeneity and isotropy of space, which can be interpreted as a way of simulating a topological modification in the spatial section of the spacetime. In the case of the induced inhomogeneity, we note that, in the presence of the planes, the spatial directions \(y\) and \(z\) are similar, but differ from the \(x\) direction, where the planes are located. In fact, an observer in the \(yz\) plane will perceive an infinite two-dimensional space, but the same observer in the \(xy\) or \(xz\) planes will perceive a semi-infinite space, that is, infinite in the \(y\) and \(z\) directions, but finite in the \(x\) direction. On the other hand, the anisotropy, as we shall see, shows up through the distinction between the velocity dispersions, which is the observable investigated in this work.
It is important to mention that the investigation of the induced Brownian motion considering nontrivial topologies for the spatial section of the Minkowski spacetime is a topic that has been explored for the past several years. Recently, in Ref. [19], the Brownian motion of a point particle induced by quantum vacuum fluctuations of an electromagnetic field was investigated in a flat spacetime whose spatial section has nontrivial topologies. In principle, it is suggested that this effect can be used to indicate the global inhomogeneity of space. For similar and more recent discussions see also Refs. [20], [21] and [22], where these effects, as a function of their time evolution, are used as a supposed indicator of spatial orientability. See also Ref. [23] for an example in a conformally expanding flat spacetime. In addition, we would like to point out that, in this context, the Casimir effect has also been investigated; for more details see for instance Ref. [24] and references therein. Therefore, taking into consideration the current status of the subject just described, in the case of the massless scalar field, the present work aims to complement the investigations conducted so far for the QBM.
It is also worth emphasizing that BC's are not merely technical and mathematical details of academic interest; they can also be related to physical properties of the studied systems. Dirichlet and Neumann BC's, for instance, specify the field value and its normal derivative on the boundary, respectively. Typically, we find these conditions in electrostatic systems where either an electrical potential is fixed on the surface (Dirichlet BC) or the corresponding electric field (\(\nabla\phi\)) is the one fixed on the surface (Neumann BC) [25; 26]. There also exist mixed BC's, in which case both the field and its normal derivative are specified on the boundary. In Ref. [27], for instance, the spontaneous emission of a two-level system between two parallel plates has been investigated taking into consideration that the electromagnetic vector potential obeys BC's similar to the mixed one on the plates. In this case, one of the plates is perfectly conducting and the other one is perfectly permeable. Hence, we can say that mixed BC's simulate plates with distinct physical properties. As to the quasiperiodic condition, it can indicate the existence of an interaction present in the system. As an example, we can mention the Aharonov-Bohm effect [28; 29].
Regarding the structure of this paper, in Section II we describe the system to be investigated, indicating some useful simplifications. Then, we exhibit the complete set of normalized solutions for the scalar field for each condition used in this work. This allows us to obtain the positive frequency Wightman function for each case, which is a fundamental element in our calculations. We also obtain a general form for the Wightman function representing Dirichlet, Neumann and mixed BC's in a single expression. In Section III we calculate the particle velocity dispersion. Finally, in Section IV, we present our conclusions summarizing the main results obtained. Note that we have also dedicated Appendices A and B to obtain important expressions used to investigate asymptotic limits for the velocity dispersions. In this work we use natural units such that \(c=\hbar=1\).
## II Wightman functions
### Model, general field solution and the expression to calculate the Wightman function
In this section we want to establish some important results that will be used later on in the velocity dispersion computation, namely, the complete set of normalized solutions for the scalar field and the corresponding Wightman functions. In other words, we are interested in investigating the induced QBM of a point particle coupled to a fluctuating quantum massless scalar field, considering different conditions. As it is known, this stochastic motion is
induced by the quantum vacuum fluctuations of the field. The classical action that describes this system is written as
\[S_{\rm tot}=S_{\rm f}+S_{\rm p}+S_{\rm int}, \tag{1}\]
where
\[S_{\rm f}\ =\ \int dt\int dV\frac{(\partial_{\mu}\phi)(\partial^{\mu}\phi)}{2}, \tag{2}\]
is the massless scalar field part of the action,
\[S_{\rm p}\ =\ \int dt\frac{m\dot{\bf x}^{2}}{2} \tag{3}\]
corresponds to the action describing a point particle of mass \(m\) and
\[S_{\rm int}\ =\ -g\int dt\int dV\delta^{3}({\bf x}-{\bf x}^{\prime})\phi \tag{4}\]
stands for the interaction between the particle and the massless scalar field \(\phi\), \(dV\) is the volume element of the spatial section of the spacetime and \(\delta^{3}({\bf x}-{\bf x}^{\prime})\) is the spatial three dimensional Dirac delta function. Note that the measure of the strength of the interaction, denoted by \(g\), is the charge of the point particle. This is a model widely known in the literature and has been considered in different scenarios [1; 9; 10; 11; 12; 13].
The variation of the action (1) with respect to the field, \(\phi\), provides the massless Klein-Gordon equation with a three dimensional Dirac delta function as a source, that is,
\[\Box\phi({\bf x},t)=-g\delta^{3}({\bf x}-{\bf x}^{\prime}), \tag{5}\]
where \(\Box=\partial_{\mu}\partial^{\mu}\) is the d'Alembertian differential operator to be considered in Minkowski spacetime described by the line element
\[ds^{2}=dt^{2}-dx^{2}-dy^{2}-dz^{2}. \tag{6}\]
Thereby, Eq. (5) is the equation of motion for a massless scalar field coupled to a point particle with mass \(m\) and charge \(g\). Although this is a non-homogeneous differential equation, we wish to consider that the point particle's influence on the field is negligible [10]. This allows us to write Eq. (5) as
\[\Box\phi({\bf x},t)\approx 0, \tag{7}\]
which gives a general nonnormalized solution in terms of plane waves, i.e.,
\[\phi_{\sigma}({\bf x},t)=Ne^{-i\omega t+ik_{x}x+ik_{y}y+ik_{z}z}, \tag{8}\]
where the eigenfrequencies of the field satisfy \(\omega^{2}=k_{x}^{2}+k_{y}^{2}+k_{z}^{2}\), with \(k_{i}\) being the momentum in each spatial direction, and \(\sigma=(k_{x},k_{y},k_{z})\) stands for the set of quantum numbers. The constant \(N\) can be obtained via the normalization condition
\[2\omega\int dV\phi_{\sigma}(w)\phi_{\sigma^{\prime}}^{*}(w)=\delta_{\sigma \sigma^{\prime}}, \tag{9}\]
where the delta symbol in the r.h.s is understood as Kronecker delta for discrete quantum numbers and Dirac delta function for continuous quantum numbers. Note that we have introduced the notation \(w=({\bf x},t)\) to specify spacetime coordinates. As we shall see later, the solution in Eq. (8) is modified when subjected to both the boundary conditions and the quasiperiodic condition, leading to discretization of one of the momenta.
Once the solution in Eq. (8) is subjected to the conditions considered, we must find the normalization constant \(N\) by making use of Eq. (9). This makes it possible to write the complete set of normalized solutions and use it to calculate the Wightman function, which is a crucial element in our computations. In order to construct the Wightman function, we may first promote the field to an operator and write it in terms of the positive and negative frequency normalized solutions, with the coefficients of the expansion being the creation \(a_{\sigma}^{\dagger}\) and annihilation \(a_{\sigma}\) operators. Mathematically, we use the standard construction [30]
\[\hat{\phi}(w)=\sum_{\sigma}[a_{\sigma}\phi_{\sigma}(w)+a_{\sigma}^{\dagger} \phi_{\sigma}^{*}(w)], \tag{10}\]
where the creation and annihilation operators obey the commutation relation \([a_{\sigma},a^{\dagger}_{\sigma^{\prime}}]=\delta_{\sigma\sigma^{\prime}}\). We, thus, are able to obtain the Wightman function by taking into consideration the definition
\[W(w,w^{\prime})=\langle 0|\hat{\phi}(w)\hat{\phi}(w^{\prime})|0\rangle=\sum_{ \sigma}\phi_{\sigma}(w)\phi^{*}_{\sigma}(w^{\prime}), \tag{11}\]
where \(|0\rangle\) is the vacuum state of the scalar field. Hence, the above equation provides the positive frequency Wightman function for the scalar field. In addition, the summation symbol in (11) stands for either integrals in the continuous quantum numbers or possible sums over discrete ones.
### Dirichlet boundary condition
Firstly we are interested in considering Dirichlet boundary condition on the massless scalar field solution (8). This means that by confining the field in a region of length \(a\) between two perfectly reflecting parallel planes, perpendicular to the \(x\)-direction, we must have the condition
\[\phi(\mathbf{x},t)|_{x=0}=\phi(\mathbf{x},t)|_{x=a}=0. \tag{12}\]
Therefore, from Eqs. (8), (12) and (9) we find that the complete set of normalized solutions in this case is given by
\[\phi_{\sigma}(\mathbf{x},t)=\frac{1}{\sqrt{4\pi^{2}a\omega_{n}}}\sin(k_{n}x)e ^{-i\omega_{n}t+ik_{y}y+ik_{z}z}, \tag{13}\]
where \(\omega_{n}^{2}=k_{n}^{2}+k_{y}^{2}+k_{z}^{2}\) are the eigenfrequencies of the field, with momentum in the \(x\)-direction now discretized, that is, \(k_{n}=\frac{n\pi}{a}\) (\(n=1,2,3,\ldots\)). The set of quantum numbers in this case is \(\sigma=(n,k_{y},k_{z})\). In Fig.1 we give an illustration of the setup described above. This configuration will also be used for the cases of Neumann and mixed boundary conditions later on.
In order to obtain the corresponding Wightman function we make use of Eq. (11) with the summation symbol defined as
\[\sum_{\sigma}\equiv\sum_{n=1}^{\infty}\int_{-\infty}^{\infty}dk_{y}\int_{- \infty}^{\infty}dk_{z}. \tag{14}\]
Consequently, the Wightman function becomes
\[W^{\rm(D)}=\frac{1}{2\pi a}\sum_{n=1}^{\infty}\int_{0}^{\infty}dkkJ_{0}(\Delta\ell k)\sin(k_{n}x)\sin(k_{n}x^{\prime})\frac{e^{-i\omega_{n}\Delta t}}{\omega_{n}}, \tag{15}\]
where \(J_{\mu}(z)\) is the Bessel function [31], \(\Delta\ell=\sqrt{\Delta y^{2}+\Delta z^{2}}\), \(\Delta y=y-y^{\prime}\), \(\Delta z=z-z^{\prime}\) and \(\Delta t=t-t^{\prime}\). Note that in the above expression we have used polar coordinates for the plane defined by the momentum variables \(k_{y}\) and \(k_{z}\), such that \(k^{2}=k_{y}^{2}+k_{z}^{2}\) and \(dk_{y}dk_{z}\to kdkd\theta\), which made possible to perform the angular integral leading to the Bessel function.
Figure 1: A point particle with mass \(m\) and charge \(g\) in the presence of two identical and perfectly reflecting parallel planes \(p\) placed at \(x=0\) and \(x=a\), confining the field modes of a massless quantum scalar field.
The sum in \(n\) present in the Wightman function expression in Eq. (15) can be worked out by making use of the Abel-Plana formula [32]
\[\sum_{n=0}^{\infty}F(n)=\frac{1}{2}F(0)+\int_{0}^{\infty}d\xi F(\xi)+i\int_{0}^{ \infty}d\xi\frac{[F(i\xi)-F(-i\xi)]}{e^{2\pi\xi}-1}. \tag{16}\]
This is a very useful expression and it is often used, for example, in the Casimir energy computations (see Ref. [32] for more details). The function \(F(n)\) in the present case is taken to be
\[F(n)=\sin(k_{n}x)\sin(k_{n}x^{\prime})\frac{e^{-i\omega_{n}\Delta t}}{\omega_{ n}}, \tag{17}\]
where \(F(0)=0\) and, consequently, the contribution from the first term in the r.h.s. of (16) vanishes. Hence, by using the above expression in Eq. (16), after some algebraic manipulations, Eq. (15) can be written as
\[W^{\rm(D)}=W^{\rm(D)}_{1}+W^{\rm(D)}_{2}, \tag{18}\]
where for mathematical clarity and convenience, after the change of variables \(s=\frac{\pi\xi}{a}\), we have defined
\[W^{\rm(D)}_{1}=\frac{1}{2\pi^{2}}\int_{0}^{\infty}ds\sin(sx)\sin(sx^{\prime}) \int_{0}^{\infty}dk\frac{kJ_{0}(\Delta\ell k)e^{-i\Delta t\sqrt{k^{2}+s^{2}}} }{\sqrt{k^{2}+s^{2}}}, \tag{19}\]
and
\[W^{\rm(D)}_{2}=\frac{1}{\pi^{2}}\int_{0}^{\infty}dkkJ_{0}(\Delta\ell k)\int_{ k}^{\infty}ds\frac{\sin(isx)\sin(isx^{\prime})}{e^{2as}-1}\frac{\cosh(\Delta t \sqrt{s^{2}-k^{2}})}{\sqrt{s^{2}-k^{2}}}. \tag{20}\]
Note that the expression in Eq. (19) stems from the integral in the second term in the r.h.s of the Abel-Plana formula (16), while Eq. (20) stems from the third term. In the latter, we have also used the identity
\[\sqrt{(\pm is)^{2}+k^{2}}=\left\{\begin{array}{cc}\pm i\sqrt{s^{2}-k^{2}},& \quad\mbox{for $s>k$}\,,\\ \sqrt{k^{2}-s^{2}},&\quad\mbox{for $s<k$}\,.\end{array}\right. \tag{21}\]
The integrals in Eqs. (19) and (20) can be solved with the help of Refs. [31; 33], providing the expressions
\[W^{\rm(D)}_{1}=\frac{1}{4\pi^{2}}\left\{\frac{1}{[\Delta x^{2}+\alpha^{2}]}- \frac{1}{[\Delta\bar{x}^{2}+\alpha^{2}]}\right\}, \tag{22}\]
and
\[W^{\rm(D)}_{2}=-W^{\rm(D)}_{1}+\frac{1}{8\pi a\alpha}\left\{\frac{\sinh\left( \frac{\pi\alpha}{a}\right)}{\left[\cosh\left(\frac{\pi\alpha}{a}\right)-\cos \left(\frac{\pi\Delta x}{a}\right)\right]}-\frac{\sinh\left(\frac{\pi\alpha} {a}\right)}{\left[\cosh\left(\frac{\pi\alpha}{a}\right)-\cos\left(\frac{\pi \Delta\bar{x}}{a}\right)\right]}\right\}. \tag{23}\]
Consequently, by substituting the two results above in Eq. (18) we obtain
\[W^{\rm(D)}=\frac{1}{8\pi a\alpha}\left\{\frac{\sinh\left(\frac{\pi\alpha}{a} \right)}{\left[\cosh\left(\frac{\pi\alpha}{a}\right)-\cos\left(\frac{\pi\Delta x }{a}\right)\right]}-\frac{\sinh\left(\frac{\pi\alpha}{a}\right)}{\left[\cosh \left(\frac{\pi\alpha}{a}\right)-\cos\left(\frac{\pi\Delta\bar{x}}{a} \right)\right]}\right\}, \tag{24}\]
where \(\alpha^{2}=\Delta y^{2}+\Delta z^{2}-\Delta t^{2}\), with \(\Delta x=x-x^{\prime}\) and \(\Delta\bar{x}=x+x^{\prime}\). For our purposes, we can further simplify Eq. (24) by using the identity [31]
\[\frac{\sinh\left(\frac{\pi\alpha}{a}\right)}{\left[\cosh\left(\frac{\pi\alpha} {a}\right)-\cos\left(\frac{\pi\Delta x}{a}\right)\right]}=\frac{2a\alpha}{\pi }\sum_{n=-\infty}^{\infty}\frac{1}{[(\Delta x-2an)^{2}+\alpha^{2}]}. \tag{25}\]
This is particularly useful since we can separate the Minkowski contribution in a clearer way. This contribution is the term \(n=0\) of the sum, which is divergent in the coincidence limit \(w^{\prime}\to w\). As it is known, this divergent contribution must be subtracted from the calculation of the velocity dispersion in order to obtain a renormalized quantity. Therefore, Eq. (24) takes the form
\[W^{\rm(D)}(w,w^{\prime})=\frac{1}{4\pi^{2}}\sum_{n=-\infty}^{\infty}\left[f_{n }(\Delta r)-f_{n}(\Delta\bar{r})\right], \tag{26}\]
where
\[f_{n}(\Delta r) = \frac{1}{(\Delta x-2an)^{2}+\Delta y^{2}+\Delta z^{2}-\Delta t^{2}},\] \[f_{n}(\Delta\bar{r}) = \frac{1}{(\Delta\bar{x}-2an)^{2}+\Delta y^{2}+\Delta z^{2}-\Delta t ^{2}}. \tag{27}\]
Hence, Eq. (26) corresponds to the positive frequency Wightman function in cartesian coordinates for the massless scalar field whose modes are restricted to obey Dirichlet boundary condition on the two perfectly reflecting parallel planes, placed at \(x=0\) and \(x=a\). Note that the Minkowski contribution comes from the term \(n=0\) in the function \(f_{n}(\Delta r)\). In contrast, the term \(n=0\) is finite in the coincidence limit \(w^{\prime}\to w\) for the function \(f_{n}(\Delta\bar{r})\); it is in fact the one plane contribution of the Wightman function for the Dirichlet boundary condition.
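Before moving on, we note that the identity (25), which converts the closed form (24) into the image sum (26), is easy to verify numerically. The sketch below is our own illustration, with arbitrary test values, comparing both sides of Eq. (25) with a truncated sum:

```python
import numpy as np

a = 1.0
alpha, dx = 0.37, 0.21   # arbitrary test values, with alpha > 0

# left-hand side of the identity (25)
lhs = np.sinh(np.pi * alpha / a) / (np.cosh(np.pi * alpha / a)
                                    - np.cos(np.pi * dx / a))

# right-hand side: truncated image sum, whose tail falls off as 1/n^2
n = np.arange(-200000, 200001)
rhs = (2.0 * a * alpha / np.pi) * np.sum(1.0 / ((dx - 2.0 * a * n)**2 + alpha**2))

print(lhs, rhs, abs(lhs - rhs))   # difference at the truncation level (~1e-6 here)
```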
### Neumann boundary condition
In the case we use Neumann boundary condition, the normal derivative of the field must vanish on the boundary. In this sense, considering two perfectly reflecting parallel planes, placed at \(x=0\) and \(x=a\), we have
\[\left[\partial_{x}\phi(\mathbf{x},t)\right]|_{x=0}=\left[\partial_{x}\phi( \mathbf{x},t)\right]|_{x=a}=0. \tag{28}\]
So, from Eqs. (8), (28) and (9) we obtain the complete set of normalized solutions as follows
\[\phi(\mathbf{x},t)=c_{n}\cos(k_{n}x)e^{-i\omega_{n}t+ik_{y}y+ik_{z}z}, \tag{29}\]
where the eigenfrequencies are given by \(\omega_{n}^{2}=k_{n}^{2}+k_{y}^{2}+k_{z}^{2}\), with \(k_{n}=\frac{n\pi}{a}\) (\(n=0,1,2,\ldots\)) being the discretized momentum in the \(x\)-direction, and \(\sigma=(n,k_{y},k_{z})\) is the set of quantum numbers. The normalization constant \(c_{n}\) is written as
\[c_{n}=\begin{cases}\dfrac{1}{\sqrt{8\pi^{2}a\omega_{n}}},&n=0,\\ \dfrac{1}{\sqrt{4\pi^{2}a\omega_{n}}},&n\geq 1.\end{cases} \tag{30}\]
Similarly to the previous case, we can calculate the Wightman function by making use of Eq. (11). The summation symbol in Eq. (11) is now defined as
\[\sum_{\sigma}\equiv\sum_{n=0}^{\infty}\int_{-\infty}^{\infty}dk_{y}\int_{- \infty}^{\infty}dk_{z}. \tag{31}\]
Consequently, the Wightman function takes the form
\[W^{\rm(N)} = W_{1}^{\rm(N)}+W^{\rm(D)} \tag{32}\] \[= \frac{1}{4\pi^{2}a}\sum_{n=0}^{\infty}c_{n}^{*}\int_{-\infty}^{ \infty}dk_{y}\int_{-\infty}^{\infty}dk_{z}\frac{\cos(k_{n}\Delta\bar{x})e^{-i \omega_{n}\Delta t+ik_{y}\Delta y+ik_{z}\Delta z}}{\omega_{n}}+W^{(D)},\]
where \(c_{0}^{*}=1/2\) and \(c_{n\geq 1}^{*}=1\). In the above expression, we have used trigonometric identities to make it possible to identify two contributions, i.e., the one in the first term in the r.h.s and the one in the second term corresponding to the Wightman function for Dirichlet boundary condition, given by Eq. (26). Since the Dirichlet part has previously been calculated, we only need to focus on the first term in the r.h.s. of Eq. (32). In the end, the Wightman function for the Neumann boundary condition case takes into consideration the sum of both terms in Eq. (32).
Let us then work out the first term in the r.h.s. of Eq. (32). This is possible with the help of the identity
\[\frac{e^{-\omega_{n}\Delta\tau}}{\omega_{n}}=\frac{2}{\sqrt{\pi}}\int_{0}^{\infty}dse^{-\omega_{n}^{2}s^{2}-\frac{\Delta\tau^{2}}{4s^{2}}}, \tag{33}\]
where we have performed the Wick rotation, \(\Delta\tau=i\Delta t\). The use of the above identity in \(W_{1}^{(N)}\), along with the help of Ref. [31], leads to
\[W_{1}^{\rm(N)} = \frac{1}{2\pi a\alpha}\sum_{n=0}^{\infty}c_{n}^{*}\cos\left(\frac{n\pi\Delta\bar{x}}{a}\right)e^{-\left(\frac{\pi\alpha}{a}\right)n} \tag{34}\] \[= \frac{1}{4\pi a\alpha}\frac{\sinh\left(\frac{\pi\alpha}{a}\right)}{\left[\cosh\left(\frac{\pi\alpha}{a}\right)-\cos\left(\frac{\pi\Delta\bar{x}}{a}\right)\right]}.\]
Hence, in view of Eqs. (32), (34) and (24), we conclude that
\[W^{\rm(N)}(w,w^{\prime})=\frac{1}{4\pi^{2}}\sum_{n=-\infty}^{\infty}\left[f_{n}( \Delta r)+f_{n}(\Delta\bar{r})\right], \tag{35}\]
where the functions defined in (27) have been used. This is the positive frequency Wightman function for the massless scalar field obeying Neumann boundary condition on the two perfectly reflecting parallel planes. Note that the difference between the expressions for the Wightman function in the Dirichlet and Neumann boundary condition cases consists only of a change of sign in the second term of Eq. (26). Again, the contribution \(n=0\) of the sum in the first term in the r.h.s. of Eq. (35) is the divergent Minkowski contribution in the coincidence limit \(w^{\prime}\to w\), while in the second term it is the finite one plane contribution. The latter has the opposite sign when compared to the Dirichlet boundary condition case.
### Mixed boundary condition
In the mixed boundary condition case, the general solution of the field in Eq. (8) must obey Dirichlet condition in one plane and Neumann condition in the other. Thus, two configurations are possible on the first and second planes, that is, Dirichlet and Neumann (DN) as well as Neumann and Dirichlet (ND). For the configuration DN, respectively at \(x=0\) and \(x=a\), the condition obeyed by the field is given by
\[\phi({\bf x},t)|_{x=0}=\left[\partial_{x}\phi({\bf x},t)\right]|_{x=a}=0. \tag{36}\]
By applying the condition (36) on Eq. (8), with the use of Eq. (9) afterwards, we obtain the complete set of normalized solutions
\[\phi_{\sigma}({\bf x},t)=\frac{1}{\sqrt{4\pi^{2}a\omega_{n}}}\sin(k_{n}x)e^{- i\omega_{n}t+ik_{y}y+ik_{z}z}, \tag{37}\]
where the eigenfrequencies are now written as \(\omega_{n}^{2}=k_{n}^{2}+k_{y}^{2}+k_{z}^{2}\), with \(k_{n}=\frac{\pi(2n+1)}{2a}\) (\(n=0,1,2,\ldots\)). Again, the momentum in the \(x\)-direction has been discretized as a consequence of Eq. (36) and the set of quantum numbers is specified by \(\sigma=(n,k_{y},k_{z})\).
The Wightman function is computed through Eq. (11), by making use of the normalized solution in Eq. (37) and
\[\sum_{\sigma}\equiv\sum_{n=0}^{\infty}\int_{-\infty}^{\infty}dk_{y}\int_{- \infty}^{\infty}dk_{z}. \tag{38}\]
Thereby, similarly to the Dirichlet condition case, it is possible to write the Wightman function in the form
\[W^{\rm(M)}=\frac{1}{2\pi a}\sum_{n=0}^{\infty}\int_{0}^{\infty}dkkJ_{0}( \Delta\ell k)\sin(k_{n}x)\sin(k_{n}x^{\prime})\frac{e^{-i\omega_{n}\Delta t}} {\omega_{n}}, \tag{39}\]
where we have used again polar coordinates for the plane defined by the momentum variables \(k_{y}\) and \(k_{z}\), following the same steps as in the Dirichlet condition case. By taking into account the structure of the allowed values for \(k_{n}\) it is more convenient to use the Abel-Plana formula written in the form [32]
\[\sum_{n=0}^{\infty}F\left(n+\frac{1}{2}\right)=\int_{0}^{\infty}d\xi F(\xi)-i \int_{0}^{\infty}d\xi\frac{\left[F(i\xi)-F(-i\xi)\right]}{e^{2\pi\xi}+1}, \tag{40}\]
where the function \(F\left(n+\frac{1}{2}\right)\) is defined as in Eq. (17) but now with \(k_{n}=\frac{\pi(2n+1)}{2a}\). The Abel-Plana formula above allows us to write the Wightman function as
\[W^{\rm(M)} = W_{1}^{\rm(D)}+W_{1}^{\rm(M)} \tag{41}\] \[= W_{1}^{\rm(D)}-\frac{1}{\pi^{2}}\int_{0}^{\infty}dkkJ_{0}(\Delta \ell k)\int_{k}^{\infty}ds\frac{\sin(isx)\sin(isx^{\prime})}{e^{2as}+1}\frac{ \cosh(\Delta t\sqrt{s^{2}-k^{2}})}{\sqrt{s^{2}-k^{2}}},\]
where \(W_{1}^{\rm(D)}\) is given by Eq. (22) and we have again made use of the identity (21). Furthermore, the contribution in the second term in the r.h.s. of the above expression is found to be
\[W_{1}^{\rm(M)}=-W_{1}^{\rm(D)}+\frac{1}{4\pi a\alpha}\left\{\frac{\sinh\left(\frac{\pi\alpha}{2a}\right)\cos\left(\frac{\pi\Delta x}{2a}\right)}{\left[\cosh\left(\frac{\pi\alpha}{a}\right)-\cos\left(\frac{\pi\Delta x}{a}\right)\right]}-\frac{\sinh\left(\frac{\pi\alpha}{2a}\right)\cos\left(\frac{\pi\Delta\bar{x}}{2a}\right)}{\left[\cosh\left(\frac{\pi\alpha}{a}\right)-\cos\left(\frac{\pi\Delta\bar{x}}{a}\right)\right]}\right\}, \tag{42}\]
where we have also again used the help of Refs. [31; 33] to solve the integrals in \(k\) and in \(s\).
The complete Wightman function for the mixed boundary condition case is obtained from Eq. (41), by using the expressions in Eqs. (22) and (42). This gives
\[W^{(\rm M)}=\frac{1}{4\pi a\alpha}\left\{\frac{\sinh\left(\frac{\pi\alpha}{2a}\right)\cos\left(\frac{\pi\Delta x}{2a}\right)}{\left[\cosh\left(\frac{\pi\alpha}{a}\right)-\cos\left(\frac{\pi\Delta x}{a}\right)\right]}-\frac{\sinh\left(\frac{\pi\alpha}{2a}\right)\cos\left(\frac{\pi\Delta\bar{x}}{2a}\right)}{\left[\cosh\left(\frac{\pi\alpha}{a}\right)-\cos\left(\frac{\pi\Delta\bar{x}}{a}\right)\right]}\right\}. \tag{43}\]
We can still put the above expression in a more convenient form, analogously to what has been done to the Dirichlet and Neumann condition cases. So, let us make use of the identity [31]
\[\frac{\sinh\left(\frac{\pi a}{2a}\right)\cos\left(\frac{\pi\Delta x}{2a} \right)}{\left[\cosh\left(\frac{\pi a}{a}\right)-\cos\left(\frac{\pi\Delta x} {a}\right)\right]}=\frac{\alpha a}{\pi}\sum_{n=-\infty}^{\infty}\frac{e^{i\pi n }}{[(\Delta x-2an)^{2}+\alpha^{2}]}. \tag{44}\]
Consequently,
\[W^{(\rm M)}(w,w^{\prime})=\frac{1}{4\pi^{2}}\sum_{n=-\infty}^{\infty}(-1)^{n} \left[f_{n}(\Delta r)-f_{n}(\Delta\bar{r})\right], \tag{45}\]
where the functions introduced in (27) have been used. It is important to observe that in Eq. (36) we use a configuration of boundary conditions such that Dirichlet and Neumann conditions are applied to the planes at \(x=0\) and \(x=a\), respectively. If the reverse configuration is used, that is, Neumann and Dirichlet such that \((\partial_{x}\phi)|_{x=0}=\phi|_{x=a}=0\), proceeding in a similar way as above, we obtain the same result shown in Eq. (45), but with the opposite sign in the second term in the r.h.s., which becomes positive.
The results in Eqs. (26), (35) and (45) obtained for the Wightman function in the cases of Dirichlet, Neumann and mixed boundary conditions can be written as a general and compact expression, i.e.,
\[W^{(\rm i)}(w,w^{\prime})=\frac{1}{4\pi^{2}}\sum_{n=-\infty}^{\infty}\left[ \gamma_{n}^{(\rm i)}f_{n}(\Delta r)+\delta_{n}^{(\rm i)}f_{n}(\Delta\bar{r}) \right], \tag{46}\]
where we have conveniently defined
\[\gamma_{n}^{(\rm i)} = \left[\gamma_{n}^{(\rm D)},\gamma_{n}^{(\rm N)},\gamma_{n}^{(\rm DN )},\gamma_{n}^{(\rm ND)}\right]=[+1,+1,(-1)^{n},(-1)^{n}],\] \[\delta_{n}^{(\rm i)} = \left[\delta_{n}^{(\rm D)},\delta_{n}^{(\rm N)},\delta_{n}^{(\rm DN )},\delta_{n}^{(\rm ND)}\right]=[-1,+1,(-1)^{n+1},(-1)^{n}]. \tag{47}\]
We can note that for all three boundary condition cases analyzed so far the contribution \(n=0\) in the sum present in the first term in the r.h.s. of Eq. (46) corresponds to the Minkowski contribution, which, as we have already remarked, is divergent in the coincidence limit \(w^{\prime}\to w\). This term, as usual, must be subtracted from the physical observables. Moreover, the contribution \(n=0\) coming from the second term in the r.h.s. of Eq. (46) provides the known expression for only one plane, placed at position \(x=0\). The way we have organized the obtained Wightman functions into one compact expression in Eq. (46) is very useful in the sense that it allows us to calculate at once the velocity dispersion for all three boundary condition cases, since the derivative and integration operations in Eq. (56), necessary to calculate the velocity dispersion, will only affect the functions \(f_{n}\). Hence, after solving the successive operations acting on \(f_{n}\) to obtain the velocity dispersion, we may just select the appropriate coefficients \(\gamma_{n}^{(i)}\) and \(\delta_{n}^{(i)}\) in order to specify which boundary condition result we are interested in.
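To make this bookkeeping concrete, the sketch below (our own illustration, not part of the original computation) implements the compact form (46) with the coefficients (47) as a truncated image sum; as a sanity check, the Dirichlet function vanishes when one of the points lies on the plane at \(x=0\):

```python
import numpy as np

def coeffs(bc, n):
    # coefficients of Eq. (47); bc selects the boundary condition: 'D', 'N', 'DN' or 'ND'
    gamma_n = {'D': 1.0, 'N': 1.0, 'DN': (-1.0)**n, 'ND': (-1.0)**n}[bc]
    delta_n = {'D': -1.0, 'N': 1.0, 'DN': (-1.0)**(n + 1), 'ND': (-1.0)**n}[bc]
    return gamma_n, delta_n

def W(bc, x, xp, dy, dz, dt, a=1.0, nmax=5000):
    # truncated image sum of Eq. (46); the divergent n = 0 Minkowski piece is kept,
    # so the two points must be kept apart (no coincidence limit here)
    total = 0.0
    for n in range(-nmax, nmax + 1):
        g, d = coeffs(bc, n)
        total += g / ((x - xp - 2*a*n)**2 + dy**2 + dz**2 - dt**2)   # f_n(Delta r)
        total += d / ((x + xp - 2*a*n)**2 + dy**2 + dz**2 - dt**2)   # f_n(Delta r-bar)
    return total / (4.0 * np.pi**2)

# sanity check: the Dirichlet Wightman function vanishes on the plane x = 0
print(W('D', 0.0, 0.35, 0.1, 0.0, 0.0))   # ~0, up to floating point noise
```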
### Quasiperiodic condition
Finally, we now wish to consider a quasiperiodic condition, which generalizes the well known periodic and antiperiodic conditions by introducing a constant phase \(\beta\), that is,
\[\phi(x,y,z,t)=e^{-2\pi\beta i}\phi(x+a,y,z,t). \tag{48}\]
The quasiperiodic parameter \(\beta\) assumes values in the range \(0\leq\beta<1\). Note that, if \(\beta=0\) we restore the periodic condition whereas if \(\beta=1/2\) we recover the antiperiodic one. Hence, the boundary condition above allows us to obtain a solution for the scalar field which includes besides the well known periodic and antiperiodic condition particular cases, also the cases for which \(\beta\neq 0,1/2\). As it is clear from Eq. (48) we consider that the compactification, of length
\(a\), is in the \(x\)-direction. An illustrative representation of this four-dimensional spacetime configuration is shown in Fig.2. The introduction of the quasiperiodic parameter \(\beta\) may be thought of as representing possible interactions in the system, as in the case of the well known Aharonov-Bohm effect [28; 29].
By requiring the solution in Eq. (8) to obey the condition (48), after making use of the normalization condition (9), we find
\[\phi_{\sigma}(\mathbf{x},t)=\frac{1}{\sqrt{8\pi^{2}a\omega_{n}}}e^{-i\omega_{n }t+ik_{n}x+ik_{y}y+ik_{z}z}, \tag{49}\]
where the eigenfrequencies are written as \(\omega_{n}^{2}=k_{n}^{2}+k_{y}^{2}+k_{z}^{2}\), \(k_{n}=\frac{2\pi(n+\beta)}{a}\) (\(n=0,\pm 1,\pm 2,\ldots\)) and the set of quantum numbers is represented by \(\sigma=(n,k_{y},k_{z})\). Similarly to the previous computations, one is able to calculate the Wightman function through Eq. (11), with
\[\sum_{\sigma}\equiv\sum_{n=-\infty}^{\infty}\int_{-\infty}^{\infty}dk_{y}\int_ {-\infty}^{\infty}dk_{z}. \tag{50}\]
Next, we again adopt polar coordinates in the \((k_{y},k_{z})\)-plane, such that \(dk_{y}dk_{z}\to kdkd\theta\) and \(k^{2}=k_{y}^{2}+k_{z}^{2}\). After solving the angular part we found
\[W(w,w^{\prime}) = \frac{1}{4\pi a}\sum_{n=-\infty}^{\infty}e^{ik_{n}\Delta x}\int_{0}^{\infty}dk\frac{kJ_{0}(\Delta\ell k)e^{-i\omega_{n}\Delta t}}{\omega_{n}} \tag{51}\] \[= \frac{1}{4\pi a\alpha}\sum_{n=-\infty}^{\infty}e^{ik_{n}\Delta x-\alpha|k_{n}|},\]
where we have used the help of Ref. [33] to solve the integral in \(k\). By splitting the summation in \(n\) in two parts in order to eliminate the modulus in \(k_{n}\), with the help of Ref. [31], we can further simplify the expression in the second line of Eq. (51) and write it in the convenient form
\[W(w,w^{\prime})=\frac{1}{4\pi^{2}}\sum_{n=-\infty}^{\infty}e^{2\pi\beta ni}g_{ n}(\Delta r), \tag{52}\]
where
\[g_{n}(\Delta r)=\frac{1}{\left[\left(\Delta x-an\right)^{2}+\Delta y^{2}+ \Delta z^{2}-\Delta t^{2}\right]}. \tag{53}\]
This is the positive frequency Wightman function for the massless scalar field subjected to a quasiperiodic condition. The Minkowski divergent contribution can now be easily separated to be subtracted in the renormalization process, leading to finite renormalized velocity dispersions. This term again arises from the \(n=0\) contribution of the above sum. Note that in the periodic case, \(\beta=0\), Eq. (52) corresponds to a spacetime of topology \(S^{1}\times R^{3}\), that is, a compactified direction in a circle and a three-dimensional space of coordinates \((t,y,z)\) with \(t\geq 0\), \(-\infty<y<\infty\)
Figure 2: Illustrative representation of four-dimensional spacetime with a compactified spatial dimension. The spacetime is composed of a compactified spatial dimension \(x\), \(S^{1}\), and the tridimensional space \(R^{3}\) of coordinates \(t,y,z\).
and \(-\infty<z<\infty\). This spacetime configuration is shown in Fig.2. Then, the case \(\beta\neq 0\) can be thought of as a generalization, which we called modified \(S^{1}\times R^{3}\) spacetime, because of the phase introduced by the quasiperiodic parameter \(\beta\).
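As an illustration of how Eq. (52) is used after renormalization, the sketch below (our own addition) evaluates the sum of Eq. (52) with the divergent \(n=0\) term removed and \(w^{\prime}\to w\), which, up to the overall factor \(1/4\pi^{2}\), is the renormalized coincidence limit. It is compared with the standard Fourier series \(\sum_{n\geq 1}\cos(nx)/n^{2}=\pi^{2}/6-\pi x/2+x^{2}/4\), valid for \(0\leq x\leq 2\pi\):

```python
import numpy as np

a, beta = 1.0, 0.25   # sample compactification length and quasiperiodic parameter

# renormalized coincidence limit of Eq. (52): drop the n = 0 (Minkowski) term and
# set w' -> w, leaving 2 * sum_{n >= 1} cos(2 pi beta n) / (a n)^2
n = np.arange(1, 200001)
S_num = 2.0 * np.sum(np.cos(2.0 * np.pi * beta * n) / (a * n)**2)

# closed form from the cosine Fourier series quoted above, with x = 2 pi beta
x = 2.0 * np.pi * beta
S_closed = 2.0 * (np.pi**2 / 6.0 - np.pi * x / 2.0 + x**2 / 4.0) / a**2

print(S_num, S_closed)   # agree to truncation accuracy
```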
With the convenient form for the Wightman functions obtained in this section we can proceed to the next section to calculate the renormalized velocity dispersion in each condition scenario.
## III Velocity Dispersions
### General expression
Let us now analyze the dynamics of the point particle coupled to the massless scalar field. Thus, by varying the action (1) with respect to the position, we obtain the following expression for the particle's velocity [10; 11; 12; 13]:
\[v_{i}(\tau,\mathbf{x})=-\frac{g}{m}\int_{0}^{\tau}dt\frac{\partial\phi( \mathbf{x},t)}{\partial x_{i}}, \tag{54}\]
where \(i=(x,y,z)\) and we have considered a null initial velocity, that is, \(v_{i}(t=0)=0\). This is a classical equation, but if we are interested in studying the QBM induced by quantum vacuum fluctuations we should promote the scalar field to an operator as in Eq. (10), which, consequently, leads to the quantization of Eq. (54) as well. As a result, we note that \(\langle 0|v_{i}|0\rangle\equiv\langle v_{i}\rangle=0\), that is, the velocity mean value of the particle due to the quantum vacuum fluctuations vanishes since, by definition, \(a|0\rangle=0\) and \(\langle 0|a^{\dagger}=0\).
Although the velocity mean value vanishes, the quantum vacuum fluctuations on the velocity can be calculated through the following expression for the renormalized velocity dispersion [14; 16]:
\[\langle(\Delta v_{i})^{2}\rangle_{\rm ren}=\lim_{x\to x^{\prime}}\left[ \langle v_{i}(x)v_{i}(x^{\prime})\rangle-\langle v_{i}(x)v_{i}(x^{\prime}) \rangle_{\rm div}\right], \tag{55}\]
where we have introduced the notation \(\langle 0|(\dots)|0\rangle\equiv\langle(\dots)\rangle\). Note that the Minkowski divergent contribution has been subtracted from the velocity dispersion, something that is standard in the renormalization process.
From Eqs. (54) and (55) the renormalized velocity dispersion is formally given by
\[\langle(\Delta v_{i})^{2}\rangle_{\rm ren}=\frac{g^{2}}{2m^{2}}\int_{0}^{ \tau}dt^{\prime}\int_{0}^{\tau}dt\frac{\partial^{2}G^{(1)}_{\rm ren}(x,x^{ \prime})}{\partial x^{\prime}_{i}\partial x_{i}}, \tag{56}\]
where \(G^{(1)}(x,x^{\prime})=\langle\{\hat{\phi}(x),\hat{\phi}(x^{\prime})\}\rangle\) is the Hadamard function that can be obtained from the positive frequency Wightman function by the relation \(G^{(1)}(x,x^{\prime})=2{\rm Re}\,W(x,x^{\prime})\)[34]. We should point out that the renormalized Hadamard function in Eq. (56) is obtained by subtracting the divergent Minkowski contribution present in the Wightman function already discussed in the previous section. We should also point out that in order to establish the above expression we have symmetrized the fields, a common procedure adopted in quantum field theory [1; 11].
Next, we shall use the Wightman functions obtained in the previous section jointly with Eq. (56) to calculate the renormalized particle velocity dispersion corresponding to each boundary condition.
### Dirichlet, Neumann and mixed boundary conditions
Let us start by taking into consideration the velocity dispersion induced by Dirichlet, Neumann and mixed boundary conditions. To do this, we first consider the direction perpendicular to the planes, i.e., the \(x\)-direction. Thereby, from Eqs. (46) and (56), after carrying out the integral and derivative operations, we find
\[\langle(\Delta v_{x})^{2}\rangle^{({\rm i})}_{\rm ren} = -\frac{g^{2}}{16\pi^{2}m^{2}a^{2}}\left[2\sum_{n=1}^{\infty}\gamma ^{({\rm i})}_{n}R(n,\tau_{a})-\sum_{n=-\infty}^{\infty}\delta^{({\rm i})}_{n }R(x_{a}-n,\tau_{a})\right], \tag{57}\]
where we have conveniently defined the dimensionless parameters \(x_{a}=x/a\), \(\tau_{a}=\tau/a\) and the function
\[R(r,\tau_{a})=P(r,\tau_{a})+Q(r,\tau_{a}), \tag{58}\]
with
\[P(r,\tau_{a}) = \frac{2\tau_{a}^{2}}{r^{2}(4r^{2}-\tau_{a}^{2})},\] \[Q(r,\tau_{a}) = \frac{\tau_{a}}{2r^{3}}\ln\left(\frac{2r+\tau_{a}}{2r-\tau_{a}} \right)^{2}. \tag{59}\]
Note that in order to evaluate the integrals in Eq. (56) we have used the identity [10; 11]
\[\int_{0}^{\tau}dt^{\prime}\int_{0}^{\tau}dtf(|t-t^{\prime}|)=2\int_{0}^{\tau} d\xi(\tau-\xi)f(\xi). \tag{60}\]
The plot for Eq. (57) is shown in Fig.3, for distinct boundary conditions. In particular, the plots for the mixed boundary conditions of types DN and ND coincide when one takes the value \(x_{a}=0.5\) and differ for other values.
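For reference, Eq. (57) is straightforward to evaluate by truncating the sums. The sketch below (our own illustration) returns the dimensionless combination \(\langle(\Delta v_{x})^{2}\rangle_{\rm ren}^{({\rm i})}(ma/g)^{2}\) plotted in Fig.3, for times away from the divergences discussed below:

```python
import numpy as np

def P(r, tau):   # first function in Eq. (59)
    return 2.0 * tau**2 / (r**2 * (4.0 * r**2 - tau**2))

def Q(r, tau):   # second function in Eq. (59)
    return (tau / (2.0 * r**3)) * np.log(((2.0 * r + tau) / (2.0 * r - tau))**2)

def R(r, tau):   # Eq. (58)
    return P(r, tau) + Q(r, tau)

def dvx2(bc, xa, tau, nmax=200):
    # Eq. (57) in units of (g/(m a))^2, with both sums truncated at |n| = nmax
    gam = {'D': lambda n: 1.0, 'N': lambda n: 1.0,
           'DN': lambda n: (-1.0)**n, 'ND': lambda n: (-1.0)**n}[bc]
    dlt = {'D': lambda n: -1.0, 'N': lambda n: 1.0,
           'DN': lambda n: (-1.0)**(n + 1), 'ND': lambda n: (-1.0)**n}[bc]
    s1 = sum(gam(n) * R(float(n), tau) for n in range(1, nmax + 1))
    s2 = sum(dlt(n) * R(xa - n, tau) for n in range(-nmax, nmax + 1))
    return -(2.0 * s1 - s2) / (16.0 * np.pi**2)

# a sample point away from the divergences at tau_a = 2|x_a - n| and tau_a = 2n
print(dvx2('D', xa=0.5, tau=0.3))
```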
Similarly, for the velocity dispersion parallel to the planes we obtain
\[\langle(\Delta v_{y})^{2}\rangle_{\rm ren}^{({\rm i})} = \frac{g^{2}}{32\pi^{2}m^{2}a^{2}}\left\{2\sum_{n=1}^{\infty}\gamma _{n}^{({\rm i})}Q(n,\tau_{a})+\sum_{n=-\infty}^{\infty}\delta_{n}^{({\rm i})}Q (x_{a}-n,\tau_{a})\right\}, \tag{61}\]
where we have used the function \(Q(r,\tau_{a})\) defined in Eq. (59). The same result is obtained for the \(z\) component of the velocity dispersion, also parallel to the planes. The behavior of this expression is depicted in Fig.4. Again, in the case of mixed boundary condition, the plot shows that the curves for DN and ND coincide for \(x_{a}=0.5\) and differ when taking other values.
Figure 3: Graph behavior of the perpendicular velocity dispersion for (a) Dirichlet (D), (b) Neumann (N) and (c) mixed (DN, ND) boundary conditions. Here we have considered the curves in units of \(\langle(\Delta v_{x})^{2}\rangle^{(i)}=\langle(\Delta v_{x})^{2}\rangle_{\rm ren}^{({\rm i})}\left(\frac{ma}{g}\right)^{2}\).
It should be observed that, for Dirichlet boundary condition, the \(n=0\) term of the expressions (57) and (61) corresponds to the one plane contribution for the velocity dispersions which has already been investigated in Ref. [11]. This contribution is obtained from the second term in the r.h.s of Eq. (26) for \(n=0\). The latter, of course, is the Wightman function for Dirichlet boundary condition considering only one plane. Note that the one plane contribution for the velocity dispersions in the case of Neumann boundary condition is the same as the one for Dirichlet boundary condition, but with the opposite sign. Note also that the mixed boundary condition is not applicable for one single plane.
We now want to discuss the divergencies present in the expressions (57) and (61). The first of them are the usual divergencies for points on the planes, at \(x_{a}=0\) and \(x_{a}=1\). They come from the second term in the r.h.s of Eqs. (57) and (61) when \(n=0\) and \(n=1\), respectively. In addition, for \(x_{a}\neq 0,1\), there also exist divergencies associated with the time a light signal takes, in a round trip, to travel from the planes to a point located at \(x_{a}\) [2; 3]. Mathematically this is given by \(\tau_{a}=2|x_{a}-n|\), which tells us that each mode of the field contributes with a divergency. Finally, there are also position independent divergencies of the form \(\tau=2na\), coming from the first term in the r.h.s of Eqs. (57) and (61). These divergencies represent an increasing number (with the field modes) of round trips from one plane to the other taken by a light signal. All these divergencies can be seen in the plots present in Figs.3 and 4 for each boundary condition considered so far. For instance, in the plot for Dirichlet boundary condition shown in Fig.3, the position independent divergency takes place at \(\tau_{a}=2\) (when \(n=1\)), while the position dependent divergencies take place at \(\tau_{a}=1.4\) (when \(n=0\)) and \(\tau_{a}=0.6\) (when \(n=1\)). Note that a larger range for \(\tau_{a}\) would show additional divergencies. Note also that the same analysis can be carried out for other values of \(x_{a}\). In Ref. [10], similar divergences have been studied in a one-dimensional model. The authors have shown that, by assuming that the particle position fluctuates according to a Gaussian distribution, the divergencies are smeared out. It has also been shown in Refs. [8] and [11] that the implementation of switching functions can eliminate these typical divergences.
Figure 4: Graph behavior of the parallel velocity dispersion curves for (a) Dirichlet (D), (b) Neumann (N) and (c) mixed (DN, ND) boundary conditions. Here we have considered the curves in units of \(\langle(\Delta v_{y})^{2}\rangle^{(i)}=\langle(\Delta v_{y})^{2}\rangle^{(i)}_{\rm ren}\left(\frac{ma}{g}\right)^{2}\). Note that the shown peaks represent divergent points.
Let us now turn to the investigation of the behavior of the expressions (57) and (61) when \(\tau_{a}\gg 1\) and \(\tau_{a}\ll 1\), that is, in the late and short time regimes, respectively. We start with the short time regime, which characterizes the behavior of the system in its initial moments of observation. In this sense, by considering the results of Appendix B.1, from Eqs. (57) and (61) we obtain, for the perpendicular direction,
\[\langle(\Delta v_{x})^{2}\rangle^{(\rm J)}_{\rm ren}\simeq-\frac{g^{2}\tau_{a }^{2}}{16\pi^{2}m^{2}a^{2}}\left\{3\zeta(4)-\delta^{(J)}\frac{\pi^{4}}{2}[2+ \cos(2\pi x_{a})]\csc^{4}(\pi x_{a})\right\} \tag{62}\]
and
\[\langle(\Delta v_{x})^{2}\rangle^{(\rm M)}_{\rm ren}\simeq\frac{g^{2}\tau_{a }^{2}}{128\pi^{2}m^{2}a^{2}}\left\{21\zeta(4)-\delta^{(M)}\pi^{4}[11+\cos(2\pi x _{a})]\cot(\pi x_{a})\csc^{3}(\pi x_{a})\right\}, \tag{63}\]
while for the parallel direction we have
\[\langle(\Delta v_{y})^{2}\rangle^{(\rm J)}_{\rm ren}\simeq\frac{g^{2}\tau_{a }^{2}}{32\pi^{2}m^{2}a^{2}}\left\{2\zeta(4)+\delta^{(J)}\frac{\pi^{4}}{3}[2+ \cos(2\pi x_{a})]\csc^{4}(\pi x_{a})\right\} \tag{64}\]
and
\[\langle(\Delta v_{y})^{2}\rangle^{(\rm M)}_{\rm ren}\simeq-\frac{g^{2}\tau_{a }^{2}}{128\pi^{2}m^{2}a^{2}}\left\{7\zeta(4)+\delta^{(M)}\frac{\pi^{4}}{3}[11 +\cos(2\pi x_{a})]\cot(\pi x_{a})\csc^{3}(\pi x_{a})\right\}, \tag{65}\]
where \(\delta^{(\rm J)}=[\delta^{(\rm D)},\delta^{(\rm N)}]=[-1,+1]\) and \(\delta^{(\rm M)}=[\delta^{(\rm DN)},\delta^{(\rm ND)}]=[+1,-1]\). We can see that all the expressions above for the short time regime are of order \(\tau_{a}^{2}\), the leading order of Eqs. (B9), (B11), (B23) and (B25) used to perform the analysis. Note that in the expressions above only the divergences on the planes, at \(x_{a}=(0,1)\), are preserved.
On the other hand, similarly to the classical Brownian motion, the late time regime, that is, \(\tau_{a}\gg 1\), gives us an idea about the behavior of the system close to an equilibrium state reached between the particle and the environment, which in our case is the quantum fluctuating vacuum of the scalar field. Thereby, based on the results of Appendix A.1, we obtain, for the perpendicular direction,
\[\langle(\Delta v_{x})^{2}\rangle^{(\rm J)}_{\rm ren}\simeq-\frac{g^{2}}{8\pi^ {2}m^{2}a^{2}}\left[\frac{\pi^{2}}{3}+\frac{4}{3\tau_{a}^{2}}-\delta^{(J)}\pi ^{2}\csc^{2}(\pi x_{a})\right] \tag{66}\]
and
\[\langle(\Delta v_{x})^{2}\rangle^{(\rm M)}_{\rm ren}\simeq\frac{g^{2}}{8\pi^ {2}m^{2}a^{2}}\left[\frac{\pi^{2}}{6}-\frac{4}{3\tau_{a}^{2}}-\delta^{(M)}\pi ^{2}\cot(\pi x_{a})\csc(\pi x_{a})\right], \tag{67}\]
while for the parallel direction we have
\[\langle(\Delta v_{y})^{2}\rangle^{(\rm J)}_{\rm ren}\simeq\frac{g^{2}}{8\pi^ {2}m^{2}a^{2}}\left[\frac{\pi^{2}}{3}-\frac{4}{3\tau_{a}^{2}}+\delta^{(J)}\pi ^{2}\csc^{2}(\pi x_{a})\right] \tag{68}\]
and
\[\langle(\Delta v_{y})^{2}\rangle^{(\rm M)}_{\rm ren}\simeq-\frac{g^{2}}{8\pi ^{2}m^{2}a^{2}}\left[\frac{\pi^{2}}{6}+\frac{4}{3\tau_{a}^{2}}+\delta^{(M)}\pi ^{2}\cot(\pi x_{a})\csc(\pi x_{a})\right], \tag{69}\]
where the coefficients \(\delta^{(\rm J)}\) and \(\delta^{(\rm M)}\) have already been defined below Eq. (65). From Eqs. (66), (67), (68) and (69), we see that all the expressions have a term proportional to \(1/\tau_{a}^{2}\), which is negligible for large time values, so that the remaining terms are the dominant ones. In particular, the position dependent term depends on the boundary condition used and also preserves the divergences on the planes located at \(x_{a}=(0,1)\). The fact that the velocity dispersions in Eqs. (57) and (61) become time independent at much later times is analogous to what happens in the classical Brownian motion of a point particle immersed in a fluid at finite temperature, which also becomes time independent in this regime.
We can show that the late time expressions obtained above for the Dirichlet boundary condition are, when \(x_{a}\ll 1\), consistent with the result presented in Ref. [11], where the authors considered a single plane. Thereby, expanding Eqs. (66) and (68) for \(x_{a}\ll 1\) we find
\[\langle(\Delta v_{x})^{2}\rangle^{(\rm D)}_{\rm ren}=\langle(\Delta v_{y})^{2 }\rangle^{(\rm D)}_{\rm ren}\simeq-\frac{g^{2}}{8\pi^{2}m^{2}x^{2}}, \tag{70}\]
which is exactly Eq. (4.3) of Ref. [11]. The limit \(x_{a}\ll 1\) is equivalent to saying that the plane placed at \(x=a\) is moved far away from the plane at \(x=0\), ideally to infinity (see Fig.5). Consequently, the infinitely distant plane has no effect on the particle. Thus, the resulting scenario is that of a point particle in the presence of a single plane, placed at \(x=0\), which is one of the configurations studied in Ref. [11] for the late time regime.
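This limit can also be probed numerically. The sketch below (our own check; the late time value \(\tau_{a}=10^{3}\) and the sample points are arbitrary choices) evaluates Eq. (66) for the Dirichlet case, in units of \((g/ma)^{2}\), against the single-plane result of Ref. [11]:

```python
import numpy as np

tau = 1e3                            # late time regime
for x_a in (0.1, 0.03, 0.01):
    # Eq. (66) with delta^(D) = -1, in units of (g/(m a))^2:
    bracket = np.pi**2/3 + 4/(3*tau**2) + np.pi**2/np.sin(np.pi*x_a)**2
    vx2 = -bracket / (8*np.pi**2)
    single_plane = -1.0 / (8*np.pi**2 * x_a**2)   # Eq. (70), with x = x_a*a
    print(x_a, vx2 / single_plane)                # ratio -> 1 as x_a -> 0
```

The same check works for the parallel component, Eq. (68), since both reduce to Eq. (70) in this limit.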
In the case of mixed boundary conditions we can observe a similar situation in the limit \(x_{a}\ll 1\), that is, \(a\to\infty\). In this case, we can show that the expressions for the velocity dispersion, Eqs. (63), (65), (67) and (69), correspond to either the Dirichlet or the Neumann boundary condition only, depending on whether we consider the DN or the ND configuration on the planes. For instance, let us consider the DN configuration, where \(\delta^{(M)}\equiv\delta^{(DN)}=+1\). This is the configuration in which the Dirichlet and Neumann boundary conditions are applied to the planes placed at \(x=0\) and \(x=a\), respectively. In this sense, taking the limit \(a\to\infty\) in the aforementioned expressions, we obtain the result for the Dirichlet boundary condition in the corresponding limit, namely, Eqs. (62), (64), (66) and (68), with \(\delta^{(J)}\equiv\delta^{(D)}=-1\). The explanation is that once we move the plane placed at \(x=a\) to infinity, only the plane with the Dirichlet boundary condition, at \(x=0\), produces some effect on the particle. The argument for the ND configuration is similar.
Finally, to end this subsection, we would like to make a brief comment about the possible negative values that the velocity dispersions can take. This can be seen from Eq. (55), which consists of a difference between the dispersion in the presence of two parallel planes and the (divergent) dispersion without planes. Hence, a negative value indicates that the presence of the planes produces a reduction in the velocity dispersion, as argued in Ref. [2].
### Quasiperiodic condition
In order to obtain velocity dispersions corresponding to the quasiperiodic condition in Eq. (48) we make use of Eqs. (52) and (56). So, for the velocity dispersion in the \(x\)-direction, that is, the compactified direction, we find
\[\langle(\Delta v_{x})^{2}\rangle_{\rm ren}^{\beta}=-\frac{g^{2}}{\pi^{2}m^{2} a^{2}}\sum_{n=1}^{\infty}U(n,\beta,\tau_{a}), \tag{71}\]
while for the \(y\)-direction (or \(z\)), the uncompactified direction, we have
\[\langle(\Delta v_{y})^{2}\rangle_{\rm ren}^{\beta}=\frac{g^{2}}{2\pi^{2}m^{2} a^{2}}\sum_{n=1}^{\infty}T(n,\beta,\tau_{a}), \tag{72}\]
where we have defined the function
\[U(n,\beta,\tau_{a})=S(n,\beta,\tau_{a})+T(n,\beta,\tau_{a}), \tag{73}\]
with
\[S(n,\beta,\tau_{a}) = \frac{\tau_{a}^{2}\cos(2\pi\beta n)}{n^{2}(n^{2}-\tau_{a}^{2})} \tag{74}\]
and
\[T(n,\beta,\tau_{a}) = \frac{\tau_{a}\cos(2\pi\beta n)}{2n^{3}}\ln\left(\frac{n+\tau_{a} }{n-\tau_{a}}\right)^{2}. \tag{75}\]
Figure 5: If we move the plane (\(p\)) at \(x=a\) to infinity, everything works as if the plane at \(x=a\) did not exist, that is, the resulting configuration is equivalent to a point particle in the presence of a single plane.
Note that, to perform the integrals that have led to the above expressions, we have again used the identity (60). Similar to the previous cases, the compactification parameter \(a\) provides a natural scale for the system, so that we are able to define the dimensionless time parameter \(\tau_{a}\). It is important to call attention to the fact that the quasiperiodic condition contains the particular periodic and antiperiodic cases, given by \(\beta=0\) and \(\beta=1/2\), respectively. From Eqs. (71) and (72) we observe that the expressions depend exclusively on the quasiperiodic parameter \(\beta\), the dimensionless time \(\tau_{a}\) and the length \(a\). The graph behavior of these expressions is shown in Fig.6. A similar result has been obtained in Ref. [19] for a point particle in the presence of a quantized electromagnetic field in a spacetime with spatial section of nontrivial topology, known as \(E_{16}\) or slab topology, which is essentially defined by Eqs. (52) and (53) for the periodic case (\(\beta=0\)).
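Since \(S\) and \(T\) are elementary functions, the dispersions (71) and (72) can also be evaluated directly by truncating the mode sums. The script below is a minimal sketch of our own (the truncation order and parameter values are arbitrary choices), valid away from the divergent integer times:

```python
import numpy as np

def S(n, beta, tau):   # Eq. (74)
    return tau**2 * np.cos(2*np.pi*beta*n) / (n**2 * (n**2 - tau**2))

def T(n, beta, tau):   # Eq. (75)
    return (tau * np.cos(2*np.pi*beta*n) / (2*n**3)
            * np.log(((n + tau) / (n - tau))**2))

def dispersions(beta, tau, N=200000):
    """Eqs. (71)-(72) in units of (g/(m a))^2, truncated at mode N."""
    n = np.arange(1, N + 1, dtype=float)
    vx2 = -np.sum(S(n, beta, tau) + T(n, beta, tau)) / np.pi**2   # Eq. (71)
    vy2 = np.sum(T(n, beta, tau)) / (2*np.pi**2)                  # Eq. (72)
    return vx2, vy2

for beta in (0.0, 0.25, 0.5):        # periodic ... antiperiodic
    print(beta, dispersions(beta, tau=0.5))
```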
In contrast to the Dirichlet, Neumann and mixed boundary conditions, the expressions (71) and (72) do not have any dependence on the spatial coordinate \(x\). The reason is that the quasiperiodic condition does not restrict the modes to a particular region, as happens in the parallel planes case. A spacetime in which one of the directions has a finite length \(a\), as is our case, makes it possible for the modes to extend throughout the whole \(x\)-direction. On the other hand, in the Dirichlet, Neumann and mixed boundary condition cases the modes are confined to a region of length \(a\), and the \(x\)-component of the field does not exist outside this finite region. As we have already mentioned, the parallel planes break the homogeneity of the spatial section of the spacetime.
Our expressions reveal that for some values of the time parameter \(\tau_{a}\) we obtain divergent results for the velocity dispersion. Although the quasiperiodic system is different, the interpretation of these singularities has similarities with that of the parallel planes case. Specifically, these divergences occur for integer values of time, that is, \(\tau_{a}=n\), as we can see from Eqs. (71) and (72). The latter are plotted in Fig.6 where, for the time range considered, there exist divergences at \(\tau_{a}=1\) and \(\tau_{a}=2\). These divergences are similar to the ones arising from the time a light signal takes to travel from a point \(x_{a}\) to the planes in a round trip. However, in the quasiperiodic condition case, it is more
Figure 6: Graph of the velocity dispersion in the (a)–(b) compactified and (c) uncompactified directions for the quasiperiodic condition. Here we have considered the curves in units of \(\langle(\Delta v_{x,y})^{2}\rangle^{\beta}=\langle(\Delta v_{x,y})^{2}\rangle_{\text{ren}}^{\beta}\left(\frac{ma}{g}\right)^{2}\). Note that the shown peaks represent divergent points.
intuitive to imagine circumferences of length \(a\), so that \(n\) values represent complete turns around the cyclic path. Then, we may understand the divergences in this case as due to the time taken by a light signal to travel an increasing number of cyclic paths of length \(a\). As reported in Ref. [20], where the authors analyzed the periodic case, the origin of such integer divergences is a consequence of the spacetime topology, namely, \(S^{1}\times R^{3}\).
Similarly to what has been done in the previous subsection, let us obtain the expressions for the short and late time regimes, that is, the velocity dispersions for the asymptotic time limits \(\tau_{a}\ll 1\) and \(\tau_{a}\gg 1\), respectively. From the results of Appendix B.2, for the short time regime, the velocity dispersion in the \(x\)-direction is written as
\[\langle(\Delta v_{x})^{2}\rangle_{\rm ren}^{\beta}\simeq\frac{g^{2}\tau_{a}^ {2}\pi^{2}}{m^{2}a^{2}}B_{4}(\beta), \tag{76}\]
where \(B_{n}(z)\) is the Bernoulli polynomial of order \(n\) in the \(z\) variable [31]. The periodic (p) and antiperiodic (ap) cases are obtained as special cases of Eq. (76) for \(\beta=0\) and \(\beta=1/2\), respectively. These are given by
\[\langle(\Delta v_{x})^{2}\rangle_{\rm ren}^{\rm(p)}\simeq-\frac{g^{2}\tau_{a} ^{2}\pi^{2}}{30m^{2}a^{2}} \tag{77}\]
and
\[\langle(\Delta v_{x})^{2}\rangle_{\rm ren}^{\rm(ap)}\simeq\frac{7g^{2}\tau_{a }^{2}\pi^{2}}{240m^{2}a^{2}}. \tag{78}\]
Likewise, for the velocity dispersion in the \(y\) (or \(z\)) direction, we find
\[\langle(\Delta v_{y})^{2}\rangle_{\rm ren}^{\beta}\simeq-\frac{g^{2}\tau_{a}^{2}\pi^{2}}{3m^{2}a^{2}}B_{4}(\beta), \tag{79}\]
with
\[\langle(\Delta v_{y})^{2}\rangle_{\rm ren}^{\rm(p)}\simeq\frac{g^{2}\tau_{a} ^{2}\pi^{2}}{90m^{2}a^{2}} \tag{80}\]
and
\[\langle(\Delta v_{y})^{2}\rangle_{\rm ren}^{\rm(ap)}\simeq-\frac{7g^{2}\tau_{ a}^{2}\pi^{2}}{720m^{2}a^{2}}, \tag{81}\]
for the periodic and antiperiodic cases, respectively. It is interesting to note that, similar to the cases studied in Section III.2, our expressions here also show a second-order time dependence. From Eqs. (76) and (79), we observe that the dispersion for the uncompactified direction is \(-1/3\) of the result for the compactified one. Furthermore, the sign of the velocity dispersions in the short time regime is defined by the Bernoulli polynomials. In fact, as we can see in Fig.7, \(B_{4}(\beta)\) assumes positive values in the range \(r_{-}\leq\beta\leq r_{+}\), but it is negative for any other values of \(\beta\), where \(r_{\pm}=[1\pm(1-4n)^{1/2}]/2\), with \(n=1/\sqrt{30}\), are the physical roots taking into consideration the condition \(0\leq\beta<1\). For the periodic case (\(\beta=0\)), the compactified and uncompactified velocity dispersions achieve their minimum and maximum values, respectively, Eqs. (77) and (80). On the other hand, in the antiperiodic case, Eqs. (78) and (81), the opposite occurs.
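These statements are easy to probe numerically, since \(B_{4}(\beta)=\beta^{4}-2\beta^{3}+\beta^{2}-1/30\) in closed form. The sketch below (our own check, not from the original work; \(\tau_{a}\) and the truncation are arbitrary) verifies the special values behind Eqs. (77)–(81), the quoted roots \(r_{\pm}\), and the short time limit (76) against a truncated sum of Eq. (71):

```python
import numpy as np

B4 = lambda b: b**4 - 2*b**3 + b**2 - 1.0/30   # Bernoulli polynomial B_4

print(B4(0.0), -1/30)     # periodic case, Eq. (77)
print(B4(0.5), 7/240)     # antiperiodic case, Eq. (78)

s = np.sqrt(1 - 4/np.sqrt(30))
for r in ((1 - s)/2, (1 + s)/2):   # physical roots r_-, r_+ quoted above
    print(r, B4(r))                # B4(r) -> 0

def vx2(beta, tau, N=100000):      # truncated sum of Eq. (71)
    n = np.arange(1, N + 1, dtype=float)
    S = tau**2 * np.cos(2*np.pi*beta*n) / (n**2 * (n**2 - tau**2))
    T = (tau * np.cos(2*np.pi*beta*n) / (2*n**3)
         * np.log(((n + tau) / (n - tau))**2))
    return -np.sum(S + T) / np.pi**2

tau = 1e-2
for beta in (0.0, 0.25, 0.5):
    print(beta, vx2(beta, tau) / (np.pi**2 * tau**2 * B4(beta)))  # -> 1
```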
We turn now to the analysis of the late time regime, that is, \(\tau_{a}\gg 1\). Hence, by making use of the results in Appendix A.2, for the \(x\)-direction, we have
\[\langle(\Delta v_{x})^{2}\rangle_{\rm ren}^{\beta}\simeq-\frac{g^{2}}{\pi^{2} m^{2}a^{2}}\left[\pi^{2}B_{2}(\beta)+\frac{1}{6\tau_{a}^{2}}\right], \tag{82}\]
with
\[\langle(\Delta v_{x})^{2}\rangle_{\rm ren}^{\rm(p)}\simeq-\frac{g^{2}}{6\pi^ {2}m^{2}a^{2}}\left[\pi^{2}+\frac{1}{\tau_{a}^{2}}\right] \tag{83}\]
and
\[\langle(\Delta v_{x})^{2}\rangle_{\rm ren}^{\rm(ap)}\simeq-\frac{g^{2}}{6\pi^ {2}m^{2}a^{2}}\left[-\frac{\pi^{2}}{2}+\frac{1}{\tau_{a}^{2}}\right]. \tag{84}\]
Additionally, for the \(y\) (or \(z\)) direction, the velocity dispersion in the late time regime is given by
\[\langle(\Delta v_{y})^{2}\rangle_{\rm ren}^{\beta}\simeq\frac{g^{2}}{\pi^{2}m^{2 }a^{2}}\left[\pi^{2}B_{2}(\beta)-\frac{1}{6\tau_{a}^{2}}\right], \tag{85}\]
with
\[\langle(\Delta v_{y})^{2}\rangle_{\rm ren}^{\rm(p)}\simeq\frac{g^{2}}{6\pi^{2 }m^{2}a^{2}}\left[\pi^{2}-\frac{1}{\tau_{a}^{2}}\right] \tag{86}\]
and
\[\langle(\Delta v_{y})^{2}\rangle_{\rm ren}^{\rm(ap)}\simeq-\frac{g^{2}}{6\pi^ {2}m^{2}a^{2}}\left[\frac{\pi^{2}}{2}+\frac{1}{\tau_{a}^{2}}\right], \tag{87}\]
for the periodic and antiperiodic cases, respectively. The results above for the late time regime show that, since the term proportional to \(1/\tau_{a}^{2}\) in Eqs. (82) and (85) becomes negligible, the dispersions tend to a constant value. As already mentioned, this characteristic is similar to the classical Brownian motion and indicates a possible equilibrium between the particle and the surrounding medium, which in this case corresponds to the quantum vacuum fluctuations of the massless scalar field subjected to the condition in Eq. (48).
Note that the contribution arising from the second term on the r.h.s of the expressions above for the late time regime is independent of the parameter \(a\) and is identical for both the compactified and uncompactified directions. Possibly, this suggests some kind of physical process which is independent of the compactification. In the compactified case, Eq. (82), this small contribution tends to strengthen the dispersions, whereas in the uncompactified case, Eq. (85), it tends to weaken them.
In the late time regime the quasiperiodic velocity dispersions can change sign in both the compactified and uncompactified cases. This is due to the behavior of the \(B_{2}(\beta)\) function shown in Fig.7. Moreover, only when \(\beta=(3\pm\sqrt{3})/6\), the zeros of \(B_{2}(\beta)\), does the small time-dependent contribution define the sign of the velocity dispersions. Finally, we emphasize that the negative results for the velocity dispersions can be understood according to the interpretation given at the end of the previous subsection.
## IV Conclusions
In this paper we have studied the QBM of a point particle induced by the quantum vacuum fluctuations of a massless scalar field, which are modified by both the presence of two reflecting parallel planes and a quasiperiodic condition that causes the \(x\)-direction to be compactified. We have considered three distinct boundary conditions for the field modes to obey on the planes placed perpendicular to the \(x\)-direction at \(x=0\) and \(x=a\). The boundary conditions are Dirichlet, Neumann, and mixed, which lead to the discretization of the momentum in the \(x\)-direction. Similarly, the quasiperiodic condition also leads to the discretization of the momentum in the \(x\)-direction, in the form \(k_{x}=k_{n}=\frac{2\pi}{a}(n+\beta)\), with \(0\leq\beta<1\). In all cases, the parameter \(a\), related to confinement, provides a natural scale
Figure 7: Bernoulli polynomials \(B_{2}(\beta)\), solid line, and \(B_{4}(\beta)\), dashed line, as functions of the quasiperiodic parameter \(\beta\).
for the system, which enables us to analyze the resulting expressions in the asymptotic regimes of interest, namely, the short time (\(\tau_{a}\ll 1\)) and late time (\(\tau_{a}\gg 1\)) regimes. In the cases of Dirichlet, Neumann and mixed boundary conditions this parameter is the distance between the planes, while for the quasiperiodic condition it is the quasiperiodicity length of the space or, in other words, the length of the compactification in the \(x\) direction.
In the short time regime, for all conditions, we have seen that the most significant contributions to the velocity dispersions are of second order in time. For the late time regime, on the other hand, we have found that the velocity dispersions tend to a constant value, which depends on the conditions imposed on the field. This constant value of the velocity dispersion in the late time regime is a characteristic similar to the classical Brownian motion and suggests an equilibrium between the particle and the quantum vacuum fluctuations of the massless scalar field.
Divergent results for the velocity dispersions have also been identified. They are related to the usual divergences on the planes at \(x=0\) and \(x=a\), to the time a light signal takes to travel in a round trip from one of the planes to a point \(x\), and to the time a light signal takes to travel throughout the compactified direction. We have also identified position independent divergences in the parallel planes case, which are related to an increasing number of round trips that a light signal takes to go from one plane to the other. Furthermore, negative velocity dispersions have also been shown to be possible and, based on discussions found in the literature, this can be understood as a reduction in the particle velocity dispersion due to the presence of the planes or the compactification mechanism.
We would like to stress that the two-plane configuration has been considered in order to complement the investigations for the electromagnetic and scalar fields found in the literature. Hence, the most remarkable contribution of this work is the analysis of the QBM induced by the massless scalar field with distinct boundary conditions on the two parallel planes, which until now had not been done, apart from Dirichlet conditions adopted for a single plane. In fact, all works so far had focused on the Dirichlet boundary condition, and this condition had only been considered on two parallel planes in a system involving the electromagnetic field [3].
The compact form of the positive frequency Wightman function in Cartesian coordinates presented in Eq. (46) is very interesting because its structure makes it possible to write the results for the three boundary conditions, namely, Dirichlet, Neumann and mixed, as a single expression. This structure is very useful since it allows us to extract the divergent Minkowski contribution and, consequently, to obtain other finite physical observables besides the velocity dispersion considered here, for instance, the mean value of the field squared, \(\langle\phi^{2}\rangle=\lim_{x^{\prime}\to x}\langle\phi(x)\phi(x^{\prime})\rangle\), and the mean value of the force squared that acts on the particle, \(\langle F^{2}\rangle=\lim_{x^{\prime}\to x}\langle F(x)F(x^{\prime})\rangle\). In fact, as can be easily checked, all these quantities depend on the Wightman function.
###### Acknowledgements.
E.J.B.F would like to thank the Brazilian agency Coordination for the Improvement of Higher Education Personnel (CAPES) for financial support. E.M.B.G thanks financial support from the Brazilian agency National Council for Scientific and Technological Development (CNPq). H.F.S.M is partially supported by CNPq under grant N\({}^{\rm o}\) 311031/2020-0.
## Appendix A Late time regime
### Dirichlet, Neumann and mixed boundary conditions
In this first part of the appendix we investigate the expression \(R(r,\tau_{a})\), Eq. (58), in the late time regime, that is, \(\tau_{a}\gg 1\). For the sake of clarity, and in view of the fact that the parallel dispersion, Eq. (61), is written only in terms of the \(Q(r,\tau_{a})\) function, Eq. (59), we shall develop each contribution to \(R(r,\tau_{a})\) separately. Before proceeding, it is useful and practical to define the following quantities:
\[R^{(i)}:=P^{(i)}+Q^{(i)}, \tag{101}\]
and
\[R^{(i)}_{x_{a}}:=P^{(i)}_{x_{a}}+Q^{(i)}_{x_{a}}, \tag{102}\]
with
\[P^{(i)}:=\sum_{n=1}^{\infty}\gamma^{(i)}_{n}P(n,\tau_{a}), \tag{103}\]
\[Q^{(\rm i)}:=\sum_{n=1}^{\infty}\gamma_{n}^{(\rm i)}Q(n,\tau_{a}), \tag{100}\]
\[P_{x_{a}}^{(\rm i)}:=\sum_{n=-\infty}^{\infty}\delta_{n}^{(\rm i)}P(x_{a}-n,\tau _{a}), \tag{101}\]
\[Q_{x_{a}}^{(\rm i)}:=\sum_{n=-\infty}^{\infty}\delta_{n}^{(\rm i)}Q(x_{a}-n, \tau_{a}), \tag{102}\]
where the index 'i' indicates the boundary conditions, namely, i=(D, N, DN, ND). The functions defined above separate the position independent contributions from the position dependent ones. Also, the functions \(P(r,\tau_{a})\) and \(Q(r,\tau_{a})\) are defined in Eq. (59), with the coefficients \(\gamma_{n}^{(\rm i)}\) and \(\delta_{n}^{(\rm i)}\) defined in Eq. (47). In order to keep the presentation organized and make the method used in our calculations clearer, we dedicate one subsection to the quantity \(R^{(\rm i)}\), which is position independent, and another to \(R_{x_{a}}^{(\rm i)}\), which depends on the particle position. In addition, to avoid overloading the descriptive text with excessive repetition of references, we emphasize that all relations used in the manipulations of the expressions can be found in Refs. [31; 35].
#### a.1.1 Position independent term
For the late time regime, i.e., \(\tau_{a}\gg 1\), Eq. (100) can be appropriately written in the form
\[P^{(\rm i)}=-2\sum_{k=0}^{\infty}\left(\frac{2}{\tau_{a}}\right)^{2k}\sum_{n= 1}^{\infty}\frac{\gamma_{n}^{(\rm i)}}{n^{2-2k}}, \tag{103}\]
where we have used a series expansion for the denominator of \(P^{(\rm i)}\).
In the case of Dirichlet and Neumann boundary conditions, \(\gamma_{n}^{(\rm D)}=\gamma_{n}^{(\rm N)}=1\). Then, since these coefficients are independent of the summation index, we obtain
\[P^{(\rm J)}=-\frac{\pi^{2}}{3}+\frac{4}{\tau_{a}^{2}}, \tag{104}\]
for the dominant terms, with J=(D,N). To establish the above result we have also used the relation
\[\sum_{k=1}^{\infty}\frac{1}{k^{p}}=\zeta(p),\qquad\qquad{\rm Re}(p)>1, \tag{105}\]
and the fact that \(\zeta(-2m)=0\), where \(m\) is a natural number.
For the mixed boundary conditions, observing that \(\gamma_{n}^{(\rm DN)}=\gamma_{n}^{(\rm ND)}=(-1)^{n}\), from Eq. (103) we obtain
\[P^{(\rm M)}=\frac{\pi^{2}}{6}+\frac{4}{\tau_{a}^{2}}, \tag{106}\]
for the dominant terms, with M=(DN,ND). To achieve the previous result we have used the relation
\[\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k^{p}}=\left(1-2^{1-p}\right)\zeta(p), \qquad\qquad{\rm Re}(p)>0, \tag{107}\]
in addition to the fact that \(\zeta(-2m)=0\), where \(m\) is a natural number.
A similar procedure can be applied to the \(Q^{(\rm i)}\) function. First, we rewrite Eq. (100) in the form
\[Q^{(\rm i)}=\tau_{a}^{2}\sum_{k=1}^{\infty}\frac{1}{(2k-1)}\left(\frac{2}{\tau _{a}}\right)^{2k}\sum_{n=1}^{\infty}\frac{\gamma_{n}^{(\rm i)}}{n^{4-2k}}, \tag{108}\]
where we have used the series expansion
\[\ln\left(\frac{1+x}{1-x}\right)=2\sum_{k=1}^{\infty}\frac{x^{2k-1}}{(2k-1)}, \qquad\qquad x^{2}<1. \tag{113}\]
Now, by using the same relations and properties introduced previously in the computations of \(P^{\rm(i)}\), namely Eqs. (109) and (111), we can easily obtain for the Dirichlet and Neumann boundary conditions
\[Q^{\rm(J)}=\frac{2\pi^{2}}{3}-\frac{8}{3\tau_{a}^{2}} \tag{114}\]
and
\[Q^{\rm(M)}=-\frac{\pi^{2}}{3}-\frac{8}{3\tau_{a}^{2}} \tag{115}\]
for the mixed boundary conditions.
From Eq. (108) and the results obtained above for \(P^{(\rm i)}\) and \(Q^{(\rm i)}\), we can establish that
\[R^{\rm(J)}=\frac{\pi^{2}}{3}+\frac{4}{3\tau_{a}^{2}} \tag{116}\]
and
\[R^{\rm(M)}=-\frac{\pi^{2}}{6}+\frac{4}{3\tau_{a}^{2}}, \tag{117}\]
where J=(D,N) for Dirichlet and Neumann, and M=(DN, ND) holds for mixed conditions, with the configurations DN and ND, respectively.
#### a.2.2 Position dependent term
In the late time regime, that is, \(\tau_{a}\gg 1\), the term \(P_{x_{a}}^{\rm(i)}\), Eq. (115), can be written in the form
\[P_{x_{a}}^{\rm(i)}=-2\sum_{k=0}^{\infty}\left(\frac{2}{\tau_{a}} \right)^{2k}\sum_{n=-\infty}^{\infty}\frac{\delta_{n}^{\rm(i)}}{(x_{a}-n)^{2- 2k}}, \tag{118}\]
where we have considered a series expansion for the denominator of \(P_{x_{a}}^{\rm(i)}\), and \(\delta_{n}^{\rm(D)}=-1\) and \(\delta_{n}^{\rm(N)}=1\) for the Dirichlet and Neumann conditions, respectively. These coefficients differ only by a sign and are independent of the summation index. So from Eq. (118) we can write
\[P_{x_{a}}^{\rm(J)} = -2\delta^{\rm(J)}\sum_{k=0}^{\infty}\left(\frac{2}{\tau_{a}} \right)^{2k}\sum_{n=-\infty}^{\infty}\frac{1}{(x_{a}-n)^{2-2k}} \tag{119}\] \[= -2\delta^{\rm(J)}\sum_{k=0}^{\infty}\left(\frac{2}{\tau_{a}} \right)^{2k}\left[\frac{-1}{x_{a}^{2-2k}}+\sum_{j=\pm 1}\sum_{n=0}^{\infty} \frac{1}{(n+jx_{a})^{2-2k}}\right],\]
with \(\delta^{\rm(J)}=[\delta^{\rm(D)},\delta^{\rm(N)}]=[-1,+1]\). To achieve the second equality we have divided the initial summation into two parts and re-labeled the summation index of the negative interval. Next, we have written the two parts, with denominators of opposite signs, in a compact form by means of the \(j\) summation.
By using the relation
\[\sum_{k=0}^{\infty}\frac{1}{(k+a)^{s}}=\zeta(s,a),\qquad\qquad{ \rm Re}(s)>1, \tag{120}\]
in Eq. (119) we obtain that
\[P_{x_{a}}^{\rm(J)} = -2\delta^{\rm(J)}\left[-\frac{x_{a}^{-2}}{[1-(2x_{a}/\tau_{a})^{2} ]}+\left(\frac{2}{\tau_{a}}\right)^{2}\sum_{j=\pm 1}\sum_{m=-1}^{\infty} \left(\frac{2}{\tau_{a}}\right)^{2m}\zeta(-2m,jx_{a})\right], \tag{121}\]
where we have performed the summation of the first term and re-labeled the sum index on the second term.
Finally, by making use of the Bernoulli polynomials
\[\zeta(-n,q)=-\frac{B_{n+1}(q)}{n+1}, \tag{101}\]
where \(n\) is a nonnegative integer and
\[(-1)^{n}B_{n}(-x)=B_{n}(x)+nx^{n-1}, \tag{102}\]
after some algebraic work, we find for Eq. (100)
\[P_{x_{a}}^{(\rm J)}=-\delta^{(\rm J)}2\pi^{2}\csc^{2}(\pi x_{a}). \tag{103}\]
Now for mixed boundary conditions \(\delta_{n}^{(\rm DN)}=(-1)^{n+1}\) and \(\delta_{n}^{(\rm ND)}=(-1)^{n}\). From Eq. (100), we have
\[P_{x_{a}}^{(\rm M)} = 2\delta^{(\rm M)}\sum_{k=0}^{\infty}\left(\frac{2}{\tau_{a}} \right)^{2k}\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}}{(x_{a}-n)^{2-2k}}, \tag{104}\] \[= 2\delta^{(\rm M)}\sum_{k=0}^{\infty}\left(\frac{2}{\tau_{a}} \right)^{2k}\left[-\frac{1}{x_{a}^{2-2k}}+\sum_{j=\pm 1}\sum_{n=0}^{\infty} \frac{(-1)^{n}}{(n+jx_{a})^{2-2k}}\right],\]
where \(\delta^{(\rm M)}=[\delta^{(\rm DN)},\delta^{(\rm ND)}]=[+1,-1]\). To establish the second equality we have performed a procedure similar to that used for \(P_{x_{a}}^{(\rm J)}\) (see text below Eq. (102)).
With the relation
\[\sum_{k=0}^{\infty}\frac{(-1)^{k}}{(k+a)^{s}}=2^{-s}\left[\zeta \left(s,\frac{a}{2}\right)-\zeta\left(s,\frac{a+1}{2}\right)\right],\qquad \qquad{\rm Re}(s)>0, \tag{105}\]
we can perform the summation on \(n\) in Eq. (104) and, after a suitable index change, use Eq. (101) to write the resulting expression in terms of Bernoulli polynomials. Then, by considering Eqs. (103), (105), (101) and the identity
\[B_{n}(1-x)=(-1)^{n}B_{n}(x), \tag{106}\]
we found
\[P_{x_{a}}^{(\rm M)}=\delta^{(\rm M)}2\pi^{2}\cot(\pi x_{a}) \csc(\pi x_{a}). \tag{107}\]
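Both closed forms above reduce, at leading order, to classical lattice sums, which offers a quick numerical cross-check (our own sketch; the test point and truncation are arbitrary):

```python
import numpy as np

x = 0.3                      # test point, 0 < x < 1
n = np.arange(-200000, 200001)

# Sum of 1/(x-n)^2 over all integers n -> pi^2 csc^2(pi x)
# (underlies the Dirichlet/Neumann result above):
lhs_J = np.sum(1.0 / (x - n)**2)
rhs_J = np.pi**2 / np.sin(np.pi * x)**2

# Sum of (-1)^n/(x-n)^2 -> pi^2 cot(pi x) csc(pi x)
# (underlies the mixed result above):
lhs_M = np.sum((-1.0)**n / (x - n)**2)
rhs_M = np.pi**2 * np.cos(np.pi * x) / np.sin(np.pi * x)**2

print(lhs_J / rhs_J, lhs_M / rhs_M)   # both ratios -> 1
```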
For the function \(Q_{x_{a}}^{(\rm i)}\) we can write
\[Q_{x_{a}}^{(\rm i)}=2\tau_{a}\sum_{k=1}^{\infty}\frac{1}{(2k-1) }\left(\frac{2}{\tau_{a}}\right)^{2k-1}\sum_{n=-\infty}^{\infty}\frac{\delta _{n}^{(\rm i)}}{(n-x_{a})^{4-2k}}, \tag{108}\]
where we have used Eq. (102). All mathematical manipulations are similar to those used so far, so in order to avoid repetitions we will be more succinct.
For Dirichlet and Neumann boundary conditions, we obtain
\[Q_{x_{a}}^{(\rm J)} = \tau_{a}^{2}\delta^{(\rm J)}\sum_{k=1}^{\infty}\frac{1}{(2k-1)} \left(\frac{2}{\tau_{a}}\right)^{2k}\left[-\frac{1}{x_{a}^{4-2k}}+\sum_{j=\pm 1 }\sum_{n=0}^{\infty}\frac{1}{(n+jx_{a})^{4-2k}}\right], \tag{109}\] \[= \delta^{(\rm J)}4\pi^{2}\csc^{2}(\pi x_{a}).\]
Note that in the first equality we have used Eq. (100) to perform the sum in \(n\) and Eq. (101) to express the solution in terms of the Bernoulli polynomials. Next, we have used the identity (102) to develop the resulting expression and achieve the result shown in the second equality.
In the case of mixed boundary conditions we have
\[Q^{(\rm M)}_{x_{a}} = -\tau_{a}^{2}\delta^{(\rm J)}\sum_{k=1}^{\infty}\frac{1}{(2k-1)} \left(\frac{2}{\tau_{a}}\right)^{2k}\left[-\frac{1}{x_{a}^{4-2k}}+\sum_{j=\pm 1 }\sum_{n=0}^{\infty}\frac{(-1)^{n}}{(n+jx_{a})^{4-2k}}\right], \tag{100}\] \[= -\delta^{(\rm M)}4\pi^{2}\cot(\pi x_{a})\csc(\pi x_{a}).\]
Again, to establish the above result we have followed a procedure similar to the previous case: we first perform the summation in the index \(n\) and then write the result in terms of Bernoulli polynomials, concluding the computation by means of the identities (102) and (103).
In view of the results obtained for the position dependent functions \(P_{x_{a}}^{(\rm i)}\) and \(Q_{x_{a}}^{(\rm i)}\), we find, for the Dirichlet and Neumann boundary conditions,
\[R^{(\rm J)}_{x_{a}}=\delta^{(\rm J)}2\pi^{2}\csc^{2}(\pi x_{a}) \tag{101}\]
while for mixed boundary conditions we have
\[R^{(\rm M)}_{x_{a}}=-\delta^{(\rm M)}2\pi^{2}\cot(\pi x_{a})\csc(\pi x_{a}), \tag{102}\]
where \(\delta^{(\rm J)}=[\delta^{(\rm D)},\delta^{(\rm N)}]=[-1,+1]\) and \(\delta^{(\rm M)}=[\delta^{(\rm DN)},\delta^{(\rm ND)}]=[+1,-1]\).
### Quasiperiodic condition
Similarly to the approach introduced for the case of parallel planes, we first conveniently define the quantities
\[U(\beta,\tau_{a}):=\sum_{n=1}^{\infty}U(n,\beta,\tau_{a})=S(\beta,\tau_{a})+T (\beta,\tau_{a}), \tag{103}\]
with
\[S(\beta,\tau_{a}):=\sum_{n=1}^{\infty}S(n,\beta,\tau_{a}) \tag{104}\]
and
\[T(\beta,\tau_{a}):=\sum_{n=1}^{\infty}T(n,\beta,\tau_{a}). \tag{105}\]
The functions \(U(n,\beta,\tau_{a})\), \(S(n,\beta,\tau_{a})\), and \(T(n,\beta,\tau_{a})\) shown above are defined in Eqs. (73), (74), and (75), respectively.
In order to work out Eq. (104), we observe that we can write
\[S(\beta,\tau_{a})=-\frac{1}{(\tau_{a})^{2}}\sum_{m=-1}^{\infty}\frac{1}{(\tau _{a})^{2m}}\sum_{n=1}^{\infty}\frac{\cos(2\pi\beta n)}{n^{-2m}}, \tag{106}\]
where we have considered a series expansion for the denominator and re-labeled the summation index. The first term of Eq. (106), \(m=-1\), can be solved using the relation
\[\sum_{k=1}^{\infty}\frac{\cos(kx)}{k^{2n}}=\frac{(-1)^{n-1}(2\pi)^{2n}}{2(2n)! }B_{2n}\left(\frac{x}{2\pi}\right), \tag{107}\]
where \(0\leq x\leq 2\pi\), \(n=1,2,\ldots\), and \(B_{n}(z)\) are the Bernoulli polynomials of order \(n\) in the variable \(z\). The remaining terms of the series in Eq. (106) can be computed by using the cosine series formula,
\[\cos(x)=\sum_{k=0}^{\infty}\frac{(-1)^{k}x^{2k}}{(2k)!}, \tag{108}\]
and observing that \(\zeta(-2n)=0\), where \(n=1,2,\ldots\). So, we obtain
\[S(\beta,\tau_{a})=-\pi^{2}B_{2}(\beta)+\frac{1}{2\tau_{a}^{2}}. \tag{100}\]
In a similar way we have
\[T(\beta,\tau_{a})=\frac{2}{\tau_{a}^{2}}\sum_{m=-1}^{\infty}\frac{\tau_{a}^{-2 m}}{(2m+3)}\sum_{n=1}^{\infty}\cos(2\pi\beta n)n^{2m}, \tag{101}\]
which, from Eqs. (100) and (101) and following the same arguments previously presented, gives us
\[T(\beta,\tau_{a})=2\pi^{2}B_{2}(\beta)-\frac{1}{3\tau_{a}^{2}}. \tag{102}\]
In view of the results (100) and (102), from Eq. (101), we find
\[U(\beta,\tau_{a})=\pi^{2}B_{2}(\beta)+\frac{1}{6\tau_{a}^{2}}. \tag{103}\]
We stress that the above equation results from Eq. (101) in the late time regime, \(\tau_{a}\gg 1\). We have numerically checked that this is in fact the case.
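A minimal version of such a check (our own sketch; \(\tau_{a}\) is taken as an arbitrary half-integer to stay away from the integer-time divergences, and the truncation is arbitrary) is:

```python
import numpy as np

def U_sum(beta, tau, N=500000):
    """Truncated sum of U(n, beta, tau_a), Eqs. (73)-(75)."""
    n = np.arange(1, N + 1, dtype=float)
    S = tau**2 * np.cos(2*np.pi*beta*n) / (n**2 * (n**2 - tau**2))
    T = (tau * np.cos(2*np.pi*beta*n) / (2*n**3)
         * np.log(((n + tau) / (n - tau))**2))
    return np.sum(S + T)

B2 = lambda b: b**2 - b + 1.0/6     # Bernoulli polynomial B_2
tau = 50.5                          # late time, away from integer divergences
for beta in (0.0, 0.25, 0.5):
    print(beta, U_sum(beta, tau), np.pi**2*B2(beta) + 1/(6*tau**2))
```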
## Appendix B Short time regime
In this part, we shall analyze the expressions \(R(n,\tau_{a})\) and \(R(x_{a}-n,\tau_{a})\) in the short time regime, that is, \(\tau_{a}\ll 1\). The methodology adopted is similar to that of Appendix A, and all mathematical relations used below can be found in Refs. [31; 33; 35]. Since the method is analogous to the one used for the late time regime, we shall be brief about the details, indicating the crucial steps when necessary.
### Dirichlet, Neumann and mixed boundary conditions
#### b.1.1 Position independent term
First we write Eq. (100) in the form
\[P^{(\rm i)}=\frac{\tau_{a}^{2}}{2}\sum_{k=0}^{\infty}\left(\frac{\tau_{a}}{2} \right)^{2k}\sum_{n=1}^{\infty}\frac{\gamma_{n}^{(\rm i)}}{n^{2k+4}}. \tag{104}\]
By noting that for Dirichlet and Neumann boundary conditions \(\gamma_{n}^{(\rm D)}=\gamma_{n}^{(\rm N)}=1\), from Eqs. (101) and (104) we obtain
\[P^{(\rm J)}=\frac{8}{\tau_{a}^{2}}\sum_{m=2}^{\infty}\left(\frac{\tau_{a}}{2} \right)^{2m}\zeta(2m), \tag{105}\]
where we have re-labeled the summation index. Now, by using
\[\sum_{k=0}^{\infty}(\pm 1)^{k}t^{2k}\zeta(2k)=-\frac{\pi t}{2}\begin{cases} \cot(\pi t)\\ \coth(\pi t)\end{cases},\qquad\qquad|t|<1, \tag{106}\]
we obtain
\[P^{(\rm J)}=-\frac{\pi^{2}}{3}+\frac{4}{\tau_{a}^{2}}-\frac{2\pi}{\tau_{a}} \cot\left(\frac{\pi\tau_{a}}{2}\right), \tag{107}\]
with J=(D,N).
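As a quick numerical sanity check of the cotangent/hyperbolic-cotangent resummation used above (our own sketch, at an arbitrary \(|t|<1\)):

```python
import mpmath as mp

t = mp.mpf("0.37")   # any |t| < 1
plus = mp.nsum(lambda k: t**(2*k) * mp.zeta(2*k), [0, mp.inf])
minus = mp.nsum(lambda k: (-1)**int(k) * t**(2*k) * mp.zeta(2*k), [0, mp.inf])
print(plus, -mp.pi*t/2 * mp.cot(mp.pi*t))     # '+' series -> cotangent
print(minus, -mp.pi*t/2 * mp.coth(mp.pi*t))   # '-' series -> hyp. cotangent
```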
In the case of mixed boundary conditions, \(\gamma_{n}^{\rm(DN)}=\gamma_{n}^{\rm(ND)}=(-1)^{n}\). Then, from Eq. (14) we have
\[P^{\rm(M)}=\frac{\tau_{a}^{2}}{2}\sum_{k=0}^{\infty}\left(\frac{\tau_{a}}{2} \right)^{2k}\sum_{n=1}^{\infty}\frac{(-1)^{n}}{n^{2k+4}}, \tag{15}\]
which by using Eq. (13) provides
\[P^{\rm(M)}=-\frac{8}{\tau_{a}^{2}}\sum_{m=2}^{\infty}\left(\frac{\tau_{a}}{2} \right)^{2m}\zeta(2m)+\frac{16}{\tau_{a}^{2}}\sum_{m=2}^{\infty}\left(\frac{ \tau_{a}}{4}\right)^{2m}\zeta(2m). \tag{16}\]
By considering Eq. (14) in the above expression we find
\[P^{\rm(M)}=\frac{\pi^{2}}{6}+\frac{4}{\tau_{a}^{2}}-\frac{\pi}{\tau_{a}} \csc\left(\frac{\pi\tau_{a}}{4}\right)\sec\left(\frac{\pi\tau_{a}}{4}\right), \tag{17}\]
with M=(DN, ND).
Similarly, from Eq. (13) we can write
\[Q^{\rm(i)}=4\sum_{k=1}^{\infty}\frac{1}{(2k-1)}\left(\frac{\tau_{a}}{2} \right)^{2k}\sum_{n=1}^{\infty}\frac{\gamma_{n}^{\rm(i)}}{n^{2k+2}}, \tag{18}\]
which for Dirichlet and Neumann conditions provides
\[Q^{\rm(J)}=\frac{16}{\tau_{a}^{2}}\sum_{m=2}^{\infty}\frac{\zeta(2m)}{(2m-3)} \left(\frac{\tau_{a}}{2}\right)^{2m}, \tag{19}\]
where we have made use of Eqs. (15) and (16).
For the mixed condition, from Eq. (18), we have
\[Q^{\rm(M)}=4\sum_{k=1}^{\infty}\frac{1}{(2k-1)}\left(\frac{\tau_{a}}{2} \right)^{2k}\sum_{n=1}^{\infty}\frac{(-1)^{n}}{n^{2k+2}}, \tag{20}\]
which by using Eq. (14) gives us
\[Q^{\rm(M)}=\frac{16}{\tau_{a}^{2}}\left[-\sum_{m=2}^{\infty}\frac{\zeta(2m)} {(2m-3)}\left(\frac{\tau_{a}}{2}\right)^{2m}+2\sum_{m=2}^{\infty}\frac{\zeta( 2m)}{(2m-3)}\left(\frac{\tau_{a}}{4}\right)^{2m}\right]. \tag{21}\]
Since we have found useful expressions for \(P^{(i)}\) and \(Q^{(i)}\), we can easily obtain the corresponding functions \(R^{(i)}\) in the short time regime. Regarding the series in the functions \(Q^{(i)}\), since we are working in the regime \(\tau_{a}\ll 1\), it is sufficient to consider the leading term in the power series of \(\tau_{a}\). Thus, for Dirichlet and Neumann conditions, Eqs. (12) and (19), we find
\[R^{\rm(J)} = -\frac{\pi^{2}}{3}+\frac{4}{\tau_{a}^{2}}-\frac{2\pi}{\tau_{a}} \cot\left(\frac{\pi\tau_{a}}{2}\right)+\tau_{a}^{2}\zeta(4), \tag{22}\] \[\simeq \frac{3\tau_{a}^{2}}{2}\zeta(4).\]
On the other hand, for mixed boundary conditions, Eqs. (17) and (21), we have
\[R^{\rm(M)} = \frac{\pi^{2}}{6}+\frac{4}{\tau_{a}^{2}}-\frac{\pi}{\tau_{a}} \csc\left(\frac{\pi\tau_{a}}{4}\right)\sec\left(\frac{\pi\tau_{a}}{4}\right)- \frac{7\tau_{a}^{2}}{8}\zeta(4), \tag{23}\] \[\simeq -\frac{21\tau_{a}^{2}}{16}\zeta(4).\]
In both cases, Eqs. (22) and (23), the expressions are only valid for \(\tau_{a}\ll 1\).
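The leading-order results above emerge from delicate cancellations among the exact terms; a quick numerical confirmation (our own sketch, with an arbitrary small \(\tau_{a}\)):

```python
import numpy as np

zeta4 = np.pi**4 / 90    # zeta(4)
tau = 1e-2               # short time regime

RJ = (-np.pi**2/3 + 4/tau**2 - (2*np.pi/tau)/np.tan(np.pi*tau/2)
      + tau**2 * zeta4)
RM = (np.pi**2/6 + 4/tau**2
      - (np.pi/tau)/(np.sin(np.pi*tau/4) * np.cos(np.pi*tau/4))
      - 7*tau**2/8 * zeta4)

print(RJ / (1.5 * tau**2 * zeta4))      # -> 1
print(RM / (-21/16 * tau**2 * zeta4))   # -> 1
```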
#### b.1.2 Position dependent term
From Eq. (101), we first write
\[P_{x_{a}}^{\rm(i)}=\frac{\tau_{a}^{2}}{2}\sum_{k=0}^{\infty}\left(\frac{\tau_{a}} {2}\right)^{2k}\sum_{n=-\infty}^{\infty}\frac{\delta_{n}^{\rm(i)}}{(n-x_{a})^{ 2k+4}}. \tag{102}\]
By considering the Dirichlet and Neumann boundary conditions, namely, \(\delta_{n}^{\rm(D)}=-1\) and \(\delta_{n}^{\rm(N)}=1\), respectively, we can divide the summation in Eq. (102) into two parts and relabel the sum index over the negative range. This gives
\[P_{x_{a}}^{\rm(J)}=\delta^{\rm(J)}\left[-\frac{\tau_{a}^{2}}{2x_{a}^{4}}\sum_{ k=0}^{\infty}\left(\frac{\tau_{a}}{2x_{a}}\right)^{2k}+\frac{8}{\tau_{a}^{2}} \sum_{j=\pm 1}\sum_{m=2}^{\infty}\left(\frac{\tau_{a}}{2}\right)^{2m}\zeta(2m, jx_{a})\right], \tag{103}\]
where \(\delta^{\rm(J)}=[\delta^{\rm(D)},\delta^{\rm(N)}]=[-1,+1]\) and we have used Eq. (119) to perform the summation in the \(n\) index.
The first term on the r.h.s. of Eq. (103) can easily be computed, and the second term can also be solved using the integral representation
\[\zeta(z,q)=\frac{1}{\Gamma(z)}\int_{0}^{\infty}\frac{t^{z-1}e^{-qt}}{1-e^{-t}}dt. \tag{104}\]
Therefore, we obtain
\[P_{x_{a}}^{\rm(J)}=-\delta^{\rm(J)}\left\{2\pi^{2}\csc^{2}(\pi x_{a})+\frac{2 \pi}{\tau_{a}}\left[\cot\left[\frac{\pi(\tau_{a}-2x_{a})}{2}\right]+\cot\left[ \frac{\pi(\tau_{a}+2x_{a})}{2}\right]\right]\right\}. \tag{105}\]
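This closed form can be cross-checked by resumming the geometric series in \(k\) first, which turns the double sum into a single lattice sum. A minimal numerical sketch of our own (Dirichlet case, \(\delta^{(\rm D)}=-1\), with arbitrary \(x_{a}\) and \(\tau_{a}\) inside the radius of convergence \(\tau_{a}<2\min(x_{a},1-x_{a})\)):

```python
import numpy as np

x, tau = 0.3, 0.2
n = np.arange(-100000, 100001)
d = n - x

# Geometric resummation over k of the double sum above (Dirichlet case):
direct = -2 * tau**2 * np.sum(1.0 / (d**2 * (4*d**2 - tau**2)))

# Closed form above with delta^(J) = -1:
closed = (2*np.pi**2 / np.sin(np.pi*x)**2
          + (2*np.pi/tau) * (1/np.tan(np.pi*(tau - 2*x)/2)
                             + 1/np.tan(np.pi*(tau + 2*x)/2)))
print(direct, closed)   # the two values agree
```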
Now, from Eq. (102), the mixed condition case provides
\[P_{x_{a}}^{\rm(M)}=-\delta^{\rm(M)}\frac{\tau_{a}^{2}}{2}\sum_{k=0}^{\infty} \left(\frac{\tau_{a}}{2}\right)^{2k}\sum_{n=-\infty}^{\infty}\frac{(-1)^{n}}{ (n-x_{a})^{2k+4}}. \tag{106}\]
Consequently, by using Eq. (101) we find
\[P_{x_{a}}^{\rm(M)}=\delta^{\rm(M)}\left\{\frac{\tau_{a}^{2}}{2x_{a}^{4}}\sum_ {k=0}^{\infty}\left(\frac{\tau_{a}}{2x_{a}}\right)^{2k}-\frac{8}{\tau_{a}^{2} }\sum_{j=\pm 1}\sum_{m=2}^{\infty}\left(\frac{\tau_{a}}{4}\right)^{2m}\left[ \zeta\left(2m,\frac{jx_{a}}{2}\right)-\zeta\left(2m,\frac{1+jx_{a}}{2}\right) \right]\right\}. \tag{107}\]
Furthermore, by making use of Eq. (104) in the above expression, we can first perform the sums and then the integrals. Thus, after some algebraic work we obtain
\[P_{x_{a}}^{\rm(M)}=\delta^{\rm(M)}\left\{2\pi^{2}\cot(\pi x_{a})\csc(\pi x_{a })+\frac{2\pi}{\tau_{a}}\left[\csc\left[\frac{\pi(2x_{a}+\tau_{a})}{2}\right]- \csc\left[\frac{\pi(2x_{a}-\tau_{a})}{2}\right]\right]\right\}, \tag{108}\]
where \(\delta^{\rm(M)}=[\delta^{\rm(DN)},\delta^{\rm(ND)}]=[+1,-1]\).
For the function \(Q_{x_{a}}^{\rm(i)}\), from Eq. (100), we have
\[Q_{x_{a}}^{\rm(i)}=\sum_{k=1}^{\infty}\frac{1}{(2k-1)}\left(\frac{\tau_{a}}{2 }\right)^{2k}\sum_{n=-\infty}^{\infty}\frac{\delta_{n}^{\rm(i)}}{(n-x_{a})^{2k +2}}. \tag{109}\]
In the case of Dirichlet and Neumann conditions
\[Q_{x_{a}}^{\rm(J)}=\delta^{\rm(J)}\frac{16}{\tau_{a}^{2}}\sum_{m=2}^{\infty} \frac{1}{(2m-3)}\left(\frac{\tau_{a}}{2}\right)^{2m}\left[-\frac{1}{x_{a}^{2m }}+\sum_{j=\pm 1}\sum_{n=0}^{\infty}\frac{1}{(n+jx_{a})^{2m}}\right], \tag{110}\]
which from Eq. (119) gives us
\[Q_{x_{a}}^{\rm(J)}=\delta^{\rm(J)}\left[-\frac{\tau_{a}}{2x_{a}^{3}}\ln\left( \frac{2x_{a}+\tau_{a}}{2x_{a}-\tau_{a}}\right)^{2}+\frac{16}{\tau_{a}^{2}} \sum_{j=\pm 1}\sum_{m=2}^{\infty}\frac{\zeta(2m,jx_{a})}{(2m-3)}\left(\frac{\tau_ {a}}{2}\right)^{2m}\right]. \tag{111}\]
The mixed condition case has an expression very similar to the one in Eq. (100), namely,
\[Q_{x_{a}}^{\rm(M)}=-\delta^{\rm(M)}\frac{16}{\tau_{a}^{2}}\sum_{m=2}^{\infty} \frac{1}{(2m-3)}\left(\frac{\tau_{a}}{2}\right)^{2m}\left[-\frac{1}{x_{a}^{2m}} +\sum_{j=\pm 1}\sum_{n=0}^{\infty}\frac{(-1)^{n}}{(n+jx_{a})^{2m}}\right], \tag{101}\]
which by means of Eq. (100) provides
\[Q_{x_{a}}^{\rm(M)}=\delta^{\rm(M)}\left\{\frac{\tau_{a}}{2x_{a}^ {3}}\ln\left(\frac{2x_{a}+\tau_{a}}{2x_{a}-\tau_{a}}\right)^{2}-\frac{16}{\tau _{a}^{2}}\sum_{j=\pm 1}\sum_{m=2}^{\infty}\frac{1}{(2m-3)}\left(\frac{\tau_{a}}{4} \right)^{2m}\left[\zeta\left(2m,\frac{jx_{a}}{2}\right)-\zeta\left(2m,\frac{1 +jx_{a}}{2}\right)\right]\right\}.\]
Finally, with the useful expressions for \(P_{x_{a}}^{(i)}\) and \(Q_{x_{a}}^{(i)}\) at hand, we can use Eq. (100) to construct the functions \(R_{x_{a}}^{(i)}\). Thereby, for Dirichlet and Neumann conditions, from Eqs. (102) and (103), we obtain
\[R_{x_{a}}^{\rm(J)} = \delta^{\rm(J)}\left\{-2\pi^{2}\csc^{2}(\pi x_{a})-\frac{2\pi}{ \tau_{a}}\left[\cot\left[\frac{\pi(\tau_{a}-2x_{a})}{2}\right]+\cot\left[ \frac{\pi(\tau_{a}+2x_{a})}{2}\right]\right]\right. \tag{102}\] \[- \left.\frac{\tau_{a}}{2x_{a}^{3}}\ln\left(\frac{2x_{a}+\tau_{a}} {2x_{a}-\tau_{a}}\right)^{2}+\tau_{a}^{2}[\zeta(4,x_{a})+\zeta(4,-x_{a})] \right\},\] \[\simeq \delta^{\rm(J)}\left\{\frac{\tau_{a}^{2}\pi^{4}}{2}[2+\cos(2\pi x _{a})]\csc^{4}(\pi x_{a})\right\},\]
whereas for mixed boundary conditions, from Eqs. (101) and (102), we have
\[R_{x_{a}}^{\rm(M)} = \delta^{\rm(M)}\left\{2\pi^{2}\cot(\pi x_{a})\csc(\pi x_{a})+ \frac{2\pi}{\tau_{a}}\left[\csc\left[\frac{\pi(2x_{a}+\tau_{a})}{2}\right]+ \csc\left[\frac{\pi(2x_{a}-\tau_{a})}{2}\right]\right]\right. \tag{103}\] \[+ \left.\frac{\tau_{a}}{2x_{a}^{3}}\ln\left(\frac{2x_{a}+\tau_{a}} {2x_{a}-\tau_{a}}\right)^{2}-\frac{\tau_{a}^{2}}{16}\left[\zeta\left(4,\frac{ x_{a}}{2}\right)+\zeta\left(4,\frac{-x_{a}}{2}\right)-\zeta\left(4,\frac{1+x_{a}}{2} \right)-\zeta\left(4,\frac{1-x_{a}}{2}\right)\right]\right\},\] \[\simeq \delta^{\rm(M)}\left\{\frac{-\tau_{a}^{2}\pi^{4}}{8}[11+\cos(2 \pi x_{a})]\cot(\pi x_{a})\csc^{3}(\pi x_{a})\right\},\]
with \(\delta^{\rm(J)}=[\delta^{\rm(D)},\delta^{\rm(N)}]=[-1,+1]\) and \(\delta^{\rm(M)}=[\delta^{\rm(DN)},\delta^{\rm(ND)}]=[+1,-1]\). It is important to point out that, since we are working in the regime \(\tau_{a}\ll 1\), we have only considered the leading term for the series of the functions \(Q_{x_{a}}^{(i)}\).
### Quasiperiodic condition
The function \(S(\beta,\tau_{a})\), Eq. (100), can be worked out by observing that
\[S(\beta,\tau_{a})=\sum_{n=1}^{\infty}\frac{\tau_{a}^{2}\cos(2\pi\beta n)}{n^{2 }(n^{2}-\tau_{a}^{2})}=\frac{1}{\tau_{a}^{2}}\sum_{m=2}^{\infty}(\tau_{a})^{2 m}\sum_{n=1}^{\infty}\frac{\cos(2\pi\beta n)}{n^{2m}}, \tag{104}\]
where we have first considered a series expansion for the denominator and re-labeled the summation index. So, by using the relation in Eq. (100), we find that
\[S(\beta,\tau_{a})=-\frac{1}{2\tau_{a}^{2}}\sum_{m=2}^{\infty}\frac{(-1)^{m}(2 \pi\tau_{a})^{2m}}{(2m)!}B_{2m}(\beta), \tag{105}\]
where \(B_{2m}(\beta)\) are the Bernoulli polynomials of order \(2m\) in the variable \(\beta\).
Similarly, for the function \(T(\beta,\tau_{a})\), Eq. (100), using the relation (102) and redefining the summation index, we find
\[T(\beta,\tau_{a}) = \frac{2}{\tau_{a}^{2}}\sum_{m=2}^{\infty}\frac{\tau_{a}^{2m}}{(2m-3)}\sum_{n=1}^{\infty}\frac{\cos(2\pi\beta n)}{n^{2m}}, \tag{106}\]
which by using Eq. (100) provides
\[T(\beta,\tau_{a}) = -\frac{1}{\tau_{a}^{2}}\sum_{m=2}^{\infty}\frac{(-1)^{m}(2\pi\tau_{ a})^{2m}}{(2m-3)(2m)!}B_{2m}(\beta). \tag{101}\]
Now, combining the results above, we finally find the following approximation:
\[U(\beta,\tau_{a}) = -\frac{1}{2\tau_{a}^{2}}\sum_{m=2}^{\infty}\frac{(2m-1)(-1)^{m}(2 \pi\tau_{a})^{2m}}{(2m-3)(2m)!}B_{2m}(\beta), \tag{102}\] \[\simeq -\pi^{4}\tau_{a}^{2}B_{4}(\beta),\]
where we have only considered the leading term of the series, since \(\tau_{a}\ll 1\). Once again, the expressions obtained here are only valid for the short time regime, something that we have numerically checked.
|
2305.06712 | Incorporating intrinsic compressibility effects in velocity
transformations for wall-bounded turbulent flows | A transformation that relates a compressible wall-bounded turbulent flow with
non-uniform fluid properties to an equivalent incompressible flow with uniform
fluid properties is derived and validated. The transformation accounts for both
variable-property and intrinsic compressibility effects, the latter being the
key improvement over the current state-of-the-art. The importance of intrinsic
compressibility effects contradicts the renowned Morkovin's hypothesis. | Asif Manzoor Hasan, Johan Larsson, Sergio Pirozzoli, Rene Pecnik | 2023-05-11T10:45:27Z | http://arxiv.org/abs/2305.06712v3 | # Incorporating intrinsic compressibility effects in velocity transformations
###### Abstract
A transformation that relates a compressible wall-bounded turbulent flow with non-uniform fluid properties to an equivalent incompressible flow with uniform fluid properties is derived and validated. The transformation accounts for both variable-property and intrinsic compressibility effects, the latter being the key improvement over the current state-of-the-art. The importance of intrinsic compressibility effects contradicts the renowned Morkovin's hypothesis.
The law of the wall for incompressible turbulent flows is one of the cornerstones of fluid dynamics [1]. Such a universal law is still missing for compressible flows, because the interplay of thermodynamics and hydrodynamics leads to significantly richer flow physics and even more intricate phenomena in turbulence. Efforts have long been devoted to find a transformation that reduces the mean velocity profile of compressible wall-bounded flows to that of incompressible, constant-property flows [2]. Such a transformation can assist in extending the incompressible modeling techniques to compressible flows, eventually enabling better flow and heat transfer predictions for a range of applications and phenomena in nature.
The history of velocity transformations dates back to the 1950s, when Van Driest [3] (hereafter VD) proposed a correction to the incompressible law of the wall, accounting for mean density variations in the friction velocity scale. Zhang _et al._[2] proposed a transformation that improves the collapse in the wake region of compressible boundary layers. However, both the transformations were developed for adiabatic boundary layers, and as such, they fail for diabatic flows. In 2016, Trettel and Larsson [4] (hereafter TL) formally derived an alternative to the VD transformation, suggesting that the semi-local wall coordinate, previously defined on intuitive grounds by Huang _et al._[5], is the correct scaling to account for changes in the viscous length scale. Patel _et al._[6] derived a mathematically equivalent velocity transformation using several diabatic variable-property channel flows at the zero Mach number limit. They observed that the leading-order effect of variable properties on the transformation can effectively be characterized by the semi-local Reynolds number only. Despite being accurate for channel flows, these transformations are inaccurate for compressible boundary layers [7; 4; 8; 9]. Recently, Griffin _et al._[8] (hereafter GFM) proposed a new transformation based on the universality of the ratio of production and dissipation, and a stress-based blending function. The GFM transformation improved the results for compressible boundary layers, however, it is inaccurate for ideal gas flows with non-air-like viscosity laws (presented and discussed at the end), and for flows with fluids at supercritical pressures [10]. Volpiani _et al._[11] proposed a data-driven transformation which also improved the results for compressible boundary layers. However, due to the limited parameter space used for calibration, that model is still far from universal. The lack of a universal transformation sets up the motivation for this Letter.
_Objectives_.--The transformations described above are all built on the implicit assumption that intrinsic compressibility effects--associated with the density changes of fluid elements in response to changes in pressure [16]--are insignificant, and that only mean fluid property variations matter for this problem. This is Morkovin's hypothesis [17], a key building block in the theory of compressible turbulent wall-bounded flows. The first objective of this Letter is to argue that Morkovin's hypothesis is inaccurate and that intrinsic compressibility effects do modify the mean velocity profile, whereas the second objective is to propose a transformation that accounts for these effects. To address the first part, we perform Direct Numerical Simulations (DNS) of high Mach number compressible channel flows in which we isolate intrinsic compressibility effects by eliminating mean property variations. To attain approximately constant mean properties, we follow the method proposed by Coleman _et al._[18] in which the viscous heating is removed from the energy equation. These "constant property" (CP) simulations are performed at bulk Mach numbers (ratio of the bulk velocity and the speed of sound based on wall temperature) of 0.3, 2.28, 3, and 4 and a friction Reynolds number of \(Re_{\tau}\approx 550\), using STREAmS [19] with an ideal gas equation of state and a power-law for the dynamic viscosity.
Fig. 1(a) shows the velocity profiles under the TL transformation for the four cases. A clear log-law shift is
observed with increasing Mach number. Since the mean properties are nearly uniform in these cases, this contradicts Morkovin's hypothesis and suggests that intrinsic compressibility effects are present.
_Definitions._--At this point, it is necessary to introduce a few important quantities that will be used throughout this Letter. The friction velocity and the viscous length scales at the wall are defined as \(u_{\tau}=\sqrt{\tau_{w}/\rho_{w}}\) and \(\delta_{v}=\mu_{w}/(\rho_{w}u_{\tau})\), respectively, where \(\tau_{w}\) is the wall-shear stress, and \(\rho_{w}\), \(\mu_{w}\) are density and viscosity at the wall, respectively. The friction Reynolds number is defined as \(Re_{\tau}=\delta/\delta_{v}\), where \(\delta\) is either the channel half-height or the thickness of the boundary layer. To account for variations in fluid properties, the semi-local friction velocity and viscous length scales are defined by using the local density and viscosity as \(u_{\tau}^{*}=\sqrt{\tau_{w}/\bar{\rho}}\) and \(\delta_{v}^{*}=\bar{\mu}/(\bar{\rho}u_{\tau}^{*})\), which vary in the wall-normal direction. Superscripts + and \(*\) are then used to denote scaling with wall or semi-local quantities, respectively. We use a bar to denote Reynolds averaging, whereas single and double primes are used to denote fluctuations in the Reynolds and Favre averaging framework, respectively. To account for intrinsic compressibility effects, we define the friction Mach number as \(M_{\tau}=u_{\tau}/a_{w}\) (ratio of the wall friction velocity and the speed of sound based on wall temperature). In an ongoing study by the authors, where these effects are being studied more comprehensively, we show that \(M_{\tau}\) is the most suitable parameter to quantify them, consistent with the suggestion in Smits and Dussauge [20].
The TL-transformed log-law intercept \(C_{TL}\) is quantified in the inset of Fig. 1(a) as a function of the friction Mach number \(M_{\tau}\) for the four CP cases, several ideal gas channel flows, and boundary layers available in literature. Note that \(C_{TL}\) is evaluated as in Trettel and Larsson [4], using integration bounds from \(y^{*}=y/\delta_{v}^{*}\approx 50\) to \(y/\delta\approx 0.1\). The trend line in the inset of Fig. 1(a) is obtained using the CP cases only. Thus, it is a measure of the log-law shift due to intrinsic compressibility alone. Interestingly, most of the cases follow the trend line, suggesting that the log-law shift is mainly due to intrinsic compressibility effects. Deviations from the trend line can then be attributed to effects other than those directly related to mean property variations (within the assumptions of the TL transformation) and intrinsic compressibility, as we briefly discuss at the end of this Letter. Note that non-negligible scatter is also observed in incompressible flows [21], suggesting high sensitivity of the log-law constant to numerical/experimental uncertainties. Hereafter, we propose a framework to account for the log-law shift due to intrinsic compressibility effects.
_Derivation._--In the inner layer of a parallel (or quasi-parallel) shear flow, integration of the mean momentum equation implies that the sum of viscous and turbulent shear stresses is equal to the total shear stress, given as
\[\bar{\mu}\frac{d\bar{u}}{dy}-\overline{\rho u^{\prime\prime}v^{\prime\prime}} =\tau_{t}, \tag{1}\]
where \(\tau_{t}\approx\tau_{w}\) in boundary layers and it varies linearly in channel flows. Note that terms due to viscosity fluctuations are neglected. Normalizing Eq. (1) by \(\tau_{w}\) and using
Figure 1: (a) Mean velocity profiles in constant-property compressible channel flows, after TL transformation. (Inset): Log-law constant \(C_{TL}\) as a function of \(M_{\tau}\). _Ideal gas_: (Large closed stars) constant-property compressible channels; (open \(\triangle\)) cooled channels [4]; (closed \(\triangle\)) adiabatic channels with pseudo-heat sources (present authors, unpublished); (open \(\square\)) cooled and (closed \(\square\)) adiabatic boundary layers ([12; 13; 14]; [7] Mach 2 and 14 cases only; A. Ceci, private communication); (closed \(\circ\)) channels with non-air-like viscosity power-law exponent of -0.5 ([9]; present authors, unpublished). The dashed line shows a fit for the constant-property cases, whereas the gray shaded area indicates an error bar of +/-5%. Note that low-Reynolds number cases (less than 300) are excluded. (b) Reynolds shear stress for the constant-property compressible cases. (Inset): The Kolmogorov length scale, scaled by the semi-local viscous length scale \(\delta_{v}^{*}\). The black symbols are the incompressible case of Moser _et al._[15], in both (a) and (b).
the definitions of \(u_{\tau}^{*}\) and \(\delta_{v}^{*}\), the non-dimensional form is
\[\frac{\delta_{v}^{*}}{u_{\tau}^{*}}\frac{d\bar{u}}{dy}+r_{uv}^{+}=\tau_{t}^{+}, \tag{2}\]
where \(r_{uv}^{+}=-\overline{\rho u^{\prime\prime}v^{\prime\prime}}/\tau_{w}\) and \(\tau_{t}^{+}=\tau_{t}/\tau_{w}\). Next, following Trettel and Larsson [4], we assume universality of the total shear stress and equate Eq. (2) with its incompressible counterpart to get
\[\frac{d\bar{U}^{+}}{dY^{+}}+R_{uv}^{+}=\frac{\delta_{v}^{*}}{u_{\tau}^{*}} \frac{d\bar{u}}{dy}+r_{uv}^{+}, \tag{3}\]
where \(\bar{U}^{+}=\bar{U}/u_{\tau}\) and \(Y^{+}=Y/\delta_{v}\) denote the non-dimensional velocity and wall-normal coordinate of an incompressible flow, that constitute the universal law of the wall.
Trettel and Larsson [4] proceeded by assuming universality of the Reynolds shear stress in the inner layer, which, when combined with Van Driest's log-law matching condition, leads to their final transformation. However, from Fig. 1(b), it is apparent that the Reynolds shear stress is _not_ universal in the buffer layer as a consequence of intrinsic compressibility effects, which were not accounted for in the TL transformation. In the current approach, we will disregard the assumption \(R_{uv}^{+}=r_{uv}^{+}\), and instead invoke the definition of the eddy viscosity to represent the Reynolds shear stress.
Introducing the eddy viscosity for incompressible flows (superscript '\(i\)') as \(\mu_{t}^{i}/\mu_{w}=R_{uv}^{+}/(d\bar{U}^{+}/dY^{+})\), and analogously for compressible flows (superscript '\(c\)') as \(\mu_{t}^{c}/\bar{\mu}=r_{uv}^{+}/([\delta_{v}^{*}/u_{\tau}^{*}]d\bar{u}/dy)\), the total stress balance equation can be rearranged as,
\[\frac{d\bar{U}^{+}}{d\bar{u}^{+}}=\left(\frac{1+\mu_{t}^{c}/\bar{\mu}}{1+\mu_{ t}^{i}/\mu_{w}}\right)\delta_{v}^{*}\frac{dy^{*}}{dy}\frac{u_{\tau}}{u_{\tau}^{* }}\frac{dY^{+}}{dy^{*}}. \tag{4}\]
Eq. (4) is quite significant as it presents a general framework for deriving velocity transformations for wall-bounded flows that satisfy Eq. (1).
Yet, to derive the velocity transformation, we must finally establish a relation between \(Y^{+}\) and \(y^{*}\), which is commonly known as the coordinate transformation. Assuming that the Reynolds shear stress is universal in the inner layer, Trettel and Larsson [4] derived that \(Y^{+}=y^{*}\). However, as seen in Fig. 1(b), the Reynolds shear stress is not universal in the presence of intrinsic compressibility effects, and the question of whether or not \(Y^{+}=y^{*}\) still holds has to be reassessed. The relation \(Y^{+}=y^{*}\), i.e., \(Y/\delta_{v}=y/\delta_{v}^{*}\), implies that \(\delta_{v}^{*}\) is the proper measure of small-scale turbulence and viscous effects in compressible flows, just like \(\delta_{v}\) is in incompressible flows. This was first proposed by Huang _et al._[5] and later verified for a range of turbulent statistics by Patel _et al._[6]. The same is true also in the presence of intrinsic compressibility effects, as evidenced in the inset of Fig. 1(b). The almost universal semi-locally scaled distributions of the Kolmogorov length scale throughout the inner layer, despite the non-universal Reynolds shear stress, support the validity of \(Y^{+}=y^{*}\) in the presence of intrinsic compressibility effects.
Eliminating \(dY^{+}/dy^{*}\) in Eq. (4) based on the relation \(Y^{+}=y^{*}\), using \(dy^{*}/dy=\left(1-y^{*}\:d\delta_{v}^{*}/dy\right)/\delta_{v}^{*}\) and \(u_{\tau}/u_{\tau}^{*}=\sqrt{\bar{\rho}/\rho_{w}}\) we obtain the final proposed velocity transformation kernel,
\[\frac{d\bar{U}^{+}}{d\bar{u}^{+}}=\underbrace{\left(\frac{1+\mu_{t}^{c}/\bar {\mu}}{1+\mu_{t}^{i}/\mu_{w}}\right)}_{3}\underbrace{\left(1-\frac{y}{\delta_ {v}^{*}}\frac{d\delta_{v}^{*}}{dy}\right)}_{2}\underbrace{\sqrt{\frac{\bar{ \rho}}{\rho_{w}}}}_{1}. \tag{5}\]
Eq. (5) embodies a sequence of velocity transformations, as outlined below:
* Factor 1 is the correction proposed by Van Driest [3] to account for the change in the friction velocity scale from \(u_{\tau}\) to \(u_{\tau}^{*}\).
* Factor 2 is the correction proposed in the TL transformation [4; 6] to account for the change in the viscous length scale from \(\delta_{v}\) to \(\delta_{v}^{*}\). Factors 1 and 2 combined form the TL transformation kernel, but written in terms of the semi-local viscous length scale, equivalent to that proposed in Patel _et al._[6]. These factors account for the effects of mean property variations on the velocity transformation.
* Factor 3 is the proposed correction which accounts for additional physics beyond those captured by the TL transformation. As such, it offers a general connection between a transformation and eddy viscosity, extending the work of Yang and Lv [22].
Finally, to obtain a closed form of the proposed transformation, we must define the eddy viscosities \(\mu_{t}^{i}\) and \(\mu_{t}^{c}\). While many possible eddy viscosity models exist, for the sake of simplicity we consider the Johnson-King (JK) model [29], which is defined as \(\mu_{t}^{i}/\mu_{w}=\kappa Y^{+}D^{i}\), where \(D^{i}\) is a damping function of the form \(D^{i}=[1-\exp(-Y^{+}/A^{+})]^{2}\). The set of constants \(\kappa=0.41\), \(A^{+}=17\) is commonly used [30] to reproduce the incompressible log-law constant of 5.2. Analogously, \(\mu_{t}^{c}/\bar{\mu}=\kappa y^{*}D^{c}\). Now, to account for the outward shift of the Reynolds shear stress caused by intrinsic compressibility effects (Fig. 1(b)), we empirically modify the constant in the damping function \(D^{c}\) to depend on the friction Mach number as
\[D^{c}=\left[1-\exp\left(\frac{-y^{*}}{A^{+}+f(M_{\tau})}\right)\right]^{2}. \tag{6}\]
The function \(f(M_{\tau})=19.3\,M_{\tau}\) is used to reproduce the linear curve-fit presented in Fig. 1(a). However, other forms of \(f(M_{\tau})\) that better represent a larger dataset can also be used. It is important to note that the eddy viscosity can be rewritten in dimensional form as \(\mu_{t}^{c}=\sqrt{\tau_{w}\bar{\rho}}\kappa yD^{c}\), which can be directly used as a wall-model in Large Eddy Simulations.
Integrating Eq. (5) and replacing \(Y^{+}\) by \(y^{*}\) in \(\mu_{t}^{i}/\mu_{w}\) yields the transformation
\[\bar{U}^{+}=\int_{0}^{\bar{u}^{+}}\!\left(\frac{1+\kappa y^{*}D^{c}}{1+\kappa y ^{*}D^{i}}\right)\left(1-\frac{y}{\delta_{v}^{*}}\frac{d\delta_{v}^{*}}{dy} \right)\sqrt{\frac{\bar{\rho}}{\rho_{w}}}\,d\bar{u}^{+}. \tag{7}\]
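For concreteness, a minimal numerical sketch of the closed-form transformation, Eq. (7), is given below. It assumes discrete wall-normal profiles with index 0 at the wall, evaluates the Johnson-King eddy viscosities with the \(M_{\tau}\)-dependent damping of Eq. (6), and approximates the derivative of \(\delta_{v}^{*}\) and the integral by finite differences; the function and variable names are ours, not from any published code.

```python
import numpy as np

def transform_velocity(y, rho, mu, u, tau_w, M_tau,
                       kappa=0.41, A_plus=17.0):
    """Sketch of the proposed transformation, Eq. (7). Inputs are
    wall-normal profiles y, rho(y), mu(y), u(y), the wall shear
    stress tau_w and the friction Mach number M_tau."""
    rho_w = rho[0]
    u_tau = np.sqrt(tau_w / rho_w)            # friction velocity
    delta_v_star = mu / np.sqrt(tau_w * rho)  # semi-local viscous length
    y_star = y / delta_v_star

    # Johnson-King damping functions, incompressible and compressible
    D_i = (1.0 - np.exp(-y_star / A_plus))**2
    D_c = (1.0 - np.exp(-y_star / (A_plus + 19.3 * M_tau)))**2

    # factors 3, 2 and 1 of the kernel in Eq. (5)
    f3 = (1.0 + kappa * y_star * D_c) / (1.0 + kappa * y_star * D_i)
    f2 = 1.0 - (y / delta_v_star) * np.gradient(delta_v_star, y)
    f1 = np.sqrt(rho / rho_w)

    u_plus = u / u_tau
    kernel = f3 * f2 * f1
    # cumulative trapezoidal integration of the kernel over u^+
    U_plus = np.concatenate(([0.0],
        np.cumsum(0.5 * (kernel[1:] + kernel[:-1]) * np.diff(u_plus))))
    return y_star, U_plus
```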
_Results and Discussion.--_This transformation is tested and compared to the TL and GFM transformations in Fig. 2 for 57 cases, including adiabatic and cooled boundary layers, cooled channels, and non-ideal flows, covering a wide range of Mach numbers. The three transformations are equivalent in the viscous sublayer, because GFM's log-layer transformation is blended with TL, whereas the current transformation naturally reduces to TL in the viscous sublayer, where \(\mu_{t}\approx 0\) and factor 3 in Eq. (5) reduces to unity. The log-law shift in the TL transformation is apparent. Such a selective upward shift is not seen in the GFM transformation for the conventional ideal gas cases. However, the GFM transformation fails for the four constant-property cases, the ideal gas cases with non-air-like viscosity laws, and the supercritical fluid cases. The current transformation shows the least spread for the cases considered herein. Note that for all the transformations, the spread is larger in the outer part of boundary layers or channels, which is arguably beyond the scope of these theories that are focused on the inner, constant-stress layer.
Despite the improved accuracy, the proposed transformation is only as accurate as the assumptions made in its derivation. For instance, the transformation might be inaccurate for cases where the stress balance equation (Eq. (1)) does not hold, as in supercritical boundary layers [31]. Also, we assume that the variable-property effects are limited to factors 1 and 2 in Eq. (5), and that these effects do not contribute to the non-universality of the Reynolds shear stress (factor 3 in Eq. (5)). This is not always the case, as suggested by the scatter in the log-law intercept with respect to the fitted curve in Fig. 1(a), eventually reflected in the new transformation in Fig. 2(c). We suspect that the cancellation of these unincorporated variable-property effects and intrinsic compressibility effects is why the TL transformation was very accurate for ideal gas channel flows but not for boundary layers. Incorporating these additional physics is the next step for future studies aimed at developing an even more general transformation.
To summarize, the log-law shift observed in the TL transformation can be primarily attributed to intrinsic compressibility effects. We ascertain this based on our tailored constant-property compressible cases, in which the only dominant effect is due to intrinsic compressibility. Taking \(M_{\tau}\) as the most suitable parameter to quantify these effects, we propose a new transformation that effectively removes the log-law shift. The proposed transformation accounts for the changes in friction velocity and viscous length scales due to variations in mean properties, and for intrinsic compressibility effects. Thus, it applies to a wide variety of cases. We anticipate that it may serve as a building block for improved turbulence models; for example, it could be directly used as an equilibrium wall-model.
We thank Dr. P. Costa for the insightful discussions and for reviewing the manuscript. We thank A. Ceci for performing 2 boundary layer simulations for this work. Dr. A. Trettel is thanked for discussions on computing the log-law constant. Finally, we thank all the authors [4; 7; 9; 12; 13; 14; 23; 24; 25; 26; 27] for sharing their data with us. This work was supported by the European Research Council grant no. ERC-2019-CoG-864660, Critical; and the Air Force Office of Scientific Research under grants FA9550-19-1-0210 and FA9550-19-1-7029.
Figure 2: Assessment of the (a) TL, (b) GFM, and (c) proposed transformations for 55 ideal gas and 2 supercritical fluid cases. _Ideal gas_: (red solid lines) constant-property compressible channels; (gray solid lines) cooled channels [23; 4], adiabatic channels with pseudo-heat sources ([23; 24]; present authors, unpublished), cooled and adiabatic boundary layers ([12; 13; 14; 25; 26]; [7]; Mach 2 and 14 cases only; A. Ceci, private communication); (gray dashed lines) channels and boundary layers with non-air-like viscosity power-law exponents of -0.5 and -1.75 ([9]; present authors, unpublished). _Supercritical fluid_: (gray dash-dotted lines) channel flows [27]. (Insets): Percent error (\(\varepsilon\)) in the velocity transformation computed with respect to the incompressible reference [28], as described in Griffin _et al._[8]. Note that the inset for GFM has larger axis limits, and that the non-air-like case with the largest error of 44% is not shown. Symbols as in Fig. 1(a). Additionally, supercritical cases are denoted using closed \(\lozenge\). Shaded region indicates an error bar of ±3%. As in Fig. 1, low-Reynolds number cases (less than 300) are excluded.
|
2304.11147 | Path instabilities and drag in the settling of single spheres | The settling behavior of individual spheres in a quiescent fluid was studied experimentally. The dynamics of the spheres was analyzed in the parameter space of particle-to-fluid density ratio ($\Gamma$) and Galileo number ($\mathrm{Ga}$), with $\Gamma \in (1.1, 7.9)$ and $\mathrm{Ga} \in (100, 340)$. The experimental results showed for the first time that the mean trajectory angle with the vertical exhibits a complex behavior as $\mathrm{Ga}$ and $\Gamma$ are varied. Numerically predicted regimes such as Vertical Periodic and Planar Rotating were validated at high $\Gamma$ values. In particular, for the denser spheres, a clear transition from planar to non-planar trajectories was observed, accompanied by the emergence of semi-helical trajectories corresponding to the Planar Rotating Regime. The spectra of trajectory oscillations were also quantified as a function of $\mathrm{Ga}$, confirming the existence of oblique oscillating regimes at both low and high frequencies. The amplitudes of the perpendicular velocities in these regimes were also quantified and compared with numerical simulations in the literature. The terminal velocity and drag of the spheres were found to depend on the particle-to-fluid density ratio, and correlations between the drag coefficient and particle Reynolds number ($Re_p$) as a function of Ga were established, allowing for the estimation of drag and settling velocity using $\mathrm{Ga}$, a control parameter, rather than the response parameter $Re_p$. | Facundo Cabrera-Booman, Nicolas Plihon, Mickaël Bourgoin | 2023-04-21T17:46:22Z | http://arxiv.org/abs/2304.11147v2 | # Path instabilities and drag in the settling of single spheres
###### Abstract
The settling behavior of individual spheres in a quiescent fluid was studied experimentally. The dynamics of the spheres was analyzed in the parameter space of particle-to-fluid density ratio (\(\Gamma\)) and Galileo number (\(\mathrm{Ga}\)), with \(\Gamma\in(1.1,7.9)\) and \(\mathrm{Ga}\in(100,340)\). The experimental results showed for the first time that the mean trajectory angle with the vertical exhibits a complex behavior as \(\mathrm{Ga}\) and \(\Gamma\) are varied. Numerically predicted regimes such as Vertical Periodic and Planar Rotating were validated at high \(\Gamma\) values. In particular, for the denser spheres, a clear transition from planar to non-planar trajectories was observed, accompanied by the emergence of semi-helical trajectories corresponding to the Planar Rotating Regime. The spectra of trajectory oscillations were also quantified as a function of \(\mathrm{Ga}\), confirming the existence of oblique oscillating regimes at both low and high frequencies. The amplitudes of the perpendicular velocities in these regimes were also quantified and compared with numerical simulations in the literature. The terminal velocity and drag of the spheres were found to depend on the particle-to-fluid density ratio, and correlations between the drag coefficient and particle Reynolds number (\(Re_{p}\)) as a function of \(\mathrm{Ga}\) were established, allowing for the estimation of drag and settling velocity using \(\mathrm{Ga}\), a control parameter, rather than the response parameter \(Re_{p}\).
## I Introduction
Particles in fluids are representative of many natural and industrial systems, and are therefore extensively investigated in a variety of scenarios such as turbulence [1; 2; 3] and low [4; 5] to moderate [6; 7] Reynolds numbers, the latter being the regime of the present work. In particular, despite its apparent simplicity, the physics of settling finite-size spheres hides a hierarchy of rich, intricate phenomena, some of which are still poorly understood. We are, for instance, still unable to finely model and predict the terminal velocity of a particle settling in a turbulent environment. The role of linear and non-linear drag [8; 9], the link with possible scenarios enhancing [10] or hindering [11] the settling, the influence of finite-size effects [12] and the role of collective effects [13] are just some examples of subtle couplings which still need to be explored further to improve our capacity to predict the turbulent settling of spherical particles. These challenges are particularly important for environmental issues such as the forecast of particle and pollutant deposition in the atmosphere, rivers and seas.
Interestingly, even the non-turbulent situation, where a sphere settles in a quiescent fluid, is already far from trivial and results in a series of path instabilities [7] not yet fully understood. These path instabilities are related to a complex wake dynamics which emerges for a sphere with a relative velocity with respect to the surrounding fluid. It is indeed well known for instance that the wake behind a fixed sphere of typical size \(d\), in a steady stream with velocity \(U\) and viscosity \(\nu\), has a number of bifurcations that depend on Reynolds number \(\mathrm{Re}=Ud/\nu\). These transitions have been thoroughly explored in numerical and theoretical [14; 15; 16] and experimental [17; 18] studies for the case of fixed spheres in a steady stream for which the onsets of different wake bifurcations are finely characterised.
When the sphere is not fixed (e.g. if it is settling under gravity or rising due to buoyancy in a quiescent fluid), these wake instabilities develop into path instabilities [19], as the momentum and torque exerted by the perturbed fluid onto the particle influence its trajectory. A pioneering work regarding fluidised beds already highlighted the non-applicability of Newton's free settling law to rising particles [20], caused by the aforementioned wake effect on the particle trajectory. Jenny and coworkers [7; 21] made the first systematic numerical study exploring the trajectory dynamics of a single spherical particle settling or rising in a quiescent unconfined fluid. This study was refined later by Zhou and Dusek [6]. The complex dynamics of rising or settling spheres has also been characterized experimentally and theoretically [22; 23; 24; 25; 26; 27].
Two dimensionless numbers control the free sphere settling problem: the particle-to-fluid density ratio \(\Gamma=\rho_{p}/\rho_{f}\) (with \(\rho_{p}\) and \(\rho_{f}\) the particle and fluid densities respectively) and the Galileo number \(\mathrm{Ga}=\sqrt{|\Gamma-1|g}\,d_{p}^{3/2}/\nu\) (with \(d_{p}\) the particle diameter, \(g\) the local acceleration of gravity and \(\nu\) the kinematic viscosity of the surrounding fluid). The Galileo number can equivalently be written as \(\mathrm{Ga}=U_{g}d_{p}/\nu\), where the characteristic velocity is the buoyancy velocity \(U_{g}=\sqrt{|\Gamma-1|gd_{p}}\). The different regimes and bifurcations of single settling or rising spheres were then assessed in a \(\Gamma\) - \(\mathrm{Ga}\) parameter space. While the regimes observed both for density ratios below one (rising spheres and bubbles) [20; 22; 23; 24] and for density ratios above unity (settling particles) [6; 7; 25; 26; 27] are interesting, we will restrict ourselves to density ratios larger than unity in the present article. To keep this introduction concise, a detailed review of previous investigations is provided in Sec. III, to which our experimental observations are systematically compared. We specifically stress that a number of important regions of the parameter space still remain experimentally unexplored and need to be investigated in order to characterise the settling regimes and corroborate numerical predictions. This is particularly the case for particle-to-fluid density ratios larger than 3.9, for which no experimental data is available.
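As a concrete illustration, the minimal sketch below computes \(\Gamma\), Ga, \(U_{g}\) and the response time \(\tau_{g}=d_{p}/U_{g}\) from the physical parameters; the example values (viscosity and density of a water-glycerol mixture) are hypothetical and only meant to fall within the range studied here.

```python
import numpy as np

def control_parameters(rho_p, rho_f, d_p, nu, g=9.81):
    """Density ratio Gamma, Galileo number Ga, buoyancy velocity U_g
    and particle response time tau_g for a sphere settling in a fluid."""
    Gamma = rho_p / rho_f
    U_g = np.sqrt(abs(Gamma - 1.0) * g * d_p)  # buoyancy velocity
    Ga = U_g * d_p / nu                        # Ga = U_g d_p / nu
    return Gamma, Ga, U_g, d_p / U_g

# hypothetical example: a 3 mm steel sphere in a water-glycerol
# mixture with nu = 5e-6 m^2/s and rho_f = 1100 kg/m^3
print(control_parameters(7950.0, 1100.0, 3e-3, 5e-6))
```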
Besides the complexity of path instabilities, the drag force experienced by the particles is an important element of the problem which has interested the scientific community, in particular regarding whether the drag force of a fixed sphere in a steady stream can be used to estimate the terminal settling or rising velocity of freely moving particles. Raaghav _et al._[27] have studied the drag of rising and settling particles and concluded that, for density ratios between 0.86 and 3.9, the particle settling drag estimated from the mean vertical terminal velocity of the spheres does not differ significantly from that of a fixed sphere in a free stream flowing at the same velocity. The latter implies that the drag coefficient \(C_{D}\) does not depend on the particle-to-fluid density ratio. This idea is used extensively in the literature, and it has been widely applied to obtain correlations and empirical models assuming a simple dependency of \(C_{D}\) on the particle Reynolds number \(\text{Re}_{p}=v_{p}d_{p}/\nu\)[28; 29]. This has been proven incorrect for light particles, where a marked dependency appears when \(\Gamma<0.1\)[20; 24].
Another practical issue is that the correlations for drag and settling velocity available in the literature are usually given in terms of the particle Reynolds number. However, when the particles are free to move, the velocity \(v_{p}\) is not a control parameter but a response parameter. For the case of settling particles, these correlations do not allow one to give an explicit expression for the terminal velocity in terms of the drag coefficient \(C_{D}\), because \(C_{D}\) itself depends on the terminal velocity. However, from a pure dimensional analysis approach, the natural expected dependencies of the drag coefficient for settling spheres are on both \(\Gamma\) and \(Ga\), which are actual control parameters, only depending on known physical parameters of the problem (densities of the particle and the fluid, fluid viscosity, particle diameter and acceleration of gravity). This brings the two following questions: (i) to what extent is the approximation of \(C_{D}\) not depending on the density ratio valid? And (ii) can a correlation for \(C_{D}\) be given in terms of Ga rather than \(\text{Re}_{p}\)? This would allow knowing the drag coefficient _a priori_, without requiring the terminal velocity beforehand.
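To make the implicit nature of \(\text{Re}_{p}\)-based correlations concrete, the sketch below estimates the terminal velocity by fixed-point iteration of the force balance, using the classical Schiller-Naumann correlation purely as an example of a \(C_{D}(\text{Re}_{p})\) relation (it is not the correlation established in this work).

```python
import numpy as np

def schiller_naumann(Re):
    """Example C_D(Re_p) correlation (Schiller-Naumann, Re_p < 800)."""
    return 24.0 / Re * (1.0 + 0.15 * Re**0.687)

def terminal_velocity(Gamma, d_p, nu, g=9.81, tol=1e-10, itmax=500):
    """Fixed-point iteration for the settling velocity. The force
    balance (Gamma-1) g (pi/6) d^3 = C_D (pi/8) d^2 v^2 gives
    v = sqrt(4 (Gamma-1) g d / (3 C_D)), but C_D depends on
    Re_p = v d / nu, so the relation is implicit in v."""
    v = np.sqrt(abs(Gamma - 1.0) * g * d_p)  # initial guess: U_g
    for _ in range(itmax):
        cd = schiller_naumann(v * d_p / nu)
        v_new = np.sqrt(4.0 * abs(Gamma - 1.0) * g * d_p / (3.0 * cd))
        if abs(v_new - v) < tol:
            break
        v = v_new
    return v, v * d_p / nu  # terminal velocity and response Re_p
```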
In the present article, we investigate experimentally the settling of spherical particles in a quiescent fluid over a broad region of the parameter space, namely \(1<\Gamma<8\) and \(100<\text{Ga}<350\) (symbols in figure 2 indicate all points explored in the parameter space). For all the investigated conditions, we fully characterize the trajectory properties of the particles as well as the drag coefficient derived from the particle's terminal velocity. The article is organized as follows. We first introduce the experimental setup in Sec. II. The results are then described in Sec. III. Finally, our conclusions are summarized in Sec. IV.
## II Experimental Methods
### Experimental setup and protocol
The experiments are performed in a transparent PMMA tank with a square cross-section of \(170\times 170\) mm\({}^{2}\) and a height of 710 mm, shown in Figure 1. The tank is filled with different mixtures of pure glycerol (Sigma-Aldrich W252506-25KG-K) and distilled water, ranging from 0% to 40% glycerol concentration. The viscosity of each mixture is measured with a Kinexus ultra+ rheometer from Malvern Instruments, with a maximum uncertainty of 3%. The kinematic viscosity \(\nu\) ranges from \(10^{-6}\) to \(1.05\times 10^{-3}\) m\({}^{2}/\text{s}\). Moreover, as the viscosity depends on the temperature, an air-conditioning system keeps a constant room temperature of \((22\pm 1)^{\circ}\)C, yielding a 5% uncertainty on the precise value of the viscosity.
A 150 mm region of fluid is kept above and below the visualisation volume, to ensure both that any initial transient imposed on the particles at release has decayed and that the bottom of the tank has no influence on the measurements. Furthermore, a minimum distance of 20 mm between the tank walls and the particles is maintained. In this configuration, and using the correlations proposed by Chhabra _et al._[30], the settling velocity hindering due to wall effects is estimated to be lower than 3%.
The trajectory of the settling particles is recorded using two high speed cameras (model fps1000 from The Slow Motion Camera Company _Ltd_) with a resolution of \(720\times 1280\) px\({}^{2}\) and a frame rate of 2300 fps. The movies recorded from these two cameras allow the implementation of time resolved 4D-Lagrangian Particle Tracking (4D-LPT) to reconstruct the particle trajectories [31]. Backlight illumination was used, with two LED panels facing each camera on the opposite side of the tank, as represented by the dark blue rectangles in Fig. 1.
Various series of experiments were carried out with different optical magnification ratios, in order to access large scale properties of the trajectories (with lower magnification) as well as higher resolution data (with higher magnification). The magnification was varied by keeping the same optics mounted on the cameras, and varying the distance \(A\) from the cameras to the exterior of the tank's wall. The datasets corresponding to these different situations are detailed in the next subsection.
Figure 1: Experimental setup. Two cameras image the particles settling inside the water tank.
In order to span the \(\Gamma\) - Ga parameters space, we considered a set of spherical particles with different diameters (\(d_{p}\)) and densities (\(\rho_{p}\)), while varying the water-glycerol mixture in order to vary the fluid viscosity \(\nu\). Varying the fluid viscosity \(\nu\) allows Ga to be changed for a given type of particle, at the expense of a slight modification of the value of \(\Gamma\) due to the associated variation of the fluid density. The characteristics of the particles and the ranges of Ga and \(\Gamma\) investigated in this article are reported in Table 1. Overall, a total of 68 points in the \(\Gamma\) - Ga parameters space has been explored (see figure 2). For each point, up to 25 independent drops were released in order to test the repeatability of the observed regimes and the possible presence of bi-stable regions where different settling regimes could co-exist in the same region of the parameters space. The particles' diameter and sphericity were measured using a microscope with a precision of 10 \(\mu\)m. In particular, no significant deviation from the spherical shape or from the manufacturer's documented diameter could be measured. The surface roughness of the particles was also measured, with a Scanning Electron Microscope ZEISS SUPRA 55 VP, over an area of \(200\times 500\)\(\mu\)m\({}^{2}\). The arithmetical mean height of rugosities Ra reported in table 1 shows a high degree of smoothness, as Ra/\(d_{p}<0.05\); therefore roughness is not expected to alter the spheres' dynamics [32].
The experimental procedure is the following: the tank is filled with a water-glycerol mixture and, after approximately 24 hours, the temperature at different positions in the fluid's bulk differs by less than 0.5\({}^{\circ}\)C, so thermal equilibrium is reached. Then a standard calibration of the 4D-LPT system is performed [31]. The spheres are released at the center of the tank with chemical tweezers: they are completely submerged below the air-liquid interface and released after approximately 20 s, when the fluid free surface is at rest. A minimum time of 120 s is taken between successive drops to ensure that the fluid has no perturbations left from the previous drop.
### Data sets
The experiments were conducted using two different optical magnifications, resulting in various values of the non-dimensional trajectory length \(l_{\text{max}}^{*}=h/d_{p}\), ranging from 11.6 to 200 (see Fig. 1). Note that the lowest values of \(l_{\text{max}}^{*}\) (11.6 and 23.3) correspond to the larger optical magnification, or small \(A\) (hence giving better spatial resolution, but shorter tracks), while the larger values of \(l_{\text{max}}^{*}\) were obtained with the smaller magnification, or large \(A\) (resulting in a larger field of view, hence giving access to longer trajectories, which is important in particular to properly estimate the frequency of oscillating regimes). The values of \(l_{\text{max}}^{*}\) are reported in the \(\Gamma-\) Ga parameters space in Fig. 2.
All the relevant geometric (inclination and planarity) and dynamic characteristics (spectral content and terminal velocity) of particle trajectories cannot be equally addressed from the different datasets, as the accuracy of their estimate depends on the maximum accessible track length \(l_{\text{max}}^{*}\). Empirically, we found that to reasonably resolve the trajectory inclination, a dimensionless trajectory length of at least \(l^{*}\gtrsim 10\) (which is accessible with all datasets) is needed. This has been tested by checking the estimation of the inclination angle using the longest trajectories in the oblique regime and successively considering shorter and shorter portions of those long tracks. On the other hand, the quantification of the planarity via the eigenvalue method detailed in Sec. III requires \(l^{*}\gtrsim 23\), a condition not met for plastic particles, due to their large diameter. This conclusion has been reached by checking the estimation of the planarity using the longest available trajectories in the chaotic regime and successively considering shorter and shorter portions of those long tracks. This effect will be explored further in Section III.3. Finally, the spectral analysis requires long trajectories, an issue further discussed in Sec. III.4.
In order to reduce experimental noise (due to inevitable particle detection errors in the Lagrangian Particle Tracking treatment [33]), the raw trajectories are smoothed by convolution with a Gaussian kernel of width \(\sigma=12\) frames. This behaves as a low-pass filter with a cut-off frequency \(f_{c}=\text{fps}/\sigma=2300\)\(\text{Hz}/\sigma\approx 192\) Hz. Spectral analysis is therefore expected to be well resolved for frequencies up to the order of 80 Hz, so as to respect the Nyquist-Shannon sampling theorem.
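A minimal sketch of this smoothing step, assuming trajectories sampled at the camera frame rate and relying on SciPy's one-dimensional Gaussian filter (the function names below are ours):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

FPS = 2300.0   # camera frame rate (Hz)
SIGMA = 12.0   # Gaussian kernel width (frames); f_c = FPS/SIGMA ~ 192 Hz

def smooth(track):
    """Low-pass filter a raw coordinate time series (1D array) by
    convolution with a Gaussian kernel, as done for the raw tracks."""
    return gaussian_filter1d(track, SIGMA)

def velocity(track):
    """Velocity by centered finite differences of the smoothed track."""
    return np.gradient(smooth(track)) * FPS
```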
As previously mentioned, for each data point in the \(\Gamma\) - Ga parameters space, at least 10 and up to 25 experimental repetitions were executed and their trajectories analysed. This is mandatory in order to test the repeatability of the observed regimes, estimate uncertainties, and possibly detect multi-stable regions of the parameters space where multiple settling regimes may coexist. The uncertainties in quantities extracted from this data (e.g. trajectory angle or planarity) are taken as half of the standard deviation over the total set of drops for each data point.
Finally, in the remainder of this article, dimensionless parameters are denoted by a superscript asterisk. Spatial variables are normalized by particle diameter \(x^{*}=x/d_{p}\), velocities are normalized by the buoyancy velocity \(v^{*}=v/U_{g}=v/\sqrt{|\Gamma-1|gd_{p}}\), and time is normalized by the response time of the particles \(\tau_{g}=d_{p}/U_{g}\).
Table 1: Properties of the different settling particles investigated. See text for details.

| Material (label) | \(\rho_{p}\) (kg/m\({}^{3}\)) | \(d_{p}\) (mm) | \(\Gamma\) | Ga | Ra (\(\mu\)m) |
|---|---|---|---|---|---|
| Metal | 7950 | {1, 2, 3} | 6.6-7.8 | 112-290 | 9 |
| Glass | 2500 | 3 | 2.1-2.5 | 130-270 | 15 |
| Polyacetal | 1150 | 6 | 1.1-1.3 | 124-340 | 120 |
## III Results
In this section, we first recall the different settling regimes reported in the literature. Then, the features of the 68 points experimentally investigated in the parameter space (Fig. 2) are described, with particular emphasis on their geometric and spectral properties, as well as on their terminal velocity and the estimation of their drag coefficient.
### Different Regimes
The different regimes in the parameters space obtained from numerical simulations by Zhou and Dusek [6] are represented by different colors in Fig. 2. Seven distinct regimes were numerically identified, whose features are summarized in the following:
1. Rectilinear Regime (white), with planar vertical trajectories and no inclination or oscillations;
2. Steady Oblique Regime (gray), with planar and oblique trajectories with respect to the vertical, and no oscillations;
3. Oblique Oscillating Regime, with planar and oblique trajectories, and the presence of oscillations. The frequency of oscillations \(f^{*}\) depends on the particle-fluid density ratio \(\Gamma\), with a High-Frequency Regime (HF, orange) at \(f^{*}\simeq 0.18\) and a Low-Frequency (LF, green) at \(f^{*}\simeq 0.068\).
4. Planar or Rotating Regime (yellow), a tri-stable region of the parameters space in which oblique, (High- or Low-Frequency) oscillating trajectories, which can either be planar or exhibit a slowly rotating symmetry plane (thus generating helicoid-like trajectories), coexist with the Chaotic Regime.
5. Vertical Periodic Regime (blue), where the trajectories are planar, rectilinear and vertical, and oscillate at the High-Frequency \(f^{*}\)=0.18;
6. and finally the Chaotic Regime (pink), with oblique and non-planar trajectories with no periodic oscillations.
A systematic study of the bifurcations between regimes was performed numerically by [6]. That study narrowed down the limits between regimes, in terms of Ga and \(\Gamma\), and reported new regimes not previously detected in the simulations by Jenny _et al._[7; 21] (such as a Helical/Rotating Regime and a Vertical Periodic Regime). They also demonstrated the existence of bi-stable zones in the parameters space, where two regimes can co-exist. For instance, for moderate particle-to-fluid density ratios \(\Gamma\lesssim 2\) a bi-stable regime between a Chaotic and a Vertical Oscillating Regime is reported, while for larger density ratios they report bi-stability between Planar Oscillating and Helical Regimes. Furthermore, they have better quantified trajectory parameters such as angle, velocities and spectral content. Note that this description of the dynamics of individual particles was later used as a benchmark for numerical investigations of collective particle effects [34; 35]. Few analytical results have been derived regarding the bifurcations between different settling regimes, one exception being the transition between the Rectilinear and the Steady Oblique Regimes, which has been analytically shown by Fabre _et al._[36] to occur at a critical Galileo number of the order of 155, independently of the particle-to-fluid density ratio, in excellent agreement with the numerical findings previously mentioned.
Figure 2: Particle-to-fluid density ratio (\(\Gamma\)) – Galileo number (Ga) space of parameters. Data points are classified by their maximum trajectory length \(l_{\max}^{*}=h/d_{p}\).
To the best of our knowledge, only three experimental studies [25; 26; 27] have explored the predictions made by the aforementioned simulations and theories. Horowitz _et al._[25] were mostly interested in regimes for spheres that are rising, or only slightly denser than the fluid, at high Galileo numbers: they studied particle-to-fluid density ratios \(\Gamma\) below 1.4 and Galileo numbers ranging from \(10^{2}\) to \(10^{4}\). In particular, they studied the trajectory angle and drag following the work of Karamanev [20]. Intriguingly, most findings from this study deviate from the numerical simulations by Zhou and Dusek [6], in particular for the case of settling particles, which will be investigated here. On the other hand, Veldhuis and Biesheuvel [26], although with some discrepancies, observed several of the dynamical regimes reported in the numerical simulations. In particular, oblique trajectories with no significant frequencies (Steady Oblique Regime in simulations) were reported. They also report oblique trajectories with oscillations at three dominant dimensionless frequencies of 0.07, 0.017 and 0.025 (Oblique Oscillating Regime in simulations), whose presence depends on the particle-to-fluid density ratio \(\Gamma\). Finally, an oblique chaotic regime with no dominant frequencies and random trajectory curvature (Chaotic Regime) was described. These regimes were measured for particle-to-fluid density ratios \(\Gamma\) of 1.3 and 2.3 at various Galileo numbers, spanned by varying the fluid viscosity. Finally, in 2022, Raaghav _et al._[27] performed experiments on rising and settling particles, with four particle-to-fluid density ratios (\(\Gamma\) = 0.87, 1.12, 3.19 and 3.9) and Ga ranging from 100 to 700. They confirmed some results of previous numerical simulations and experiments and contradicted others. The low Ga regimes (up to the Steady Oblique Regime) are unambiguously confirmed, in agreement with previous studies. For higher Galileo numbers (typically above 200), they found however discrepancies with both previous numerical and experimental studies. For instance, they observed a bi-stable behavior (between the Oscillating and Chaotic Regimes) for moderately dense spheres (\(\Gamma\simeq 1.1\)) in the range \(250<\text{Ga}<300\), in agreement with Zhou and Dusek [6], but for density ratios above 3, they did not observe the High-Frequency Oblique Oscillating Regime reported by Zhou and Dusek [6]; they confirmed though the existence of a helical mode, although no bi-stability with the Chaotic Regime was observed, contrary to the findings reported by Zhou and Dusek [6].
Our experiments confirm the existence of all the predicted regimes, in regions of the parameters space in relatively good agreement with the ones delimited by numerical simulations. Fig. 3(a-c) qualitatively shows some representative 3D trajectories for \(\Gamma\approx 7.9\) particles, from the \(l_{\text{max}}^{*}=200\) dataset. The trajectories have been arbitrarily centered on the horizontal axes; sub-figures show top and side views.
Fig. 3(a) shows a case of planar, oblique trajectories, measured here at Ga = 200. Note that steady and oscillating regimes are almost indistinguishable in such a representation by a simple visual inspection of the trajectories, as the amplitude of oscillations is of the order of the particle diameter. The distinction between the two regimes will be quantitatively discussed later, based on the estimation of the particle velocity and its spectral analysis (the example shown in Fig. 3(a) is actually an oblique oscillating case). It can also be noted that the angle of the trajectories with the vertical in this oblique regime remains almost constant for all drops (this angle will be quantitatively investigated in the next subsection, and is of the order of \(5^{\circ}\) in the present example), but each trajectory has its own direction, so that the ensemble forms a cone, hence preserving the global symmetry of the problem.
Fig. 3(b) represents a sample of trajectories of \(\Gamma\approx 7.9\) particles at \(\text{Ga}=217\). By combining the side and top views, it can be seen that several of these trajectories are consistent with portions of helicoids (for instance the red and the dark blue curves, which appear as quasi-circular from the top view, although even with the \(l_{\text{max}}^{*}=200\) dataset, we only catch half of the period at most). Those co-exist with non-planar chaotic trajectories (as for instance the black and yellow curves). These measurements fall in the tri-stable regime previously mentioned.
Figure 3: Typical trajectories in the: (a) Steady Oblique; (b) Planar or Rotating; and (c) Chaotic Regimes.
Finally, Fig. 3(c) presents several trajectories that fall in the Chaotic Regime: all trajectories are different and no pattern of planarity or oscillations is present.
After this brief qualitative description of some observed trajectory regimes, the next Subsections present a systematic quantitative analysis of the different properties used to characterise trajectory geometry and dynamics: angle with the vertical, planarity, spectral content, terminal velocity, and drag.
### Trajectories Angle
For each recorded trajectory we define the settling orientation as the angle between a 3D linear fit of the trajectory and the vertical, and for each given set of parameters \((\text{Ga},\Gamma)\) we define the mean settling orientation as the ensemble average of settling angles over all trajectories recorded at those parameters. Fig. 4 shows the mean settling orientation as a function of \(\text{Ga}\) for the three different classes of particles investigated (\(\Gamma\approx 7.9\), \(\Gamma\approx 2.5\) and \(\Gamma\approx 1.1\)). Besides, the different settling regimes as reported from numerical simulations and previously shown in Fig. 2 are delimited by the dashed vertical lines and identified by coloured rectangles that respect the colour code in Fig. 2. Furthermore, the type of symbols represents the value \(l_{\text{max}}^{*}\), also following the nomenclature of Fig. 2.
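A minimal sketch of this settling-orientation estimate is given below, where the best-fit line direction is taken as the leading principal direction of the centered positions (a total least squares fit); the implementation is ours and assumes gravity along the \(z\) axis.

```python
import numpy as np

def settling_angle_deg(positions):
    """Angle between the 3D best-fit line of a trajectory and the
    vertical. positions: (N, 3) array of (x, y, z) with z vertical."""
    X = positions - positions.mean(axis=0)     # center the trajectory
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    direction = Vt[0]                          # leading principal axis
    return np.degrees(np.arccos(abs(direction[2])))
```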
A smooth transition from rectilinear to oblique (primary regular bifurcation) is seen around the expected critical Galileo number of 150 for \(\Gamma\approx 1.1\) and \(\Gamma\approx 7.9\) particles and, although there is a lack of data points in this region of \(\text{Ga}\) for \(\Gamma\approx 2.5\) particles, the available data points are consistent with a similar transition also occurring in the same range of \(\text{Ga}\) for those particles. More precisely, if the threshold between this regimes is defined as the Galileo number value at which the angle of the mean settling orientation has a non-zero angle, \(\Gamma\approx 1.1\) and \(\Gamma\approx 7.9\) particles present threshold values of \((125\pm 10)\) and \((115\pm 10)\) respectively, leading to a joint threshold at \(\text{Ga}=(120\pm 15)\). The trajectory angle is then found to continuously vary with the Galileo number; see for example \(\Gamma\approx 7.9\) particles: the angle varies monotonously from 0 to 6 degrees in the \(\text{Ga}\) range 110-190. Additionally, the maximum observed angles are \((5.7^{\circ}\pm 0.2^{\circ})\), \((5.1^{\circ}\pm 0.2^{\circ})\) and \((5.1^{\circ}\pm 0.2^{\circ})\) for density ratios 7.9, 2.5 and 1.1, respectively. Besides, this maximum angle is reached around \(\text{Ga}=200\) in all cases in the region of parameters space that has been identified in numerical simulations by Zhou and Dusek [6] and previous experiments [25; 27] as corresponding to the Oblique Regimes, although the distinction between steady and oscillating regimes requires further analysis of the spectral content of the trajectories, which will be presented later. We note also that, although the detailed trend of the settling angle with
Figure 4: Trajectory angle versus Galileo number for the three particle densities. Regimes are delimited by dashed vertical lines and identified by colors following Fig. 2. Symbols represent the value of \(l_{\text{max}}^{*}\), according to Fig. 2.
Ga as presented here has not been systematically explored in previous studies, the values we observe for the maximum settling angle are in good agreement with the range of angles previously reported: "of about 4 to 6 degrees" in the Steady Oblique and Oblique Oscillating Regimes in numerical simulations by Zhou and Dusek [6], " approximately \(4^{\circ}\) to \(7.5^{\circ}\) " in [25] and " approximately \(2.8^{\circ}\) to \(7.4^{\circ}\) " in [27].
It can be seen in Fig. 4 that for large Galileo numbers (typically Ga \(>200\)) multiple values of the average settling angle can be observed for similar values of Ga. These situations are generally consistent with regions of the parameters space which have been identified in numerical simulations either as multi-stable (yellow) or chaotic (pink). For the denser particles, such multi-values of the settling angle are for instance pronounced in the range Ga \(\in(200,230)\) encompassing both the HF-Oblique Oscillating (orange) and tri-stable Planar/Rotating (yellow) regions of the numerical parameters space, what may suggest that the multi-stable Planar/Rotating Regime, identified numerically around Ga \(\approx 220\), may actually extend further into the HF-Oblique Oscillating region at lower Galileo numbers. For the lightest particles, the trend to observe multiple values of the settling angle is very clear in regions of Ga expected to correspond to the Chaotic Regime (pink), in particular in the range Ga \(\in(200,260)\). For the intermediate density case (\(\Gamma\approx 2.5\)), this trend is observed in the vicinity of the LF-Oblique Oscillating Regime (green), what may be a sign that as for the dense particles case, the region numerically identified as bi-stable Planar/Rotating (yellow) may actually extend to lower values of Ga particularly into the LF-Oblique Oscillating region.
It is also interesting to see that for the \(\Gamma\approx 1.1\) particles the drop of the settling angle in the range Ga \(\in(250,300)\) is consistent with the numerical prediction of a Vertical Periodic Regime (blue) appearing in that range and surrounded by Chaotic Regimes.
Overall, measured settling angles are consistent with what is expected from the numerical parameters space. With the exception of a probably more extended multi-stable region (yellow) overlapping (partially or totally) the Oblique-Oscillating regions.
### Trajectories Planarity
The trajectory planarity is quantified by the ratio of eigenvalues \(\lambda_{2}/\lambda_{1}\) (with \(\lambda_{1}\geq\lambda_{2}\)) of the dimensionless perpendicular (to gravity) velocity correlation matrix defined as:
\[\langle\mathbf{v}_{\perp}^{*}\ \mathbf{v}_{\perp}^{*\mathrm{T}}\rangle= \begin{bmatrix}<v_{x}^{*\,2}>&<v_{x}^{*}v_{y}^{*}>\\ <v_{y}^{*\,2}>&<v_{y}^{*\,2}>\end{bmatrix}, \tag{1}\]
with \(v^{*}=v/U_{g}\). Perfectly planar trajectories yield \(\lambda_{2}/\lambda_{1}=0\), while non-vanishing values of this ratio indicate a departure from planarity [37]. Note that the analysis of the planarity only yields meaningful results for trajectories with \(l_{\mathrm{max}}^{*}>33.3\). Fig. 5 shows the ratio \(\sqrt{\lambda_{2}/\lambda_{1}}\) versus the Galileo number, for the three types of particles. As in previous figures, the different regimes are delimited by dashed vertical lines and identified by coloured rectangles.
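In practice, the ratio \(\sqrt{\lambda_{2}/\lambda_{1}}\) can be computed from the measured perpendicular velocity components as in the following minimal sketch of Eq. (1) (the implementation and names are ours):

```python
import numpy as np

def planarity_ratio(vx, vy, U_g):
    """sqrt(lambda_2/lambda_1) from the correlation matrix of the
    dimensionless perpendicular velocities, Eq. (1): 0 for a perfectly
    planar trajectory, larger values for non-planar ones."""
    v = np.vstack((vx, vy)) / U_g            # 2 x N dimensionless velocities
    corr = v @ v.T / v.shape[1]              # 2 x 2 correlation matrix
    lam = np.sort(np.linalg.eigvalsh(corr))  # eigenvalues, ascending
    return np.sqrt(lam[0] / lam[1])
```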
Planarity is lost at Ga = \((220\pm 15)\) for \(\Gamma\approx 7.9\) particles and at Ga = \((220\pm 15)\) for \(\Gamma\approx 2.5\) particles. At these points the ratio between the eigenvalues of the velocity correlation matrix, \(\sqrt{\lambda_{2}/\lambda_{1}}\), increases from approximately 0.15 to 0.50 for \(\Gamma\approx 7.9\) particles (0.30 for \(\Gamma\approx 2.5\) particles). The range of Galileo number where planarity is found to be lost is consistent with the transition towards the Planar or Rotating Regime reported in numerical simulations by Zhou and Dusek [6], with a possible overlap with the LF-Oblique Oscillating region for \(\Gamma\approx 1.1\) particles and with the HF-Oblique Oscillating region for \(\Gamma\approx 2.5\) particles. On the other hand, no clear transition between planar and non-planar trajectories is observed for \(\Gamma\approx 1.1\) particles, which may be due to too small values of \(l_{\mathrm{max}}^{*}\).
Figure 5: Planarity versus Galileo number for the three particle densities. Regimes are delimited by dashed vertical lines and identified by colors following Fig. 2. Symbols represent the value of \(l_{\mathrm{max}}^{*}\), according to Fig. 2.
In the case of \(\Gamma\approx 7.9\) particles, the loss of planarity seems to be associated with the emergence of helicoidal trajectories. Fig. 3(b) indeed presents a sample of trajectories for \(\Gamma\approx 7.9\) particles, representative of the ensemble of trajectories at \(\text{Ga}\sim 217\), that are consistent with a half-helicoid. Similar trajectories are found at \(\text{Ga}=\{215,\,217,\,221\}\) and \(\text{Ga}\) = \(\{228,233\}\), for several values of \(l_{\text{max}}^{*}>33.3\). Hence the aforementioned loss of planarity for data with Galileo numbers larger than \((220\pm 15)\) (see Fig. 5) can be related to the appearance of these helicoid-like trajectories. Limitations of the measurement volume, even in the \(l_{\text{max}}^{*}=200\) configuration, do not allow us to be fully conclusive, as only a portion of the helicoid's period is recognizable. However, assuming that these trajectories are helicoids, the radius of their horizontal projection (Fig. 3(b) top view) would be roughly 7 particle diameters, and their pitch would be approximately 500 particle diameters. Similar helicoid-like trajectories have also been seen experimentally in previous studies, although for smaller density ratios, \(\Gamma<3.9\) (recall that metallic particles in the present study have a density ratio \(\Gamma\simeq 7.5\), which has not been investigated in previous works): [26] reported what are possibly helicoidal trajectories for particles with density ratio of the order of \(\Gamma\simeq 2.5\) (hence close to the present \(\Gamma\approx 2.5\) particles), while [27] found similar trajectories for particles with \(\Gamma=\{3.2,\,3.9\}\). Their results show a pitch of the order of 430 \(d_{p}\), which is comparable to the one of 500 \(d_{p}\) found here. In this sense, the results of this work confirm the existence of such a non-planar, very likely helicoidal, regime for \(\text{Ga}\in(215,233)\) at larger particle-to-fluid density ratios, in the range of metallic particles (\(\Gamma\approx 7.9\)). Recall that the short \(l^{*}\) in the data sets of \(\Gamma\approx 2.5\) and \(\Gamma\approx 1.1\) particles does not allow us to see a portion of a helicoid long enough to make such claims.
Fig. 5 for \(\Gamma\approx 2.5\) and \(\Gamma\approx 7.9\) particles also shows signatures of non-planarity in the region numerically identified as chaotic (\(\text{Ga}\gtrsim 230\)), in agreement with the sample trajectories shown in Fig. 3(c), where several trajectories show a clear departure from simple portions of helicoids. A clear distinction between non-planar helicoidal and chaotic trajectories, with a systematic characterization of the pitch and radius of the helicoids and of the frontier with the Chaotic Regime, would nevertheless require further dedicated experiments with a taller visualisation volume.
### Trajectories Oscillations
We analyze the emergence of oscillatory dynamics by studying the fluctuations of the horizontal (_i.e._ perpendicular to gravity) dimensionless velocity: \(V^{*}_{\perp}:=v^{*}_{\perp}-\langle v^{*}_{\perp}\rangle\). In particular, while oblique-oscillatory regimes have been experimentally reported for density ratios below 3.9, we want to confirm here their existence at higher density ratios (_i.e._ for the \(\Gamma\approx 7.9\) particles) and, in that case, evaluate the corresponding frequency. On the other hand, the existence of a Vertical Periodic Regime (light blue region in Fig. 2) for density ratios below 1.8, as predicted by Zhou and Dusek [6], is yet to be corroborated experimentally. This regime is expected to have trajectories with zero angle and Low-Frequency oscillations. Recall that this regime has already been discussed in the previous section, where a sharp decrease in trajectory angle was found. We will therefore confirm here that the oscillations are at the Low-Frequency \(f^{*}\approx 0.06\).
Numerical simulations by Zhou and Dusek [6] predict the existence of Oblique-Oscillatory Regimes for \(\text{Ga}\) of the order of 200, with a characteristic dimensionless frequency \(f^{*}\) which depends on the density ratio \(\Gamma\). More specifically, the simulations by Zhou and Dusek [6] predict a transition from a Low-Frequency Regime (with a dominant dimensionless frequency \(f^{*}\approx 0.07\), corresponding to green regimes in previous graphs) to a High-Frequency Regime (with \(f^{*}\approx 0.18\), corresponding to orange regimes in previous graphs) occurring at \(\Gamma\approx 2.3\). However, previous experiments by Veldhuis and Biesheuvel [26] and Raaghav _et al._[27] have only partially confirmed this scenario. Veldhuis and Biesheuvel [26] for instance did observe Oblique-Oscillating Regimes in the expected range of Galileo number for particles with density ratios \(\Gamma\approx 1.5\) and \(\Gamma\approx 2.5\), but they report a dominant characteristic frequency of \(f^{*}\approx 0.25\) for the lower density ratio case (_i.e._ about three times higher than the numerical prediction), while two main frequencies, of the order of 0.07 and 0.25, were detected for the larger density ratio. On the other hand, Raaghav _et al._[27] consistently report a Low-Frequency Oblique-Oscillating Regime (with \(f^{*}\approx 0.06\)) for particles with density ratio \(\Gamma\approx 1.1\), but did not find any planar High-Frequency Oblique-Oscillating Regime for particles with \(\Gamma=3.9\), for which only non-planar helical trajectories (similar to those reported in the previous section of this work) were observed. The existence of Oblique-Oscillating Regimes (and, where applicable, the value of their frequency) for high density ratios therefore remains open.
Fig. 6 shows a sample of perpendicular velocity fluctuations versus time for \(\Gamma\approx 7.9\) particles at \(\text{Ga}=200\), and for \(\Gamma\approx 1.1\) particles at \(\text{Ga}=208\). They exhibit a clear oscillatory dynamics, which is oblique (remember that \(\theta\approx 5^{\circ}\) for particles at these \(\text{Ga}\)), with marked frequency and amplitude differences: \(\Gamma\approx 7.9\) particles show a higher frequency and a smaller amplitude than \(\Gamma\approx 1.1\) particles. These observations are in qualitative agreement with numerical predictions. The amplitude ratio of approximately 5 between the High and Low-Frequency perpendicular dimensionless velocity oscillations is found however to be substantially smaller than what is reported in the numerical simulations by Zhou and Dusek [6], where a ratio of 12 is observed. From the oscillations reported in Fig. 6, it is possible to estimate the typical dimensionless frequencies \(f^{*}\) for both regimes, which are found to be of the order of 0.07 for the Low-Frequency case (\(\Gamma\approx 1.1\) particles)
and of the order of 0.2 for the High-Frequency case (\(\Gamma\approx 7.9\) particles). These values are in good agreement with the numerical prediction, and the spectral analysis that follows.
A more accurate and systematic analysis of the oscillatory dynamics in the different regimes can be performed by computing the Power Spectral Density (PSD) of the velocity fluctuations, averaged over multiple realizations in a narrow range of Ga. Fig. 7 presents various PSDs of velocity fluctuations at different values of the Galileo number, for the \(l_{\rm max}^{*}=200\) data-set of \(\Gamma\approx 7.9\) particles. Both parallel and perpendicular components of velocity fluctuations have been analyzed. Each sub-figure presents the ensemble average of all PSDs in ranges of Ga where the spectral content was found to be robust: Ga = {187, 195, 198, 202, 205} for Fig. 7(a), Ga = {215, 217, 221} for Fig. 7(b), Ga = {227, 233} for Fig. 7(c), and Ga = 235 for Fig. 7(d). We note that the spectral resolution, limited by the accessible trajectory length, is 0.01. All measurements with Ga smaller than 187 have no spectral content (settling is then stationary, either vertical or oblique) and are therefore not shown.
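A minimal sketch of this spectral estimate: each drop yields one periodogram of the dimensionless perpendicular velocity fluctuations \(V_{\perp}^{*}\), with frequencies normalized by \(\tau_{g}=d_{p}/U_{g}\) so that peaks can be read directly as \(f^{*}\); the implementation below is ours and assumes equal-length velocity records.

```python
import numpy as np
from scipy.signal import periodogram

def dimensionless_psd(v_perp, U_g, d_p, fps=2300.0):
    """PSD of V*_perp = (v_perp - <v_perp>)/U_g versus f* = f tau_g."""
    fluct = (v_perp - v_perp.mean()) / U_g
    f, psd = periodogram(fluct, fs=fps)
    return f * (d_p / U_g), psd

# ensemble average over repeated drops at (nearly) the same Ga,
# assuming equal-length velocity records in the list `drops`:
# f_star, _ = dimensionless_psd(drops[0], U_g, d_p)
# psd_mean = np.mean([dimensionless_psd(v, U_g, d_p)[1] for v in drops], axis=0)
```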
The perpendicular velocity fluctuations PSDs presented in Fig. 7(a) show that for Ga \(\in(187,205)\) oscillations have a broad frequency peak centred around a dominant frequency \(f^{*}=(0.19\pm 0.01)\), and a secondary frequency around \(f^{*}=(0.27\pm 0.01)\). The dominant frequency confirms the High-Frequency nature of the oscillations qualitatively discussed in the previous paragraphs for \(\Gamma\approx 7.9\) particles at Ga \(=200\), corresponding to the perpendicular velocity signal shown in Fig. 6. It is also in agreement with the High-Frequency Oblique Oscillating Regime, with \(f^{*}=0.18\), reported in the numerical simulations by Zhou and Dusek [6] for such dense particles in this range of Galileo number.
The main difference between these experiments and the simulations by Zhou and Dusek [6] is the non-negligible intensity of the peak at \(f^{*}\approx 0.27\) (and possibly a sub-harmonic at \(f^{*}\approx 0.13\)). The frequency peak at \(f^{*}=0.27\) is reminiscent of the observation by Veldhuis and Biesheuvel [26], who reported a similar frequency for particles in both the Low and High-Frequency Regimes, which was interpreted as a possible fourth harmonic of the Low-Frequency \(f^{*}=0.07\).
When Ga is increased to the range \((215,221)\), the trajectories lose any significant spectral signature. Neither the parallel nor the perpendicular velocity PSD in Fig. 7(b) shows any marked peak. Only a mild peak at \(f^{*}=(0.01\pm 0.01)\) is present for both parallel and perpendicular velocities, together with a mild peak at \(f^{*}=(0.19\pm 0.01)\) for the parallel velocity, with an intensity 6 times smaller than in the previous Ga range. The angular and planarity analysis in the previous Subsection suggests that trajectories of \(\Gamma\approx 7.9\) particles in this range of \(\mathrm{Ga}\) might fall in the Planar or Rotating Regime, with some evidence of the existence of helicoidal trajectories in this regime. The estimated pitch of the helicoids (\(\approx 500d_{p}\)) would correspond to a frequency of oscillation of \(f^{*}=v_{\parallel}^{*}/(500)\approx 0.002\), in principle out of reach of the 0.01 resolution of the present spectral analysis. The mild peak at \(f^{*}\approx 0.01\) might however be reminiscent of this slow helicoidal motion.
Figure 6: Typical perpendicular (to gravity) velocity fluctuations for trajectories in the Low-Frequency (continuous line) and High-Frequency (dashed line) Regimes.
Figure 7: PSD of parallel and perpendicular dimensionless velocity fluctuations (\(v^{*}_{\parallel}\) and \(v^{*}_{\perp}\), respectively) for \(\Gamma\approx 7.9\) particles. Colors correspond to Regimes defined in Fig. 2.
Figure 8: PSD of parallel and perpendicular dimensionless velocity fluctuations (\(v^{*}_{\parallel}\) and \(v^{*}_{\perp}\), respectively) for \(\Gamma\approx 2.5\) particles in the LF-Oblique Oscillating Regime (a), and for \(\Gamma\approx 1.1\) particles in the Vertical Periodic Regime (b) and the LF-Oblique Oscillating Regime (c).
At higher \(\mathrm{Ga}\), in the range \(\mathrm{Ga}\in(225,235)\), the perpendicular velocity fluctuations PSD presented in Fig. 7(c) has a marked peak at the frequency \(f^{*}=(0.055\pm 0.010)\), with a broad base extending towards lower frequencies, down to the spectral resolution of 0.01. This behavior is similar to the one reported by Raaghav _et al._[27] for particles with density ratio \(\Gamma\approx 3.9\) at \(\mathrm{Ga}\sim 210\), where a peak at \(f^{*}\simeq 0.05\) and a peak at \(f^{*}\simeq 0.005\) were reported. This was interpreted as a probable superposition of Low-Frequency oblique oscillations and a slow helical rotation. This scenario is consistent with the combined analysis of angle, planarity and spectral content in the present study. Indeed, Fig. 4(a) shows that trajectories in the range \(\mathrm{Ga}\in(225,235)\) are oblique, while Fig. 5 indicates the coexistence of planar and non-planar (hence compatible with helical motion) trajectories in this range of \(\mathrm{Ga}\). Intriguingly, while both Raaghav _et al._'s and the present experiments seem to observe this co-existence of Low-Frequency oblique and helicoidal trajectories for high density ratio particles, such a behavior has not been reported in the numerical simulations by Zhou and Dusek [6].
At the largest \(\mathrm{Ga}\) explored, Fig. 7(d) presents the PSDs for the case \(\mathrm{Ga}=235\). It does not present any dominant frequency, as expected for Chaotic dynamics.
Overall, our study of oscillations for the high density ratio particles (\(\Gamma\approx 7.9\)) is in good agreement with numerical simulations, apart from the range \(\mathrm{Ga}\in(227,233)\) where Low-Frequency oscillations, possibly co-existing with non-planar helical motion, were observed but not reported in simulations. Reasonable agreement is also found with previous experiments by Raaghav _et al._ [27] at density ratio \(\Gamma\approx 3.9\), although we do confirm the existence of the High-Frequency oscillating region for \(\mathrm{Ga}\in(187,205)\), which they did not observe but is predicted by the simulations by Zhou and Dusek [6]. We do not observe, however, the same regimes as in the study by Veldhuis and Biesheuvel [26] at \(\Gamma\approx 2.5\); in particular, in the Oblique Oscillating Regimes they report a Low-Frequency behavior (at \(f^{*}\approx 0.07\)) rather than a High-Frequency one, as predicted by the simulations. This is likely due to the fact that the density ratio they considered is very close to the Low/High-Frequency transition, found to occur around \(\Gamma\approx 2.3\) in the simulations.
Fig. 8 presents PSDs of velocity fluctuations for \(\Gamma\approx 1.1\) and \(\Gamma\approx 2.5\) particles, at different values of the Galileo number, corresponding to the following data sets: \(l_{\mathrm{max}}^{*}=23.3\) for \(\Gamma\approx 2.5\) particles, and \(l_{\mathrm{max}}^{*}=11.6\) for \(\Gamma\approx 1.1\) particles. Both parallel and perpendicular components of the velocity fluctuations have been analyzed. Each sub-figure presents the ensemble average of all the PSDs in the corresponding \(\mathrm{Ga}\) regime: \(\Gamma\approx 1.1\) particles in the L-F Oscillating and Vertical Periodic Regimes, shown in Fig. 8(a) and (b), respectively, and \(\Gamma\approx 2.5\) particles in the L-F Oscillating Regime, shown in Fig. 8(c).
The perpendicular velocity fluctuations PSDs presented in Fig. 8(a) show that for \(\mathrm{Ga}=208\) oscillations have a broad frequency peak centred around a dominant frequency \(f^{*}=(0.043\pm 0.021)\), while the parallel velocity presents a peak at the same frequency but with 10 times less energy. Note that the uncertainty is considerably higher here since the trajectories are shorter (\(l_{\mathrm{max}}^{*}=11.6\) or \(23.3\)). The dominant frequency confirms the Low-Frequency nature of the oscillations qualitatively identified in the velocity signal shown in Fig. 6. This is also in agreement with the frequency predicted by the numerical simulations of Zhou and Dusek [6] and with the observations of Veldhuis and Biesheuvel [26].
On the other hand, Fig. 8(b) presents the perpendicular and parallel velocity PSDs of \(\Gamma\approx 1.1\) particles in the range \(\mathrm{Ga}\in(269,~{}272)\). We observe a single broad frequency peak centered around \(f^{*}=(0.085\pm 0.021)\), which overlaps with the Low-Frequency range. As in sub-figure (a), the parallel velocity presents a peak at the same frequency but with 10 times less energy. This peak at the Low-Frequency identified by Zhou and Dusek [6], in addition to the smaller trajectory angle identified in the previous subsection, confirms the existence of the Vertical Periodic Regime as predicted by Zhou and Dusek [6], with the only difference that the trajectory angle does not vanish as predicted but stays at low values of around \(1.3^{\circ}\).
Finally, Fig. 8(c) presents the perpendicular and parallel velocity PSDs of \(\Gamma\approx 2.5\) particles in the range \(\mathrm{Ga}\in(190,~{}210)\). The perpendicular velocity fluctuations PSD shows that oscillations have a broad frequency peak centred around a dominant frequency \(f^{*}=(0.054\pm 0.013)\). Additionally, note that, as the trajectories are longer than for \(\Gamma\approx 1.1\) (\(l_{\mathrm{max}}^{*}\in(33.3,~{}100)\)), the uncertainty in this case is smaller (though still larger than for \(\Gamma\approx 7.9\)). This spectral content is in agreement with the Low-Frequency Regime predicted in the numerical simulations by Zhou and Dusek [6], and with what Veldhuis and Biesheuvel [26] have measured for particles in this area of the parameter space. A difference with the experiments of Veldhuis and Biesheuvel [26] is however seen, as they found harmonic contributions at around \(f^{*}=0.27\).
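For readers who wish to reproduce this kind of spectral analysis, the following is a minimal sketch (ours, not the processing chain actually used in this study) of how a dominant dimensionless frequency can be extracted from a velocity-fluctuation record with Welch's method; the time step and signal parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

# Illustrative sketch (ours): extract a dominant dimensionless frequency f*
# from a velocity-fluctuation record, here a synthetic signal mimicking a
# Low-Frequency oscillation at f* = 0.055 buried in noise.
rng = np.random.default_rng(0)
dt = 0.1                                  # dimensionless time step (assumed)
t = np.arange(0.0, 400.0, dt)
v = np.sin(2.0 * np.pi * 0.055 * t) + 0.3 * rng.standard_normal(t.size)
freq, psd = welch(v, fs=1.0 / dt, nperseg=2048)
print(f"dominant f* ~ {freq[np.argmax(psd)]:.3f}")   # ~ 0.055
```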
### Settling Velocity & Drag
In this last section we investigate the terminal settling velocity of the particles, which results from the balance of the drag force and net gravity (i.e. gravity plus buoyancy). The measurement of the terminal velocity therefore allows one to estimate the drag coefficient of the falling spheres and compare it to tabulated values for fixed spheres.
As previously discussed, the dimensional analysis of the problem of a sphere falling in a quiescent viscous fluid yields two dimensionless control parameters: \(\mathrm{Ga}\) and \(\Gamma\). When addressing the further question of the terminal vertical velocity \(v_{s}\), an additional dimensionless parameter emerges: the terminal
particle Reynolds number \(\mathrm{Re_{p}}=v_{s}d_{p}/\nu\). It is important to note that \(\mathrm{Re_{p}}\) is a response parameter of the problem which depends on the control parameters \(\Gamma\) and \(\mathrm{Ga}\) (we shall then write \(\mathrm{Re_{p}}(\mathrm{Ga},\Gamma)\)), therefore implying a possible impact of the previously discussed path instabilities (which depend on both \(\mathrm{Ga}\) and \(\Gamma\)) on the terminal velocity of the spheres. Similarly, when it comes to addressing the question of the drag force experienced by the falling sphere, this introduces another dimensionless parameter, the drag coefficient \(C_{D}\), which shall also be considered _a priori_ as a function of both \(\mathrm{Ga}\) and \(\Gamma\) (we shall write \(C_{D}(\mathrm{Ga},\Gamma)\)). This situation therefore contrasts with the case of the drag force on a fixed sphere in a prescribed mean stream, as in that situation the density ratio is not a relevant parameter, and the Reynolds number is then the unique control parameter of the problem. The drag coefficient in that case solely depends on the sphere Reynolds number, \(C_{D}(\mathrm{Re_{p}})\). This then raises several points for the case of settling spheres: (i) Are the usual correlations for the drag coefficient \(C_{D}(\mathrm{Re_{p}})\) (not explicitly dependent on the density ratio \(\Gamma\)) still valid for the case of falling spheres (where \(\mathrm{Re_{p}}\) and \(\mathrm{C_{D}}\) may have explicit dependencies on both \(\mathrm{Ga}\) and \(\Gamma\))? Recall that the explicit dependency on density ratio is known to be potentially major for light particles with \(\Gamma\ll 1\)[20; 24]; (ii) \(\mathrm{Re_{p}}\) being a response parameter, the usual correlations for the drag coefficient of fixed spheres \(C_{D}(\mathrm{Re_{p}})\) are impractical, as \(\mathrm{Re_{p}}\) is not known beforehand: correlations directly involving the actual control parameters \((\mathrm{Ga},\Gamma)\) (possibly only \(\mathrm{Ga}\) if the explicit dependency on density ratio is found not to be important) would be more practical; (iii) If the density ratio is found to play a role, how important are the associated effects? We address these questions here.
#### iii.2.1 New correlations between Galileo number and terminal particle Reynolds number / drag coefficient
Consider a settling particle within a given point of the parameter space \((\mathrm{Ga},\Gamma)\), with a terminal settling velocity \(v_{s}(\mathrm{Ga},\Gamma)\). From the definition of the terminal particle Reynolds number \(\mathrm{Re_{p}}=v_{s}d_{p}/\nu\) and of the Galileo number \(\mathrm{Ga}=U_{g}d_{p}/\nu\), we can define the dimensionless particle terminal velocity \(v_{s}^{*}\), which can be rewritten in terms of \(\mathrm{Ga}\) and \(\mathrm{Re_{p}}\) [2]:
\[v_{s}^{*}(\mathrm{Ga},\Gamma)=\frac{v_{s}}{U_{g}}=\frac{\mathrm{Re_{p}}( \mathrm{Ga},\Gamma)}{\mathrm{Ga}}. \tag{2}\]
Regarding drag, considering that in terminal settling the drag force \(F_{D}=\frac{\pi}{8}\rho_{f}C_{D}d_{p}^{2}v_{s}^{2}\) equals the gravity-buoyancy force \(F_{g}=\frac{\pi}{6}(\rho_{p}-\rho_{f})d_{p}^{3}g=\frac{\pi}{6}\rho_{f}d_{p}^{2}U_{g}^{2}\), from relation (2) the drag coefficient can be simply expressed as [2]:
\[C_{D}(\mathrm{Ga},\Gamma)=\frac{4}{3}\bigg{(}\frac{\mathrm{Ga}}{\mathrm{Re_{p }}(\mathrm{Ga},\Gamma)}\bigg{)}^{2}. \tag{3}\]
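Explicitly, writing out the balance \(F_{D}=F_{g}\) with the expressions above and using \(v_{s}/U_{g}=\mathrm{Re_{p}}/\mathrm{Ga}\) from (2),

\[\frac{\pi}{8}\rho_{f}C_{D}d_{p}^{2}v_{s}^{2}=\frac{\pi}{6}\rho_{f}d_{p}^{2}U_{g}^{2}\quad\Longrightarrow\quad C_{D}=\frac{4}{3}\left(\frac{U_{g}}{v_{s}}\right)^{2}=\frac{4}{3}\left(\frac{\mathrm{Ga}}{\mathrm{Re_{p}}}\right)^{2},\]

which is precisely (3).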
Note that in expression (3) the particle Reynolds number \(Re_{p}(\mathrm{Ga},\Gamma)\) is a response parameter of the problem, which is not known _a priori_ and needs to be measured. As further discussed below, it can be analytically expressed only in the vanishing Galileo number limit, which corresponds to the steady vertical Stokes settling regime.
Fig. 9 presents the measurements of \(\mathrm{Re_{p}}\) versus Galileo number, for all particles (of all density ratios and for all the settling regimes) explored in the present study. The points appear to be relatively well packed on a main common trend, implying a minor direct dependency of \(\mathrm{Re_{p}}\) on the density ratio \(\Gamma\) (note that an implicit dependency on \(\Gamma\) still exists via \(\mathrm{Ga}=\sqrt{(\Gamma-1)gd_{p}^{3}}/\nu\)). Some scatter of the points is however visible, which may still reflect a possible explicit (minor) correction to the main trend due to the density ratio (this aspect will be further discussed in the next Subsection).
Before addressing such possible corrections, let us first consider, as a first approximation, that \(Re_{p}\) is independent of the density ratio and only explicitly dependent on \(\mathrm{Ga}\). According to (3), this implies that the drag coefficient \(C_{D}\) is itself also independent of the density ratio, and solely dependent on \(\mathrm{Ga}\). Since \(Re_{p}\) and \(\mathrm{Ga}\) are then related, \(C_{D}\) can be equivalently considered as \(\mathrm{Ga}\)-dependent or \(\mathrm{Re_{p}}\)-dependent. This is in agreement with previous studies [25; 27] which measured the drag coefficient of falling spheres and did not observe, within the scatter of their measurements, a significant deviation compared to the fixed sphere case.
It can be noted that the empirical finding that neither \(C_{D}\) nor \(\mathrm{Re_{p}}\) explicitly depends on \(\Gamma\), while they are uniquely related via \(\mathrm{Ga}\), is trivial in the Stokes settling regime (in the limit of vanishing \(\mathrm{Ga}\) and \(\mathrm{Re_{p}}\)). In this limit, the analytical solution of the Stokes equations indeed leads to \(C_{D}(\mathrm{Re_{p}})=24/\mathrm{Re_{p}}\), which combined with Eq. 3 yields \(\mathrm{Re_{p}}=\frac{1}{18}\mathrm{Ga^{2}}\).
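For completeness, the one-line algebra behind this limit reads

\[\frac{24}{\mathrm{Re_{p}}}=\frac{4}{3}\left(\frac{\mathrm{Ga}}{\mathrm{Re_{p}}}\right)^{2}\quad\Longrightarrow\quad\mathrm{Re_{p}}=\frac{\mathrm{Ga}^{2}}{18}.\]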
For non-vanishing \(\mathrm{Ga}\) and \(\mathrm{Re_{p}}\), a one-to-one relation be
Figure 9: Galileo number versus particle Reynolds number alongside the empirical correlation from Eq. 5. The symbols represent the different density ratios (i.e. particle material): squares \(-\Gamma\approx 1.1\); triangles \(-\Gamma\approx 2.5\); circles \(-\Gamma\approx 7.9\). Whereas the edge colors represent the different trajectory regimes, as in Fig. 2: black – Rectilinear & Oblique; green – Low-Freq.; orange – High-Freq.; yellow – Planar or Rotating; and magenta – Chaotic & Vertical Periodic.
tween \(\mathrm{Re_{p}}\) and \(\mathrm{Ga}\) then supports the idea that an explicit correlation \(\mathrm{Re_{p}}(\mathrm{Ga})\) between these two parameters can be derived, via (3), using classical correlations for \(C_{D}(Re_{p})\) for fixed spheres. We propose here to use the correlation by [28], which accurately fits the drag coefficient for spheres over a broad range of Reynolds numbers (up to \(\mathrm{Re_{p}}\lesssim 2\times 10^{5}\)):
\[C_{D}(\mathrm{Re_{p}})=\frac{24}{\mathrm{Re_{p}}}(1+0.150\mathrm{Re_{p}}^{0.6 81})+\frac{0.407}{1+\frac{8710}{\mathrm{Re_{p}}}}. \tag{4}\]
By inserting this expression of \(C_{D}(Re_{p})\) into (3), we can indeed provide a direct correlation for the terminal particle Reynolds number (and hence for the particle terminal velocity) depending only on the actual control parameter of the problem, which is the Galileo number:
\[\mathrm{Re_{p}}^{\dagger}(\mathrm{Ga})=\frac{\mathrm{Ga}^{2}(22.5+\mathrm{Ga }^{1.364})}{0.0258\mathrm{Ga}^{2.6973}+2.81\mathrm{Ga}^{2.0306}+18\mathrm{Ga}^ {1.364}+405}. \tag{5}\]
This expression is represented in Fig. 9 by the solid line, and is found in very good agreement with the global trend measured for the settling particles in our experiments (which essentially confirms that the drag coefficient for fixed spheres reasonably applies to the case of falling spheres). Beyond this agreement, the above correlation is of great practical interest as it allows a direct determination of the settling velocity of a sphere from the sole _a priori_ knowledge of its Galileo number (which is a true control parameter, requiring only the particle-to-fluid density ratio, the sphere diameter, the acceleration of gravity and the ambient fluid's kinematic viscosity), without the need of using the traditional \(\mathrm{C_{D}(Re_{p})}\) correlation to solve (numerically) the non-linear equation (3): \(\mathrm{Re_{p}}^{2}\mathrm{C_{D}(Re_{p})}=\frac{4}{3}Ga^{2}\).
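To illustrate this practical advantage, here is a minimal numerical sketch (ours; the function names and the bisection bracket are illustrative choices, not part of the paper) comparing the explicit correlation (5) with the numerical solution of the implicit equation:

```python
# Sketch (ours): the direct correlation (5) versus the numerical root of the
# implicit equation Re_p^2 C_D(Re_p) = (4/3) Ga^2 built on the fixed-sphere
# fit (4).

def c_d(re_p):
    """Fixed-sphere drag coefficient, Eq. (4)."""
    return 24.0 / re_p * (1.0 + 0.150 * re_p**0.681) + 0.407 / (1.0 + 8710.0 / re_p)

def re_p_direct(ga):
    """Terminal particle Reynolds number from the explicit correlation (5)."""
    return ga**2 * (22.5 + ga**1.364) / (
        0.0258 * ga**2.6973 + 2.81 * ga**2.0306 + 18.0 * ga**1.364 + 405.0)

def re_p_implicit(ga, lo=1e-8, hi=1e7):
    """Bisection solve of Re_p^2 C_D(Re_p) = (4/3) Ga^2."""
    f = lambda re: re**2 * c_d(re) - (4.0 / 3.0) * ga**2
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

for ga in (1.0, 100.0, 200.0, 340.0):
    print(f"Ga = {ga:6.1f}: Re_p from (5) = {re_p_direct(ga):10.4f}, "
          f"implicit = {re_p_implicit(ga):10.4f}")
# In the vanishing-Ga limit both recover the Stokes result Re_p = Ga^2 / 18.
```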
Similarly, a direct correlation between the drag coefficient and the actual control parameter of the problem (\(\mathrm{Ga}\)) (rather than the usual correlation \(C_{D}(Re_{p})\), which connects two response parameters) can be derived by re-introducing expression (5) back into (3):
\[C_{D}^{\dagger}(\mathrm{Ga})=\frac{4}{3}\bigg{(}\frac{0.0258\mathrm{Ga}^{2.69 73}+2.81\mathrm{Ga}^{2.0306}+18\mathrm{Ga}^{1.364}+405}{\mathrm{Ga}(22.5+ \mathrm{Ga}^{1.364})}\bigg{)}^{2}. \tag{6}\]
#### iii.2.2 Density ratio effect
The new correlations (5) and (6) we just proposed assume that both the terminal Reynolds number \(Re_{p}\) and the drag coefficient \(C_{D}\) depend only on \(\mathrm{Ga}\) and do not depend explicitly on \(\Gamma\). Based on Fig. 9, this seems a reasonable global assumption, though the scatter of the points in Fig. 9 and the small deviations with respect to relation (5) (in particular for the less dense particles, \(\Gamma\approx 1.1\), represented as squares in the figure) mean that a possible (minor) effect of density ratio cannot be ruled out.
To better test possible deviations due to density ratio effects, we show in Fig. 10 and 11 the terminal Reynolds number and the drag coefficient compensated respectively by relations (5) and (6) such that a value of zero would correspond to a perfect match (hence with no density effects).
Fig. 10 (for the compensated terminal Reynolds number) shows that although the measurements for all the different datasets obtained in this work are indeed distributed around zero, they can deviate from this density-independent trend with a scatter of typically \(\pm 10\%\). More importantly, it can be seen that (apart from two outliers out of the 68 independent measurements we carried out) the scatter of the points presents a systematic trend with the density ratio, where less dense particles (notably \(\Gamma\approx 1.1\) particles and, to a lesser extent, \(\Gamma\approx 2.5\) particles) are systematically below the correlation derived from fixed spheres, while heavy particles are systematically above. The density-independence approximation therefore seems to give a reasonable average trend to predict the terminal Reynolds number using relation (5), though denser particles have a positive bias (settling up to 10% faster in the range of densities explored here) and lighter particles a negative bias (up to 13% slower in the range of densities explored here).
Similarly, Fig. 11 shows that (apart from the same two outliers out of the 68 independent measurements we carried out) a systematic effect of density ratio can be observed on the drag coefficient \(\mathrm{C_{D}}\): less dense particles (notably \(\Gamma\approx 1.1\) particles) have a systematic positive bias (_i.e._ their drag coefficient is larger, up to +15% in the range of densities we explored) compared to the correlation derived from fixed spheres, while heavy particles have a systematic negative bias (_i.e._ their drag coefficient is lower, up to -15% in the range of densities we explored). The overall drag coefficient spread is 30%.
These results challenge the widespread idea that the drag
Figure 10: Galileo number versus particle Reynolds number compensated by the empirical correlation from Eq. 5. The symbols represent the different density ratios (i.e. particle material): squares – \(\Gamma\approx 1.1\); triangles – \(\Gamma\approx 2.5\); circles – \(\Gamma\approx 7.9\). Whereas the edge colors represent the different trajectory regimes, as in Fig. 2: black – Rectilinear & Oblique; green – Low-Freq; orange – High-Freq; yellow – Planar or Rotating; and magenta – Chaotic & Vertical Periodic.
coefficient (and eventually then its connection to the terminal settling velocity via relation (6)) of freely settling spheres (_i.e._ with \(\Gamma>1\)) does not explicitly depend on the density ratio \(\Gamma\). We find a systematic explicit dependence on \(\Gamma\), as \(C_{D}\) and \(\mathrm{Re_{p}}\) vary by 25% to 30% between the less dense (\(\Gamma\approx 1.1\)) and the denser particles (\(\Gamma\approx 7.9\)). It is worth remarking that the results from the denser particles (\(\Gamma\approx 7.9\) particles) and the intermediate density ratio ones (\(\Gamma\approx 2.5\) particles) are hardly distinguishable (in particular regarding the drag coefficient in Fig. 11). This suggests that the \(\Gamma\) dependency might be most relevant for \(\Gamma\) values close to one, i.e. closer to the rising particle case, where a clear dependency of the drag coefficient on \(\Gamma\) was reported [24; 20] and was found to be systematically larger compared to the case of fixed spheres. Deviations for light particles with \(\Gamma\lesssim 1\) remain small and comparable to the ones we report here for \(\Gamma\approx 1.1\) particles with \(\Gamma\gtrsim 1\), and become important for very light spheres with \(\Gamma\ll 1\).
## IV Conclusions
We presented in this article an experimental study of the settling of single spheres in a quiescent fluid, with a systematic characterization of the settling regimes, terminal settling velocity and drag coefficient of spheres with density ratios up to \(\Gamma\simeq 8\) (previous similar studies were limited to \(\Gamma<4\)). The sphere dynamics is analyzed in the parameter space \(\Gamma-\mathrm{Ga}\), with particle-to-fluid density ratios \(\Gamma\in(1.1,7.9)\) and Galileo numbers \(\mathrm{Ga}\in(100,340)\).
Overall, our results on the settling regimes are in very good agreement with the numerical simulations by Zhou and Dusek [6] and in partial agreement with previous experiments by Veldhuis & Biesheuvel [26] and Raaghav _et al._[27] over a narrower range of density ratios.
In particular, we confirm that in all situations trajectories eventually become chaotic in the high Galileo number limit (typically for \(\mathrm{Ga}>250\)), although the details of the route to chaos depend on the density ratio of the particles. For the lowest density ratios, we observe all the regimes predicted by the simulations of Zhou and Dusek [6]. In particular, we confirm the Low-Frequency nature of the Oblique Oscillating Regime (for \(\mathrm{Ga}\lesssim 200\) for \(\Gamma=1.1\) and around \(\mathrm{Ga}\approx 200\) for \(\Gamma=2.5\)) with a dominant dimensionless frequency \(f^{*}\approx 0.06\). While this regime (predicted by Zhou and Dusek [6]) was reported by Raaghav _et al._[27], it was not clearly observed in the experiments by Veldhuis & Biesheuvel. We also confirm that particles with density ratio close to unity (Plastic Particles with \(\Gamma=1.1\)) exhibit a "pocket" of vertical periodic settling in the range \(\mathrm{Ga}\in(250,300)\). This regime, predicted in simulations by Zhou and Dusek [6], was also reported in experiments by Raaghav _et al._, although it was not observed by Veldhuis & Biesheuvel.
For the densest particles we investigated (Metallic Particles with \(\Gamma=7.9\)), which are also the densest reported for such experimental studies, we confirm the existence of a High-Frequency Oblique Oscillating Regime around \(\mathrm{Ga}\approx 200\) with \(f^{*}\approx 0.18\). This regime was not observed in the experiments of Raaghav _et al._[27] at \(\Gamma=3.9\), who only reported helical/rotating trajectories. We also observe such helical trajectories (around \(\mathrm{Ga}\approx 220\)), which we find to co-exist with the High-Frequency Oblique Oscillating Regime for \(\mathrm{Ga}\lesssim 220\), in agreement with what Zhou and Dusek [6] identified as a multistable Planar-or-Rotating Regime, where both planar (oblique oscillating) and non-planar (helical) trajectories could be observed. We find however that the range of multistability is probably larger than what is reported in the numerical study by Zhou and Dusek [6], as helicoids were randomly observed over almost the entire range of Galileo numbers _a priori_ corresponding to the High-Frequency Oblique Oscillating Regime. This may explain why the High-Frequency Oblique Oscillating Regime was not reported in [27], who may have only (randomly) observed helical trajectories in this range. Concerning the helical trajectories, although the limited extent of the measurement volume in our experiment did not allow us to fully characterize the helical properties, rough estimates of the radius (about 7 particle diameters) and the pitch (several hundred particle diameters) of the portions of helicoids we observed are consistent with previous values reported in experiments by Raaghav _et al._[27] and simulations by Zhou and Dusek [6].
Finally, our study of the spheres' terminal settling velocity (\(v_{s}\)) and drag coefficient \(C_{D}\) carries two important results. First, neglecting density ratio dependencies, we have proposed two new correlations directly relating the terminal Reynolds number \(\mathrm{Re_{p}}=v_{s}d_{p}/\nu\) and the drag coefficient \(C_{D}\) to the Galileo number \(\mathrm{Ga}\). For the case of settling spheres, these relations are more convenient to use than classical correlations between \(C_{D}\) and \(\mathrm{Re_{p}}\) since, contrary to \(\mathrm{Ga}\), which is a true control parameter of the problem, \(\mathrm{Re_{p}}\) is a response parameter which cannot be determined beforehand. Secondly,
Figure 11: Drag coefficient compensated by the empirical correlation from Eq. 6 versus Galileo number. The symbols represent the different density ratios (i.e. particle material): squares – \(\Gamma\approx 1.1\); triangles – \(\Gamma\approx 2.5\); circles – \(\Gamma\approx 7.9\). Whereas the edge colors represent the different trajectory regimes, as in Fig. 2: black – Rectilinear & Oblique; green – Low-Freq.; orange – High-Freq.; yellow – Planar or Rotating; and magenta – Chaotic & Vertical Periodic.
we have shown that the usual approximation of neglecting an explicit dependency on the density ratio \(\Gamma\) (other than the implicit dependency through Ga of the terminal Reynolds number and drag coefficient) for settling spheres is not justified by dimensional analysis and is not fully supported by the experimental findings. In particular, a trend was observed where the drag coefficient of the lightest particles was systematically larger than for the densest particles, with a difference of up to about 30% over the entire range of parameters we investigated. This indicates that, at least in the range of Galileo numbers explored here (with rich and complex settling regimes), while using the drag coefficient from usual correlations tabulated for fixed spheres (which can be considered as infinitely dense) at the corresponding Reynolds number may give the correct order of magnitude of the terminal velocity, an accurate estimate would require accounting for finite density ratio effects. Beyond the case of spheres settling in a quiescent fluid addressed here, such corrections may also play a role in the context of modeling the drag force coupling of finite-size inertial particles advected and settling in turbulent flows.
## V Acknowledgements
We acknowledge the technical expertise and the help of V. Dolique for the use of the Scanning Electron Microscope. This work was supported by the French research program IDEX-LYON of the University of Lyon in the framework of the French program "Programme Investissements d'Avenir" (Grant No. ANR-16-IDEX-0005).
|
2307.07275 | Integral Laplacian graphs with a unique double Laplacian eigenvalue, II | The set $S_{\{i,j\}_{n}^{m}}=\{0,1,2,\ldots,m-1,m,m,m+1,\ldots,n-1,n\}\setminus\{i,j\},\quad 0<i<j\leqslant n$, is called Laplacian realizable if there exists a simple connected graph $G$ whose Laplacian spectrum is $S_{\{i,j\}_{n}^{m}}$. In this case, the graph $G$ is said to realize $S_{\{i,j\}_{n}^{m}}$. In this paper, we completely describe graphs realizing the sets $S_{\{i,j\}_{n}^{m}}$ with $m=1,2$ and determine the structure of these graphs. | Abdul Hameed, Mikhail Tyaglov | 2023-07-14T11:08:11Z | http://arxiv.org/abs/2307.07275v1 |
###### Abstract.
The set \(S_{\{i,j\}_{n}^{m}}=\{0,1,2,\ldots,m-1,m,m,m+1,\ldots,n-1,n\}\setminus\{i,j\}, \quad 0<i<j\leqslant n\), is called Laplacian realizable if there exists a simple connected graph \(G\) whose Laplacian spectrum is \(S_{\{i,j\}_{n}^{m}}\). In this case, the graph \(G\) is said to realize \(S_{\{i,j\}_{n}^{m}}\). In this paper, we completely describe graphs realizing the sets \(S_{\{i,j\}_{n}^{m}}\) with \(m=1,2\) and determine the structure of these graphs.
Key words and phrases: Laplacian integral graph, Laplacian matrix, Laplacian spectrum, integer eigenvalues
## 1. Introduction
In the present paper, we continue our previous work [12] where we studied graphs whose Laplacian spectrum is
\[S_{\{i,j\}_{m}^{m}}=\{0,1,2,\ldots,m-1,m,m,m+1,\ldots,n-1,n\}\setminus\{i,j\}, \quad 0<i<j\leqslant n,\]
with \(m=n-1,n\). Here we cover the cases \(m=1,2\) and give a complete description of the corresponding graphs, except for certain cases directly related to the so-called \(S_{n,n}\)-conjecture [7]. In this work we follow the notation and preliminaries of [12]; however, some of them are restated here for the reader's convenience.
Let \(G=(V(G),E(G))\) be a simple graph (without loops or multiple edges) where \(V(G)=\{v_{1},v_{2},\ldots,v_{n}\}\) is its vertex set and \(E(G)=\{e_{1},e_{2},\ldots,e_{r}\}\) its edge set. The entries of its Laplacian matrix are defined as follows
\[l_{ij}=\begin{cases}d_{i},&\text{if }i=j,\\ -1,&\text{if }i\neq j\text{ and }v_{i}\sim v_{j},\\ 0,&\text{otherwise},\end{cases}\]
where \(d_{i}\) is the degree of the vertex \(v_{i}\), and \(v_{i}\sim v_{j}\) means that the vertices \(v_{i}\) and \(v_{j}\) are adjacent.
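For instance, the following minimal sketch (ours, using numpy; the vertex labeling is an illustrative choice) builds \(L=D-A\) for the star \(K_{1,3}\) and checks that its Laplacian spectrum is \(\{0,1,1,4\}\); this star reappears below as the graph \(G_{1}\) realizing \(S_{\{2,3\}_{4}^{1}}\).

```python
import numpy as np

# Sketch (ours): the Laplacian L = D - A of the star K_{1,3},
# with vertex 0 as the center and vertices 1-3 as the leaves.
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]])
L = np.diag(A.sum(axis=1)) - A
print(np.round(np.linalg.eigvalsh(L), 10))  # [0. 1. 1. 4.]
```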
The Laplacian matrix \(L(G)\) is positive semidefinite and singular, see e.g. [27]. A graph \(G\) whose Laplacian matrix has integer eigenvalues is called _Laplacian integral_. As we noticed in [12], there are many famous families of Laplacian integral graphs and we refer the reader to the works [1, 5, 7, 10, 13, 15, 16, 17, 18, 23, 24, 25] and references therein.
One of the most interesting families of Laplacian integral graphs considered by S. Fallat et al. [7] is defined as follows: The set
\[S_{i,n}=\{0,1,2,\ldots,n-1,n\}\setminus\{i\},\ \ i\leqslant n,\]
is called Laplacian realizable if there exists a simple connected graph \(G\) whose Laplacian spectrum is \(S_{i,n}\). We also say that \(G\) realizes \(S_{i,n}\). In [7] the authors established the realizability of these sets and completely described the graphs realizing \(S_{i,n}\). In addition, it was conjectured in [7] that the set \(S_{n,n}\) is _not_ Laplacian realizable for any \(n\geqslant 2\); this problem is now known as the \(S_{n,n}\)-conjecture. This conjecture was proved for \(n\leqslant 11\), for prime \(n\), and for \(n\equiv 2,3\mod 4\) in [7]. Later, Goldberger and Neumann [9] showed that the conjecture is true for \(n\geqslant 6,649,688,933\). The authors of the present work established [11] that if a graph is the Cartesian product of two other graphs, then it does not realize \(S_{n,n}\).
Extending this line of investigation of the class of Laplacian integral graphs, the authors of the present work introduced and studied in [12] a certain class of Laplacian integral graphs, which is defined as follows.
**Definition 1.1**.: A graph \(G\) is said to realize the set
\[S_{\{i,j\}_{n}^{m}}=\{0,1,2,\ldots,m-1,m,m,m+1,\ldots,n-1,n\}\setminus\{i,j\} \tag{1.1}\]
for some \(i\) and \(j\), \(i<j\leqslant n\), if its Laplacian spectrum is the set \(S_{\{i,j\}_{n}^{m}}\). In this case, the set \(S_{\{i,j\}_{n}^{m}}\) is called Laplacian realizable. So the set \(S_{\{i,j\}_{n}^{m}}\) does not contain the numbers \(i\) and \(j\), while some number \(m\) (and only this number) is doubled.
The present work is the second part of our research on the set \(S_{\{i,j\}_{n}^{m}}\). In the first part [12], we considered graphs realizing the sets \(S_{\{i,j\}_{n}^{m}}\) for \(m=n-1\) and \(m=n\), and completely described them. Moreover, we conjectured that the graphs realizing sets \(S_{\{i,n\}_{n}^{m}}\) (that is, without \(n\)) may not exist for large \(n\). In particular, we believe that for \(n\geqslant 9\) the sets \(S_{\{i,n\}_{n}^{m}}\) are not Laplacian realizable. In this paper, we continue our study and consider the cases \(m=1\) and \(m=2\).
First, we note that graphs realizing the sets \(S_{\{i,j\}_{n}^{1}}\) and \(S_{\{i,j\}_{n}^{2}}\) exist for small \(n\). The Laplacian spectra of all the graphs of order up to \(5\) are listed in [3, p. 286-289]. From that list it follows that for \(n\leqslant 5\) the only Laplacian realizable \(S_{\{i,j\}_{n}^{1}}\) sets are \(S_{\{2,3\}_{4}^{1}}\) and \(S_{\{2,4\}_{5}^{1}}\). In Figure 1, the graph \(G_{1}\) is the star graph \(K_{1,3}\) on \(4\) vertices realizing \(S_{\{2,3\}_{4}^{1}}\) (see Table 1 in Appendix). The graph \(G_{2}\) realizing the set \(S_{\{2,4\}_{5}^{1}}\) is of the form \((K_{2}\cup 2K_{1})\lor K_{1}\) (see Table 1). Similarly, for \(n\leqslant 5\), the only Laplacian realizable \(S_{\{i,j\}_{n}^{2}}\) sets are \(S_{\{1,3\}_{4}^{2}}\) and \(S_{\{1,4\}_{5}^{2}}\). In Figure 2, the graphs \(G_{3}\) and \(G_{4}\) are the complete bipartite graphs \(K_{2,2}\) (or the cycle \(C_{4}\)) and \(K_{2,3}\) on \(4\) and \(5\) vertices, respectively, realizing \(S_{\{1,3\}_{4}^{2}}\) and \(S_{\{1,4\}_{5}^{2}}\), respectively (see Table 2).
Considering the case \(m=1\), we show that the set \(S_{\{i,j\}_{n}^{1}}\) is Laplacian realizable only if \(j=n-1\), that is, only \(S_{\{i,n-1\}_{n}^{1}}\) is Laplacian realizable for certain \(i\), Theorem 2.8. Further, we list all such \(i\) for fixed \(n\) and \(j=n-1\), i.e., we find all the Laplacian realizable sets of kind \(S_{\{i,n-1\}_{n}^{1}}\), Theorem 3.2. We also present an algorithm for constructing graphs realizing the sets \(S_{\{i,n-1\}_{n}^{1}}\), Theorem 3.3. For the case \(m=2\), we
show that if \(i>1\), then \(j>n-3\), Theorem 4.1, and list all such \(i\) for given \(j\), considering \(j=n-2\) and \(n-1\) separately, Theorems 4.3 and 4.7. Theorems 4.4 and 4.8 describe the structure of graphs realizing the sets \(S_{\{i,j\}_{n}^{2}}\) for \(j=n-2,n-1\). If \(i=1\), then for all admissible \(j\) the set \(S_{\{1,j\}_{n}^{2}}\) is Laplacian realizable only if \(G=K_{1}^{2}\lor F\), where the graph \(F\) realizes \(S_{\{j-1,n-1\}_{n-1}^{1}}\), Theorem 4.9. However, we believe that such graphs may not exist for large \(n\), \(n\geqslant 6\). Tables 1 and 2 in Appendix A illustrate the cases \(m=1\) and \(m=2\).
As a result of our investigation of the set \(S_{\{i,j\}_{n}^{m}}\), we conclude that the cases of different values of \(m\) are closely related to one another. For instance, if \(G\) realizes \(S_{\{i,j\}_{n}^{n}}\) (\(m=n\)), then one can obtain graphs realizing the sets \(S_{\{i,j\}_{n}^{n-1}}\) (\(m=n-1\)) by using certain graph operations such as union, join and complement. Similarly, the case \(m=n-2\) can be obtained from the case \(m=n-1\), and so on. On the other hand, if \(G\) realizes \(S_{\{i,j\}_{n}^{m}}\) for \(m=n\), then using certain graph operations we obtain a graph realizing \(S_{\{i,j\}_{n}^{m}}\) for \(m=1\). Similarly, the case \(m=2\) can be obtained from the case \(m=n-1\). So, each of the cases can be obtained by using graph operations. However, it is not clear whether these graph operations produce all the graphs realizing \(S_{\{i,j\}_{n}^{m}}\) for a particular value of \(m\).
The paper is organized as follows. In Section 2, we introduce some basic definitions and review some noteworthy results from the literature that we use in this work. We also prove some auxiliary theorems. In Section 3, a complete characterization of all the graphs with double Laplacian eigenvalue \(m=1\) is given. The graphs with double Laplacian eigenvalue \(m=2\) are discussed in Section 4. We summarize this work in Section 5. Finally, in Appendix A, we list all the Laplacian realizable sets \(S_{\{i,j\}_{n}^{1}}\) and \(S_{\{i,j\}_{n}^{2}}\) for \(n=4,5,6,7,8\) and present the associated graphs realizing those sets.
## 2. Preliminaries
An _isolated_ vertex is a vertex of degree zero, denoted by \(K_{1}\), while a _pendant_ vertex is a vertex of degree one. The _complement_ of a simple undirected graph \(G\), denoted by \(\overline{G}\), is a simple graph on the same set of vertices as \(G\) in which two vertices are adjacent if and only if they are not adjacent in \(G\). Given two disjoint graphs \(G_{1}\) and \(G_{2}\), the _union_ of these graphs, \(G_{1}\cup G_{2}\), is the graph formed from the union of the vertex and edge sets of \(G_{1}\) and \(G_{2}\). The _join_ of the graphs \(G_{1}\) and \(G_{2}\), \(G_{1}\lor G_{2}\), is the graph formed from \(G_{1}\cup G_{2}\) by adding all possible edges between vertices in \(G_{1}\) and vertices in \(G_{2}\), that is, \(G_{1}\lor G_{2}=\overline{(\overline{G_{1}}\cup\overline{G_{2}})}\).
We denote by \(0=\mu_{1}\leqslant\mu_{2}\leqslant\ldots\leqslant\mu_{n}\) the _Laplacian eigenvalues_ of a graph \(G\). It is easy to see from the form of the Laplacian matrix that the Laplacian spectrum of the union of two graphs is the union of their Laplacian spectra. The largest eigenvalue of the Laplacian matrix is denoted by \(\rho(G)\). The second smallest eigenvalue \(\mu_{2}\) of \(L(G)\) is usually known as the _algebraic connectivity_ of \(G\), denoted by \(a_{G}\). The _vertex connectivity_ of a connected graph \(G\) is the minimum number of vertices whose removal disconnects \(G\).
The following facts provide information on the largest Laplacian eigenvalue and on the Laplacian spectrum of the complement of a graph.
**Theorem 2.1** ([22]).: _Let \(G\) be a simple graph on \(n\) vertices. Then \(\rho(G)\leqslant n\)._
**Theorem 2.2** ([3, 22]).: _Let \(G\) be a graph with \(n\) vertices with Laplacian eigenvalues_
\[0=\mu_{1}\leqslant\mu_{2}\leqslant\mu_{3}\leqslant\cdots\leqslant\mu_{n-1} \leqslant\mu_{n}\]
_Then the Laplacian eigenvalues of the complement of \(G\) are the following_
\[0\leqslant n-\mu_{n}\leqslant n-\mu_{n-1}\leqslant\cdots\leqslant n-\mu_{3} \leqslant n-\mu_{2}.\]
The Laplacian spectra of the disjoint union and join of graphs are stated in the following theorems.
**Theorem 2.3** ([3]).: _If \(G\) is the disjoint union of graphs \(G_{1},G_{2},\ldots,G_{k}\), then its Laplacian characteristic polynomial is_
\[\chi(G,\mu)=\prod_{i=1}^{k}\chi(G_{i},\mu).\]
**Theorem 2.4** (Kelmans).: _Let \(G\) and \(H\) be two graphs of order \(n\) and \(m\), respectively. Suppose that the Laplacian eigenvalues of \(G\) and \(H\) are of the form_
\[0=\mu_{1}\leqslant\mu_{2}\leqslant\mu_{3}\leqslant\ldots\leqslant\mu_{n-1} \leqslant\mu_{n}\quad\text{and}\quad 0=\lambda_{1}\leqslant\lambda_{2}\leqslant \lambda_{3}\leqslant\ldots\leqslant\lambda_{m-1}\leqslant\lambda_{m},\quad \text{respectively},\]
_then the Laplacian spectrum of \(G\lor H\) is of the form_
\[\{0,m+\mu_{2},m+\mu_{3},\ldots,m+\mu_{n-1},m+\mu_{n},n+\lambda_{2},n+\lambda_ {3},\ldots,n+\lambda_{m-1},n+\lambda_{m},n+m\}. \tag{2.1}\]
We cite the above theorem in the form given in [20]. Note that the eigenvalues in (2.1) are not in increasing order, generally speaking.
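As a quick numerical sanity check of (2.1) (a sketch of ours, not taken from the cited sources), consider the join \(K_{2}\lor 2K_{1}\) with \(n=2\) and \(m=2\): formula (2.1) predicts the spectrum \(\{0,2+2,2+0,4\}=\{0,2,4,4\}\), and a direct computation agrees.

```python
import numpy as np

# Sketch (ours): verify Kelmans' formula (2.1) on the join K_2 v 2K_1.
def laplacian(adj):
    return np.diag(adj.sum(axis=1)) - adj

def join(a_g, a_h):
    """Adjacency of the join: both graphs kept, every cross pair connected."""
    n, m = len(a_g), len(a_h)
    return np.block([[a_g, np.ones((n, m))],
                     [np.ones((m, n)), a_h]])

a_k2 = np.array([[0.0, 1.0], [1.0, 0.0]])   # K_2, Laplacian spectrum {0, 2}
a_2k1 = np.zeros((2, 2))                    # 2K_1, Laplacian spectrum {0, 0}
spec = np.linalg.eigvalsh(laplacian(join(a_k2, a_2k1)))
print(np.round(spec, 10))                   # [0. 2. 4. 4.], matching (2.1)
```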
The following theorem provides a necessary and sufficient condition for a graph to have \(n\) as one of its eigenvalue.
**Theorem 2.5** ([26]).: _Let \(G\) be a connected graph of order \(n\). Then \(n\) is a Laplacian eigenvalue of \(G\) if and only if \(G\) is the join of two graphs._
If the order \(n\) is a double Laplacian eigenvalue, then the following theorem holds.
**Theorem 2.6** ([12]).: _Let \(G\) be a connected graph of order \(n\), and let \(n\) be the Laplacian eigenvalue of \(G\) of multiplicity \(2\). Then \(G=F\lor H\) where \(F\) is a join of two graphs, while \(H\) is not a join. Moreover, the eigenvalue \(1\) is not in the Laplacian spectrum of \(G\)._
The next proposition provides a necessary and sufficient condition for a graph to have \(1\) as one of its Laplacian eigenvalues.
**Proposition 2.7** ([11]).: _Let a graph \(G\) be a join. The number \(1\) is a Laplacian eigenvalue of \(G\) if and only if \(G=F\lor K_{1}\) where \(F\) is a disconnected graph of order at least \(2\)._
To complement the previous result, we establish the following.
**Theorem 2.8**.: _Let \(G\) be a connected graph of order \(n\) that is a join. Let the number \(1\) be the Laplacian eigenvalue of \(G\) of multiplicity \(2\). Then \(G=(H_{1}\cup 2K_{1})\lor K_{1}\) where \(H_{1}\) is a connected graph of order \(n-3\). Moreover, the number \(n-1\) is not in the Laplacian spectrum of \(G\)._
**Proof**. Let \(G\) be a join that has a Laplacian eigenvalue \(1\) of multiplicity \(2\). According to Theorem 2.5, \(G=F\lor H\) where the graph \(F\) is of order \(p\) while the graph \(H\) is of order \(n-p\) for some \(1\leqslant p\leqslant n-1\).
Let us denote the Laplacian spectra of the graphs \(F\) and \(H\), respectively, as follows
\[0=\mu_{1}\leqslant\mu_{2}\leqslant\mu_{3}\ldots\leqslant\mu_{p-1}\leqslant\mu _{p}\ \ \ \text{and}\ \ \ 0=\lambda_{1}\leqslant\lambda_{2}\leqslant\lambda_{3}\ldots\leqslant\lambda_{ n-p-1}\leqslant\lambda_{n-p}.\]
Then by Theorem 2.4, the Laplacian spectrum of \(G\) has the form
\[\{0,(n-p)+\mu_{2},\ldots,(n-p)+\mu_{p},p+\lambda_{2},\ldots,p+\lambda_{n-p},n\}. \tag{2.2}\]
We remind the reader that the eigenvalues are not in increasing order here.
Since \(G\) has exactly one multiple eigenvalue \(1\) by assumption, (2.2) provides only two possible situations.
**1)** If \(n-p+\mu_{2}=p+\lambda_{2}=1\), then \(n-p=p=1\), so both graphs \(F\) and \(H\) are single isolated vertices. Thus, \(G=K_{1}\lor K_{1}\) and the Laplacian spectrum of \(G\) is equal to \(\{0,2\}\), a contradiction.
**2)** Let \(n-p+\mu_{2}=n-p+\mu_{3}=1\) or \(p+\lambda_{2}=p+\lambda_{3}=1\). Without loss of generality, we can suppose that \(n-p+\mu_{2}=n-p+\mu_{3}=1\), so \(\mu_{2}=\mu_{3}=0\) and \(n-p=1\). Moreover, the inequality \(n-p+\mu_{4}>1\) gives us \(\mu_{4}>0\). Consequently, the graph \(F\) is a disconnected graph of the form \(F=H_{1}\cup 2K_{1}\), whereas the graph \(H\) is an isolated vertex \(K_{1}\). Thus, \(G=(H_{1}\cup 2K_{1})\lor K_{1}\) where \(H_{1}\) is a connected graph of order \(n-3\).
Now to prove that \(n-1\) is not in the Laplacian spectrum of \(G\), it is enough to show that \(n-p+\mu_{p}\neq n-1\). Let us suppose that \(n-p+\mu_{p}=n-1\). Then \(\mu_{p}=n-2\), and so the Laplacian spectrum of the graph
of order \(n-1\) has the form \(\sigma_{L}(F)=\{0,0,0,\mu_{4},\ldots,n-3,n-2\}\). According to Theorem 2.2, the Laplacian spectrum of \(\overline{F}\) is \(\sigma_{L}(\overline{F})=\{0,1,2,\ldots,n-1,n-1\}\). Thus, the Laplacian spectrum of the graph \(\overline{F}\) of order \(n-1\) contains the eigenvalue \(1\) and the double eigenvalue \(n-1\). This contradicts Theorem 2.6. Therefore, the eigenvalue \(n-1\) is not in the Laplacian spectrum of \(G\).
Now, we remind the reader of some results on the sets \(S_{i,n}\) established in [7].
**Theorem 2.9**.: _Suppose \(n\geqslant 2\)._
* _If_ \(n\equiv 0\mod 4\)_, then for each_ \(i=1,2,3,\ldots,\dfrac{n-2}{2}\)_,_ \(S_{2i,n}\) _is Laplacian realizable;_
* _If_ \(n\equiv 1\mod 4\)_, then for each_ \(i=1,2,3,\ldots,\dfrac{n-1}{2}\)_,_ \(S_{2i-1,n}\) _is Laplacian realizable;_
* _If_ \(n\equiv 2\mod 4\)_, then for each_ \(i=1,2,3,\ldots,\dfrac{n}{2}\)_,_ \(S_{2i-1,n}\) _is Laplacian realizable;_
* _If_ \(n\equiv 3\mod 4\)_, then for_ \(i=1,2,\ldots,\dfrac{n-1}{2}\)_,_ \(S_{2i,n}\) _is Laplacian realizable._
**Proposition 2.10**.: _Suppose that \(n\geqslant 6\) and that \(G\) is a graph on \(n\) vertices. Then \(G\) realizes \(S_{1,n}\) if and only if \(G\) is formed in one of the following two ways:_
* \(G=(K_{1}\cup K_{1})\vee(K_{1}\cup G_{1})\)_, where_ \(G_{1}\) _is a graph on_ \(n-3\) _vertices that realizes_ \(S_{n-4,n-3};\)__
* \(G=K_{1}\lor H\)_, where_ \(H\) _is a graph on_ \(n-1\) _vertices that realizes_ \(S_{n-1,n-1}\)_._
In the sequel, we use also the following results established in [12].
**Theorem 2.11** ([12]).: _If \(S_{\{i,j\}_{n}^{n-1}}\) is Laplacian realizable then the number \(i\) is either \(1\) or \(2\)._
**Theorem 2.12** ([12]).: _Suppose that \(n\geqslant 3\) and \(G\) is a graph of order \(n\) realizing \(S_{\{i,j\}_{n}^{m}}\) for \(i<j\). Then_
* _for_ \(n\equiv 0\;or\;3\mod 4\)_, the numbers_ \((i+j)\) _and_ \(m\) _are of the same parities;_
* _for_ \(n\equiv 1\;or\;2\mod 4\)_, the numbers_ \((i+j)\) _and_ \(m\) _are of opposite parities._
**Proposition 2.13** ([12]).: _The graph \(G\) realizes \(S_{\{1,2\}_{n}^{n}}\) if and only if \(G\) is formed in one of the following two ways:_
* \(G=P_{3}\vee(K_{1}\cup H)\)_, where_ \(H\) _realizes_ \(S_{n-5,n-4}\) _and_ \(P_{3}\) _is the path graph on_ \(3\) _vertices;_
* \(G=K_{2}\lor H\)_, where_ \(H\) _realizes_ \(S_{n-2,n-2}\)_._
**Theorem 2.14** ([12]).: _Let \(G\) be a graph of order \(n\), \(n\geqslant 5\)._
* _The graph_ \(G\) _realizes_ \(S_{\{1,2\}_{n}^{n}}\) _if and only if_ \(G\) _is formed in one of the following two ways:_
* \(G=P_{3}\vee(K_{1}\cup H)\)_, where_ \(H\) _realizes_ \(S_{n-5,n-4}\) _and_ \(P_{3}\) _is the path graph on_ \(3\) _vertices;_
* \(G=K_{2}\lor H\)_, where_ \(H\) _realizes_ \(S_{n-2,n-2}\)_._
* _If_ \(3\leqslant j\leqslant n-2\)_, then_ \(G\) _realizes_ \(S_{\{1,j\}_{n}^{n}}\) _if and only if_ \(G=K_{2}\vee(K_{1}\cup H)\)_, where the graph_ \(H\) _realizes_ \(S_{j-2,n-3}\)_._
* _The graph_ \(G\) _realizes_ \(S_{\{1,n-1\}_{n}^{n}}\) _if and only if_ \(G\) _is formed in one of the following two ways:_
* \(G=K_{2}\vee(K_{2}\cup H)\)_, where_ \(H\) _realizes_ \(S_{2,n-4}\)_;_
* \(G=K_{2}\vee(K_{1}\cup H)\)_, where_ \(H\) _realizes_ \(S_{n-3,n-3}\)_._
**Theorem 2.15** ([12]).: _Let \(G\) be a simple connected graph of order \(n\), \(n\geqslant 6\)._
* _For_ \(n\equiv 0\) _or_ \(1\mod 4\)_, the set_ \(S_{\{1,j\}_{n}^{n-1}}\) _is Laplacian realizable if and only if_ \(j=2\)_._
* _For_ \(n\equiv 2\) _or_ \(3\mod 4\)_, the set_ \(S_{\{1,j\}_{n}^{n-1}}\) _is Laplacian realizable if and only if_ \(j=3\)_._
**Theorem 2.16** ([12]).: _Let \(G\) be a graph of order \(n\), \(n\geqslant 6\)._
1. _The graph_ \(G\) _realizes_ \(S_{\{1,2\}_{n}^{n-1}}\) _if and only if_ \(n\equiv 0\) _or_ \(1\mod 4\)_, and_ \(G=(K_{1}\cup K_{2})\vee(K_{1}\cup H)\)_, where_ \(H\) _realizes_ \(S_{n-6,n-4}\)_._
2. _The graph_ \(G\) _realizes_ \(S_{\{1,3\}_{n}^{n-1}}\) _if and only if_ \(n\equiv 2\) _or_ \(3\mod 4\)_, and_ \(G\) _is formed in one of the following two ways:_ 1. \(G=(K_{1}\cup K_{1})\vee(K_{1}\cup H)\)_, where the graph_ \(H\) _realizes_ \(S_{\{1,n-4\}_{n-3}^{n-3}}\)_;_ 2. \(G=K_{1}\lor F\)_, where the graph_ \(F\) _realizes_ \(S_{\{2,n-1\}_{n-1}^{n-2}}\)_._
## 3. Graphs realizing the sets \(S_{\{i,j\}_{n}^{1}}\)
In this section, we describe the graphs realizing the sets \(S_{\{i,j\}_{n}^{1}}\) and present an algorithm for constructing graphs realizing \(S_{\{i,j\}_{n}^{1}}\). As we mentioned in the Introduction, the authors of [3, p. 286-289] listed the Laplacian spectra of all the graphs of order up to \(5\). Thus, it follows that for \(n\leqslant 5\) the only Laplacian realizable \(S_{\{i,j\}_{n}^{1}}\) sets are \(S_{\{2,3\}_{4}^{1}}\) and \(S_{\{2,4\}_{5}^{1}}\), see Table 1 in Appendix A. So in what follows, we consider \(n\geqslant 6\).
First we present the following auxiliary fact.
**Lemma 3.1**.: _If \(S_{i,n}\) is Laplacian realizable, then so is \(S_{\{i+1,n+2\}_{n+3}^{1}}\)._
Indeed, if \(G\) is a connected graph of order \(n\) realizing \(S_{i,n}\), then the graph \((G\cup 2K_{1})\lor K_{1}\) realizes \(S_{\{i+1,n+2\}_{n+3}^{1}}\).
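For the smallest instance this can be verified numerically (a sketch of ours; the vertex labeling is an illustrative choice): \(K_{2}\) realizes \(S_{1,2}=\{0,2\}\), so the lemma says that \((K_{2}\cup 2K_{1})\lor K_{1}\) realizes \(S_{\{2,4\}_{5}^{1}}=\{0,1,1,3,5\}\), which is the graph \(G_{2}\) of Figure 1.

```python
import numpy as np

# Sketch (ours): the smallest instance of Lemma 3.1. K_2 realizes S_{1,2},
# so (K_2 u 2K_1) v K_1 must have Laplacian spectrum {0, 1, 1, 3, 5}.
A = np.zeros((5, 5))
A[0, 1] = A[1, 0] = 1            # the K_2 component (vertices 0, 1)
A[4, :4] = A[:4, 4] = 1          # the joined K_1 (vertex 4) sees all others
L = np.diag(A.sum(axis=1)) - A
print(np.round(np.linalg.eigvalsh(L), 10))  # [0. 1. 1. 3. 5.]
```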
In the next theorem, we describe all the Laplacian realizable sets \(S_{\{i,j\}_{n}^{1}}\).
**Theorem 3.2**.: _Suppose \(n\geqslant 6\). The only Laplacian realizable sets \(S_{\{i,j\}_{n}^{1}}\), \(i<j<n\), are the following ones._
1. _If_ \(n\equiv 0\mod 4\)_, then for each_ \(k=1,2,\ldots,\dfrac{n-2}{2}\)_,_ \(S_{\{2k,n-1\}_{n}^{1}}\) _is Laplacian realizable;_
2. _If_ \(n\equiv 1\mod 4\)_, then for each_ \(k=1,2,\ldots,\dfrac{n-3}{2}\)_,_ \(S_{\{2k,n-1\}_{n}^{1}}\) _is Laplacian realizable;_
3. _If_ \(n\equiv 2\mod 4\)_, then for each_ \(k=1,2,\ldots,\dfrac{n-4}{2}\)_,_ \(S_{\{2k+1,n-1\}_{n}^{1}}\) _is Laplacian realizable;_
4. _If_ \(n\equiv 3\mod 4\)_, then for each_ \(k=1,2,\ldots,\dfrac{n-3}{2}\)_,_ \(S_{\{2k+1,n-1\}_{n}^{1}}\) _is Laplacian realizable._
**Proof**. According to Theorem 2.8, if \(S_{\{i,j\}_{n}^{1}}\) is Laplacian realizable, then the eigenvalue \(n-1\) is not in the Laplacian spectrum of \(G\), so \(j=n-1\). Consequently, the sets \(S_{\{i,j\}_{n}^{1}}\) are not Laplacian realizable if \(j<n-1\).
1. If \(n\equiv 0\mod 4\), then \(n-3\equiv 1\mod 4\), so for each \(k=1,2,\ldots,\dfrac{n-4}{2}\), \(S_{2k-1,n-3}\) is Laplacian realizable by Theorem 2.9 (ii). According to Lemma 3.1, \(S_{\{2k,n-1\}_{n}^{1}}\) is Laplacian realizable for any \(k=1,2,\ldots,\dfrac{n-4}{2}\). Moreover, since \(n\) is even and \(m=1\), \(S_{\{2k+1,n-1\}_{n}^{1}}\) is not Laplacian realizable by Theorem 2.12 (i). Let us now deal with the set \(S_{\{n-2,n-1\}_{n}^{1}}\). The set \(S_{2,n-4}\) is Laplacian realizable by Theorem 2.9 (i). Suppose that a graph \(F_{1}\) realizes \(S_{2,n-4}\). If \(P_{3}\) is the path graph on \(3\) vertices, then \(\sigma_{L}(\overline{P_{3}})=\{0,0,2\}\), and \(\sigma_{L}(\overline{P_{3}}\cup F_{1})=\{0,0,0,1,2,\ldots,n-5,n-4\}\). According to Theorem 2.4, the graph \(K_{1}\vee(\overline{P_{3}}\cup F_{1})\) realizes \(S_{\{n-2,n-1\}_{n}^{1}}\).
2. If \(n\equiv 1\mod 4\), then \(n-3\equiv 2\mod 4\). Therefore, by Theorem 2.9 (iii), the set \(S_{2k-1,n-3}\) is Laplacian realizable for \(k=1,2,\ldots,\dfrac{n-3}{2}\). By Lemma 3.1, \(S_{\{2k,n-1\}_{n}^{1}}\) is Laplacian realizable for any \(k=1,2,\ldots,\dfrac{n-3}{2}\). As \(n\) is odd and \(m=1\), Theorem 2.12 (ii) implies that \(S_{\{2k+1,n-1\}_{n}^{1}}\) is not Laplacian realizable. The case (iii) can be proved analogously.
3. If \(n\equiv 3\mod 4\), then \(n-3\equiv 0\mod 4\). Thus, for \(k=1,2,\ldots,\dfrac{n-5}{2}\), the sets \(S_{2k,n-3}\) are Laplacian realizable by Theorem 2.9 (i). Now Lemma 3.1 implies that \(S_{\{2k+1,n-1\}_{n}^{1}}\) is Laplacian realizable for
any \(k=1,2,\ldots,\dfrac{n-5}{2}\), while Theorem 2.12 (i) gives us that \(S_{\{2k,n-1\}_{n}^{1}}\) is not Laplacian realizable, since here \(i+j=2k+n-1\) is even whereas \(m=1\) is odd.
Next, consider the set \(S_{\{n-2,n-1\}_{n}^{1}}\). By Theorem 2.9 (iv), the set \(S_{2,n-4}\) is Laplacian realizable. Let a graph, say, \(F_{1}\) realize \(S_{2,n-4}\). Then for the path graph \(P_{3}\), the graph \(K_{1}\vee(\overline{P_{3}}\cup F_{1})\) realizes \(S_{\{n-2,n-1\}_{n}^{1}}\) by Theorem 2.4; a numerical check of this construction is sketched below. Therefore, \(S_{\{n-2,n-1\}_{n}^{1}}\) is Laplacian realizable.
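Here is the promised numerical check of this construction (our sketch; the vertex labeling is ours) for \(n=7\): the path \(P_{3}\) realizes \(S_{2,3}=\{0,1,3\}\), so \(K_{1}\vee(\overline{P_{3}}\cup P_{3})\) should realize \(S_{\{5,6\}_{7}^{1}}=\{0,1,1,2,3,4,7\}\).

```python
import numpy as np

# Sketch (ours): the construction K_1 v (P3-bar u F_1) for n = 7, where
# F_1 = P_3 realizes S_{2,3}; the spectrum must be {0, 1, 1, 2, 3, 4, 7}.
A = np.zeros((7, 7))
A[0, 1] = A[1, 0] = 1                        # P3-bar = K_2 u K_1 (vertices 0,1,2)
A[3, 4] = A[4, 3] = A[4, 5] = A[5, 4] = 1    # F_1 = P_3 (path 3-4-5)
A[6, :6] = A[:6, 6] = 1                      # the joined K_1 (vertex 6)
L = np.diag(A.sum(axis=1)) - A
print(np.round(np.linalg.eigvalsh(L), 10))  # [0. 1. 1. 2. 3. 4. 7.]
```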
In the following theorem, we discuss the structure of graphs realizing \(S_{\{i,n-1\}_{n}^{1}}\) for various possible values of \(i\). We remind the reader that the sets \(S_{\{i,j\}_{n}^{1}}\) for \(j<n-1\) are not Laplacian realizable as we found out above.
**Theorem 3.3**.: _Let \(G\) be a graph of order \(n\), \(n\geqslant 6\)._
* _If_ \(2\leqslant i\leqslant n-3\)_, then_ \(G\) _realizes_ \(S_{\{i,n-1\}_{n}^{1}}\) _if and only if_ \(G=K_{1}\vee(2K_{1}\cup(K_{1}\vee\overline{H}))\)_, where the graph_ \(H\) _realizes the set_ \(S_{n-i-2,n-4}\)_._
* _The graph_ \(G\) _realizes_ \(S_{\{n-2,n-1\}_{n}^{1}}\) _if and only if_ \(G\) _is formed in one of the following two ways:_
* \(G=K_{1}\vee(\overline{P_{3}}\cup(K_{1}\vee\overline{H}))\)_, where the graph_ \(H\) _of order_ \(n-5\) _realizes_ \(S_{n-6,n-5}\)_;_
* \(G=K_{1}\vee(2K_{1}\cup\overline{H})\)_, where the graph_ \(H\) _realizes_ \(S_{n-3,n-3}\)_._
**Proof**.
(a) Suppose that \(G\) realizes \(S_{\{i,n-1\}_{n}^{1}}\) for \(2\leqslant i\leqslant n-3\). By Theorem 2.2, one has \(\sigma_{L}(\overline{G})=\{0\}\cup S_{\{1,n-i\}_{n-1}^{n-1}}\). Therefore, the complement of the graph \(G\) has the form \(\overline{G}=K_{1}\cup\overline{F}\), where \(\overline{F}\) is connected and \(\sigma_{L}(\overline{F})=S_{\{1,n-i\}_{n-1}^{n-1}}\). According to Theorem 2.14 (b), \(\overline{F}\) realizes \(S_{\{1,n-i\}_{n-1}^{n-1}}\) if and only if \(\overline{F}=K_{2}\vee(K_{1}\cup H)\), where the graph \(H\) realizes \(S_{n-i-2,n-4}\). Consequently, \(F=2K_{1}\cup(K_{1}\vee\overline{H})\), so \(G=K_{1}\vee(2K_{1}\cup(K_{1}\vee\overline{H}))\).
Conversely, if \(G=K_{1}\vee(2K_{1}\cup(K_{1}\vee\overline{H}))\), where the graph \(H\) realizes the set \(S_{n-i-2,n-4}\), then from Theorems 2.3 and 2.4, it follows that \(G\) realizes \(S_{\{i,n-1\}_{n}^{1}}\).
(b) Let \(G\) be a graph realizing \(S_{\{n-2,n-1\}_{n}^{1}}\). Then from Theorem 2.2, we obtain \(\sigma_{L}(\overline{G})=\{0\}\cup S_{\{1,2\}_{n-1}^{n-1}}\). So the complement of the graph \(G\) can be represented as follows \(\overline{G}=K_{1}\cup\overline{F}\), where \(\overline{F}\) is connected and \(\sigma_{L}(\overline{F})=S_{\{1,2\}_{n-1}^{n-1}}\). According to Theorem 2.14 (a), the graph \(\overline{F}\) must be in one of the following form:
* \(\overline{F}=P_{3}\vee(K_{1}\cup H)\), where \(H\) realizes \(S_{n-6,n-5}\) and \(P_{3}\) is the path graph on \(3\) vertices;
* \(\overline{F}=K_{2}\lor H\), where \(H\) realizes \(S_{n-3,n-3}\).
Thus, from (i) one gets \(F=\overline{P_{3}}\cup\left(K_{1}\vee\overline{H}\right)\). According to Theorem 2.2, one has \(\sigma_{L}(\overline{H})=\{0\}\cup S_{1,n-6}\). So \(\overline{H}\) can be represented as \(\overline{H}=K_{1}\cup H_{1}\), where \(H_{1}\) realizes \(S_{1,n-6}\) (for the construction of \(H_{1}\), see Proposition 2.10). Thus, \(G=K_{1}\vee(\overline{P_{3}}\cup(K_{1}\vee\overline{H}))\), where the graph \(H\) of order \(n-5\) realizes \(S_{n-6,n-5}\).
Also from (ii), we have \(\overline{F}=K_{2}\lor H\). Therefore, \(F=2K_{1}\cup\overline{H}\), and the graph \(H\) is a connected graph realizing \(S_{n-3,n-3}\). Thus, \(G=K_{1}\vee(2K_{1}\cup\overline{H})\), where \(H\) realizes \(S_{n-3,n-3}\).
Conversely, if \(G=K_{1}\vee(\overline{P_{3}}\cup(K_{1}\vee\overline{H}))\), where the graph \(H\) of order \(n-5\) realizes \(S_{n-6,n-5}\), then from Theorems 2.3 and 2.4 it follows that \(G\) realizes \(S_{\{n-2,n-1\}_{n}^{1}}\). Similarly, if \(G=K_{1}\vee(2K_{1}\cup\overline{H})\), where \(H\) realizes \(S_{n-3,n-3}\), then again by Theorems 2.3 and 2.4, the graph \(G\) realizes \(S_{\{n-2,n-1\}_{n}^{1}}\).
**Remark 3.4**.: In the above theorem, the structure of the graph \(G\) shows that there exist at least two pendant vertices in \(G\). Moreover, Theorems 3.2 and 3.3 completely resolve the existence of graphs realizing the spectrum \(S_{\{i,j\}_{n}^{1}}\), where \(2\leqslant i<j<n\). Furthermore, it is easily deduced from Theorem 3.3 that if the sets \(S_{n,n}\) were not realizable for any \(n\), then there would be a unique graph realizing \(S_{\{i,j\}_{n}^{1}}\) for \(2\leqslant i<j<n\).
For the case \(j=n\), we conjectured in [12] that the set \(S_{\{i,n\}_{n}^{m}}\), \(n\geqslant 9\), is not Laplacian realizable. However, unless proved otherwise, we must include this case in our consideration.
**Proposition 3.5**.: _Let \(G\) realize \(S_{\{i,n\}_{n}^{1}}\). Then the following holds:_
1. \(n\geqslant 9\)_;_
2. \(n\) _is not a prime number;_
3. \(2\leqslant\min\limits_{1\leqslant i\leqslant n}\ d_{i}\leqslant\max\limits_{1 \leqslant i\leqslant n}d_{i}\leqslant n-3\)_, where_ \(d_{i}\) _is the degree of vertex_ \(i\)_._
**Proof**. Let \(G\) realize \(S_{\{i,n\}_{n}^{1}}\). Then Properties (a) and (b) follow from [12, Proposition 5.2].
If we suppose, on the contrary, that \(G\) has a pendant vertex, then the vertex connectivity of \(G\) equals \(1\). At the same time, its algebraic connectivity is also \(1\). So, according to [19, Theorem 2.1] if the algebraic and vertex connectivity are equal, then \(G\) must be a join, a contradiction.
Property (c) in the above proposition coincides with the corresponding observation made for the \(S_{n,n}\)-conjecture in [7, Observation 3.5].
The _Cartesian product_ of the graphs \(G_{1}\) and \(G_{2}\) is the graph \(G_{1}\times G_{2}\) whose vertex set is the Cartesian product \(V(G_{1})\times V(G_{2})\), and for \(v_{1},v_{2}\in V(G_{1})\) and \(u_{1},u_{2}\in V(G_{2})\), the vertices \((v_{1},u_{1})\) and \((v_{2},u_{2})\) are adjacent in \(G_{1}\times G_{2}\) if and only if either
* \(v_{1}=v_{2}\) and \(\{u_{1},u_{2}\}\in E(G_{2})\);
* \(\{v_{1},v_{2}\}\in E(G_{1})\) and \(u_{1}=u_{2}\).
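The Laplacian of a Cartesian product is easy to handle numerically: with adjacency matrices, \(A_{G_{1}\times G_{2}}=A_{G_{1}}\otimes I+I\otimes A_{G_{2}}\), and the Laplacian eigenvalues of the product are the pairwise sums of those of the factors (a standard fact, used here only for illustration). A small sketch of ours, using \(K_{2}\times K_{2}=C_{4}\), which is also the graph \(G_{3}\) realizing \(S_{\{1,3\}_{4}^{2}}\):

```python
import numpy as np

# Sketch (ours): the adjacency matrix of a Cartesian product is
# A = A1 (x) I + I (x) A2, so the Laplacian eigenvalues of the product
# are the pairwise sums of those of the factors.
def cartesian(a1, a2):
    n, m = len(a1), len(a2)
    return np.kron(a1, np.eye(m)) + np.kron(np.eye(n), a2)

a_k2 = np.array([[0.0, 1.0], [1.0, 0.0]])
A = cartesian(a_k2, a_k2)                    # K_2 x K_2 = C_4, the 4-cycle
L = np.diag(A.sum(axis=1)) - A
print(np.round(np.linalg.eigvalsh(L), 10))   # [0. 2. 2. 4.] = {0,2} + {0,2}
```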
The authors of the present work established [12] that if \(G=G_{1}\times G_{2}\) (i.e., \(G\) is a Cartesian product of two graphs), then it does not realize the set \(S_{\{i,n\}_{n}^{m}}\) for \(n\geqslant 9\) and any \(m\neq i,n\). According to Proposition 3.5, the set \(S_{\{i,n\}_{n}^{1}}\) is not Laplacian realizable if \(n\) is a prime number, and so a Cartesian product does not realize the set \(S_{\{i,n\}_{n}^{1}}\) for a prime number \(n\). All these facts motivated us to state the following conjecture.
**Conjecture 3.6**.: _For \(n\geqslant 3\), the set \(S_{\{i,n\}_{n}^{1}}\) is not Laplacian realizable._
## 4. Graphs realizing the sets \(S_{\{i,j\}_{n}^{2}}\)
This section establishes the existence of graphs realizing the sets \(S_{\{i,j\}_{n}^{2}}\). As we already mentioned in Section 1, for \(n\leqslant 5\), the only Laplacian realizable sets of type \(S_{\{i,j\}_{n}^{2}}\) are \(S_{\{1,3\}_{4}^{2}}\) and \(S_{\{1,4\}_{5}^{2}}\), see Table 2 in Appendix A. So throughout this section we consider \(n\geqslant 6\). In Sections 4.1-4.2 we study the case \(1<i<j<n\), while Section 4.3 is devoted to the sets \(S_{\{1,j\}_{n}^{2}}\) (\(i=1\)) and \(S_{\{i,n\}_{n}^{2}}\) (\(j=n\)). We believe that these sets are not Laplacian realizable for large \(n\). To proceed with the main results of this section, we first establish a relation between the numbers \(i\) and \(j\) required to realize \(S_{\{i,j\}_{n}^{2}}\) when \(i>1\), as follows:
**Theorem 4.1**.: _Let \(G\) realize \(S_{\{i,j\}_{n}^{2}}\). If \(i>1\), then \(j>n-3\)._
**Proof**. Suppose on the contrary that \(S_{\{i,j\}_{n}^{2}}\) is Laplacian realizable for some \(j\leqslant n-3\), and let \(G\) be a graph realizing this set. Clearly, the numbers \(n-2\), \(n-1\) and \(n\) belong to \(\sigma_{L}(G)\). According to Theorem 2.2,
\[\sigma_{L}(\overline{G})=\{0,0,1,2,\ldots,n-j-1,n-j+1,\ldots,n-i-1,n-i+1, \ldots,n-2,n-2,n-1\}. \tag{4.1}\]
By Theorem 2.1 one of the components of \(\overline{G}\), say, \(\overline{G_{2}}\) has \(n-1\) vertices, so \(G_{1}=K_{1}\). Now from (4.1) one has
\[\sigma_{L}(\overline{G_{2}})=\{0,1,2,\ldots,n-j-1,n-j+1,n-i-1,n-i+1,\ldots,n-2,n-2,n-1\}.\]
This contradicts Theorem 2.11. Therefore, for \(i>1\), the set \(S_{\{i,j\}_{n}^{2}}\) is Laplacian realizable only if \(j>n-3\).
The above theorem claims that for \(i>1\) either \(j=n-2\), \(n-1\) or \(n\). In the sequel, we consider these three cases separately.
### 4.1. Graphs realizing the sets \(S_{\{i,n-2\}_{n}^{2}}\)
We start with the following fact established in [12].
**Proposition 4.2** ([12]).: _Let \(n\geqslant 3\). If \(S_{\{i,j\}_{n}^{m}}\) is Laplacian realizable, then so is \(S_{\{i+1,j+1\}_{n+2}^{m+1}}\)._
The following theorem lists all the Laplacian realizable sets \(S_{\{i,n-2\}_{n}^{2}}\). We remind the reader that \(i>1\).
**Theorem 4.3**.: _Suppose \(n\geqslant 6\). The only Laplacian realizable sets \(S_{\{i,n-2\}_{n}^{2}}\) are the following ones._
(i) _If_ \(n\equiv 0\mod 4\)_, then for each_ \(k=1,2,\ldots,\dfrac{n-6}{2}\)_,_ \(S_{\{2k+2,n-2\}_{n}^{2}}\) _is Laplacian realizable;_
(ii) _If_ \(n\equiv 1\mod 4\)_, then for each_ \(k=1,2,\ldots,\dfrac{n-5}{2}\)_,_ \(S_{\{2k+2,n-2\}_{n}^{2}}\) _is Laplacian realizable;_
(iii) _If_ \(n\equiv 2\mod 4\)_, then for each_ \(k=1,2,\ldots,\dfrac{n-4}{2}\)_,_ \(S_{\{2k+1,n-2\}_{n}^{2}}\) _is Laplacian realizable;_
(iv) _If_ \(n\equiv 3\mod 4\)_, then for each_ \(k=1,2,\ldots,\dfrac{n-5}{2}\)_,_ \(S_{\{2k+1,n-2\}_{n}^{2}}\) _is Laplacian realizable._
**Proof**.
(i) If \(n\equiv 0\mod 4\), then \(n-2\equiv 2\mod 4\), so for each \(k=1,2,\ldots,\dfrac{n-6}{2}\) the sets \(S_{\{2k+1,n-3\}_{n-2}^{1}}\) are Laplacian realizable by Theorem 3.2 (iii). From Proposition 4.2 it follows that \(S_{\{2k+2,n-2\}_{n}^{2}}\) is Laplacian realizable for each \(k=1,2,\ldots,\dfrac{n-6}{2}\). At the same time, the sets \(S_{\{2k+1,n-2\}_{n}^{2}}\) are not Laplacian realizable for any \(k\) by Theorem 2.12 (i), since the double eigenvalue \(m=2\) is an even number in this case.
(ii) For \(n\equiv 1\mod 4\), we have \(n-2\equiv 3\mod 4\). Thus, for \(k=1,2,\ldots,\dfrac{n-5}{2}\), the sets \(S_{\{2k+1,n-3\}_{n-2}^{1}}\) are Laplacian realizable by Theorem 3.2 (iv). Now Proposition 4.2 implies that the sets \(S_{\{2k+2,n-2\}_{n}^{2}}\) are Laplacian realizable for \(k=1,2,\ldots,\dfrac{n-5}{2}\). As \(m=2\) is even, from Theorem 2.12 (ii) it follows that the sets \(S_{\{2k+1,n-2\}_{n}^{2}}\) are not Laplacian realizable for any \(k\).
The cases (iii) and (iv) can be proved analogously with the use of Theorem 3.2 and Proposition 4.2.
The next theorem deals with the construction of graphs realizing the sets \(S_{\{i,n-2\}_{n}^{2}}\), \(i>1\).
**Theorem 4.4**.: _Let \(n\geqslant 6\), and let \(G\) be a connected graph of order \(n\). Then \(G\) realizes \(S_{\{i,n-2\}_{n}^{2}}\) if and only if \(G=K_{1}\vee(K_{1}\cup H)\), where \(H\) is a graph on \(n-2\) vertices realizing \(S_{\{i-1,n-3\}_{n-2}^{1}}\)._
**Proof**. Let \(G\) realize \(S_{\{i,n-2\}_{n}^{2}}\). Then \(G=G_{1}\lor G_{2}\) by Theorem 2.5, so \(\overline{G}=\overline{G_{1}}\cup\overline{G_{2}}\). From Theorem 2.2 it follows that \(\sigma_{L}(\overline{G})=\{0\}\cup S_{\{2,n-i\}_{n-1}^{n-2}}\). Thus, by Theorem 2.1 we obtain that \(G_{1}=K_{1}\), and \(G_{2}\) is of order \(n-1\), so that \(\sigma_{L}(\overline{G_{2}})=S_{\{2,n-i\}_{n-1}^{n-2}}\). Using Theorem 2.2, we get \(\sigma_{L}(G_{2})=\{0\}\cup S_{\{i-1,n-3\}_{n-2}^{1}}\). Again Theorem 2.1 gives us that \(G_{2}=K_{1}\cup H\), where \(H\) is a graph on \(n-2\) vertices realizing \(S_{\{i-1,n-3\}_{n-2}^{1}}\). Consequently, \(G=K_{1}\vee(K_{1}\cup H)\), as required.
Conversely, if \(G=K_{1}\vee(K_{1}\cup H)\), where \(H\) is a graph on \(n-2\) vertices realizing \(S_{\{i-1,n-3\}_{n-2}^{1}}\), then from Theorems 2.3 and 2.4 it follows that \(G\) realizes \(S_{\{i,n-2\}_{n}^{2}}\). \(\square\)
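To illustrate the construction, let \(n=7\) and let \(H=K_{1}\vee\overline{K_{1,1,2}}\) be the graph of order \(5\) realizing \(S_{\{2,4\}_{5}^{1}}=\{0,1,1,3,5\}\) from Table 1 in Appendix A. Then, by the standard formulas for the Laplacian spectra of unions and joins (cf. Theorems 2.3 and 2.4), the graph \(G=K_{1}\vee(K_{1}\cup H)\) has
\[\sigma_{L}(G)=\{0\}\cup\{0+1,1+1,1+1,3+1,5+1\}\cup\{7\}=\{0,1,2,2,4,6,7\}=S_{\{3,5\}_{7}^{2}},\]
in accordance with Theorem 4.4 (here \(i=3\)) and with Theorem 4.3 (iv) for \(k=1\).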
**Remark 4.5**.: Theorems 4.3 and 4.4 completely resolve the existence of graphs realizing the spectrum \(S_{\{i,n-2\}_{n}^{2}}\), where \(2\leqslant i\leqslant n-3\).
### Graphs realizing the sets \(S_{\{i,n-1\}_{n}^{2}}\)
To determine all the graphs realizing the set \(S_{\{i,n-1\}_{n}^{2}}\) for various possible values of \(i\), \(i>1\), first we establish the following auxiliary lemma.
**Lemma 4.6**.: _The set \(S_{\{i,j\}_{n}^{m}}\) is Laplacian realizable if and only if the set \(S_{\{n-j+1,n-i+1\}_{n+1}^{n+1-m}}\) is Laplacian realizable._
**Proof**. Let \(S_{\{i,j\}_{n}^{m}}\) be Laplacian realizable and let a graph \(G\) realize \(S_{\{i,j\}_{n}^{m}}\). Consider the graph \(H=\overline{G}\lor K_{1}\). Since the Laplacian spectrum of the graph \(\overline{H}=G\cup K_{1}\) has the form \(\{0\}\cup S_{\{i,j\}_{n}^{m}}\), the spectrum of \(H\) is \(S_{\{n-j+1,n-i+1\}_{n+1}^{n+1-m}}\) by Theorem 2.2.
Conversely, suppose that \(S_{\{n-j+1,n-i+1\}_{n+1}^{n+1-m}}\) is Laplacian realizable, and let a graph \(H\) realize it. According to Theorem 2.2, we have
\[\sigma_{L}(\overline{H})=\{0\}\cup S_{\{i,j\}_{n}^{m}}. \tag{4.2}\]
Thus, \(\overline{H}\) is the union of two disjoint graphs, say, \(\overline{H}=G\cup F\), and the Laplacian spectrum of \(\overline{H}\) is the union of the spectra of \(G\) and \(F\). Consequently, one of these graphs, say, \(G\), has \(n\) as its Laplacian eigenvalue, so that its order is at least \(n\) by Theorem 2.1. Since the order of \(\overline{H}\) is \(n+1\), we have \(F=K_{1}\). Therefore, the graph \(G\) of order \(n\) realizes \(S_{\{i,j\}_{n}^{m}}\), according to (4.2).
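As an illustration, \(S_{\{2,3\}_{4}^{1}}\) is realized by the star \(S_{4}\cong K_{3,1}\), whose complement \(\overline{K_{3,1}}=K_{1}\cup K_{3}\) has Laplacian spectrum \(\{0,0,3,3\}\) by Theorem 2.2. Hence \(H=\overline{K_{3,1}}\vee K_{1}\) has
\[\sigma_{L}(H)=\{0\}\cup\{0+1,3+1,3+1\}\cup\{5\}=\{0,1,4,4,5\}=S_{\{2,3\}_{5}^{4}},\]
which is exactly the set \(S_{\{n-j+1,n-i+1\}_{n+1}^{n+1-m}}\) for \(n=4\), \(i=2\), \(j=3\), \(m=1\).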
Now, we are in a position to describe all the Laplacian realizable sets \(S_{\{i,n-1\}_{n}^{2}}\). Recall that \(i>1\).
**Theorem 4.7**.: _Suppose \(n\geqslant 6\). The only Laplacian realizable sets \(S_{\{i,n-1\}_{n}^{2}}\) are the following ones._
* _If_ \(n\equiv 0\ or\ 3\mod 4\)_, then_ \(S_{\{i,n-1\}_{n}^{2}}\) _is Laplacian realizable if and only if_ \(i=n-3\)_._
* _If_ \(n\equiv 1\ or\ 2\mod 4\)_, then_ \(S_{\{i,n-1\}_{n}^{2}}\) _is Laplacian realizable if and only if_ \(i=n-2\)_._
**Proof**. Let \(G\) realize \(S_{\{i,n-1\}_{n}^{2}}\).
* If \(n\equiv 0\ or\ 3\mod 4\), then \(n-1\equiv 2\ or\ 3\mod 4\), so by Theorem 2.15 (ii), the set \(S_{\{1,j\}_{n-1}^{n-2}}\) is Laplacian realizable if and only if \(j=3\). Using Lemma 4.6, the set \(S_{\{n-j,n-1\}_{n}^{2}}\) is also Laplacian realizable if and only if \(j=3\). Thus, \(S_{\{i,n-1\}_{n}^{2}}\) is Laplacian realizable only if \(i=n-3\).
* If \(n\equiv 1\ or\ 2\mod 4\), then \(n-1\equiv 0\ or\ 1\mod 4\). Therefore, by Theorem 2.15 (i), the set \(S_{\{1,j\}_{n-1}^{n-2}}\) is Laplacian realizable if and only if \(j=2\). Using Lemma 4.6, the set \(S_{\{n-j,n-1\}_{n}^{2}}\) is Laplacian realizable if and only if \(j=2\). Consequently, \(S_{\{i,n-1\}_{n}^{2}}\) is Laplacian realizable only if \(i=n-2\).
Our next result describes the structure of graphs realizing the sets \(S_{\{i,n-1\}_{n}^{2}}\), \(i>1\).
**Theorem 4.8**.: _Let \(G\) be a graph of order \(n\), \(n\geqslant 6\)._
* _The graph_ \(G\) _realizes_ \(S_{\{n-3,n-1\}_{n}^{2}}\) _if and only if_ \(n\equiv 0\ or\ 3\mod 4\)_, and_ \(G\) _is formed in one of the following two ways:_
* \(G=K_{1}\vee(K_{2}\cup(K_{1}\vee(2K_{1}\cup H_{1})))\)_, where the graph_ \(H_{1}\) _of order_ \(n-6\) _realizes_ \(S_{1,n-6}\)_;_
* \(G=K_{1}\vee\big{(}K_{1}\cup\overline{F}\big{)}\)_, where the graph_ \(F\) _realizes_ \(S_{\{2,n-2\}_{n-2}^{n-3}}\)_._
* _The graph_ \(G\) _realizes_ \(S_{\{n-2,n-1\}_{n}^{2}}\) _if and only if_ \(n\equiv 1\ or\ 2\mod 4\)_, and_ \(G=K_{1}\vee\big{(}P_{3}\cup(K_{1}\vee\overline{H})\big{)}\)_, where_ \(H\) _realizes_ \(S_{n-7,n-5}\)_._
**Proof**. (a) Let \(G\) realize \(S_{\{n-3,n-1\}_{n}^{2}}\), and let \(n\equiv 0\ or\ 3\mod 4\). By Theorem 2.5, \(G=G_{1}\lor G_{2}\), so \(\overline{G}=\overline{G_{1}}\cup\overline{G_{2}}\). From Theorem 2.2 it follows that \(\sigma_{L}(\overline{G})=\{0\}\cup S_{\{1,3\}_{n-1}^{n-2}}\). Thus, by Theorem 2.1 we obtain that \(G_{1}=K_{1}\), and \(G_{2}\) is of order \(n-1\), so that \(\sigma_{L}(\overline{G_{2}})=S_{\{1,3\}_{n-1}^{n-2}}\). According to Theorem 2.16, the graph \(\overline{G_{2}}\) can be formed in one of the following two ways.
* \(\overline{G_{2}}=(K_{1}\cup K_{1})\vee(K_{1}\cup H)\), where the graph \(H\) realizes \(S_{\{1,n-4\}_{n-3}^{n-3}}\);
* \(\overline{G_{2}}=K_{1}\lor F\), where the graph \(F\) realizes \(S_{\{2,n-2\}_{n-2}^{n-3}}\).
Thus, from (i) one gets \(G_{2}=K_{2}\cup\big{(}K_{1}\vee\overline{H}\big{)}\). According to Theorem 2.2, one has \(\sigma_{L}(\overline{H})=\{0,0\}\cup S_{1,n-5}\). Thus, \(\overline{H}\) can be represented as \(\overline{H}=2K_{1}\cup H_{1}\), where \(H_{1}\) realizes \(S_{1,n-5}\) (for the construction of \(H_{1}\), see Proposition 2.10). Thus, \(G=K_{1}\vee(K_{2}\cup(K_{1}\vee(2K_{1}\cup H_{1})))\), where the graph \(H_{1}\) of order \(n-6\) realizes \(S_{1,n-6}\).
Also, from (ii), we have \(G_{2}=K_{1}\cup\overline{F}\). According to Theorem 2.2, one has \(\sigma_{L}(\overline{F})=S_{\{n-4,n-2\}_{n-2}^{1}}\). Thus, \(G=K_{1}\vee\big{(}K_{1}\cup\overline{F}\big{)}\), where \(F\) realizes \(S_{\{2,n-2\}_{n-2}^{n-3}}\) (for the construction of \(F\), see Proposition 2.10).
Conversely, if \(G=K_{1}\vee(K_{2}\cup(K_{1}\vee(2K_{1}\cup H_{1})))\), where the graph \(H_{1}\) of order \(n-6\) realizes \(S_{1,n-6}\), then from Theorems 2.3 and 2.4 it follows that \(G\) realizes \(S_{\{n-3,n-1\}_{n}^{2}}\). Similarly, if \(G=K_{1}\vee\big{(}K_{1}\cup\overline{F}\big{)}\), where the graph \(F\) realizes \(S_{\{2,n-2\}_{n-2}^{n-3}}\), then again from Theorems 2.3 and 2.4, the graph \(G\) realizes \(S_{\{n-3,n-1\}_{n}^{2}}\).
(b) Let \(G\) realize \(S_{\{n-2,n-1\}_{n}^{2}}\) and let \(n\equiv 1\) or \(2\mod 4\). Then \(G=G_{1}\lor G_{2}\) by Theorem 2.5, so \(\overline{G}=\overline{G_{1}}\cup\overline{G_{2}}\). From Theorem 2.2 it follows that \(\sigma_{L}(\overline{G})=\{0\}\cup S_{\{1,2\}_{n-1}^{n-2}}\). Thus, by Theorem 2.1 we obtain that \(G_{1}=K_{1}\), and \(G_{2}\) is of order \(n-1\), so that \(\sigma_{L}(\overline{G_{2}})=S_{\{1,2\}_{n-1}^{n-2}}\). According to Theorem 2.16, the graph \(\overline{G_{2}}\) is of the following form: \(\overline{G_{2}}=(K_{1}\cup K_{2})\vee(K_{1}\cup H)\), where \(H\) realizes \(S_{n-7,n-5}\). Thus, \(G_{2}=P_{3}\cup(K_{1}\vee\overline{H})\). According to Theorem 2.2, one has \(\sigma_{L}(\overline{H})=\{0\}\cup S_{2,n-6}\). Thus, \(\overline{H}\) can be represented as \(\overline{H}=K_{1}\cup H_{1}\), where \(H_{1}\) realizes \(S_{2,n-6}\) (for the construction of \(H_{1}\), see Proposition 2.10). Consequently, \(G=K_{1}\vee\big{(}P_{3}\cup(K_{1}\vee\overline{H})\big{)}\), where \(H\) realizes \(S_{n-7,n-5}\).
Conversely, if \(G=K_{1}\vee\big{(}P_{3}\cup(K_{1}\vee\overline{H})\big{)}\), where \(H\) realizes \(S_{n-7,n-5}\), then from Theorems 2.3 and 2.4 it follows that \(G\) realizes \(S_{\{n-2,n-1\}_{n}^{2}}\).
### Graphs realizing the sets \(S_{\{1,j\}_{n}^{2}}\) and \(S_{\{i,n\}_{n}^{2}}\)
Now we are in a position to study graphs realizing the sets \(S_{\{1,j\}_{n}^{2}}\) (\(i=1\)) and \(S_{\{i,n\}_{n}^{2}}\) (\(j=n\)). Since all the corresponding graphs on fewer than \(6\) vertices have been listed in Appendix A, in what follows we consider \(n\geqslant 6\). The structure of graphs realizing the set \(S_{\{1,j\}_{n}^{2}}\) is described by the following theorem.
**Theorem 4.9**.: _Let \(G\) be a graph of order \(n\). Then, for each admissible \(j\) with \(1<j<n\), the graph \(G\) realizes \(S_{\{1,j\}_{n}^{2}}\) if and only if \(G=K_{1}\lor F\), where \(F\) realizes \(S_{\{j-1,n-1\}_{n-1}^{1}}\)._
Proof.: Let \(S_{\{1,j\}_{n}^{2}}\) for \(1<j<n\) be Laplacian realizable by a graph \(G\). Then Theorem 2.2 implies
\[\sigma_{L}(\overline{G})=\{0,0,1,2,\ldots,n-j-1,n-j+1,\ldots,n-2,n-2\}. \tag{4.3}\]
Here \(\overline{G}\) is a disconnected graph of the form \(\overline{G}=\overline{G_{1}}\cup\overline{G_{2}}\). By Theorem 2.1, one of the components, say, \(\overline{G_{2}}\), must be of order at least \(n-2\).
If \(\overline{G_{2}}\) has \(n-2\) vertices, then \(\overline{G_{1}}\) has two vertices, and therefore \(\sigma_{L}(\overline{G_{1}})=\{0,2\}\). By Theorem 2.2, we obtain \(\sigma_{L}(G_{1})=\{0,0\}\), so \(G_{1}=2K_{1}\). Now from (4.3), we have
\[\sigma_{L}(\overline{G_{2}})=\{0,1,\ldots,n-j-1,n-j+1,\ldots,n-2,n-2\}.\]
This contradicts Theorem 2.6.
If \(\overline{G_{2}}\) has \(n-1\) vertices, then \(\overline{G_{1}}=K_{1}\). Again from (4.3)
\[\sigma_{L}(\overline{G_{2}})=\{0,1,2,\ldots,n-j-1,n-j+1,\ldots,n-2,n-2\},\]
by Theorem 2.2,
\[\sigma_{L}(G_{2})=\{0,1,1,2,\ldots,j-2,j,\ldots,n-3,n-2\}.\]
Here the graph \(G_{2}\) of order \(n-1\) realizes \(S_{\{j-1,n-1\}_{n-1}^{1}}\). Thus, \(G=K_{1}\lor F\), where \(F=G_{2}\) realizes \(S_{\{j-1,n-1\}_{n-1}^{1}}\).
Conversely, if \(G=K_{1}\lor F\), where the graph \(F\) of order \(n-1\) realizes \(S_{\{j-1,n-1\}_{n-1}^{1}}\), then from Theorem 2.4 it follows that \(G\) realizes \(S_{\{1,j\}_{n}^{2}}\).
Theorem 4.9 guarantees that if the sets \(S_{\{i,n\}_{n}^{1}}\) are not Laplacian realizable, then no graphs realizing \(S_{\{1,j\}_{n}^{2}}\) exist. In other words, for all admissible \(j\), the sets \(S_{\{1,j\}_{n}^{2}}\) are Laplacian realizable only if Conjecture 3.6 fails. Moreover, for \(n=p+1\), where \(p\) is a prime number, the set \(S_{\{1,j\}_{n}^{2}}\) is not Laplacian realizable according to Proposition 3.5. For instance, if \(G\) is of order \(n=6,8,12,14,18\), etc., then it does not realize \(S_{\{1,j\}_{n}^{2}}\).
Finally, we consider the case \(j=n\). Recall that for \(n\leqslant 5\), there are no graphs realizing the set \(S_{\{i,n\}_{n}^{2}}\), see Appendix A. As well, we conjectured in [12] that graphs realizing the sets \(S_{\{i,n\}_{n}^{m}}\), \(n\geqslant 9\), do not exist. In that paper, it is also shown that the set \(S_{\{i,n\}_{n}^{m}}\) is not Laplacian realizable for a prime number \(n\), and so the set \(S_{\{i,n\}_{n}^{2}}\) is not Laplacian realizable if \(n\) is prime. Moreover, as we mentioned in Section 3 above, the set \(S_{\{i,n\}_{n}^{2}}\) is not Laplacian realizable by a Cartesian product for \(n\geqslant 9\). Additionally,
for \(i>1\), one can see from Proposition 3.5 that \(G\) has the minimum and maximum degree of the following form:
\[2\leqslant\min_{1\leqslant i\leqslant n}\ d_{i}\leqslant\max_{1\leqslant i \leqslant n}d_{i}\leqslant n-3,\]
where \(d_{i}\) is the degree of vertex \(i\). So the minimum degree of graphs realizing the sets \(S_{\{i,n\}_{n}^{2}}\), \(i>1\), and, more generally, the sets \(S_{\{i,n\}_{n}^{m}}\) (if any) is greater than or equal to \(2\), and by passing to the graph complement one obtains that the maximum degree is at most \(n-3\). If the minimum degree equals \(1\), then \(G\) is a join, a contradiction. Note that this property is analogous to the one of the \(S_{n,n}\)-conjecture proposed in [7, Section 3], where the authors showed that for \(n=8,9\) the set \(S_{n,n}\) is not Laplacian realizable. We believe that the same is true for the sets \(S_{\{i,n\}_{n}^{m}}\). Thus, if the \(S_{n,n}\)-conjecture is true, then our \(S_{\{i,n\}_{n}^{m}}\)-conjecture will also hold true. According to these observations, we believe that the set \(S_{\{i,n\}_{n}^{2}}\) is not Laplacian realizable. For more details on the \(S_{\{i,n\}_{n}^{m}}\)-conjecture, we refer the reader to [12, Section 5].
## 5. Conclusion
In [12], we established the existence of graphs realizing the sets \(S_{\{i,j\}_{n}^{m}}\) for \(m=n\) and \(m=n-1\) and completely described them. The present work continues our study of the realizability of the sets \(S_{\{i,j\}_{n}^{m}}\) for the cases \(m=1,2\). We completely characterized the graphs realizing those sets and developed an algorithm for constructing them, except for the sets \(S_{\{1,j\}_{n}^{2}}\), which are conjectured not to be Laplacian realizable for large \(n\).
**Conjecture 5.1**.: _If \(n\geqslant 3\), then the only Laplacian realizable sets of the form \(S_{\{1,j\}_{n}^{2}}\) are \(S_{\{1,3\}_{4}^{2}}\) and \(S_{\{1,4\}_{5}^{2}}\)._
In addition, we believe that Lemma 4.6 and Proposition 4.2 provide a way to obtain graphs realizing the sets \(S_{\{i,j\}_{n}^{m}}\) for new values of \(m\) from values of \(m\) already treated. For instance, the case \(m=1\) can be obtained from the case \(m=n\) using Lemma 4.6. In a similar way, one can describe graphs realizing \(S_{\{i,j\}_{n}^{m}}\) for \(m=2\) from those with \(m=n-1\) using Lemma 4.6. As well, Proposition 4.2 produces realizable sets with repeated eigenvalue \(m+1\) from those with repeated eigenvalue \(m\). However, it is not clear whether the use of Proposition 4.2 and Lemma 4.6 covers all the realizable sets \(S_{\{i,j\}_{n}^{m}}\) for fixed \(m\) and \(n\).
## Appendix A List of Laplacian integral graphs realizing \(S_{\{i,j\}_{n}^{1}}\) and \(S_{\{i,j\}_{n}^{2}}\) up to order \(8\)
Throughout the paper, \(K_{n}\), \(P_{n}\), \(K_{1,n-1}\), and \(K_{p,q}\) (\(p+q=n\)) denote the complete graph, the path graph, the star graph, and the complete bipartite graph on \(n\) vertices, respectively. For the concepts and results about graphs not presented here, see, e.g., Bondy and Murty [2] and Diestel [6].
In [3, p. 301-304] the authors found the Laplacian spectra of all graphs up to order \(5\). Also, in [4], the authors depicted all the graphs of order \(6\) without calculating their Laplacian spectra. We list all the graphs realizing the sets \(S_{\{i,j\}_{n}^{1}}\) and \(S_{\{i,j\}_{n}^{2}}\) up to order \(6\), as well as a few graphs of order \(7\) and \(8\) of these types. Note that we use the notation \(A_{n}\) for the anti-regular graph of order \(n\), see, e.g., [23].
**Table 1. Laplacian integral graphs realizing \(S_{\{i,j\}_{n}^{1}}\) for \(n=4,5,6,7,8\).**
\begin{tabular}{|c|c|c|} \hline
**Construction** & **Laplacian Spectrum** & \(S_{\{i,j\}_{n}^{m}}\) \\ \hline
\(S_{4}\cong K_{3,1}\) & \(\{0,1,1,4\}\) & \(S_{\{2,3\}_{4}^{1}}\) \\ \hline
\(K_{1}\vee\overline{K_{1,1,2}}\) & \(\{0,1,1,3,5\}\) & \(S_{\{2,4\}_{5}^{1}}\) \\ \hline
\(K_{1}\vee(2K_{1}\cup P_{3})\) & \(\{0,1,1,2,4,6\}\) & \(S_{\{3,5\}_{6}^{1}}\) \\ \hline
\(K_{1}\vee(P_{3}\cup\overline{P_{3}})\) & \(\{0,1,1,2,3,4,7\}\) & \(S_{\{5,6\}_{7}^{1}}\) \\ \hline
\end{tabular}
**Table 2. Laplacian integral graphs realizing \(S_{\{i,j\}_{n}^{2}}\) for \(n=4,5,6,7,8\).**
## Appendix B Acknowledgements
The work of M. Tyaglov was partially supported by National Natural Science Foundation of China under grant no. 11901384.
|
2308.06936 | The Commutant of Multiplication by z on the Closure of Rational
Functions in $L^t(μ)$ | For a compact set $K\subset \mathbb C,$ a finite positive Borel measure $\mu$
on $K,$ and $1 \le t < \infty,$ let $\text{Rat}(K)$ be the set of rational
functions with poles off $K$ and let $R^t(K, \mu)$ be the closure of
$\text{Rat}(K)$ in $L^t(\mu).$ For a bounded Borel subset $\mathcal D\subset
\mathbb C,$ let $\mathfrak{m}_{\mathcal D}$ denote the area (Lebesgue) measure
restricted to $\mathcal D$ and let $H^\infty (\mathcal D)$ be the weak-star closed
sub-algebra of $L^\infty(\mathfrak{m}_{\mathcal D})$ spanned by $f,$ bounded and analytic
on $\mathbb C\setminus E_f$ for some compact subset $E_f \subset \mathbb
C\setminus \mathcal D.$ We show that if $R^t(K, \mu)$ contains no non-trivial
direct $L^t$ summands, then there exists a Borel subset $\mathcal R \subset K$
whose closure contains the support of $\mu$ and there exists an isometric
isomorphism and a weak-star homeomorphism $\rho$ from $R^t(K, \mu) \cap
L^\infty(\mu)$ onto $H^\infty(\mathcal R)$ such that $\rho(r) = r$ for all
$r\in\text{Rat}(K).$ Consequently, we obtain some structural decomposition
theorems for $R^t(K, \mu)$. | Liming Yang | 2023-08-14T04:47:11Z | http://arxiv.org/abs/2308.06936v1
###### Abstract.
For a compact set \(K\subset\mathbb{C}\), a finite positive Borel measure \(\mu\) on \(K,\) and \(1\leq t<\infty,\) let \(\operatorname{Rat}(K)\) be the set of rational functions with poles off \(K\) and let \(R^{t}(K,\mu)\) be the closure of \(\operatorname{Rat}(K)\) in \(L^{t}(\mu).\) For a bounded Borel subset \(\mathcal{D}\subset\mathbb{C},\) let \(\mathfrak{m}_{\mathcal{D}}\) denote the area (Lebesgue) measure restricted to \(\mathcal{D}\) and let \(H^{\infty}(\mathcal{D})\) be the weak-star closed sub-algebra of \(L^{\infty}(\mathfrak{m}_{\mathcal{D}})\) spanned by \(f,\) bounded and analytic on \(\mathbb{C}\setminus E_{f}\) for some compact subset \(E_{f}\subset\mathbb{C}\setminus\mathcal{D}.\) We show that if \(R^{t}(K,\mu)\) contains no non-trivial direct \(L^{t}\) summands, then there exists a Borel subset \(\mathcal{R}\subset K\) whose closure contains the support of \(\mu\) and there exists an isometric isomorphism and a weak-star homeomorphism \(\rho\) from \(R^{t}(K,\mu)\cap L^{\infty}(\mu)\) onto \(H^{\infty}(\mathcal{R})\) such that \(\rho(r)=r\) for all \(r\in\operatorname{Rat}(K).\) Consequently, we obtain some structural decomposition theorems for \(R^{t}(K,\mu).\)
Key words and phrases: Analytic Capacity, Cauchy Transform, Analytic Bounded Point Evaluations, and Bounded Point Evaluations.
2010 Mathematics Subject Classification: Primary 46E15; Secondary 30C85, 31A15, 47B38.
## 1. **Introduction**
For a Borel subset \(E\) of the complex plane \(\mathbb{C},\) let \(M_{0}(E)\) denote the set of finite complex-valued Borel measures that are compactly supported in \(E\) and let \(M_{0}^{+}(E)\) be the set of positive measures in \(M_{0}(E).\) For \(\nu\in M_{0}(\mathbb{C}),\) we use \(\operatorname{spt}(\nu)\) for the support of \(\nu,\)\(\nu_{B}\) for \(\nu\) restricted to \(B\subset\mathbb{C},\) and \(\mathcal{C}(\nu)\) for the Cauchy transform of \(\nu\) (see section 2 for its definition).
For a compact subset \(K\subset\mathbb{C},\)\(\mu\in M_{0}^{+}(K),\)\(1\leq t<\infty,\) and \(\frac{1}{t}+\frac{1}{s}=1,\) functions in \(\operatorname{Rat}(K):=\{q:q\text{ is a rational function with poles off }K\}\) are members of \(L^{t}(\mu)\). Let \(R^{t}(K,\mu)\) denote the closure of \(\operatorname{Rat}(K)\) in \(L^{t}(\mu)\) norm. We say \(g\perp R^{t}(K,\mu)\) (or \(g\in R^{t}(K,\mu)^{\perp}\)) if \(g\in L^{s}(\mu)\) and \(\int rgd\mu=0\) for all \(r\in\operatorname{Rat}(K).\) Let \(S_{\mu}\) denote the multiplication by \(z\) on \(R^{t}(K,\mu).\) The operator \(S_{\mu}\) is pure if \(R^{t}(K,\mu)\) does not have a non-trivial direct \(L^{t}\) summand. The commutant of \(S_{\mu}\) is denoted
\[R^{t,\infty}(K,\mu)=R^{t}(K,\mu)\cap L^{\infty}(\mu).\]
For a bounded Borel subset \(\mathcal{D}\subset\mathbb{C},\) let \(H(\mathcal{D})\) be the set of functions \(f,\) where \(f\) is a bounded and analytic function on \(\mathbb{C}\setminus E_{f}\) for some compact subset \(E_{f}\subset\mathbb{C}\setminus\mathcal{D}\) (depending on \(f\)). Let \(H^{\infty}(\mathcal{D})\) denote the weak-star closure of \(H(\mathcal{D})\) in \(L^{\infty}(\mathfrak{m}_{\mathcal{D}}),\) where \(\mathfrak{m}\) stands for the area (Lebesgue) measure on \(\mathbb{C}\). If \(\mathcal{D}\) is a bounded open subset, then \(H^{\infty}(\mathcal{D})\) is actually the algebra of bounded and analytic functions on \(\mathcal{D}.\) In general, \(\mathcal{D}\) may not have interior (i.e. a Swiss cheese set, see Example 3.14). We use \(\bar{E}\) for the closure of a subset \(E.\)
Our main theorem in this paper is the following.
**Theorem 1.1**.: _Let \(\mu\in M_{0}^{+}(K)\) for a compact set \(K\subset\mathbb{C}\). If \(1\leq t<\infty\) and \(S_{\mu}\) on \(R^{t}(K,\mu)\) is pure, then there is a Borel subset \(\mathcal{R}\subset K\) with spt\(\mu\subset\overline{\mathcal{R}}\) and there is an isometric isomorphism and a weak-star homeomorphism \(\rho\) from \(R^{t,\infty}(K,\mu)\) onto \(H^{\infty}(\mathcal{R})\) such that (1) \(\rho(r)=r\) for \(r\in\text{Rat}(K),\) (2) \(\mathcal{C}(g\mu)(z)=0,\ z\in\mathbb{C}\setminus\mathcal{R},\ \mathfrak{m}-a.a.\) for \(g\perp R^{t}(K,\mu),\) and (3) \(\rho(f)(z)\mathcal{C}(g\mu)(z)=\mathcal{C}(fg\mu)(z),\ \mathfrak{m}-a.a.\) for \(f\in R^{t,\infty}(K,\mu)\) and \(g\perp R^{t}(K,\mu).\)_
As applications, we obtain some structural decomposition theorems (Theorem 7.2 and Theorem 7.7) for \(R^{t}(K,\mu).\) Consequently, Corollary 7.10 extends the results of [16], [9], and [4] when the boundary of \(K\) is not too wild.
We briefly outline how we define the map \(\rho\) that associates to each function in \(R^{t}(K,\mu)\) a point function on \(\mathcal{R}.\) For \(g\in R^{t}(K,\mu)^{\perp}\) and \(r\in\text{Rat}(K),\) we have \(\frac{r(z)-r(\lambda)}{z-\lambda}\in R^{t}(K,\mu)\) for \(\lambda\in K,\) so \(\int\frac{r(z)-r(\lambda)}{z-\lambda}g(z)d\mu(z)=0.\) By Corollary 2.3,
\[r(\lambda)\mathcal{C}(g\mu)(\lambda)=\mathcal{C}(rg\mu)(\lambda),\ \gamma-a.a., \tag{1.1}\]
where \(\gamma\) stands for analytic capacity (see section 2 for its definition) and we use \(\gamma-a.a.\) for a property that holds everywhere except possibly on a set of analytic capacity zero. Let \(\{g_{j}\}\subset R^{t}(K,\mu)^{\perp}\) be a \(L^{s}(\mu)\) norm dense subset of \(R^{t}(K,\mu)^{\perp}.\) Let \(\mathcal{N}\) denote the collection of \(\lambda\in\mathbb{C}\) satisfying: there exists \(j\) such that \(\mathcal{C}(g_{j}\mu)(\lambda)=\lim_{\epsilon\to 0}\mathcal{C}_{\epsilon}(g_{j} \mu)(\lambda)\) (principal value, see section 2) exists and \(\mathcal{C}(g_{j}\mu)(\lambda)\neq 0.\) We fix \(f\in R^{t}(K,\mu).\) Choose a sequence \(\{r_{n}\}\subset\text{Rat}(K)\) such that
\[\|r_{n}-f\|_{L^{t}(\mu)}\to 0\ \text{ and }\ r_{n}(z)\to f(z),\ \mu-a.a..\]
As an application of Tolsa's Theorem (see Theorem 2.1 and Lemma 2.4 for details), we infer that there exists a subsequence \(\{r_{n_{k}}\}\) such that for all \(j\geq 1,\)
\[\mathcal{C}(r_{n_{k}}g_{j}\mu)(\lambda)\to\mathcal{C}(fg_{j}\mu)(\lambda),\ \gamma-a.a.\text{ as }k\to\infty.\]
Using (1.1), we conclude that \(r_{n_{k}}\) converges to a function, denoted \(\rho(f),\) on \(\mathcal{N},\ \gamma-a.a.\). Then \(\rho(f)\) satisfies
\[\rho(f)(\lambda)\mathcal{C}(g_{j}\mu)(\lambda)=\mathcal{C}(fg_{j}\mu)( \lambda),\ \gamma-a.a.\text{ for }j\geq 1.\]
We will prove that the set \(\mathcal{R}\) in Theorem 1.1 consists of \(\lambda\in\mathcal{N}\) such that there exists an integer \(N_{\lambda}\geq 1\) satisfying
\[\lim_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda,\delta)\cap\{\max_{1\leq j \leq N_{\lambda}}|\mathcal{C}(g_{j}\mu)|\leq N_{\lambda}^{-1}\})}{\delta}=0,\]
where \(\mathbb{D}(\lambda,\delta):=\{z:\ |z-\lambda|<\delta\}\) (see Theorem 3.13).
For \(t=\infty,\) let \(R^{\infty}(K,\mu)\) be the weak-star closure of \(\text{Rat}(K)\) in \(L^{\infty}(\mu).\) Chaumat's Theorem [6] states that if \(R^{\infty}(K,\mu)\) contains no non-trivial direct \(L^{\infty}\) summands, then there exists a subset \(E\subset K\) such that the identity map \(\rho(r)=r\) for \(r\in\text{Rat}(K)\) extends to an isometric isomorphism and a weak-star homeomorphism from \(R^{\infty}(K,\mu)\) onto \(R^{\infty}(\overline{E},\mathfrak{m}_{E})\) (also see [8, Chaumat's Theorem on page 288]). The envelope \(E\) consists of points \(\lambda\in K\) satisfying: there exists \(g_{\lambda}\perp R^{\infty}(K,\mu)\) such that \(\int\frac{|g_{\lambda}|}{|z-\lambda|}d\mu<\infty\) and \(\mathcal{C}(g_{\lambda}\mu)(\lambda)\neq 0.\) However, the concept of \(E\) cannot be used for studying \(R^{t,\infty}(K,\mu),\) as there are points \(\lambda\in\mathcal{N}\) and \(j\) such that
\(\int\frac{|g_{j}|}{|z-\lambda|}d\mu=\infty,\) the principal value of \(\mathcal{C}(g_{j}\mu)(\lambda)\) exists, and \(\mathcal{C}(g_{j}\mu)(\lambda)\neq 0.\) We will see that those points are important for the structure of \(R^{t,\infty}(K,\mu).\)
Tolsa's Theorems (see [17], [18], and [19]) on analytic capacity and the Cauchy transform provide the necessary tools for our approach. In section 2, we review some results on analytic capacity and the Cauchy transform that are needed in our analysis. We define in detail the map \(\rho\) that maps each function in \(R^{t}(K,\mu)\) to a point function on \(\mathcal{N},\ \gamma-a.a.\) (see Lemma 2.6). We also discuss some basic properties of \(\rho.\) In section 3, we define the non-removable boundary \(\mathcal{F}\) for an arbitrary compact subset \(K\) and \(\mu\in M_{0}^{+}(K)\). Intuitively, the set \(\mathcal{F}\) splits into three sets, \(\mathcal{F}_{0}\), \(\mathcal{F}_{+}\), and \(\mathcal{F}_{-}\), such that (1) Cauchy transforms of annihilating measures \(g\mu\) (\(g\perp R^{t}(K,\mu)\)) are zero on \(\mathcal{F}_{0}\) and (2) Cauchy transforms of annihilating measures \(g\mu\) have zero "one side non-tangential limits" on \(\mathcal{F}_{+}\cup\mathcal{F}_{-}.\) The removable set is defined by \(\mathcal{R}=\mathbb{C}\setminus\mathcal{F}.\) The set \(\mathcal{N}\) is decomposed into \(\mathcal{R}\cup\mathcal{F}_{+}\cup\mathcal{F}_{-}.\) Theorem 4.1 proves that there exists a subset \(\mathcal{Q}\) with \(\gamma(\mathcal{Q})=0\) such that every \(\lambda\in\mathcal{R}\setminus\mathcal{Q}\) satisfies \(\lim_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda,\delta)\setminus\mathcal{R})}{\delta}=0\) (full analytic capacity for \(\mathcal{R}\)); that is, the set \(\mathcal{R}\) is a "nearly open" subset. Theorem 5.4 characterizes \(H^{\infty}(\mathcal{R}),\) the algebra of "bounded and analytic" functions on a "nearly open" subset. The proof is basically given by [20, Theorem 4.3]. Using Theorem 5.4 and Lemma 5.7, we prove that \(\rho\) is surjective (Lemma 5.5) in section 5. In section 6, combining Lemma 6.2 with Theorem 5.4, we conclude that \(\rho\) maps \(R^{t,\infty}(K,\mu)\) onto \(H^{\infty}(\mathcal{R})\) and prove Theorem 1.1. We discuss some applications and prove Theorem 7.2, Theorem 7.7, Corollary 7.9, and Corollary 7.10 in section 7.
## 2. **Preliminaries and Definition of the Map \(\rho\)**
For \(A_{1},A_{2}>0,\) the statement \(A_{1}\lesssim A_{2}\) (resp. \(A_{1}\gtrsim A_{2}\)) means: there exists an absolute constant \(C>0\) (resp. \(c>0\)), independent of \(A_{1}\) and \(A_{2},\) such that \(A_{1}\leq CA_{2}\) (resp. \(A_{1}\geq cA_{2}\)). We say \(A_{1}\approx A_{2}\) if \(A_{1}\lesssim A_{2}\) and \(A_{1}\gtrsim A_{2}.\)
If \(B\subset\mathbb{C}\) is a compact subset, then we define the analytic capacity of \(B\) by
\[\gamma(B)=\sup|f^{\prime}(\infty)|,\]
where the supremum is taken over all those functions \(f\) that are analytic in \(\mathbb{C}_{\infty}\setminus B\) (\(\mathbb{C}_{\infty}:=\mathbb{C}\cup\{\infty\}\)) such that \(|f(z)|\leq 1\) for all \(z\in\mathbb{C}_{\infty}\setminus B;\) and \(f^{\prime}(\infty):=\lim_{z\to\infty}z(f(z)-f(\infty)).\) The analytic capacity of a general subset \(E\) of \(\mathbb{C}\) is given by: \(\gamma(E)=\sup\{\gamma(B):B\subset E\text{ compact}\}.\) The following elementary property can be found in [13, Theorem VIII.2.3],
\[\mathfrak{m}(E)\leq 4\pi\gamma(E)^{2}, \tag{2.1}\]
where \(\mathfrak{m}\) is the area (Lebesgue) measure on \(\mathbb{C}\). For \(\nu\in M_{0}(\mathbb{C})\) and \(\epsilon>0,\) define
\[\mathcal{C}_{\epsilon}(\nu)(z)=\int_{|w-z|>\epsilon}\frac{1}{w-z}d\nu(w).\]
The (principal value) Cauchy transform of \(\nu\) is defined by
\[\mathcal{C}(\nu)(z)=\lim_{\epsilon\to 0}\mathcal{C}_{\epsilon}(\nu)(z) \tag{2.2}\]
for all \(z\in\mathbb{C}\) for which the limit exists. From Corollary 2.3, we see that (2.2) is defined for all \(z\) except for a set of zero analytic capacity. In particular, by (2.1),
it is defined \(\mathfrak{m}-a.a..\) Throughout this paper, the Cauchy transform of a measure always means the principal value of the transform. In the sense of distributions,
\[\bar{\partial}\mathcal{C}(\nu)=-\pi\nu. \tag{2.3}\]
Good sources for basic information about analytic capacity and Cauchy transform are [19], [11], [13], and [8].
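A standard example illustrates both notions. For the closed disk \(\overline{\mathbb{D}(0,r)}\), the function \(f(z)=r/z\) is admissible and shows \(\gamma(\overline{\mathbb{D}(0,r)})\geq r;\) in fact, \(\gamma(\overline{\mathbb{D}(0,r)})=r,\) so (2.1) becomes \(\pi r^{2}\leq 4\pi r^{2}.\) Moreover, a direct computation gives
\[\mathcal{C}(\mathfrak{m}|_{\mathbb{D}(0,1)})(z)=\begin{cases}-\pi\bar{z},&|z|\leq 1,\\ -\dfrac{\pi}{z},&|z|>1,\end{cases}\]
and \(\bar{\partial}(-\pi\bar{z})=-\pi,\) in agreement with (2.3).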
The maximal Cauchy transform is defined by
\[\mathcal{C}_{*}(\nu)(z)=\sup_{\epsilon>0}|\mathcal{C}_{\epsilon}(\nu)(z)|.\]
A related capacity, \(\gamma_{+},\) is defined for subsets \(E\) of \(\mathbb{C}\) by:
\[\gamma_{+}(E)=\sup\|\mu\|,\]
where the supremum is taken over \(\mu\in M_{0}^{+}(E)\) for which \(\|\mathcal{C}(\mu)\|_{L^{\infty}(\mathbb{C})}\leq 1.\) Since \(\mathcal{C}\mu\) is analytic in \(\mathbb{C}_{\infty}\setminus\mathrm{spt}(\mu)\) and \(|(\mathcal{C}(\mu)^{\prime}(\infty)|=\|\mu\|,\) we have: \(\gamma_{+}(E)\leq\gamma(E).\)
X. Tolsa has established the following astounding results. See [18] (also Theorem 6.1 and Corollary 6.3 in [19]) for (1) and (2). See [17, Proposition 2.1] (also [19, Proposition 4.16]) for (3).
**Theorem 2.1**.: _(Tolsa 2003) There exists an absolute constant \(C_{T}>0\) such that:_
_(1) \(\gamma_{+}\) and \(\gamma\) are actually equivalent. That is,_
\[\gamma(E)\leq C_{T}\gamma_{+}(E)\text{ for all }E\subset\mathbb{C}.\]
_(2) Semiadditivity of analytic capacity: for \(E_{1},E_{2},...,E_{m}\subset\mathbb{C}\) (\(m\) may be \(\infty\)),_
\[\gamma\left(\bigcup_{i=1}^{m}E_{i}\right)\leq C_{T}\sum_{i=1}^{m}\gamma(E_{i}).\]
_(3) For \(a>0\), we have:_
\[\gamma(\{\mathcal{C}_{*}(\nu)\geq a\})\leq C_{T}\frac{\|\nu\|}{a}.\]
For \(\eta\in M_{0}^{+}(\mathbb{C}),\) define
\[N_{2}(\eta)=\sup_{\epsilon>0}\sup_{\|f\|_{L^{2}(\eta)}=1}\|\mathcal{C}_{ \epsilon}(f\eta)\|_{L^{2}(\eta)}.\]
\(\eta\) is of \(c\)-linear growth if \(\eta(\mathbb{D}(\lambda,\delta))\leq c\delta,\) for \(\lambda\in\mathbb{C}\text{ and }\delta>0.\) The following Proposition is from [19, Theorem 4.14] and its proofs.
**Proposition 2.2**.: _There exists an absolute constant \(C_{T}>0\) (we use the same constant as in Theorem 2.1) such that if \(F\subset\mathbb{C}\) is a compact subset and \(\eta\in M_{0}^{+}(F),\) then the following properties are true. (1) If \(\|\mathcal{C}\eta\|_{L^{\infty}(\mathbb{C})}\leq 1,\) then \(\eta\) is of \(1\)-linear growth and \(\sup_{\epsilon>0}\|\mathcal{C}_{\epsilon}(\eta)\|_{L^{\infty}(\mathbb{C})} \leq C_{T}.\) (2) If \(\eta\) is of \(1\)-linear growth and \(\|\mathcal{C}_{\epsilon}(\eta)\|_{L^{\infty}(\mathbb{C})}\leq 1\) for all \(\epsilon>0,\) then there exists a subset \(A\subset F\) such that \(\eta(F)\leq 2\eta(A)\) and \(N_{2}(\eta|_{A})\leq C_{T}.\) (3) If \(N_{2}(\eta)\leq 1,\) then there exists some function \(w\) supported on \(F\), with \(0\leq w\leq 1\) such that \(\eta(F)\ \leq\ 2\int wd\eta\) and \(\sup_{\epsilon>0}\|\mathcal{C}_{\epsilon}(w\eta)\|_{L^{\infty}(\mathbb{C})} \ \leq C_{T}.\)_
Combining Theorem 2.1 (1), Proposition 2.2, and [19, Theorem 8.1], we get the following corollary. The reader may also see [1, Corollary 3.1].
**Corollary 2.3**.: _If \(\nu\in M_{0}(\mathbb{C}),\) then there exists \(\mathcal{Q}\subset\mathbb{C}\) with \(\gamma(\mathcal{Q})=0\) such that \(\lim_{\epsilon\to 0}\mathcal{C}_{\epsilon}(\nu)(z)\) exists for \(z\in\mathbb{C}\setminus\mathcal{Q}.\)_
**Lemma 2.4**.: _Let \(\mu\in M_{0}^{+}(\mathbb{C}).\) Suppose \(f_{n},f\in L^{1}(\mu)\) satisfying: \(\|f_{n}-f\|_{L^{1}(\mu)}\to 0\) and \(f_{n}\to f,\ \mu\)-a.a. Then:_
_(1) For \(\epsilon>0\), there exists a subset \(A_{\epsilon}\) with \(\gamma(A_{\epsilon})<\epsilon\) and a subsequence \(\{f_{n_{m}}\}\) such that \(\{\mathcal{C}(f_{n_{m}}\mu)\}\) uniformly converges to \(\mathcal{C}(f\mu)\) on \(\mathbb{C}\setminus A_{\epsilon}.\)_
_(2) There exists a subset \(\mathcal{Q}\) with \(\gamma(\mathcal{Q})=0\) and a subsequence \(\{f_{n_{m}}\}\) such that \(\mathcal{C}(f_{n_{m}}\mu)(z)\) converges to \(\mathcal{C}(f\mu)(z)\) for \(z\in\mathbb{C}\setminus\mathcal{Q}.\)_
Proof.: Applying Theorem 2.1 and Corollary 2.3, [20, Lemma 2.5] proves (1).
(2): Set \(\mathcal{Q}=\cap_{k=1}^{\infty}A_{\frac{1}{k}}\). Clearly, \(\gamma(\mathcal{Q})=0\) and there exists a subsequence \(\{f_{n_{m}}\}\) such that \(\mathcal{C}(f_{n_{m}}\mu)(z)\) converges to \(\mathcal{C}(f\mu)(z)\) for \(z\in\mathbb{C}\setminus\mathcal{Q}.\)
From section 2 to section 6, we assume that \(K\subset\mathbb{C}\) is a compact subset, \(\mu\in M_{0}^{+}(K),\)\(1\leq t<\infty,\)\(\frac{1}{t}+\frac{1}{s}=1,\)\(S_{\mu}\) on \(R^{t}(K,\mu)\) is pure, and \(\Lambda=\{g_{j}\}_{j=1}^{\infty}\subset R^{t}(K,\mu)^{\perp}\) is a \(L^{s}(\mu)\) norm dense subset of \(R^{t}(K,\mu)^{\perp}.\) Define the non-zero set \(\mathcal{N}\) of \(\{\mathcal{C}(g_{j}\mu)\}\) as the following:
\[\mathcal{N}=\bigcup_{j=1}^{\infty}\left\{z:\ \lim_{\epsilon\to 0} \mathcal{C}_{\epsilon}(g_{j}\mu)(z)\ \text{exists and}\ \mathcal{C}(g_{j}\mu)(z)\neq 0\right\}. \tag{2.4}\]
Define the zero set \(\mathcal{F}_{0}\) of \(\{\mathcal{C}(g_{j}\mu)\}\) as the following:
\[\mathcal{F}_{0}=\bigcap_{j=1}^{\infty}\{z:\ \lim_{\epsilon\to 0} \mathcal{C}_{\epsilon}(g_{j}\mu)(z)\ \text{exists and}\ \mathcal{C}(g_{j}\mu)(z)=0\}. \tag{2.5}\]
**Proposition 2.5**.: _The following basic properties hold._
_(1) \(\mathcal{N}\subset K,\)\(\mathbb{C}\setminus K\subset\mathcal{F}_{0},\) and \(\mathcal{N}\cup\mathcal{F}_{0}\approx\mathbb{C},\ \gamma-a.a.\)_
_(2) For \(g\perp R^{t}(K,\mu),\) we have \(\mathcal{C}(g\mu)(z)=0,\ \gamma|_{\mathcal{F}_{0}}-a.a.\)_
_(3) The sets \(\mathcal{N}\) and \(\mathcal{F}_{0}\) are independent of the choices of \(\Lambda\) up to a set of zero analytic capacity._
Proof.: (1) follows from Corollary 2.3. There exists a subsequence \(\{g_{j_{k}}\}\) such that \(\|g_{j_{k}}-g\|_{L^{s}(\mu)}\to 0\) and \(g_{j_{k}}(z)\to g(z),\ \mu-a.a..\) Applying Lemma 2.4 (2), we get (2). (3) is implied by (1) and (2).
**Lemma 2.6**.: _If \(f\in R^{t}(K,\mu),\) then there exists a unique function \(\rho(f)\) defined on \(\mathbb{C},\ \gamma-a.a.\) and there exists a subset \(\mathcal{Q}_{f}\subset\mathcal{N}\) with \(\gamma(\mathcal{Q}_{f})=0\) such that \(\rho(f)(z)=0\) for \(z\in\mathcal{F}_{0},\)_
\[\rho(f)(z)\mathcal{C}(g\mu)(z)=\mathcal{C}(fg\mu)(z),\ \gamma-a.a.\ \text{for}\ g\perp R^{t}(K,\mu), \tag{2.6}\]
_and_
\[\rho(f)(z)=f(z),\ \mu_{\mathcal{N}\setminus\mathcal{Q}_{f}}-a.a. \tag{2.7}\]
_Clearly, \(\rho(r)(z)=r(z)\) for \(z\in\mathcal{N}\) and \(r\in\text{Rat}(K)\)._
Proof.: Choose \(\{r_{n}\}\subset\text{Rat}(K)\) such that \(\|r_{n}-f\|_{L^{t}(\mu)}\to 0\) and \(r_{n}\to f,\ \mu-a.a.\) For \(\lambda\in K,\) we see that \(\frac{r_{n}(z)-r_{n}(\lambda)}{z-\lambda}\in\text{Rat}(K).\) So \(\int\frac{r_{n}(z)-r_{n}(\lambda)}{z-\lambda}g(z)d\mu(z)=0\) and applying Corollary 2.3, we have
\[r_{n}(z)\mathcal{C}(g_{j}\mu)(z)=\mathcal{C}(r_{n}g_{j}\mu)(z),\ \gamma-a.a.\ \text{for}\ n,j\geq 1.\]
From Lemma 2.4 (2), we can choose a subsequence \(\{r_{n_{k}}\}\) such that
\[r_{n_{k}}(z)\mathcal{C}(g_{j}\mu)(z)\to\mathcal{C}(fg_{j}\mu)(z),\ \gamma-a.a.\ \text{for}\ j\geq 1.\]
Therefore, \(r_{n_{k}}\) converges to a function, denoted \(\rho(f)\), on \(\mathcal{N}\) and
\[\rho(f)(z)\mathcal{C}(g_{j}\mu)(z)=\mathcal{C}(fg_{j}\mu)(z),\ \gamma-a.a.\ \text{for}\ j\geq 1. \tag{2.8}\]
Set \(\rho(f)(z)=0\) for \(z\in\mathbb{C}\setminus\mathcal{N}.\) It is clear that \(\rho(f)\) is unique up to a set of zero analytic capacity. Choose a subsequence \(\{g_{j_{k}}\}\) such that \(\|g_{j_{k}}-g\|_{L^{s}(\mu)}\to 0\) and \(g_{j_{k}}(z)\to g(z),\ \mu-a.a..\) Using Lemma 2.4 (2) and (2.8), we get (2.6). Let \(\mathcal{Q}_{f}\) be a subset with \(\gamma(\mathcal{Q}_{f})=0\) such that \(r_{n_{k}}(z)\to\rho(f)(z)\) for \(z\in\mathcal{N}\setminus\mathcal{Q}_{f}.\) (2.7) follows since \(r_{n_{k}}(z)\to f(z),\ \mu-a.a.\)
**Proposition 2.7**.: _The following statements are true for \(\rho.\)_
_(1) If \(f_{1},f_{2}\in R^{t,\infty}(K,\mu),\) then \(\rho(f_{1}f_{2})(z)=\rho(f_{1})(z)\rho(f_{2})(z),\ \gamma-a.a..\)_
_(2) If \(f\in R^{t,\infty}(K,\mu)\), then \(\|\rho(f)\|_{L^{\infty}(\mathfrak{m}_{\mathcal{N}})}\leq\|f\|_{L^{\infty}(\mu)}\)._
Proof.: (1): For \(f_{1},f_{2}\in R^{t,\infty}(K,\mu)\) and \(g\perp R^{t}(K,\mu),\) we see that \(f_{2}g\perp R^{t}(K,\mu).\) From (2.6), we get
\[\rho(f_{1}f_{2})(z)\mathcal{C}(g\mu)(z)=\rho(f_{1})(z)\mathcal{C}(f_{2}g\mu)( z)=\rho(f_{1})(z)\rho(f_{2})(z)\mathcal{C}(g\mu)(z),\ \gamma-a.a.\]
Hence, (1) follows.
(2): For \(f\in R^{t,\infty}(K,\mu)\) and \(g\perp R^{t}(K,\mu),\) using (1) and (2.6), we have
\[\int_{\mathcal{N}}|\rho(f)(z)|^{n}|\mathcal{C}(g\mu)(z)|d\mathfrak{m}(z)=\int_{\mathcal{N}}|\mathcal{C}(f^{n}g\mu)(z)|d\mathfrak{m}(z)\] \[\leq\int_{\mathcal{N}}\int\frac{|f(w)|^{n}|g(w)|}{|w-z|}d\mu(w)d\mathfrak{m}(z)\] \[\leq 2\pi d\|g\|_{L^{1}(\mu)}\|f\|_{L^{\infty}(\mu)}^{n},\]
where \(d\) is the diameter of \(\mathcal{N},\) which implies
\[\left(\int_{\mathcal{N}}|\rho(f)(z)|^{n}|\mathcal{C}(g\mu)(z)|d\mathfrak{m}(z )\right)^{\frac{1}{n}}\leq\left(2\pi d\|g\|_{L^{1}(\mu)}\right)^{\frac{1}{n}} \|f\|_{L^{\infty}(\mu)}.\]
Taking \(n\to\infty,\) we get \(\|\rho(f)\|_{L^{\infty}(\mathfrak{m}|_{\{\mathcal{C}(g\mu)\neq 0\}})}\leq\|f\|_{L^{ \infty}(\mu)}\). Now (2) follows from the definition (2.4) of \(\mathcal{N}.\)
## 3. **The Structure of \(\mathcal{N}\)**
For \(\nu\in M_{0}(\mathbb{C}),\) define the zero and non-zero linear density sets:
\[\mathcal{Z}\mathcal{D}(\nu)=\{\lambda:\ \Theta_{\nu}(\lambda)=0\}\ \text{and}\ \mathcal{N}\mathcal{D}(\nu)=\{\lambda:\ \Theta_{\nu}^{*}(\lambda)>0\},\]
where \(\Theta_{\nu}(\lambda)=\lim_{\delta\to 0}\frac{|\nu|(\mathbb{D}(\lambda, \delta))}{\delta}\) if the limit exists and \(\Theta_{\nu}^{*}(\lambda)=\frac{\lim_{\delta\to 0}|\nu|(\mathbb{D}(\lambda, \delta))}{\delta}.\) Set \(\mathcal{N}\mathcal{D}(\nu,n)=\{\lambda:\ n^{-1}\leq\Theta_{\nu}^{*}(\lambda) \leq n\}.\) Define
\[\mathcal{R}_{0}=\mathcal{N}\cap\mathcal{Z}\mathcal{D}(\mu). \tag{3.1}\]
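For instance, if \(\nu=\mathfrak{m}|_{\mathbb{D}(0,1)},\) then \(|\nu|(\mathbb{D}(\lambda,\delta))\leq\pi\delta^{2}\) implies \(\Theta_{\nu}(\lambda)=0\) for every \(\lambda,\) so \(\mathcal{ZD}(\nu)=\mathbb{C};\) on the other hand, for \(\nu=\mathcal{H}^{1}|_{[0,1]}\) one has \(\Theta_{\nu}^{*}(\lambda)\geq 1\) for every \(\lambda\in[0,1],\) so \([0,1]\subset\mathcal{ND}(\nu).\)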
The 1-dimensional Hausdorff measure of \(F\) is:
\[\mathcal{H}^{1}(F)=\sup_{\epsilon>0}\inf\left\{\sum_{i}\text{diam}(F_{i}):\ F \subset\cup_{i}F_{i},\ \text{diam}(F_{i})\leq\epsilon\right\}.\]
For basic information about Hausdorff measures, the reader may take a look at the books Carleson [5] and Mattila [14].
Let \(\mathbb{R}\) be the real line. Let \(A:\mathbb{R}\to\mathbb{R}\) be a Lipschitz function with its graph \(\Gamma=\{(x,A(x))\,:\,\,x\in\mathbb{R}\}.\) The following lemma, which describes the decomposition of \(\mu\in M_{0}^{+}(K),\) is important for us to understand \(\mathcal{N}.\)
**Lemma 3.1**.: _For \(\nu\in M_{0}(\mathbb{C}),\) there is a sequence of Lipschitz functions \(A_{n}:\mathbb{R}\to\mathbb{R}\) whose (rotated) graphs \(\Gamma_{n}\), with rotation angles \(\beta_{n}\), satisfy the following:_
_(1) Let \(\Gamma=\cup_{n}\Gamma_{n}\). Then \(\nu=h\mathcal{H}^{1}|_{\Gamma}+\nu_{s}\) is the Radon-Nikodym decomposition with respect to \(\mathcal{H}^{1}|_{\Gamma}\), where \(h\in L^{1}(\mathcal{H}^{1}|_{\Gamma})\) and \(\nu_{s}\perp\mathcal{H}^{1}|_{\Gamma};\)_
_(2) \(\mathcal{ND}(\nu)\approx\mathcal{N}(h),\,\,\gamma-a.a.,\) where \(\mathcal{N}(h):=\{h\neq 0\};\)_
_(3) \(\Theta_{\nu_{s}}(z)=0,\,\,\gamma-a.a..\)_
_As a result, if \(\eta\in M_{0}^{+}(\mathbb{C})\) is of \(c\)-linear growth and \(\eta\perp|\nu|\), then \(\eta(\mathcal{ND}(\nu))=0.\)_
Proof.: Without loss of generality, we assume \(\nu\in M_{0}^{+}(\mathbb{C}).\) Using [19, Lemma 8.12], we see that
\[\mathcal{H}^{1}(\{\Theta_{\nu}^{*}(z)\geq n\})\lesssim\frac{\|\nu\|}{n} \tag{3.2}\]
and \(\nu|_{\mathcal{ND}(\nu,n)}=h\mathcal{H}^{1}|_{\mathcal{ND}(\nu,n)},\) where \(h\) is some Borel function such that \(\frac{1}{n}\lesssim h(z)\lesssim n,\ \mathcal{H}^{1}|_{\mathcal{ND}(\nu,n)}-a.a.\) and \(\mathcal{H}^{1}(\mathcal{ND}(\nu,n))<\infty.\) Let \(\mathcal{Q}_{\nu}=\{z:\ \Theta_{\nu}^{*}(z)=\infty\}.\) Then \(\gamma(\mathcal{Q}_{\nu})=\mathcal{H}^{1}(\mathcal{Q}_{\nu})=0\) by (3.2) and
\[\mathcal{ND}(\nu)=\bigcup_{n}\mathcal{ND}(\nu,n)\cup\mathcal{Q}_{\nu}.\]
From [19, Theorem 1.26], \(\mathcal{ND}(\nu,n)=E_{r}\cup E_{u},\) where \(E_{r}\) is rectifiable and \(E_{u}\) is purely unrectifiable. From David's Theorem (see [10] or [19, Theorem 7.2]), we see that \(\gamma(E_{u})=0.\) Applying [19, Proposition 4.13], we find a sequence of (rotated) Lipschitz graphs \(\{\Gamma_{nm}\}\) and a set \(E_{1}\) with \(\mathcal{H}^{1}(E_{1})=0\) such that \(E_{r}\subset E_{1}\cup\bigcup_{m=1}^{\infty}\Gamma_{nm}.\) Let \(\mathcal{Q}=E_{u}\cup E_{1}.\) Clearly, \(\gamma(\mathcal{Q})=0.\) Hence, there exists a sequence \(\{\Gamma_{n}\}\) such that (1), (2), and (3) hold.
**Definition 3.2**.: Let \(\mathcal{Q}\) be a set with \(\gamma(\mathcal{Q})=0\). Let \(f(z)\) be a function defined on \(\mathbb{D}(\lambda,\delta_{0})\setminus\mathcal{Q}\) for some \(\delta_{0}>0.\) The function \(f\) has a \(\gamma\)-limit \(a\) at \(\lambda\) if
\[\lim_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda,\delta)\cap\{|f(z)-a|> \epsilon\})}{\delta}=0\]
for all \(\epsilon>0\). If in addition, \(f(\lambda)\) is well defined and \(a=f(\lambda),\) then \(f\) is \(\gamma\)-continuous at \(\lambda.\)
The following lemma is from [1, Lemma 3.2].
**Lemma 3.3**.: _Let \(\nu\in M_{0}(\mathbb{C})\) and assume that for some \(\lambda\in\mathbb{C},\)\(\Theta_{\nu}(\lambda)=0\) and \(\mathcal{C}(\nu)(\lambda)=\lim_{\epsilon\to 0}\mathcal{C}_{\epsilon}(\nu)(\lambda)\) exists. Then \(\mathcal{C}(\nu)(z)\) is \(\gamma\)-continuous at \(\lambda.\)_
**Lemma 3.4**.: _Let \(A:\mathbb{R}\to\mathbb{R}\) be a Lipschitz function with its graph \(\Gamma\). Then \(N_{2}(\mathcal{H}^{1}|_{\Gamma})<\infty,\) and \(N_{2}(\mathcal{H}^{1}|_{\Gamma})\) depends only on \(\|A^{\prime}\|_{\infty}\). Consequently, there is a constant \(c_{\Gamma}>0\) that depends only on \(\|A^{\prime}\|_{\infty}\) such that for \(E\subset\Gamma\),_
\[c_{\Gamma}\mathcal{H}^{1}|_{\Gamma}(E)\leq\gamma(E)\leq\mathcal{H}^{1}|_{ \Gamma}(E). \tag{3.3}\]
Proof.: See [7] or [19, Theorem 3.11] for \(N_{2}(\mathcal{H}^{1}|_{\Gamma})<\infty.\) Therefore, from Proposition 2.2 (3) and Theorem 2.1 (1), we conclude that \(c_{\Gamma}\mathcal{H}^{1}|_{\Gamma}(E)\leq\gamma(E)\) holds. See [19, Theorem 1.21] for \(\gamma(E)\leq\mathcal{H}^{1}|_{\Gamma}(E).\)
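For example, for \(\Gamma=\mathbb{R}\) and a segment \(E=[0,l],\) it is classical that \(\gamma([0,l])=l/4,\) while \(\mathcal{H}^{1}([0,l])=l;\) both inequalities in (3.3) are thus seen to be of the correct order.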
Let \(A\) be a Lipschitz function with its graph \(\Gamma\). Define \(U_{\Gamma}=\{Im(z)>A(Re(z))\}\) and \(L_{\Gamma}=\{Im(z)<A(Re(z))\}.\) Denote \(I=\sqrt{-1}.\) We consider the usual complex-valued measure
\[\frac{1}{2\pi I}dz_{\Gamma}=\frac{1+IA^{\prime}(Re(z))}{2\pi I(1+A^{\prime}(Re (z))^{2})^{\frac{1}{2}}}d\mathcal{H}^{1}|_{\Gamma}=L(z)d\mathcal{H}^{1}|_{ \Gamma}.\]
Notice that \(|L(z)|=\frac{1}{2\pi}.\) Plemelj's formula for Lipschitz graphs can be found in [19, Theorem 8.8]. We now extend Plemelj's formula to an arbitrary measure in the following theorem.
**Theorem 3.5**.: _Let \(A\) be a Lipschitz function with its graph \(\Gamma\) and let \(\nu\in M_{0}(\mathbb{C})\). Suppose that \(\nu=b\mathcal{H}^{1}|_{\Gamma}+\nu_{s}\) is the Radon-Nikodym decomposition with respect to \(\mathcal{H}^{1}|_{\Gamma}\), where \(b\in L^{1}(\mathcal{H}^{1}|_{\Gamma})\) and \(\nu_{s}\perp\mathcal{H}^{1}|_{\Gamma}\). Then there exists a subset \(\mathcal{Q}\subset\mathbb{C}\) with \(\gamma(\mathcal{Q})=0\), such that the following hold:_
_(a) \(\mathcal{C}(\nu)(\lambda)=\lim_{\epsilon\to 0}\mathcal{C}_{\epsilon}(\nu)(\lambda)\) exists for \(\lambda\in\mathbb{C}\setminus\mathcal{Q}\)._
_(b) For \(\lambda\in\Gamma\setminus\mathcal{Q}\) and \(\epsilon>0\), \(v^{+}(\nu,\Gamma)(\lambda):=\mathcal{C}(\nu)(\lambda)+\frac{b(\lambda)}{2L( \lambda)},\)_
\[\lim_{\delta\to 0}\frac{\gamma(U_{\Gamma}\cap\mathbb{D}(\lambda,\delta)\cap\{| \mathcal{C}(\nu)(z)-v^{+}(\nu,\Gamma)(\lambda)|>\epsilon\})}{\delta}=0.\]
_(c) For \(\lambda\in\Gamma\setminus\mathcal{Q}\) and \(\epsilon>0\), \(v^{-}(\nu,\Gamma)(\lambda):=\mathcal{C}(\nu)(\lambda)-\frac{b(\lambda)}{2L( \lambda)},\)_
\[\lim_{\delta\to 0}\frac{\gamma(L_{\Gamma}\cap\mathbb{D}(\lambda,\delta)\cap\{| \mathcal{C}(\nu)(z)-v^{-}(\nu,\Gamma)(\lambda)|>\epsilon\})}{\delta}=0.\]
_(d) For \(\lambda\in\Gamma\setminus\mathcal{Q}\) and \(\epsilon>0\), \(v^{0}(\nu,\Gamma)(\lambda):=\mathcal{C}(\nu)(\lambda),\)_
\[\lim_{\delta\to 0}\frac{\gamma(\Gamma\cap\mathbb{D}(\lambda,\delta)\cap\{| \mathcal{C}(\nu)(z)-v^{0}(\nu,\Gamma)(\lambda)|>\epsilon\})}{\delta}=0.\]
Proof.: As \(\nu\) is compactly supported, we will just consider the portion of the graph of \(\Gamma\) that lies in \(\{z:\ |Re(z)|<M\}\) for some \(M>0\).
From Corollary 2.3, (a) follows and we assume that \(\mathcal{C}(\nu_{s})(\lambda)=\lim_{\epsilon\to 0}\mathcal{C}_{\epsilon}(\nu_{s})(\lambda),\)\(\mathcal{C}(b\mathcal{H}^{1}|_{\Gamma})(\lambda)=\lim_{\epsilon\to 0}\mathcal{C}_{ \epsilon}(b\mathcal{H}^{1}|_{\Gamma})(\lambda),\) and \(\mathcal{C}(dz_{\Gamma})(\lambda)=\lim_{\epsilon\to 0}\mathcal{C}_{ \epsilon}(dz_{\Gamma})(\lambda)\) exist. Using Lemma 3.1, we assume that \(\Theta_{\nu_{s}}(\lambda)=0.\) We also assume that \(\lambda\) is a differentiable point for \(\Gamma\) and is a Lebesgue point for \(\frac{b}{L}\), that is,
\[\lim_{\delta\to 0}\frac{1}{\delta}\int_{\mathbb{D}(\lambda,\delta)}\left|\frac{b( z)}{L(z)}-\frac{b(\lambda)}{L(\lambda)}\right|\left|dz_{\Gamma}\right|=0 \tag{3.4}\]
as \(\int_{\mathbb{D}(\lambda,\delta)}|dz_{\Gamma}|\approx\delta.\) Set \(\nu=\nu_{1}+\frac{b(\lambda)}{2\pi IL(\lambda)}dz_{\Gamma},\) where \(\nu_{1}=(\frac{b}{2\pi IL}-\frac{b(\lambda)}{2\pi IL(\lambda)})dz_{\Gamma}+\nu_ {s}.\) From (3.4), we see that \(\Theta_{\nu_{1}}(\lambda)=0.\) Applying Lemma 3.3 to \(\nu_{1},\) we conclude that \(\mathcal{C}(\nu_{1})\) is \(\gamma\)-continuous at \(\lambda.\) Therefore, using Theorem 2.1 (2), we just need to prove (b), (c), and (d) for \(\nu=dz_{\Gamma}\) below.
(b): Let \(\lambda_{l}\) and \(\lambda_{r}\) be the left and right intersections of \(\Gamma\) and \(\partial\mathbb{D}(\lambda,\delta_{0}),\) respectively. Let \(B_{u}\) denote the boundary of \(U_{\Gamma}\cap\mathbb{D}(\lambda,\delta_{0}).\) Then for \(\delta<\delta_{0}\) and
\(w\in U_{\Gamma}\cap\mathbb{D}(\lambda,\delta)\),
\[\left|\int_{B_{u}\backslash\Gamma}\frac{dz}{z-\lambda}-\int_{B_{u}\backslash\Gamma }\frac{dz}{z-w}\right|\lesssim\frac{\delta}{\delta_{0}-\delta}.\]
Hence,
\[\mathcal{C}(dz_{\Gamma})(w)-\int_{|z-\lambda|>\delta_{0}}\frac{dz_ {\Gamma}}{z-w} =\int_{B_{u}}\frac{dz}{z-w}-\int_{B_{u}\backslash\Gamma}\frac{dz}{ z-w}\] \[\to 2\pi I-\int_{B_{u}\backslash\Gamma}\frac{dz}{z-\lambda}\] \[=2\pi I-I(arg(\lambda_{l}-\lambda)-arg(\lambda_{r}-\lambda))\]
as \(\delta\to 0\) (\(w\to\lambda\)). Since \(arg(\lambda_{l}-\lambda)-arg(\lambda_{r}-\lambda)\to\pi\) and \(\mathcal{C}_{\delta_{0}}(dz_{\Gamma})(\lambda)\to\mathcal{C}(dz_{\Gamma})(\lambda)\) as \(\delta_{0}\to 0\), we have
\[\lim_{\delta\to 0,\ w\in U_{\Gamma}\cap\mathbb{D}(\lambda,\delta)}\mathcal{C}( dz_{\Gamma})(w)=\mathcal{C}(dz_{\Gamma})(\lambda)+\pi I.\]
This completes the proof of (b). The proof of (c) is the same.
From Lemma 3.4, we see that the Cauchy transform of \(\mathcal{H}^{1}|_{\Gamma}\) is bounded on \(L^{2}(\mathcal{H}^{1}|_{\Gamma})\). So we get \(\mathcal{C}(dz_{\Gamma})\in L^{2}(\mathcal{H}^{1}|_{\Gamma})\) as \(\Gamma\subset\{z:\ |Re(z)|<M\}\). If \(\lambda\) is a Lebesgue point for \(\mathcal{C}(dz_{\Gamma})\), then we have
\[\frac{\mathcal{H}^{1}|_{\Gamma}\left(\{|\mathcal{C}(dz_{\Gamma}) -\mathcal{C}(dz_{\Gamma})(\lambda)|>\epsilon\}\cap\mathbb{D}(\lambda,\delta) \right)}{\delta}\] \[\lesssim \frac{1}{\epsilon\delta}\int_{\mathbb{D}(\lambda,\delta)\cap \Gamma}|\mathcal{C}(dz_{\Gamma})(z)-\mathcal{C}(dz_{\Gamma})(\lambda)|d \mathcal{H}^{1}\to 0\]
as \(\delta\to 0\), which proves (d) by (3.3).
We point out that if \(\Gamma\) is a rotated Lipschitz graph (with rotation angle \(\beta\) and Lipschitz function \(A\)) at \(\lambda_{0}\), then
\[v^{+}(\nu,\Gamma,\beta)(\lambda):=\mathcal{C}(\nu)(\lambda)+\frac{e^{-i\beta} b(\lambda)}{2L((\lambda-\lambda_{0})e^{-i\beta}+\lambda_{0})}. \tag{3.5}\]
Similarly,
\[v^{-}(\nu,\Gamma,\beta)(\lambda):=\mathcal{C}(\nu)(\lambda)-\frac{e^{-i\beta} b(\lambda)}{2L((\lambda-\lambda_{0})e^{-i\beta}+\lambda_{0})}. \tag{3.6}\]
Hence, \(v^{+}(\nu,\Gamma)(\lambda)=v^{+}(\nu,\Gamma,0)(\lambda)\) and \(v^{-}(\nu,\Gamma)(\lambda)=v^{-}(\nu,\Gamma,0)(\lambda)\).
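In the model case \(\Gamma=\mathbb{R}\) (so \(A=0\), \(\beta=0\), and \(L\equiv\frac{1}{2\pi I}\)), these formulas reduce to
\[v^{\pm}(\nu,\Gamma)(\lambda)=\mathcal{C}(\nu)(\lambda)\pm\pi Ib(\lambda),\]
so \(v^{+}(\nu,\Gamma)-v^{-}(\nu,\Gamma)=2\pi Ib,\) recovering the classical Plemelj jump relation across the real line.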
From now on, we fix \(\Gamma_{n}\) with rotation angle \(\beta_{n}\), \(\Gamma\), and \(h\) as in Lemma 3.1 for \(\mu\in M_{0}^{+}(K)\). We may assume that \(\mathcal{H}^{1}(\Gamma_{n}\cap\Gamma_{m})=0\) for \(n\neq m.\) Otherwise, we just consider the portion \(\Gamma_{n}\setminus\cup_{k=1}^{n-1}\Gamma_{k}.\) Define
\[\mathcal{Z}_{+}(\Gamma_{n})= \bigcap_{j=1}^{\infty}\left\{z\in\Gamma_{n}\cap\mathcal{ND}(\mu): \ v^{+}(g_{j}\mu,\Gamma_{n},\beta_{n})(z)=0\right\},\] \[\mathcal{Z}_{-}(\Gamma_{n})= \bigcap_{j=1}^{\infty}\left\{z\in\Gamma_{n}\cap\mathcal{ND}(\mu): \ v^{-}(g_{j}\mu,\Gamma_{n},\beta_{n})(z)=0\right\},\]
\[\mathcal{N}_{+}(\Gamma_{n})=\bigcup_{j=1}^{\infty}\left\{z\in\Gamma_{n}\cap \mathcal{ND}(\mu):\ v^{+}(g_{j}\mu,\Gamma_{n},\beta_{n})(z)\neq 0\right\},\]
and
\[\mathcal{N}_{-}(\Gamma_{n})=\bigcup_{j=1}^{\infty}\left\{z\in\Gamma_{n}\cap \mathcal{ND}(\mu):\ v^{-}(g_{j}\mu,\Gamma_{n},\beta_{n})(z)\neq 0\right\}.\]
**Proposition 3.6**.: _For \(n\geq 1,\) the following statements are true._
_(1) If \(g\perp R^{t}(K,\mu),\) then_
\[v^{+}(g\mu,\Gamma_{n},\beta_{n})(z)=0,\ \mathcal{H}^{1}|_{ \mathcal{Z}_{+}(\Gamma_{n})}-a.a.,\] \[v^{-}(g\mu,\Gamma_{n},\beta_{n})(z)=0,\ \mathcal{H}^{1}|_{ \mathcal{Z}_{-}(\Gamma_{n})}-a.a.\]
_Consequently, \(\mathcal{Z}_{+}(\Gamma_{n})\) and \(\mathcal{Z}_{-}(\Gamma_{n})\) are independent of the choices of \(\{g_{j}\}\) up to a set of zero analytic capacity._
_(2) \(\mathcal{N}_{+}(\Gamma_{n})\cap\mathcal{N}_{-}(\Gamma_{n})\approx(\Gamma_{n} \cap\mathcal{ND}(\mu))\setminus(\mathcal{Z}_{+}(\Gamma_{n})\cup\mathcal{Z}_{- }(\Gamma_{n})),\ \mathcal{H}^{1}|_{\Gamma_{n}}-a.a..\) Consequently, \(\mathcal{N}_{+}(\Gamma_{n})\cap\mathcal{N}_{-}(\Gamma_{n})\) is independent of the choices of \(\{g_{j}\}\) up to a set of zero analytic capacity._
Proof.: There exists a subsequence \(\{g_{j_{k}}\}\) such that \(\|g_{j_{k}}-g\|_{L^{s}(\mu)}\to 0\) and \(g_{j_{k}}(z)\to g(z),\ \mu-a.a..\) Applying Lemma 2.4 (2), we see that (1) and (2) follow.
We now define:
\[\mathcal{F}_{+}=\bigcup_{n=1}^{\infty}\mathcal{Z}_{+}(\Gamma_{n})\ \text{and}\ \mathcal{F}_{-}=\bigcup_{n=1}^{\infty}\mathcal{Z}_{-}(\Gamma_{n}) \tag{3.7}\]
and
\[\mathcal{R}_{1}=\bigcup_{n=1}^{\infty}\left(\mathcal{N}_{+}(\Gamma_{n})\cap \mathcal{N}_{-}(\Gamma_{n})\right). \tag{3.8}\]
Recall \(\mathcal{R}_{0}\) and \(\mathcal{F}_{0}\) are defined as in (3.1) and (2.5), respectively. We define:
\[\mathcal{F}=\mathcal{F}_{0}\cup\mathcal{F}_{+}\cup\mathcal{F}_{-}\ \text{and}\ \mathcal{R}=\mathcal{R}_{0}\cup\mathcal{R}_{1}. \tag{3.9}\]
We call \(\mathcal{F}\) and \(\mathcal{R}\) the non-removable boundary and removable set for \(R^{t}(K,\mu),\) respectively. As a simple application of Proposition 2.5 (2) and the fact that \(\mathfrak{m}(\mathcal{F}_{+}\cup\mathcal{F}_{-})=0,\) we have the following property.
\[\mathcal{C}(g\mu)(z)=0,\ \mathfrak{m}|_{\mathcal{F}}-a.a.\ \text{for}\ g\perp R^{t}(K,\mu). \tag{3.10}\]
The corollary below follows from Propositions 2.5 & 3.6.
**Corollary 3.7**.: _The sets \(\mathcal{F}_{0},\ \mathcal{F}_{+},\ \mathcal{F}_{-},\ \mathcal{F},\ \mathcal{R}_{0},\ \mathcal{R}_{1},\) and \(\mathcal{R}\) are independent of the choices of \(\{g_{j}\}\) up to a set of zero analytic capacity._
In the remainder of this section, we prove the following decomposition of \(\mathcal{N}\) and discuss a characterization of \(\mathcal{F}\) and \(\mathcal{R}\), which implies that \(\mathcal{F}\) and \(\mathcal{R}\) are independent of the choices of \(\{\Gamma_{n}\}\) up to a set of zero analytic capacity.
**Theorem 3.8**.: _Let \(\mathcal{R}_{0},\)\(\mathcal{R},\)\(\mathcal{F}_{0},\)\(\mathcal{F}_{+},\) and \(\mathcal{F}_{-}\) be defined as above. Then_
\[\mathcal{N}\approx\mathcal{R}\cup\mathcal{F}_{+}\cup\mathcal{F}_{-}\ \text{and}\ \mathcal{F}_{0}\cup\mathcal{R}_{0}\approx\mathcal{ZD}(\mu),\ \gamma-a.a.\]
Before proving Theorem 3.8, we need a couple of lemmas. For \(\nu\in M_{0}(\mathbb{C}),\) the maximal function of \(\nu\) is defined by
\[\mathcal{M}_{\nu}(z)=\sup_{\epsilon>0}\frac{|\nu|(\mathbb{D}(z,\epsilon))}{ \epsilon}. \tag{3.11}\]
Combining [19, Theorem 2.5], Proposition 2.2, and Theorem 2.1 (1), we see that there exists an absolute constant \(C_{T}\) (we use the same constant as in Theorem 2.1) such that for \(a>0,\)
\[\gamma\{\lambda:\ \mathcal{M}_{\nu}(\lambda)>a\}\leq\frac{C_{T}}{a}\|\nu\|. \tag{3.12}\]
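The estimate (3.12) is sharp up to the constant: for \(\nu=\delta_{0}\) one has \(\mathcal{M}_{\nu}(z)=1/|z|,\) so \(\{\mathcal{M}_{\nu}>a\}=\mathbb{D}(0,1/a)\) and
\[\gamma(\{\mathcal{M}_{\nu}>a\})=\frac{1}{a}=\frac{\|\nu\|}{a}.\]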
**Lemma 3.9**.: _Let \(\{\nu_{j}\}\subset M_{0}(\mathbb{C}).\) Then for \(\epsilon>0\), there exists a Borel subset \(F\) such that \(\gamma(F^{c})<\epsilon\) and \(\mathcal{C}_{*}(\nu_{j})(z),\ \mathcal{M}_{\nu_{j}}(z)\leq M_{j}<\infty\) for \(z\in F\)._
Proof.: Let \(A_{j}=\{\mathcal{C}_{*}(\nu_{j})(z)\leq M_{j}\}\) and \(B_{j}=\{\mathcal{M}_{\nu_{j}}(z)\leq M_{j}\}.\) By Theorem 2.1 (3) and (3.12), we can select \(M_{j}>0\) so that \(\gamma(A_{j}^{c})<\frac{\epsilon}{2^{j+2}C_{T}}\) and \(\gamma(B_{j}^{c})<\frac{\epsilon}{2^{j+2}C_{T}}.\) Set \(F=\cap_{j=1}^{\infty}(A_{j}\cap B_{j})\). Then applying Theorem 2.1 (2), we get
\[\gamma(F^{c})\leq C_{T}\sum_{j=1}^{\infty}(\gamma(A_{j}^{c})+\gamma(B_{j}^{c}) )<\epsilon.\]
The following lemma is a simple application of Theorem 2.1 and Proposition 2.2 (also see [20, Corollary 2.4]).
**Lemma 3.10**.: _Let \(\eta\in M_{0}^{+}(\mathbb{C})\) such that \(\|\mathcal{C}(\eta)\|\leq 1\). If \(F\) is a compact subset and \(\gamma(F)=0\), then \(\eta(F)=0\)._
**Lemma 3.11**.: _Suppose that \(\{u_{n}\}\subset L^{1}(\mu)\) and \(E\) is a compact subset with \(\gamma(E)>0\). Then there exists \(\eta\in M_{0}^{+}(E)\) satisfying:_
_(1) \(\eta\) is of \(1\)-linear growth, \(\|\mathcal{C}_{\epsilon}(\eta)\|_{L^{\infty}(\mathbb{C})}\leq 1\) for all \(\epsilon>0,\) and \(\gamma(E)\lesssim\|\eta\|\);_
_(2) \(\mathcal{C}_{*}(u_{n}\mu),\ \mathcal{M}_{u_{n}\mu}\in L^{\infty}(\eta)\);_
_(3) there exists a subsequence \(f_{k}(z)=\mathcal{C}_{\epsilon_{k}}(\eta)(z)\) such that \(f_{k}\) converges to \(f\in L^{\infty}(\mu)\) in weak-star topology and \(f_{k}(\lambda)\) converges to \(f(\lambda)=\mathcal{C}(\eta)(\lambda)\) uniformly on any compact subset of \(\mathbb{C}\setminus\text{spt}\eta\) as \(\epsilon_{k}\to 0\). Moreover for \(n\geq 1,\)_
\[\int f(z)u_{n}(z)d\mu(z)=-\int\mathcal{C}(u_{n}\mu)(z)d\eta(z), \tag{3.13}\]
_and for \(\lambda\in\mathbb{C}\setminus\text{spt}\eta\),_
\[\int\frac{f(z)-f(\lambda)}{z-\lambda}u_{n}(z)d\mu(z)=-\int\mathcal{C}(u_{n} \mu)(z)\frac{d\eta(z)}{z-\lambda}. \tag{3.14}\]
Proof.: From Lemma 3.9, we find \(E_{1}\subset E\) such that \(\gamma(E\setminus E_{1})<\frac{\gamma(E)}{2\mathcal{C}_{T}}\) and \(\mathcal{C}_{*}(u_{n}\mu)(z),\ \mathcal{M}_{u_{n}\mu}(z)\leq M_{n}<\infty\) for \(z\in E_{1}\). Using Theorem 2.1 (2), we get \(\gamma(E_{1})\geq\frac{1}{C_{T}}\gamma(E)-\gamma(E\setminus E_{1})\geq\frac{1 }{2C_{T}}\gamma(E).\) Using Theorem 2.1 (1) and Proposition 2.2 (1), we infer that there is \(\eta\in M_{0}^{+}(E_{1})\) satisfying (1). So (2) holds. Clearly,
\[\int\mathcal{C}_{\epsilon}(\eta)(z)u_{n}d\mu=-\int\mathcal{C}_{\epsilon}(u_{n} \mu)(z)d\eta \tag{3.15}\]
for \(n\geq 1\). We can choose a sequence \(f_{k}(\lambda)=\mathcal{C}_{\epsilon_{k}}(\eta)(\lambda)\) that converges to \(f\) in \(L^{\infty}(\mu)\) weak-star topology and such that \(f_{k}(\lambda)\) tends to \(f(\lambda)\) uniformly on any compact subset of \(\mathbb{C}\setminus\mathrm{spt}\eta\). On the other hand, \(|\mathcal{C}_{\epsilon_{k}}(u_{n}\mu)(z)|\leq M_{n},\ \eta-a.a.\) and, by Corollary 2.3 and Lemma 3.10, \(\lim_{k\to\infty}\mathcal{C}_{\epsilon_{k}}(u_{n}\mu)(z)=\mathcal{C}(u_{n}\mu)(z),\ \eta-a.a.\). We apply the Lebesgue dominated convergence theorem to the right-hand side of (3.15) and get (3.13) for \(n\geq 1\). For (3.14), let \(\lambda\notin\mathrm{spt}\eta\) and set \(d=\mathrm{dist}(\lambda,\mathrm{spt}\eta)\). For \(z\in\mathbb{D}(\lambda,\frac{d}{2})\) and \(\epsilon<\frac{d}{2}\), we have
\[\left|\frac{\mathcal{C}_{\epsilon}(\eta)(z)-f(\lambda)}{z-\lambda}\right| \leq\left|\mathcal{C}_{\epsilon}\left(\frac{\eta(s)}{s-\lambda}\right)(z) \right|\leq\frac{2}{d^{2}}\|\eta\|.\]
For \(z\notin\mathbb{D}(\lambda,\frac{d}{2})\) and \(\epsilon<\frac{d}{2}\),
\[\left|\frac{\mathcal{C}_{\epsilon}(\eta)(z)-f(\lambda)}{z-\lambda}\right|\leq \frac{4}{d}.\]
Thus, we repeat the above argument with \(\eta\) replaced by the measure \(\frac{d\eta(s)}{s-\lambda}\). In fact, we choose a subsequence \(\{\mathcal{C}_{\epsilon_{k_{j}}}(\eta)\}\) such that \(e_{k_{j}}(z)=\frac{\mathcal{C}_{\epsilon_{k_{j}}}(\eta)(z)-f(\lambda)}{z-\lambda}\) converges to \(e(z)\) in weak-star topology. Clearly, \((z-\lambda)e_{k_{j}}(z)+f(\lambda)=\mathcal{C}_{\epsilon_{k_{j}}}(\eta)(z)\) converges to \((z-\lambda)e(z)+f(\lambda)=f(z)\) in weak-star topology. On the other hand, (3.15) becomes
\[\int\mathcal{C}_{\epsilon_{k_{j}}}(\frac{\eta(s)}{s-\lambda})(z)u_{n}d\mu=- \int\mathcal{C}_{\epsilon_{k_{j}}}(u_{n}\mu)(z)\frac{d\eta(z)}{z-\lambda} \tag{3.16}\]
and for \(\epsilon_{k_{j}}<\frac{d}{2}\), we have
\[\left|\mathcal{C}_{\epsilon_{k_{j}}}(\frac{\eta(s)}{s-\lambda})(z)-e_{k_{j}}(z )\right|\ \leq\begin{cases}0,&z\in\mathbb{D}(\lambda,\frac{d}{2}),\\ \frac{2}{d^{2}}\eta(\mathbb{D}(z,\epsilon_{k_{j}})),&z\notin\mathbb{D}( \lambda,\frac{d}{2}),\end{cases} \tag{3.17}\]
which goes to zero as \(\epsilon_{k_{j}}\to 0\). Combining (3.16), (3.17), and the Lebesgue dominated convergence theorem, we obtain (3.14). This proves (3).
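We remark that the duality (3.15) used above is simply Fubini's theorem combined with the antisymmetry of the truncated Cauchy kernel. Assuming the convention \(\mathcal{C}_{\epsilon}\nu(\lambda)=\int_{|w-\lambda|>\epsilon}\frac{d\nu(w)}{w-\lambda},\) we have

\[\int\mathcal{C}_{\epsilon}(\eta)(z)u_{n}(z)d\mu(z)=\iint_{|w-z|>\epsilon}\frac{u_{n}(z)}{w-z}\,d\eta(w)d\mu(z)=-\int\mathcal{C}_{\epsilon}(u_{n}\mu)(w)d\eta(w),\]

since \(\frac{1}{w-z}=-\frac{1}{z-w}\) and the region \(\{|w-z|>\epsilon\}\) is symmetric in \(z\) and \(w\).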
For \(n\geq 1\), we define
\[\mathcal{N}_{0}(\Gamma_{n})=\bigcup_{j=1}^{\infty}\{z\in\Gamma_{n}\cap \mathcal{ND}(\mu):\ \mathcal{C}(g_{j}\mu)(z)\neq 0\}.\]
**Lemma 3.12**.: _For \(n\geq 1,\) we have_
\[\mathcal{N}_{+}(\Gamma_{n})\cup\mathcal{N}_{-}(\Gamma_{n})\approx\mathcal{N} _{0}(\Gamma_{n}),\ \mathcal{H}^{1}|_{\Gamma_{n}}-a.a..\]
Proof.: Without loss of generality, we assume \(n=1\) and \(\beta_{1}=0.\) Suppose there exists a compact subset \(E\subset\mathcal{N}_{+}(\Gamma_{1})\cup\mathcal{N}_{-}(\Gamma_{1})\setminus \mathcal{N}_{0}(\Gamma_{1})\) such that \(\mathcal{H}^{1}(E)>0.\) Then \(\mathcal{C}(g_{j}\mu)(z)=0,\ \mathcal{H}^{1}|_{E}-a.a.\) and
\[v^{+}(g_{j}\mu,\Gamma_{1})(z)=\frac{(g_{j}h)(z)}{2L(z)},\ v^{-}(g_{j}\mu,\Gamma _{1})(z)=-\frac{(g_{j}h)(z)}{2L(z)},\ \mathcal{H}^{1}|_{E}-a.a. \tag{3.18}\]
for all \(j\geq 1.\) Let \(\eta\) and \(f\) be as in Lemma 3.11 for \(\{g_{j}\}\) and \(E\), noting that \(\gamma(E)>0\) by (3.3). We may assume \(\eta=w\mathcal{H}^{1}|_{E},\) where \(0\leq w(z)\leq 1\) on \(E\), since \(\eta\) is of
1-linear growth. From Lemma 3.11 (3), we see \(f\in R^{t,\infty}(K,\mu)\) as \(\{g_{j}\}\) is dense in \(R^{t}(K,\mu)^{\perp}\) and
\[\mathcal{C}\eta(\lambda)\mathcal{C}(g_{j}\mu)(\lambda)=\mathcal{C}(fg_{j}\mu)( \lambda),\ \gamma|_{E^{c}}-a.a.,\ \ \text{for}\ j\geq 1. \tag{3.19}\]
From Proposition 2.5 (2), since \(fg_{j}\perp R^{t}(K,\mu)\), we get \(\mathcal{C}(fg_{j}\mu)(z)=0,\ \eta-a.a.,\)
\[v^{+}(fg_{j}\mu,\Gamma_{1})(z)=\frac{(fg_{j}h)(z)}{2L(z)},\ v^{-}(fg_{j}\mu, \Gamma_{1})(z)=-\frac{(fg_{j}h)(z)}{2L(z)},\ \eta-a.a. \tag{3.20}\]
for \(j\geq 1.\) Applying Theorem 3.5 to \(\mathcal{C}\eta(\lambda)\) for (3.19), we get
\[v^{+}(w\mathcal{H}^{1},\Gamma_{1})(z)v^{+}(g_{j}\mu,\Gamma_{1}) (z)= v^{+}(fg_{j}\mu,\Gamma_{1})(z),\ \eta-a.a.,\] \[v^{-}(w\mathcal{H}^{1},\Gamma_{1})(z)v^{-}(g_{j}\mu,\Gamma_{1}) (z)= v^{-}(fg_{j}\mu,\Gamma_{1})(z),\ \eta-a.a.\]
Combining with (3.18) and (3.20), we have \(g_{j}(z)h(z)=0,\ \eta-a.a.\) for \(j\geq 1,\) which implies \(v^{+}(g_{j}\mu,\Gamma_{1})(z)=v^{-}(g_{j}\mu,\Gamma_{1})(z)=0,\ \eta-a.a.\) for \(j\geq 1.\) This is a contradiction.
On the other hand, if \(\mathcal{C}(g_{j}\mu)(\lambda)\neq 0,\) then \(v^{+}(g_{j}\mu,\Gamma_{1})(\lambda)\neq 0\) or \(v^{-}(g_{j}\mu,\Gamma_{1})(\lambda)\neq 0.\) Therefore, \(\mathcal{N}_{0}(\Gamma_{n})\subset\mathcal{N}_{+}(\Gamma_{n})\cup\mathcal{N} _{-}(\Gamma_{n}),\ \mathcal{H}^{1}|_{\Gamma_{n}}-a.a.\). The lemma is proved.
Proof.: (Theorem 3.8) If \(\lambda\in\mathcal{Z}_{+}(\Gamma_{n})\cap\mathcal{Z}_{-}(\Gamma_{n})\), then \(g_{j}(\lambda)h(\lambda)=0\) for \(j\geq 1.\) Hence, \(\mathcal{H}^{1}(\mathcal{Z}_{+}(\Gamma_{n})\cap\mathcal{Z}_{-}(\Gamma_{n}))=0\) since \(S_{\mu}\) is pure. From Lemma 3.12, we have
\[\mathcal{N}_{0}(\Gamma_{n}) \approx\mathcal{N}_{+}(\Gamma_{n})\cup\mathcal{N}_{-}(\Gamma_{n} )\approx\Gamma_{n}\cap\mathcal{ND}(\mu)\] \[\approx(\mathcal{N}_{+}(\Gamma_{n})\cap\mathcal{N}_{-}(\Gamma_{n }))\cup\mathcal{Z}_{+}(\Gamma_{n})\cup\mathcal{Z}_{-}(\Gamma_{n}),\ \mathcal{H}^{1}|_{\Gamma_{n}}-a.a.\]
Therefore, \(\mathcal{ND}(\mu)\subset\mathcal{N}.\) The theorem follows since \(\mathcal{N}=\mathcal{R}_{0}\cup\bigcup_{n=1}^{\infty}\mathcal{N}_{0}(\Gamma_{n})\).
Define
\[\mathcal{E}_{N}=\left\{\lambda:\ \lim_{\epsilon\to 0}\mathcal{C}_{\epsilon}(g_{j}\mu)(\lambda)\ \text{exists for}\ 1\leq j\leq N\ \text{and}\ \max_{1\leq j\leq N}|\mathcal{C}(g_{j}\mu)(\lambda)|\leq\frac{1}{N}\right\}.\]
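Note that \(\mathcal{E}_{N+1}\subset\mathcal{E}_{N}\) and that

\[\bigcap_{N=1}^{\infty}\mathcal{E}_{N}=\left\{\lambda:\ \mathcal{C}(g_{j}\mu)(\lambda)=\lim_{\epsilon\to 0}\mathcal{C}_{\epsilon}(g_{j}\mu)(\lambda)\ \text{exists and equals}\ 0\ \text{for all}\ j\geq 1\right\},\]

since \(\lambda\in\mathcal{E}_{N}\) for every \(N\geq j\) forces \(|\mathcal{C}(g_{j}\mu)(\lambda)|\leq\frac{1}{N}\) for all such \(N\). Heuristically, the sets \(\mathcal{E}_{N}\) shrink toward the set where all the dual Cauchy transforms vanish.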
**Theorem 3.13**.: _There is a subset \(\mathcal{Q}\subset\mathbb{C}\) with \(\gamma(\mathcal{Q})=0\) such that if \(\lambda\in\mathbb{C}\setminus\mathcal{Q}\), then \(\lambda\in\mathcal{F}\) if and only if_
\[\overline{\lim_{\delta\to 0}}\frac{\gamma(\mathbb{D}(\lambda,\delta)\cap \mathcal{E}_{N})}{\delta}>0 \tag{3.21}\]
_for all \(N\geq 1\). Consequently, \(\mathcal{F}\) and \(\mathcal{R}\) do not depend on the choices of \(\{\Gamma_{n}\}\) up to a set of zero analytic capacity._
Proof.: We first prove that there exists \(\mathcal{Q}_{1}\) with \(\gamma(\mathcal{Q}_{1})=0\) such that if \(\lambda\in\mathcal{Z}\mathcal{D}(\mu)\setminus\mathcal{Q}_{1}\), then \(\lambda\in\mathcal{F}\) if and only if \(\lambda\) satisfies (3.21).
From Theorem 3.8, \(\mathcal{ZD}(\mu)\approx\mathcal{R}_{0}\cup\mathcal{F}_{0},\ \gamma-a.a.\). There exists \(\mathcal{Q}_{1}\) with \(\gamma(\mathcal{Q}_{1})=0\) such that for \(\lambda\in\mathcal{ZD}(\mu)\setminus\mathcal{Q}_{1}\), \(\Theta_{g_{j}\mu}(\lambda)=0\), \(\mathcal{C}(g_{j}\mu)(\lambda)=\lim_{\epsilon\to 0}\mathcal{C}_{\epsilon}(g_{j}\mu)(\lambda)\) exists, and \(\mathcal{C}(g_{j}\mu)(z)\) is \(\gamma\)-continuous at \(\lambda\) (Lemma 3.3) for all \(j\geq 1.\)
If \(\lambda\in\mathcal{R}_{0}\), then there exists \(j_{0}\) such that \(\mathcal{C}(g_{j_{0}}\mu)(\lambda)\neq 0\). Set \(\epsilon_{0}=\frac{1}{2}|\mathcal{C}(g_{j_{0}}\mu)(\lambda)|\); then, for \(N>N_{0}:=\max(j_{0},\frac{1}{\epsilon_{0}}+1)\),
\[\mathcal{E}_{N}\subset\{\left|\mathcal{C}(g_{j_{0}}\mu)(z)-\mathcal{C}(g_{j_{0} }\mu)(\lambda)\right|>\epsilon_{0}\}.\]
Therefore, by Lemma 3.3, for \(N>N_{0}\),
\[\lim_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda,\delta)\cap\mathcal{E}_{N})}{ \delta}=0.\]
Thus, \(\lambda\) does not satisfy (3.21).
Now for \(\lambda\in\mathcal{F}_{0}\), \(\mathcal{C}(g_{j}\mu)(\lambda)=0\) for all \(j\geq 1\). Using Lemma 3.3 and Theorem 2.1 (2), we get
\[\lim_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda,\delta)\setminus\mathcal{E}_ {N})}{\delta}\leq C_{T}\sum_{j=1}^{N}\lim_{\delta\to 0}\frac{\gamma(\mathbb{D}( \lambda,\delta)\cap\{|\mathcal{C}(g_{j}\mu)-\mathcal{C}(g_{j}\mu)(\lambda)|\geq \frac{1}{N}\})}{\delta}=0.\]
Hence, \(\lambda\) satisfies (3.21).
We now prove that there exists \(\mathcal{Q}_{2}\) with \(\gamma(\mathcal{Q}_{2})=0\) such that if \(\lambda\in\mathcal{ND}(\mu)\setminus\mathcal{Q}_{2}\), then \(\lambda\in\mathcal{F}\) if and only if \(\lambda\) satisfies (3.21).
From (3.7) and (3.8), we get \(\mathcal{ND}(\mu)\approx\mathcal{R}_{1}\cup(\mathcal{F}_{+}\cup\mathcal{F}_{- }),\;\gamma-a.a.\). There exists \(\mathcal{Q}_{2}\) with \(\gamma(\mathcal{Q}_{2})=0\) such that for \(\lambda\in\mathcal{ND}(\mu)\cap\Gamma_{n}\setminus\mathcal{Q}_{2}\), \(v^{0}(g_{j}\mu,\Gamma_{n})(\lambda)=\mathcal{C}(g_{j}\mu)(\lambda)=\lim_{ \epsilon\to 0}\mathcal{C}_{\epsilon}(g_{j}\mu)(\lambda)\), \(v^{+}(g_{j}\mu,\Gamma_{n},\beta_{n})(\lambda)\), and \(v^{-}(g_{j}\mu,\Gamma_{n},\beta_{n})(\lambda)\) exist for all \(j,n\geq 1\) and Theorem 3.5 (b), (c), and (d) hold. Fix \(n=1\) and without loss of generality, we assume \(\beta_{1}=0\).
If \(\lambda\in\mathcal{R}_{1}\), then there exist integers \(j_{0}\), \(j_{1}\), and \(j_{2}\) such that \(v^{0}(g_{j_{0}}\mu,\Gamma_{1})(\lambda)\neq 0\) by Lemma 3.12, \(v^{+}(g_{j_{1}}\mu,\Gamma_{1})(\lambda)\neq 0\), and \(v^{-}(g_{j_{2}}\mu,\Gamma_{1})(\lambda)\neq 0\). Set
\[\epsilon_{0}=\frac{1}{2}\min(|v^{0}(g_{j_{0}}\mu,\Gamma_{1})(\lambda)|,|v^{+}(g_{j_{1}}\mu,\Gamma_{1})(\lambda)|,|v^{-}(g_{j_{2}}\mu,\Gamma_{1})(\lambda)|),\]
then for \(N>N_{0}:=\max(j_{0},j_{1},j_{2},\frac{1}{\epsilon_{0}}+1)\),
\[\Gamma_{1}\cap\mathcal{E}_{N}\subset D:=\Gamma_{1}\cap\{|\mathcal{C}(g_{j_{0} }\mu)(z)-v^{0}(g_{j_{0}}\mu,\Gamma_{1})(\lambda)|\geq\epsilon_{0}\},\]
\[U_{\Gamma_{1}}\cap\mathcal{E}_{N}\subset E:=U_{\Gamma_{1}}\cap\{|\mathcal{C}(g_{j_{1} }\mu)(z)-v^{+}(g_{j_{1}}\mu,\Gamma_{1})(\lambda)|\geq\epsilon_{0}\},\text{ and}\]
\[L_{\Gamma_{1}}\cap\mathcal{E}_{N}\subset F:=L_{\Gamma_{1}}\cap\{|\mathcal{C}(g_ {j_{2}}\mu)(z)-v^{-}(g_{j_{2}}\mu,\Gamma_{1})(\lambda)|\geq\epsilon_{0}\}.\]
Therefore, using Theorem 2.1 (2) and Theorem 3.5, we get for \(N>N_{0}\),
\[\lim_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda,\delta)\cap \mathcal{E}_{N})}{\delta}\] \[\leq C_{T}\left(\lim_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda, \delta)\cap D)}{\delta}+\lim_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda, \delta)\cap E)}{\delta}+\lim_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda, \delta)\cap F)}{\delta}\right)\] \[= 0.\]
Hence, \(\lambda\) does not satisfy (3.21).
For \(\lambda\in(\mathcal{F}_{+}\cup\mathcal{F}_{-})\cap\Gamma_{1}\), we may assume that \(\lambda\in\mathcal{Z}_{+}(\Gamma_{1})\subset\Gamma_{1}\). Using Theorem 2.1 (2) and Theorem 3.5 (note that \(v^{+}(g_{j}\mu,\Gamma_{1})(\lambda)=0\) for all \(j\)), we get
\[\lim_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda,\delta)\cap U_{ \Gamma_{1}}\setminus\mathcal{E}_{N})}{\delta}\] \[\leq C_{T}\sum_{j=1}^{N}\lim_{\delta\to 0}\frac{\gamma(\mathbb{D}( \lambda,\delta)\cap U_{\Gamma_{1}}\cap\{|\mathcal{C}(g_{j}\mu)(z)-v^{+}(g_{j} \mu,\Gamma_{1})(\lambda)|\geq\frac{1}{N}\})}{\delta}\] \[= 0.\]
This implies
\[\varlimsup_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda,\delta)\cap\mathcal{E}_{N })}{\delta}\geq\varlimsup_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda,\delta) \cap U_{\Gamma_{1}}\cap\mathcal{E}_{N})}{\delta}>0.\]
Hence, \(\lambda\) satisfies (3.21).
Finally, (3.21) does not depend on the choices of \(\{\Gamma_{n}\}\); therefore, \(\mathcal{F}\) and \(\mathcal{R}\) are independent of the choices of \(\{\Gamma_{n}\}\) up to a set of zero analytic capacity.
**Example 3.14**.: Let \(K=\overline{\mathbb{D}}\setminus\cup_{n=1}^{\infty}\mathbb{D}(\lambda_{n},\delta_{n})\), where \(\mathbb{D}=\mathbb{D}(0,1)\), \(\mathbb{D}(\lambda_{n},\delta_{n})\subset\mathbb{D}\), \(\overline{\mathbb{D}(\lambda_{n},\delta_{n})}\cap\overline{\mathbb{D}(\lambda_{m},\delta_{m})}=\emptyset\) for \(n\neq m\), and \(\sum\delta_{n}<\infty\). Set \(\partial_{e}K=\partial\mathbb{D}\cup\bigcup_{n=1}^{\infty}\partial\mathbb{D}(\lambda_{n},\delta_{n})\) (the exterior boundary of \(K\)). If \(\mu\) is the sum of the arclength measures of the unit circle and all small circles \(\partial\mathbb{D}(\lambda_{n},\delta_{n})\), then \(S_{\mu}\) on \(R^{t}(K,\mu)\) is pure, \(\mathcal{F}_{0}=\mathbb{C}\setminus K,\ \mathcal{F}_{-}=\partial_{e}K,\ \mathcal{R}_{0}=K\setminus\partial_{e}K,\ \text{and}\ \mathcal{F}_{+}=\mathcal{R}_{1}=\emptyset\) (if \(K\) has no interior, then \(K\) is a Swiss cheese set).
Proof.: Let \(dz\) be the usual complex measure on \(\partial\mathbb{D}\) (counterclockwise) and on each \(\partial\mathbb{D}(\lambda_{n},\delta_{n})\) (clockwise). Then \(\int r(z)dz=0\) for \(r\in\text{Rat}(K)\). If \(g=\frac{dz}{d\mu}\), then \(g\perp R^{t}(K,\mu)\). \(S_{\mu}\) is pure since \(|g|>0,\ \mu-a.a.\)
Let \(g_{n}=g\chi_{\partial\mathbb{D}\cup\bigcup_{k=1}^{n}\partial\mathbb{D}(\lambda_{k},\delta_{k})}\), where \(\chi_{A}\) denotes the characteristic function of a subset \(A.\) Then \(\|g_{n}-g\|_{L^{1}(\mu)}\to 0\) and \(\frac{1}{2\pi I}\mathcal{C}(g_{n}\mu)(\lambda)=1\) for \(\lambda\in\mathbb{D}\setminus\cup_{k=1}^{n}\overline{\mathbb{D}(\lambda_{k},\delta_{k})}\). Using Lemma 2.4, we have \(\mathcal{C}(g_{n}\mu)(\lambda)\to\mathcal{C}(g\mu)(\lambda),\ \gamma-a.a.\). Therefore, the principal value \(\mathcal{C}(g\mu)(\lambda)=2\pi I,\ \lambda\in K\setminus\partial_{e}K,\ \gamma-a.a.\). This implies that \(K\setminus\partial_{e}K\subset\mathcal{R}_{0}\). It is clear that \(\partial_{e}K\subset\mathcal{F}_{-}\) since \(\mathcal{C}(g\mu)(z)=0\) for \(g\perp R^{t}(K,\mu)\) and \(z\in\mathbb{C}\setminus K\). This completes the proof.
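The computation behind \(\frac{1}{2\pi I}\mathcal{C}(g_{n}\mu)(\lambda)=1\) is the Cauchy integral formula. Assuming the convention \(\mathcal{C}\nu(\lambda)=\int\frac{d\nu(z)}{z-\lambda},\) for \(\lambda\in\mathbb{D}\setminus\cup_{k=1}^{n}\overline{\mathbb{D}(\lambda_{k},\delta_{k})},\)

\[\mathcal{C}(g_{n}\mu)(\lambda)=\int_{\partial\mathbb{D}}\frac{dz}{z-\lambda}+\sum_{k=1}^{n}\int_{\partial\mathbb{D}(\lambda_{k},\delta_{k})}\frac{dz}{z-\lambda}=2\pi I+0,\]

since \(\lambda\) lies inside the unit circle and outside each small circle (for \(\lambda\) outside a circle, the integral vanishes regardless of the orientation).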
## 4. **Full Analytic Capacity Density for \(\mathcal{R}\)**
The aim of this section is to prove the following full analytic capacity density property for \(\mathcal{R}\), which is important for characterizing \(H^{\infty}(\mathcal{R})\) in the next section.
**Theorem 4.1**.: _There is \(\mathcal{Q}\subset\mathbb{C}\) with \(\gamma(\mathcal{Q})=0\) such that for \(\lambda\in\mathcal{R}\setminus\mathcal{Q},\) we have_
\[\lim_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda,\delta)\cap\mathcal{F})}{ \delta}=\lim_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda,\delta)\setminus \mathcal{R})}{\delta}=0. \tag{4.1}\]
It is straightforward to prove Theorem 4.1 if \(\mathcal{ND}(\mu)\approx\emptyset,\ \gamma-a.a..\) Let us provide a proof for this simple case below.
Proof.: (Theorem 4.1 assuming \(\mathcal{ND}(\mu)\approx\emptyset,\ \gamma-a.a.\)) In this case, \(\mathcal{F}\approx\mathcal{F}_{0},\ \gamma-a.a.\) and \(\mathcal{R}\approx\mathcal{R}_{0},\ \gamma-a.a..\) By Corollary 2.3, there exists \(\mathcal{Q}\subset\mathbb{C}\) with \(\gamma(\mathcal{Q})=0\) such that for \(\lambda\in\mathcal{R}\setminus\mathcal{Q}\) and \(j\geq 1\), \(\lim_{\epsilon\to 0}\mathcal{C}_{\epsilon}(g_{j}\mu)(\lambda)=\mathcal{C}(g_{j}\mu)(\lambda)\) exists and \(\Theta_{g_{j}\mu}(\lambda)=0.\) Then, by Lemma 3.3, \(\mathcal{C}(g_{j}\mu)(z)\) is \(\gamma\)-continuous at \(\lambda.\) For \(\lambda\in\mathcal{R}\setminus\mathcal{Q},\) there exists \(j_{0}\) such that \(\mathcal{C}(g_{j_{0}}\mu)(\lambda)\neq 0.\) Now the proof follows from the inclusion below
\[\mathcal{F}\subset\left\{z:\ |\mathcal{C}(g_{j_{0}}\mu)(z)-\mathcal{C}(g_{j_{0}}\mu) (\lambda)|>\frac{|\mathcal{C}(g_{j_{0}}\mu)(\lambda)|}{2}\right\}. \tag{4.2}\]
In the case when \(\gamma(\mathcal{ND}(\mu))>0,\) (4.2) may not hold, as \(|\mathcal{C}(g_{j}\mu)|\) may not be small on \(E_{1}\subset\mathcal{F}_{+}\cup\mathcal{F}_{-}\) with \(\gamma(E_{1})>0.\) However, we will show in the next lemmas that there exist \(\gamma\)-comparable subsets \(E_{N}\) near \(E_{1}\) such that \(|\mathcal{C}(g_{j}\mu)|\) is small on \(E_{N}.\)
**Lemma 4.2**.: _Let \(g\perp R^{t}(K,\mu)\) and let \(E\subset\mathcal{F}_{+}\) (or \(E\subset\mathcal{F}_{-}\)) be a compact subset with \(\gamma(E)>0\). Then there exists a sequence of compact subsets \(\{E_{N}\}_{N=1}^{\infty}\) such that \(|\mathcal{C}(g\mu)(z)|\leq N^{-1},\ z\in E_{N},\)\(\sup_{x\in E_{N}}\text{dist}(x,E)<\epsilon\) for a given \(\epsilon>0,\) and \(\gamma(E_{N})\gtrsim\gamma(E).\)_
Proof.: From Theorem 2.1 (1) and Proposition 2.2 (2), we find \(\eta\in M_{0}^{+}(E)\) with \(1\)-linear growth such that \(N_{2}(\eta)\leq 1\) and \(\gamma(E)\lesssim\|\eta\|\). Then
\[\bigcup_{n=1}^{\infty}\Gamma_{n}=\bigcup_{k=1}^{36}\bigcup_{(k-1)\frac{\pi}{18 }\leq\beta_{n}<k\frac{\pi}{18}}\Gamma_{n}.\]
Hence,
\[\eta(E)\leq\sum_{k=1}^{36}\lim_{M\to\infty}\eta\left(\bigcup_{(k-1)\frac{\pi}{ 18}\leq\beta_{n}<k\frac{\pi}{18},\ n\leq M}\Gamma_{n}\cap E\right).\]
We can find \(k\) and \(M\), without loss of generality assuming \(k=1,\) such that
\[G:=\bigcup_{0\leq\beta_{n}<\frac{\pi}{18},\ n\leq M}\Gamma_{n}\cap E\]
satisfying \(\eta(G)\geq\frac{1}{72}\eta(E)\gtrsim\frac{1}{72}\gamma(E).\) Hence, we assume \(E\) satisfies the following:
(A) The corresponding Lipschitz functions \(A_{n}\) of \(\Gamma_{n}\) satisfy \(\|A_{n}^{\prime}\|_{\infty}\leq\frac{1}{4}\) (see the proof of [19, Lemma 4.11]);
(B) The rotation angles \(\beta_{n}\) of \(\Gamma_{n}\) are between \(0\) and \(\frac{\pi}{18}\);
(C) \(E\subset\cup_{n=1}^{M}\Gamma_{n}\) and \(\eta(E)\gtrsim\frac{1}{144}\gamma(E)\); and
(D) We fix an open subset \(O\supset E\) such that \(\mathcal{H}^{1}(O_{\Gamma})\leq 2\mathcal{H}^{1}(E),\) where \(O_{\Gamma}=\cup_{n=1}^{M}\Gamma_{n}\cap O\).
Claim: There exists \(\epsilon_{1}>0\) (depending on \(N\)) such that for \(\epsilon_{2}<\epsilon_{1},\) there exists \(E_{1}\subset E\) with \(\eta(E_{1})\geq\frac{3}{4}\eta(E)\) satisfying (\(I=\sqrt{-1}\)):
\[|\mathcal{C}(g\mu)(z)|\leq N^{-1},\ z\in E_{N}:=E_{1}+\epsilon_{2}I. \tag{4.3}\]
Proof of the claim: Set \(U(\lambda,\delta)=U_{\Gamma}\cap\mathbb{D}(\lambda,\delta).\) By Proposition 3.6 and Theorem 3.5, for \(\lambda\in E\cap\Gamma_{l}\) (\(1\leq l\leq M\)), we have
\[\lim_{\delta\to 0}\frac{\gamma(U(\lambda,\delta)\cap\{|\mathcal{C}(g\mu)(z)|>N^{ -1}\})}{\delta}=0. \tag{4.4}\]
Let \(B_{n}\) be a subset consisting of \(\lambda\in E\) satisfying
\[\gamma(U(\lambda,\delta)\cap\{|\mathcal{C}(g\mu)(z)|>N^{-1}\})\leq\frac{\eta( E)\delta}{80C_{T}^{2}\mathcal{H}^{1}(E)}\ \text{for}\ \delta\leq\frac{1}{n}. \tag{4.5}\]
We require \(n>n_{0}\), where \(\frac{1}{n_{0}}\) is less than the distance between \(\mathbb{C}\setminus O\) and \(E\). The sets \(B_{n}\) satisfy \(B_{n}\subset B_{n+1}\), and by (4.4) we see that \(\eta(E\setminus\cup_{n=n_{0}}^{\infty}B_{n})=0.\)
Choose \(m>n_{0}\) large enough such that there is a compact subset \(F_{m}\subset B_{m}\) with \(\eta(F_{m})\geq\frac{7}{8}\eta(E).\) For \(\delta<\frac{1}{m},\) using the \(5r\)-covering theorem (see [19, Theorem 2.2]), there exists a sequence \(\{\lambda_{k}\}_{k=1}^{M_{\delta}}\subset F_{m}\) such that
\[F_{m}\subset\cup_{k=1}^{M_{\delta}}\mathbb{D}(\lambda_{k},\delta)\text{ and }\mathbb{D}(\lambda_{k_{1}},\frac{1}{5}\delta)\cap\mathbb{D}(\lambda_{k_{2}}, \frac{1}{5}\delta)=\emptyset\text{ for }k_{1}\neq k_{2}.\]
From (D), we see that \(M_{\delta}\delta\leq 5\mathcal{H}^{1}(E)\): the discs \(\mathbb{D}(\lambda_{k},\frac{1}{5}\delta)\) are disjoint, lie in \(O\), and are centered on the graphs, and a Lipschitz graph through the center of a disc of radius \(r\) has length at least \(2r\) in the disc, so \(M_{\delta}\frac{2}{5}\delta\leq\mathcal{H}^{1}(O_{\Gamma})\leq 2\mathcal{H}^{1}(E).\) Set \(U_{\delta}=\cup_{k=1}^{M_{\delta}}U(\lambda_{k},\delta)\) and \(V_{\delta}=U_{\delta}\cap\{|\mathcal{C}(g\mu)(z)|>N^{-1}\}.\) Applying (4.5) and Theorem 2.1 (2), we get
\[\gamma(V_{\delta})\leq C_{T}\sum_{k=1}^{M_{\delta}}\gamma(U(\lambda_{k}, \delta)\cap\{|\mathcal{C}(g\mu)(z)|>N^{-1}\})\leq\frac{\eta(E)}{16C_{T}}. \tag{4.6}\]
By (A) and (B), we see that there exists \(\epsilon_{1}>0\) such that if \(\epsilon_{2}<\epsilon_{1}\) and \(L_{m}=F_{m}+\epsilon_{2}I,\) then \(L_{m}\subset U_{\delta}.\) Let
\[E_{1}=L_{m}\cap\{|\mathcal{C}(g\mu)(z)|\leq N^{-1}\}-\epsilon_{2}I\text{ and }E_{0}=L_{m}\cap\{|\mathcal{C}(g\mu)(z)|>N^{-1}\}-\epsilon_{2}I.\]
Then
\[\eta(E_{1})=\eta\left((L_{m}-\epsilon_{2}I)\setminus E_{0}\right)\geq\eta(F_{ m})-\eta(E_{0}).\]
\(\eta|_{E_{0}}\) is of \(1\)-linear growth and \(N_{2}(\eta|_{E_{0}})\leq 1.\) From (4.6), Theorem 2.1 (1), and Proposition 2.2 (3), we get
\[\eta(E_{0})\leq 2C_{T}\gamma(E_{0})=2C_{T}\gamma\left(L_{m}\cap\{|\mathcal{C}(g \mu)(z)|>N^{-1}\}\right)\leq 2C_{T}\gamma(V_{\delta})\leq\frac{1}{8}\eta(E).\]
Combining above two inequalities, we get \(\eta(E_{1})\geq\frac{3}{4}\eta(E)\) and (4.3) holds. This completes the proof of the claim. The lemma now follows from Theorem 2.1 (1), Proposition 2.2 (3), and the claim.
**Corollary 4.3**.: _Let \(g\perp R^{t}(K,\mu)\) and let \(E\subset\mathcal{F}\) be a compact subset with \(\gamma(E)>0\). Then there exists a sequence of compact subsets \(\{E_{N}\}_{N=1}^{\infty}\) such that \(|\mathcal{C}(g\mu)(z)|\leq\frac{1}{N},\ z\in E_{N},\)\(\sup_{x\in E_{N}}\text{dist}(x,E)<\epsilon\) for a given \(\epsilon>0,\) and \(\gamma(E_{N})\gtrsim\gamma(E).\)_
Proof.: If \(E\subset\mathcal{F}_{0},\) then by Proposition 2.5 (2), we have \(\mathcal{C}(g\mu)(z)=0,\ \gamma|_{E}-a.a..\) So we can choose \(E_{N}=E\) in this case. In general, by Theorem 2.1 (2), we have
\[\gamma(E)\leq C_{T}(\gamma(E\cap\mathcal{F}_{0})+\gamma(E\cap\mathcal{F}_{+}) +\gamma(E\cap\mathcal{F}_{-})).\]
The proof now follows from Lemma 4.2.
The following lemma is a generalization of (4.2) when \(\gamma(\mathcal{ND}(\mu))>0.\)
**Lemma 4.4**.: _Let \(O\) be an open subset of \(\mathbb{C}\), \(g\perp R^{t}(K,\mu)\), and \(a\neq 0\). If for some \(0<\epsilon<\frac{|a|}{2}\) and \(\lambda\in\mathbb{C}\),_
\[\lim_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda,\delta)\cap O\cap\{|\mathcal{C} (g\mu)(z)-a|>\epsilon\})}{\delta}=0, \tag{4.7}\]
_then_
\[\lim_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda,\delta)\cap O\cap\mathcal{F}) }{\delta}=0.\]
Proof.: Suppose that there exists \(\epsilon_{0}>0\) and \(\delta_{n}\to 0\) such that
\[\gamma(\mathbb{D}(\lambda,\delta_{n})\cap O\cap\mathcal{F})\geq 2\epsilon_{0} \delta_{n}.\]
Let \(F_{n}\subset\mathbb{D}(\lambda,\delta_{n})\cap O\cap\mathcal{F}\) be a compact subset such that \(\gamma(F_{n})\geq\epsilon_{0}\delta_{n}\). Let \(d_{n}\) be the smallest distance between \((\mathbb{D}(\lambda,\delta_{n})\cap O)^{c}\) and \(F_{n}\). From Corollary 4.3, we find a compact subset \(E_{N}^{n}\subset\mathbb{D}(\lambda,\delta_{n})\cap O\) such that \(\gamma(E_{N}^{n})\gtrsim\gamma(F_{n})\), \(\sup_{x\in E_{N}^{n}}\mathrm{dist}(x,F_{n})<d_{n}\), and \(|\mathcal{C}(g\mu)(z)|<\frac{|a|}{2},\ z\in E_{N}^{n}.\) Hence,
\[E_{N}^{n}\subset\mathbb{D}(\lambda,\delta_{n})\cap O\cap\{|\mathcal{C}(g\mu) (z)-a|>\epsilon\}.\]
Therefore,
\[\frac{\gamma(\mathbb{D}(\lambda,\delta_{n})\cap O\cap\{|\mathcal{C}(g\mu)(z)- a|>\epsilon\})}{\delta_{n}}\geq\frac{\gamma(E_{N}^{n})}{\delta_{n}}\gtrsim \epsilon_{0},\]
which contradicts the assumption (4.7). The lemma is proved.
Proof.: (Theorem 4.1) For almost all \(\lambda\in\mathcal{R}_{0}\) with respect to \(\gamma\), there exists \(j_{0}\) such that the principal value \(\mathcal{C}(g_{j_{0}}\mu)(\lambda)\) exists, \(\mathcal{C}(g_{j_{0}}\mu)(\lambda)\neq 0\), and \(\lambda\in\mathcal{ZD}(g_{j_{0}}\mu)\). (4.1) follows from Lemma 3.3 and Lemma 4.4 for \(O=\mathbb{C}\).
For \(\lambda\in\mathcal{R}_{1}\cap\Gamma_{1},\ \gamma-a.a.\), there are integers \(j_{1}\) and \(j_{2}\) such that:
\[h_{i}(\lambda)\neq 0\ \text{and}\ \lim_{\delta\to 0}\frac{\gamma(\mathbb{D}( \lambda,\delta)\cap\Gamma_{1}\cap\{|h_{i}(z)-h_{i}(\lambda)|>\epsilon\})}{ \delta}=0 \tag{4.8}\]
for \(i=1,2,3\), where \(h_{1}=h\), \(h_{2}=v^{+}(g_{j_{1}}\mu,\Gamma_{1},\beta_{1})\), and \(h_{3}=v^{-}(g_{j_{2}}\mu,\Gamma_{1},\beta_{1})\) (notice \(\gamma|_{\Gamma_{1}}\approx\mathcal{H}^{1}|_{\Gamma_{1}}\) by (3.3));
\[\lim_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda,\delta)\cap U_{\Gamma_{1}} \cap\{|\mathcal{C}(g_{j_{1}}\mu)(z)-h_{2}(\lambda)|>\epsilon\})}{\delta}=0 \tag{4.9}\]
by Theorem 3.5 (b); and
\[\lim_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda,\delta)\cap L_{\Gamma_{1}} \cap\{|\mathcal{C}(g_{j_{2}}\mu)(z)-h_{3}(\lambda)|>\epsilon\})}{\delta}=0 \tag{4.10}\]
by Theorem 3.5 (c). For \(\epsilon<\frac{1}{2}\min(|h_{1}(\lambda)|,|h_{2}(\lambda)|,|h_{3}(\lambda)|)\), since \(\mathcal{F}_{0}\cap\mathcal{N}(h)=\emptyset\) (see Theorem 3.8), we get
\[\Gamma_{1}\cap\mathcal{F}_{0}\subset\Gamma_{1}\cap\{|h_{1}(z)-h_ {1}(\lambda)|>\epsilon\},\] \[\Gamma_{1}\cap\mathcal{F}_{+}\subset\Gamma_{1}\cap\{|h_{2}(z)-h_ {2}(\lambda)|>\epsilon\},\] \[\Gamma_{1}\cap\mathcal{F}_{-}\subset\Gamma_{1}\cap\{|h_{3}(z)-h_ {3}(\lambda)|>\epsilon\}.\]
Using Theorem 2.1 (2) and (4.8), we have
\[\lim_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda,\delta)\cap\Gamma_{1}\cap \mathcal{F})}{\delta}=0. \tag{4.11}\]
By (4.9), applying Lemma 4.4 for \(O=U_{\Gamma_{1}}\), we have
\[\lim_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda,\delta)\cap U_{\Gamma_{1}} \cap\mathcal{F})}{\delta}=0. \tag{4.12}\]
By (4.10), applying Lemma 4.4 for \(O=L_{\Gamma_{1}}\), we have
\[\lim_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda,\delta)\cap L_{\Gamma_{1}} \cap\mathcal{F})}{\delta}=0. \tag{4.13}\]
Using Theorem 2.1 (2) for (4.11), (4.12), and (4.13), we finish the proof.
## 5. **Surjectivity of the Map \(\rho\)**
**Definition 5.1**.: Let \(\mathcal{D}\subset\mathbb{C}\) be a bounded Borel subset. Let \(H(\mathcal{D})\) be the set of functions \(f(z)\) such that \(f(z)\) is bounded and analytic on \(\mathbb{C}\setminus E_{f}\) for some compact subset \(E_{f}\subset\mathbb{C}\setminus\mathcal{D}.\) Define \(H^{\infty}(\mathcal{D})\) to be the weak-star closed subalgebra of \(L^{\infty}(\mathfrak{m}_{\mathcal{D}})\) generated by functions in \(H(\mathcal{D}).\)
If \(\mathcal{D}\) is a bounded open subset, then \(H^{\infty}(\mathcal{D})\) is the algebra of bounded and analytic functions on \(\mathcal{D}.\) If \(K\) is a Swiss cheese set as in Example 3.14, then \(\mathcal{R}=K\setminus\partial_{e}K\) and \(H^{\infty}(\mathcal{R})\) is different from the bounded and analytic functions on an open subset as \(\mathcal{R}\) has no interior points.
**Definition 5.2**.: The subset \(\mathcal{D}\subset\mathbb{C}\) is \(\gamma\)_-open_ if there exists a subset \(\mathcal{Q}\) with \(\mathfrak{m}(\mathcal{Q})=0\) such that for \(\lambda\in\mathcal{D}\setminus\mathcal{Q},\)
\[\lim_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda,\delta)\setminus \mathcal{D})}{\delta}=0. \tag{5.1}\]
If \(\gamma(\mathcal{Q})=0,\) then \(\mathcal{D}\) is called _strong \(\gamma\)-open_.
Clearly, by (2.1), if \(\mathcal{D}\) is strong \(\gamma\)-open, then \(\mathcal{D}\) is \(\gamma\)-open. By Theorem 4.1, we see that \(\mathcal{R}\) is strong \(\gamma\)-open.
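For instance, every bounded open subset \(\mathcal{D}\) is strong \(\gamma\)-open with \(\mathcal{Q}=\emptyset\): for \(\lambda\in\mathcal{D},\) we have \(\mathbb{D}(\lambda,\delta)\subset\mathcal{D}\) for all small \(\delta,\) so \(\gamma(\mathbb{D}(\lambda,\delta)\setminus\mathcal{D})=0\) and (5.1) holds trivially. The point of Definition 5.2 is that sets such as \(\mathcal{R}\) in Example 3.14, which may have empty interior, can still be strong \(\gamma\)-open.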
**Definition 5.3**.: For \(\delta>0\) and integers \(i,j,\) let \(c_{ij}=((i+\frac{1}{2})\delta,(j+\frac{1}{2})\delta).\) We say \(\{\delta,E_{ij},\eta_{ij},\tilde{f}_{ij},k_{1}\}\)\((k_{1}\geq 1)\) is a building block for \(\mathcal{D}\) if \(E_{ij}\subset\mathbb{D}(c_{ij},k_{1}\delta)\setminus\mathcal{D}\) is a compact subset, \(\eta_{ij}\in M_{0}^{+}(E_{ij})\) satisfies \(\|\mathcal{C}_{\epsilon}(\eta_{ij})\|\lesssim 1\) and \(\|\eta_{ij}\|=\gamma(\mathbb{D}(c_{ij},k_{1}\delta)\setminus\mathcal{D}),\) and \(\tilde{f}_{ij}\in L^{\infty}(\mu)\) satisfies \(\|\tilde{f}_{ij}\|_{L^{\infty}(\mu)}\lesssim 1\) and \(\tilde{f}_{ij}(z)=\mathcal{C}(\eta_{ij})(z)\) for \(z\in\mathbb{C}\setminus E_{ij}.\)
By Theorem 2.1 (1) and Proposition 2.2 (1), we see that for \(\delta>0,\) there always exists a building block \(\{\delta,E_{ij},\eta_{ij},\tilde{f}_{ij},k_{1}\}.\) If \(\mathcal{D}\) is bounded, then there are only finitely many \(i\) and \(j\) such that \(\eta_{ij}\neq 0.\)
**Theorem 5.4**.: _Let \(\mathcal{D}\) be a \(\gamma\)-open bounded Borel subset. Let \(f\in L^{\infty}(\mathfrak{m}_{\mathcal{D}})\) be given with \(\|f\|_{L^{\infty}(\mathfrak{m}_{\mathcal{D}})}\leq 1\). Then the following statements are equivalent._
_(1) There exists \(C_{f}>0\) (depending on \(f\)) such that for all \(\lambda\in\mathbb{C}\), \(\delta>0\), and for all choices of a smooth non-negative function \(\varphi\) with support in \(\mathbb{D}(\lambda,\delta)\) satisfying \(\varphi(z)\leq 1\) and \(\left\|\frac{\partial\varphi(z)}{\partial\bar{z}}\right\|_{\infty}\lesssim \frac{1}{\delta},\) we have_
\[\left|\int(z-\lambda)^{n}f(z)\frac{\partial\varphi(z)}{\partial\bar{z}}d \mathfrak{m}_{\mathcal{D}}(z)\right|\leq C_{f}\delta^{n}\gamma(\mathbb{D}( \lambda,\delta)\setminus\mathcal{D})\text{ for }n\geq 0. \tag{5.2}\]
_(2) Let \(\{\delta,E_{ij},\eta_{ij},\tilde{f}_{ij},k_{1}\}\) be a building block for \(\mathcal{D}\) as in Definition 5.3. Then there exists \(f_{\delta}\in H(\mathcal{D})\cap L^{\infty}(\mu)\) that is a finite linear combination of \(\tilde{f}_{ij}\) such that \(\|f_{\delta}\|_{\mathcal{D}},\ \|f_{\delta}\|_{L^{\infty}(\mu)}\lesssim C_{f}\) and there exists a subsequence \(\{f_{\delta_{m}}\}\) satisfying \(f_{\delta_{m}}(z)\to f(z),\ \mathfrak{m}_{\mathcal{D}}-a.a.\) as \(\delta_{m}\to 0.\)_
_(3) \(f\in H^{\infty}(\mathcal{D}).\)_
Proof.: (1)\(\Rightarrow\)(2): See [20, Theorem 4.3]. The proof is technical and requires the modified Vitushkin scheme of Paramonov [15] (see [20, sections 3 & 4]).
(2)\(\Rightarrow\)(3) is trivial.
(3)\(\Rightarrow\)(1): The Vitushkin's localization operator \(T_{\varphi}\) is defined by
\[(T_{\varphi}f)(\lambda)=\frac{1}{\pi}\int\frac{f(z)-f(\lambda)}{z- \lambda}\bar{\partial}\varphi(z)d\mathfrak{m}(z),\]
where \(f\in L^{1}_{loc}(\mathbb{C})\). Clearly, \((T_{\varphi}f)(z)=-\frac{1}{\pi}\mathcal{C}(\varphi\bar{\partial}f\mathfrak{m} )(z).\) Therefore, by (2.3), \(T_{\varphi}f\) is analytic outside of \(\operatorname{supp}(\bar{\partial}f)\cap\operatorname{supp}(\varphi).\) If \(\operatorname{supp}(\varphi)\subset\mathbb{D}(a,\delta),\) then \(\|T_{\varphi}f\|_{\infty}\leq 4\|f\|_{\infty}\delta\|\bar{\partial}\varphi\|.\) See [13, VIII.7.1] for the details of \(T_{\varphi}.\)
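Let us sketch why \(T_{\varphi}f\) is analytic outside of \(\operatorname{supp}(\bar{\partial}f)\cap\operatorname{supp}(\varphi),\) assuming the convention \(\mathcal{C}\nu(\lambda)=\int\frac{d\nu(z)}{z-\lambda}.\) Since \(\bar{\partial}\frac{1}{\pi z}=\delta_{0}\) in the sense of distributions, we have \(\bar{\partial}\mathcal{C}\nu=-\pi\nu,\) and hence

\[\bar{\partial}(T_{\varphi}f)=-\frac{1}{\pi}\bar{\partial}\mathcal{C}(\varphi\bar{\partial}f\mathfrak{m})=\varphi\,\bar{\partial}f,\]

which vanishes off \(\operatorname{supp}(\bar{\partial}f)\cap\operatorname{supp}(\varphi).\)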
Let \(f\in H^{\infty}(\mathcal{D})\) with \(\|f\|_{\mathcal{D}}\leq 1\) and \(f(z)=0,\ z\in\mathbb{C}\setminus\mathcal{D}.\) Let \(\{f_{m}\}\subset H(\mathcal{D}),\) where \(f_{m}\) is bounded and analytic on \(\mathbb{C}\setminus E_{m}\) for some compact subset \(E_{m}\subset\mathbb{C}\setminus\mathcal{D},\) such that \(f_{m}\) converges to \(f\) in \(L^{\infty}(\mathfrak{m}_{\mathcal{D}})\) weak-star topology. Then \(\|f_{m}\|_{\mathcal{D}}\leq C_{f}\) for some constant \(C_{f}.\) We may assume \(f_{m}(z)\to f(z),\ \mathfrak{m}_{\mathcal{D}}-a.a.\) and \(\|f_{m}\|_{\mathbb{C}}\leq 2C_{f}.\) The function \(T_{\varphi}f_{m}\) is analytic on \(\mathbb{C}_{\infty}\setminus(\operatorname{supp}(\varphi)\cap E_{m}).\) Therefore, for \(n\geq 0,\)
\[\left|\int_{\mathcal{D}}f_{m}(z)(z-\lambda)^{n}\bar{\partial} \varphi(z)d\mathfrak{m}(z)\right|\] \[\leq \left|\int_{\mathbb{C}\setminus\mathcal{D}}f_{m}(z)(z-\lambda)^{ n}\bar{\partial}\varphi(z)d\mathfrak{m}(z)\right|+\left|\int f_{m}(z)(z- \lambda)^{n}\bar{\partial}\varphi(z)d\mathfrak{m}(z)\right|\] \[\lesssim C_{f}\delta^{n-1}\mathfrak{m}(\mathbb{D}(\lambda,\delta) \setminus\mathcal{D})+\pi\|T_{\varphi}((z-\lambda)^{n}f_{m})\|\gamma( \operatorname{supp}(\varphi)\cap E_{m})\] \[\lesssim C_{f}\delta^{n}\gamma(\mathbb{D}(\lambda,\delta)\setminus \mathcal{D}),\]
where (2.1) is used in the last step. Taking \(m\to\infty,\) we prove (5.2).
The aim of this section is to use Theorem 5.4 to prove the following lemma.
**Lemma 5.5**.: _For \(f\in H^{\infty}(\mathcal{R})\) with \(\|f\|_{\mathcal{R}}\leq 1,\) there exists \(\tilde{f}\in R^{t,\infty}(K,\mu)\) such that \(\rho(\tilde{f})=f.\) Moreover, \(\|\tilde{f}\|_{L^{\infty}(\mu)}\lesssim C_{f},\) where \(C_{f}\) is the constant in (5.2). Consequently, \(\rho\) is surjective._
To prove Lemma 5.5, we need a couple of lemmas.
**Lemma 5.6**.: _Let \(E_{n}\subset E_{n+1}\subset\mathbb{D}(0,R)\) be a sequence of subsets. Then_
\[\gamma\left(\cup_{n=1}^{\infty}E_{n}\right)\lesssim\lim_{n\to \infty}\gamma(E_{n}).\]
Proof.: By Theorem 2.1 (1) and Proposition 2.2 (2), there exists a compact subset \(F\subset\cup_{n=1}^{\infty}E_{n}\) and \(\eta\in M_{0}^{+}(F)\) with 1-linear growth such that \(N_{2}(\eta)\leq 1\) and
\[\gamma\left(\cup_{n=1}^{\infty}E_{n}\right)\lesssim\|\eta\|\lesssim\lim_{n\to \infty}\eta(E_{n}).\]
Since \(N_{2}(\eta|_{E_{n}})\leq 1,\) by Proposition 2.2 (3) and Theorem 2.1 (1), we get \(\eta(E_{n})\lesssim\gamma(E_{n}).\)
**Lemma 5.7**.: _Let \(E_{1}\subset\mathcal{F}\) be a compact subset with \(\gamma(E_{1})>0\). Then there exists \(f\in R^{t,\infty}(K,\mu)\) and \(\eta\in M_{0}^{+}(E_{1})\) such that \(\|\mathcal{C}_{\epsilon}(\eta)\|\lesssim 1,\)\(f(z)=\mathcal{C}(\eta)(z)\) for \(z\in\mathbb{C}_{\infty}\setminus\text{spt}\eta\),_
\[\|f\|_{L^{\infty}(\mu)}\lesssim 1,\ f(\infty)=0,\ f^{\prime}(\infty)=- \gamma(E_{1}),\]
_and_
\[\mathcal{C}(\eta)(z)\mathcal{C}(g_{j}\mu)(z)=\mathcal{C}(fg_{j} \mu)(z),\ \gamma|_{\mathbb{C}\setminus\text{spt}\eta}-a.a.\ \text{for}\ j\geq 1. \tag{5.3}\]
Proof.: Let the measure \(\eta_{1}\) and the function \(f_{1}\) be constructed as in Lemma 3.11 for \(\{g_{j}\}\) and \(E_{1}.\) From Theorem 2.1 (2), we have
\[\max(\gamma(E_{1}\cap\mathcal{F}_{0}),\ \gamma(E_{1}\cap\mathcal{F}_{+}),\ \gamma(E_{1}\cap\mathcal{F}_{-}))\geq\frac{1}{3C_{T}}\gamma(E_{1}).\]
Therefore, we shall consider the following three cases.
Case I (assuming \(E_{1}\subset\mathcal{F}_{0}\)): From (3.13) and (3.14), we see that \(f_{1}\in R^{t,\infty}(K,\mu)\) and \(f_{1}(\lambda)\mathcal{C}(g_{j}\mu)(\lambda)=\mathcal{C}(f_{1}g_{j}\mu)(\lambda),\ \gamma|_{\mathbb{C}\setminus\mathrm{spt}\eta_{1}}-a.a.\) for \(j\geq 1.\) Set
\[f=\frac{f_{1}}{\|\eta_{1}\|}\gamma(E_{1})\ \text{and}\ \eta=\frac{\eta_{1}}{\| \eta_{1}\|}\gamma(E_{1}).\]
Case II (assuming \(E_{1}\subset\mathcal{F}_{+}\)): Using Lemma 5.6, we assume that there exists a positive integer \(n_{0}\) such that \(E_{1}\subset\bigcup_{n=1}^{n_{0}}\Gamma_{n}.\) Put \(E_{1}=\cup_{n=1}^{n_{0}}F_{n},\) where \(F_{n}\subset\Gamma_{n}\) and \(F_{n}\cap F_{m}=\emptyset\) for \(n\neq m.\) Since \(\eta_{1}\) is of \(1\)-linear growth, we can set \(\eta_{1}=\sum_{n=1}^{n_{0}}w_{n}(z)\mathcal{H}^{1}|_{\Gamma_{n}},\) where \(w_{n}\) is supported on \(F_{n}.\) Define
\[f_{2}(z)=f_{1}(z)-\frac{1}{2}\sum_{n=1}^{n_{0}}e^{-i\beta_{n}}L((z-z_{n})e^{- i\beta_{n}}+z_{n})^{-1}w_{n}(z).\]
Then using Theorem 3.5, we conclude that \(f_{2},w_{n}\in L^{\infty}(\mu)\) and \(\|f_{2}\|_{L^{\infty}(\mu)}\lesssim 1.\) From (3.13), we get
\[\begin{split}\int f_{2}g_{j}d\mu=&-\int\mathcal{C} (g_{j}\mu)d\eta_{1}-\frac{1}{2}\sum_{n=1}^{n_{0}}\int_{F_{n}}e^{-i\beta_{n}}L( (z-z_{n})e^{-i\beta_{n}}+z_{n})^{-1}g_{j}hd\eta_{1}\\ =&-\sum_{n=1}^{n_{0}}\int_{F_{n}}v^{+}(g_{j}\mu, \Gamma_{n},\beta_{n})d\eta_{1}.\end{split} \tag{5.4}\]
Similarly, for \(\lambda\in\mathbb{C}\setminus\mathrm{spt}\eta_{1}\) and by (3.14), we have
\[\int\frac{f_{2}(z)-f_{2}(\lambda)}{z-\lambda}g_{j}(z)d\mu(z)=-\sum_{n=1}^{n_{0 }}\int_{F_{n}}v^{+}(g_{j}\mu,\Gamma_{n},\beta_{n})(z)\frac{d\eta_{1}(z)}{z- \lambda}. \tag{5.5}\]
Since \(v^{+}(g_{j}\mu,\Gamma_{n},\beta_{n})(z)=0,\ z\in F_{n},\ \mathcal{H}^{1}|_{\Gamma_{n}}-a.a.,\) by (5.4), we get \(f_{2}\in R^{t,\infty}(K,\mu).\) Similarly, from (5.5), we see that \(\frac{f_{2}(z)-f_{2}(\lambda)}{z-\lambda}\in R^{t,\infty}(K,\mu)\) and \(f_{2}(\lambda)\mathcal{C}(g_{j}\mu)(\lambda)=\mathcal{C}(f_{2}g_{j}\mu)(\lambda)\) for \(\lambda\in\mathbb{C}\setminus\mathrm{spt}\eta_{1},\ \gamma-a.a.\) Set
\[f=\frac{f_{2}}{\|\eta_{1}\|}\gamma(E_{1})\ \text{and}\ \eta=\frac{\eta_{1}}{\| \eta_{1}\|}\gamma(E_{1}).\]
Case III (assuming \(E_{1}\subset\mathcal{F}_{-}\)): The proof is the same as Case II if we modify the definition of \(f_{2}\) by the following
\[f_{2}(z)=f_{1}(z)+\frac{1}{2}\sum_{n=1}^{n_{0}}e^{-i\beta_{n}}L((z-z_{n})e^{-i \beta_{n}}+z_{n})^{-1}w_{n}(z).\]
Then \(f\) and \(\eta\) satisfy the properties of the lemma. The lemma is proved.
We are now ready to prove Lemma 5.5.
Proof.: (Lemma 5.5): Let \(f\in H^{\infty}(\mathcal{R})\) with \(\|f\|_{\mathcal{R}}\leq 1\) and \(f(z)=0,\;z\in\mathbb{C}\setminus\mathcal{R}.\) Let \(E_{ij}\subset\mathbb{D}(c_{ij},k_{1}\delta)\setminus\mathcal{R}\) be a compact subset such that \(\gamma(E_{ij})\geq\frac{1}{2}\gamma(\mathbb{D}(c_{ij},k_{1}\delta)\setminus \mathcal{R}).\) Let \(\eta_{ij}\in M_{0}^{+}(E_{ij})\) and \(\tilde{f}_{ij}\in R^{t,\infty}(K,\mu)\) be as in Lemma 5.7 such that \(\|\mathcal{C}_{\epsilon}(\eta_{ij})\|\lesssim 1,\)\(\|\eta_{ij}\|=\gamma(\mathbb{D}(c_{ij},k_{1}\delta)\setminus\mathcal{R}),\)\(\|\tilde{f}_{ij}\|\lesssim 1,\)\(\tilde{f}_{ij}(z)=\mathcal{C}(\eta_{ij})(z)\) for \(z\in\mathbb{C}\setminus E_{ij},\) and
\[\mathcal{C}(\eta_{ij})(z)\mathcal{C}(g\mu)(z)=\mathcal{C}(\tilde{f}_{ij}g\mu) (z),\;\gamma|_{\mathbb{C}\setminus E_{ij}}-a.a.\text{ for }g\perp R^{t}(K,\mu). \tag{5.6}\]
It is clear that \(\{\delta,E_{ij},\eta_{ij},\tilde{f}_{ij},k_{1}\}\) is a building block for \(\mathcal{R}\) as in Definition 5.3. By Theorem 4.1, \(\mathcal{R}\) is strong \(\gamma\)-open. Hence, using Theorem 5.4 (2), we let \(f_{\delta}\in H(\mathcal{R})\cap R^{t,\infty}(K,\mu)\) be the function that is a finite linear combination of \(\tilde{f}_{ij}\) such that there exists a subsequence \(\{f_{\delta_{m}}\}\) satisfying \(\|f_{\delta_{m}}\|_{\mathcal{R}},\;\|f_{\delta_{m}}\|_{L^{\infty}(\mu)} \lesssim C_{f}\) and \(f_{\delta_{m}}(z)\to f(z),\;\mathfrak{m}_{\mathcal{R}}-a.a.\). Therefore, by passing to a subsequence, we may assume that \(f_{\delta_{m}}(z)\to\tilde{f}(z)\) in \(L^{\infty}(\mu)\) weak-star topology. Hence, \(\tilde{f}\in R^{t,\infty}(K,\mu),\)\(\|\tilde{f}\|_{L^{\infty}(\mu)}\lesssim C_{f},\) and \(\mathcal{C}(f_{\delta_{m}}g\mu)(z)\to\mathcal{C}(\tilde{f}g\mu)(z),\;\mathfrak{ m}-a.a.\). From (5.6) and (3.10), taking \(m\to\infty,\) we infer that
\[f(z)\mathcal{C}(g\mu)(z)=\mathcal{C}(\tilde{f}g\mu)(z),\;\mathfrak{m}-a.a. \text{ for }g\perp R^{t}(K,\mu),\]
which implies \(\rho(\tilde{f})=f\) by (2.6).
## 6. **Proof of Theorem 1.1**
We restate Theorem 1.1 as the following form.
**Theorem 6.1**.: _Let \(\mu\in M_{0}^{+}(K)\) for a compact set \(K\subset\mathbb{C}\). Suppose that \(1\leq t<\infty\) and \(S_{\mu}\) on \(R^{t}(K,\mu)\) is pure. Let \(\mathcal{F}\) and \(\mathcal{R}\) be the non-removable boundary and removable set for \(R^{t}(K,\mu),\) respectively. Let \(\rho\) be the map defined as in Lemma 2.6. Then \(\text{spt}\mu\subset\overline{\mathcal{R}}\) and \(\rho\) is an isometric isomorphism and a weak-star homeomorphism from \(R^{t,\infty}(K,\mu)\) onto \(H^{\infty}(\mathcal{R})\) satisfying (1) \(\rho(r)=r\) for \(r\in\text{Rat}(K),\) (2) \(\mathcal{C}(g\mu)(z)=0,\ \mathfrak{m}_{\mathcal{F}}-a.a.\) for \(g\perp R^{t}(K,\mu),\) and (3) \(\rho(f)(z)\mathcal{C}(g\mu)(z)=\mathcal{C}(fg\mu)(z),\ \gamma-a.a.\) for \(f\in R^{t,\infty}(K,\mu)\) and \(g\perp R^{t}(K,\mu).\)_
To prove that the image of \(\rho\) is a subset of \(H^{\infty}(\mathcal{R}),\) by Theorem 5.4, we need to prove the following lemma.
**Lemma 6.2**.: _If \(f\in R^{t,\infty}(K,\mu)\) and \(\varphi\) is a smooth function with support in \(\mathbb{D}(\lambda,\delta)\), then_
\[\left|\int\rho(f)(z)\bar{\partial}\varphi(z)d\mathfrak{m}(z)\right|\lesssim\| \rho(f)\|\delta\|\bar{\partial}\varphi\|\gamma(\mathbb{D}(\lambda,2\delta) \cap\mathcal{F}). \tag{6.1}\]
Let us use Lemma 6.2 to prove Theorem 6.1 before proving the lemma.
Proof.: (Theorem 6.1 assuming Lemma 6.2 holds): (1) is trivial. (2) follows from (3.10). (3) follows from (2.6). If \(\mathbb{D}(\lambda,\delta)\cap\overline{\mathcal{R}}=\emptyset,\) then by (2), we have \(\mathcal{C}(g\mu)(z)=0,\ \mathfrak{m}|_{\mathbb{D}(\lambda,\delta)}-a.a.\) for \(g\perp R^{t}(K,\mu),\) which implies \(\mu(\mathbb{D}(\lambda,\delta))=0\) since \(S_{\mu}\) is pure. Hence, \(\text{spt}\mu\subset\overline{\mathcal{R}}.\) It remains to prove that \(\rho\) is an isometric isomorphism and a weak-star homeomorphism.
For \(f\in R^{t,\infty}(K,\mu),\) by Lemma 6.2 and Theorem 5.4, we conclude that \(\rho(f)\in H^{\infty}(\mathcal{R})\) since \(\mathcal{R}\) is strong \(\gamma\)-open by Theorem 4.1. Set \(F=\rho(f).\) Using Lemma
5.5, we see that there exists \(\tilde{F}\in R^{t,\infty}(K,\mu)\) such that \(\rho(\tilde{F})=F\) and \(\|\tilde{F}\|_{L^{\infty}(\mu)}\leq C_{1}\|F\|\), where \(C_{1}>0\) is an absolute constant and \(C_{F}\leq C_{1}\|F\|\) by (6.1) and (5.2). From (2.6), we get
\[\mathcal{C}(\tilde{F}g\mu)(z)=F(z)\mathcal{C}(g\mu)(z)=\mathcal{C}(fg\mu)(z), \ \mathfrak{m}-a.a.\]
for \(g\perp R^{t}(K,\mu)\), which implies \(\tilde{F}=f\) since \(S_{\mu}\) is pure. Hence,
\[\|f\|_{L^{\infty}(\mu)}\leq C_{1}\|\rho(f)\|_{L^{\infty}(\mathfrak{m}_{ \mathcal{R}})}.\]
Using Proposition 2.7 (1),
\[\|f^{n}\|_{L^{\infty}(\mu)}^{\frac{1}{n}}\leq C_{1}^{\frac{1}{n}}\|(\rho(f))^{n }\|_{L^{\infty}(\mathfrak{m}_{\mathcal{R}})}^{\frac{1}{n}}.\]
Thus, taking \(n\to\infty\), we get \(\|f\|_{L^{\infty}(\mu)}\leq\|\rho(f)\|_{L^{\infty}(\mathfrak{m}_{\mathcal{R}})}\). So, by Proposition 2.7 (2), we have proved that
\[\|\rho(f)\|_{L^{\infty}(\mathfrak{m}_{\mathcal{R}})}=\|f\|_{L^{\infty}(\mu)}, \ f\in R^{t,\infty}(K,\mu).\]
It is clear that the map \(\rho\) is injective. Lemma 5.5 implies that \(\rho\) is surjective. Therefore, \(\rho\) is a bijective isomorphism between the two Banach algebras \(R^{t,\infty}(K,\mu)\) and \(H^{\infty}(\mathcal{R})\). Clearly, \(\rho\) is also weak-star sequentially continuous, so an application of the Krein-Smulian theorem shows that \(\rho\) is a weak-star homeomorphism.
To prove Lemma 6.2, we need some lemmas. Let \(\phi\) be a smooth non-negative function on \(\mathbb{R}\) supported on \([0,1]\) with \(0\leq\phi(|z|)\leq 1\) and \(\int\phi(|z|)d\mathfrak{m}(z)=1\). For \(\epsilon>0\), define \(\phi_{\epsilon}(z)=\frac{1}{\epsilon^{2}}\phi(\frac{|z|}{\epsilon})\) and \(K_{\epsilon}=-\frac{1}{z}*\phi_{\epsilon}.\) For \(\nu\in M_{0}(\mathbb{C})\), define \(\tilde{\mathcal{C}}_{\epsilon}\nu=K_{\epsilon}*\nu.\) The kernel \(K_{\epsilon}\) is a smooth function satisfying \(\|K_{\epsilon}\|_{\infty}\lesssim\frac{1}{\epsilon}\), \(K_{\epsilon}(z)=-\frac{1}{z}\) for \(|z|\geq\epsilon\), \(\tilde{\mathcal{C}}_{\epsilon}\nu=\phi_{\epsilon}*\mathcal{C}\nu=\mathcal{C}(\phi_{\epsilon}*\nu)\), and
\[|\tilde{\mathcal{C}}_{\epsilon}\nu(\lambda)-\mathcal{C}_{\epsilon}\nu(\lambda )|\lesssim\frac{|\nu|(\mathbb{D}(\lambda,\epsilon))}{\epsilon}\lesssim \mathcal{M}_{\nu}(\lambda). \tag{6.2}\]
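The first inequality in (6.2) is a simple kernel estimate (sketched here up to the harmless distinction between open and closed discs): since \(K_{\epsilon}(z)=-\frac{1}{z}\) for \(|z|\geq\epsilon,\) the two transforms differ only through the mass of \(\nu\) near \(\lambda,\) namely

\[|\tilde{\mathcal{C}}_{\epsilon}\nu(\lambda)-\mathcal{C}_{\epsilon}\nu(\lambda)|=\left|\int_{|z-\lambda|\leq\epsilon}K_{\epsilon}(\lambda-z)d\nu(z)\right|\leq\|K_{\epsilon}\|_{\infty}|\nu|(\overline{\mathbb{D}(\lambda,\epsilon)})\lesssim\frac{|\nu|(\overline{\mathbb{D}(\lambda,\epsilon)})}{\epsilon}\lesssim\mathcal{M}_{\nu}(\lambda).\]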
We denote by \(\mathcal{V}_{\nu}\) the set of \(z\in\mathbb{C}\) for which \(\lim_{\epsilon\to 0}\mathcal{C}_{\epsilon}\nu(z)=\mathcal{C}\nu(z)\) exists. Set \(\mathcal{X}_{\nu}=\mathbb{C}\setminus\mathcal{V}_{\nu}.\) Then \(\gamma(\mathcal{X}_{\nu})=0\) by Corollary 2.3.
**Lemma 6.3**.: _Suppose that \(\eta\in M_{0}^{+}(\mathbb{C}),\)\(\eta\) is of \(1\)-linear growth, and \(\|\mathcal{C}_{\epsilon}(\eta)\|\leq 1\). If \(\nu\in M_{0}(\mathbb{C})\) satisfies \(|\mathcal{C}_{\epsilon}(\nu)(\lambda)|,\ \mathcal{M}_{\nu}(\lambda)\leq M<\infty,\ \eta-a.a.\), then there are two functions \(F_{1}\in L^{\infty}(|\nu|)\) and \(F_{2}\in L^{\infty}(\eta)\) with \(F_{1}(z)=\mathcal{C}(\eta)(z),\ \nu|_{\mathcal{Z}\mathcal{D}(\eta) \setminus\mathcal{X}_{\eta}}-a.a.\) and \(F_{2}(z)=\mathcal{C}(\nu)(z),\ \eta|_{\mathcal{Z}\mathcal{D}(\eta)}-a.a.\) such that in the sense of distribution,_
\[\bar{\partial}(\mathcal{C}(\eta)\mathcal{C}(\nu))=-\pi(F_{1}\nu+F_{2}\eta).\]
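Heuristically, Lemma 6.3 is the Leibniz rule: since \(\bar{\partial}\mathcal{C}(\eta)=-\pi\eta\) and \(\bar{\partial}\mathcal{C}(\nu)=-\pi\nu\) in the sense of distributions, one expects

\[\bar{\partial}(\mathcal{C}(\eta)\mathcal{C}(\nu))=\mathcal{C}(\nu)\bar{\partial}\mathcal{C}(\eta)+\mathcal{C}(\eta)\bar{\partial}\mathcal{C}(\nu)=-\pi(\mathcal{C}(\eta)\nu+\mathcal{C}(\nu)\eta).\]

The content of the lemma is that these formal products can be made rigorous: \(F_{1}\) plays the role of \(\mathcal{C}(\eta)\) on \(\nu,\) and \(F_{2}\) plays the role of \(\mathcal{C}(\nu)\) on \(\eta.\)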
Proof.: The transform \(\tilde{\mathcal{C}}_{\epsilon}\nu\) is smooth and \(\|\tilde{\mathcal{C}}_{\epsilon}\nu-\mathcal{C}\nu\|_{L^{1}(\mathfrak{m}_{ \mathcal{D}})}\to 0\) as \(\epsilon\to 0\) for a bounded subset \(\mathcal{D}\) by (6.2). So for a smooth function \(\varphi\) with compact support,
we have
\[\begin{split}&\int\bar{\partial}\varphi(z)\mathcal{C}(\eta)(z) \mathcal{C}(\nu)(z)d\mathfrak{m}(z)\\ =&\lim_{\epsilon\to 0}\int\bar{\partial}\varphi(z) \tilde{\mathcal{C}}_{\epsilon}\nu(z)\mathcal{C}(\eta)(z)d\mathfrak{m}(z)\\ =&\lim_{\epsilon\to 0}\int\bar{\partial}(\varphi(z) \tilde{\mathcal{C}}_{\epsilon}\nu(z))\mathcal{C}(\eta)(z)d\mathfrak{m}(z)- \lim_{\epsilon\to 0}\int\varphi(z)\bar{\partial}(\tilde{\mathcal{C}}_{ \epsilon}\nu(z))\mathcal{C}(\eta)(z)d\mathfrak{m}(z)\\ =& I-\lim_{\epsilon\to 0}II_{\epsilon}.\end{split}\]
By (6.2) and the assumption, we have
\[|\tilde{\mathcal{C}}_{\epsilon}(\nu)(\lambda)|\leq|\tilde{\mathcal{C}}_{ \epsilon}(\nu)(\lambda)-\mathcal{C}_{\epsilon}(\nu)(\lambda)|+|\mathcal{C}_{ \epsilon}(\nu)(\lambda)|\lesssim M,\ \eta-a.a..\]
We find a sequence \(\{\tilde{\mathcal{C}}_{\epsilon_{k}}\nu(z)\}\) converging to \(F_{2}\) in \(L^{\infty}(\eta)\) weak-star topology and
\[\tilde{\mathcal{C}}_{\epsilon_{k}}\nu(z)\to\mathcal{C}\nu(z)=F_{2}(z),\ \eta|_{ \mathcal{Z}\mathcal{D}(\nu)}-a.a.\]
by (6.2) and Corollary 2.3, since a zero \(\gamma\) set is a zero \(\eta\) set (see Lemma 3.10). By Lemma 3.1, we see that \(\eta(\mathcal{Z}\mathcal{D}(\eta)\cap\mathcal{ND}(\nu))=0.\) Therefore, \(\mathcal{C}\nu(z)=F_{2}(z),\ \eta|_{\mathcal{Z}\mathcal{D}(\eta)}-a.a.\) Using the Lebesgue dominated convergence theorem, we get
\[I=-\lim_{k\to\infty}\int\varphi(z)\tilde{\mathcal{C}}_{\epsilon_{k}}\nu(z) \bar{\partial}\mathcal{C}(\eta)(z)d\mathfrak{m}(z)=\pi\int\varphi(z)F_{2}(z)d \eta(z).\]
Now we estimate \(II_{\epsilon}:\)
\[\begin{split} II_{\epsilon}=&\int\varphi(z)\bar{\partial}(\mathcal{C}(\phi_{\epsilon}*\nu)(z))\mathcal{C}\eta(z)d\mathfrak{m}(z)\\ =&-\pi\int\varphi(z)\mathcal{C}\eta(z)(\phi_{\epsilon}*\nu)(z)d\mathfrak{m}(z)\\ =&-\pi\int(\varphi\mathcal{C}\eta)*\phi_{\epsilon}(z)d\nu(z).\end{split} \tag{6.3}\]
On the other hand,
\[\begin{split}&|(\varphi\mathcal{C}\eta)*\phi_{\epsilon}(z)-\varphi(z)( \mathcal{C}\eta)*\phi_{\epsilon}(z)|\\ \lesssim&\int|\varphi(w)-\varphi(z)||\mathcal{C} \eta(w)|\phi_{\epsilon}(z-w)d\mathfrak{m}(w)\\ \lesssim&\|\mathcal{C}\eta\|_{\mathbb{C}}\sup_{|w-z |\leq\epsilon}|\varphi(w)-\varphi(z)|\to 0\ \text{as}\ \epsilon\to 0.\end{split} \tag{6.4}\]
For \(\lambda\in\mathbb{C}\setminus\mathcal{X}_{\eta}\), \(\lim_{\epsilon\to 0}\mathcal{C}_{\epsilon}(\eta)(\lambda)=\mathcal{C}(\eta)(\lambda)\) exists. By (6.2) and the assumption, \(\tilde{\mathcal{C}}_{\epsilon}\eta(z)\) converges to \(\mathcal{C}\eta(z)\) on \(\mathcal{Z}\mathcal{D}(\eta)\setminus\mathcal{X}_{\eta}\) and \(\|\tilde{\mathcal{C}}_{\epsilon}(\eta)\|_{L^{\infty}(|\nu|)}\lesssim 1.\) We find a sequence \(\{\tilde{\mathcal{C}}_{\epsilon_{k}}\eta(z)\}\) converging to \(F_{1}\) in \(L^{\infty}(|\nu|)\) weak-star topology and
\[\tilde{\mathcal{C}}_{\epsilon_{k}}\eta(z)\to\mathcal{C}\eta(z)=F_{1}(z),\ \nu|_{ \mathcal{Z}\mathcal{D}(\eta)\setminus\mathcal{X}_{\eta}}-a.a.\]
(see (6.2)). Combining with (6.3) and (6.4), we conclude that
\[\lim_{k\to\infty}II_{\epsilon_{k}}=-\pi\lim_{k\to\infty}\int\varphi(z)\tilde{\mathcal{C}}_{\epsilon_{k}}\eta(z)d\nu(z)=-\pi\int\varphi(z)F_{1}(z)d\nu(z).\]
The lemma is proved.
The following lemma is a simple application of Theorem 2.1 (2).
**Lemma 6.4**.: _Let \(X\subset\mathbb{C}\) and \(a_{1},a_{2}\in\mathbb{C}.\) Let \(\mathcal{Q}\) be any set in \(\mathbb{C}\) with \(\gamma(\mathcal{Q})=0.\) Let \(f_{1}(z)\) and \(f_{2}(z)\) be functions on \(\mathbb{D}(\lambda,\delta_{0})\setminus\mathcal{Q}\) for some \(\delta_{0}>0.\) If \(a_{2}\neq 0\) and_
\[\lim_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda,\delta)\cap X\cap\{|f_{i}(z)-a _{i}|>\epsilon\})}{\delta}=0,\ i=1,2,\]
_for all \(\epsilon>0,\) then_
\[\lim_{\delta\to 0}\frac{\gamma\left(\mathbb{D}(\lambda,\delta)\cap X\cap\left\{\left|\frac{f_{1}(z)}{f_{2}(z)}-\frac{a_{1}}{a_{2}}\right|>\epsilon\right\}\right)}{\delta}=0.\]
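A minimal sketch of the algebra behind Lemma 6.4: if \(|f_{1}(z)-a_{1}|\leq\epsilon^{\prime}\) and \(|f_{2}(z)-a_{2}|\leq\epsilon^{\prime}\leq\frac{|a_{2}|}{2},\) then \(|f_{2}(z)|\geq\frac{|a_{2}|}{2}\) and

\[\left|\frac{f_{1}(z)}{f_{2}(z)}-\frac{a_{1}}{a_{2}}\right|=\frac{|(f_{1}(z)-a_{1})a_{2}-a_{1}(f_{2}(z)-a_{2})|}{|f_{2}(z)a_{2}|}\leq\frac{2(|a_{1}|+|a_{2}|)}{|a_{2}|^{2}}\epsilon^{\prime},\]

so for \(\epsilon^{\prime}\) small the exceptional set for the quotient is contained in the union of the exceptional sets for \(f_{1}\) and \(f_{2},\) and Theorem 2.1 (2) applies.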
**Lemma 6.5**.: _If \(f\in R^{t}(K,\mu),\) then there exists a subset \(\mathcal{Q}_{f}\) with \(\gamma(\mathcal{Q}_{f})=0\) such that \(\rho(f)\) is \(\gamma\)-continuous at each point \(\lambda\in\mathcal{R}\setminus\mathcal{Q}_{f}.\)_
Proof.: There exists a subset \(\mathcal{Q}_{0}\) with \(\gamma(\mathcal{Q}_{0})=0\) such that for \(\lambda\in\mathcal{R}_{0}\setminus\mathcal{Q}_{0},\) there exists \(j_{0}\) satisfying \(\lim_{\epsilon\to 0}\mathcal{C}_{\epsilon}(g_{j_{0}}\mu)(\lambda)= \mathcal{C}(g_{j_{0}}\mu)(\lambda)\) exists, \(\lim_{\epsilon\to 0}\mathcal{C}_{\epsilon}(fg_{j_{0}}\mu)(\lambda)= \mathcal{C}(fg_{j_{0}}\mu)(\lambda)\) exists, \(\Theta_{g_{j_{0}}\mu}(\lambda)=\Theta_{fg_{j_{0}}\mu}(\lambda)=0,\)\(\mathcal{C}(g_{j_{0}}\mu)(\lambda)\neq 0,\) and \(\rho(f)(\lambda)=\frac{\mathcal{C}(fg_{j_{0}}\mu)(\lambda)}{\mathcal{C}(g_{j_{ 0}}\mu)(\lambda)}\) by (2.6). From Lemma 3.3, we infer that \(\mathcal{C}(g_{j_{0}}\mu)\) and \(\mathcal{C}(fg_{j_{0}}\mu)\) are \(\gamma\)-continuous at \(\lambda.\) Hence, applying Lemma 6.4 for \(X=\mathbb{C},\) we infer that \(\frac{\mathcal{C}(fg_{j_{0}}\mu)(z)}{\mathcal{C}(g_{j_{0}}\mu)(z)}\) is \(\gamma\)-continuous at \(\lambda.\) It follows from (2.6) that \(\rho(f)\) is \(\gamma\)-continuous at \(\lambda.\)
To deal with \(\mathcal{R}_{1},\) without loss of generality, we consider \(\mathcal{R}_{1}\cap\Gamma_{1}\) with the rotation angle \(\beta_{1}=0.\) There exists a subset \(\mathcal{Q}_{1}\) with \(\gamma(\mathcal{Q}_{1})=0\) such that for \(\lambda\in\mathcal{R}_{1}\cap\Gamma_{1}\setminus\mathcal{Q}_{1},\) there exist integers \(j_{0},j_{1},j_{2}\) satisfying \(\lim_{\epsilon\to 0}\mathcal{C}_{\epsilon}(g_{j_{k}}\mu)(\lambda)= \mathcal{C}(g_{j_{k}}\mu)(\lambda)\) and \(\lim_{\epsilon\to 0}\mathcal{C}_{\epsilon}(fg_{j_{k}}\mu)(\lambda)= \mathcal{C}(fg_{j_{k}}\mu)(\lambda)\) exist for \(k=0,1,2,\)\(\mathcal{C}(g_{j_{0}}\mu)(\lambda)\neq 0\) by Lemma 3.12, \(v^{+}(g_{j_{1}}\mu,\Gamma_{1})(\lambda)\neq 0,\)\(v^{-}(g_{j_{2}}\mu,\Gamma_{1})(\lambda)\neq 0,\) and
\[\rho(f)(\lambda)=f(\lambda)=\frac{\mathcal{C}(fg_{j_{0}}\mu)(\lambda)}{ \mathcal{C}(g_{j_{0}}\mu)(\lambda)}=\frac{v^{+}(fg_{j_{1}}\mu,\Gamma_{1})( \lambda)}{v^{+}(g_{j_{1}}\mu,\Gamma_{1})(\lambda)}=\frac{v^{-}(fg_{j_{2}}\mu, \Gamma_{1})(\lambda)}{v^{-}(g_{j_{2}}\mu,\Gamma_{1})(\lambda)}\]
by (2.6) and (2.7). Set \(X_{1}=\Gamma_{1},\)\(X_{2}=U_{\Gamma_{1}},\) and \(X_{3}=L_{\Gamma_{1}}.\) Using (2.6), Theorem 3.5, and Lemma 6.4, we get
\[\lim_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda,\delta)\cap X_{j}\cap\{|\rho( f)(z)-\rho(f)(\lambda)|>\epsilon\})}{\delta}=0,\ j=1,2,3.\]
Applying Theorem 2.1 (2), we see that \(\rho(f)\) is \(\gamma\)-continuous at \(\lambda.\)
**Lemma 6.6**.: _For \(f\in R^{t,\infty}(K,\mu)\), if \(\eta\in M_{0}^{+}(\mathbb{C}),\)\(\eta\) is of \(1\)-linear growth, and \(\|\mathcal{C}_{\epsilon}(\eta)\|\leq 1\) for \(\epsilon>0\) such that \(\mathcal{C}(\eta)(\lambda)=\rho(f)(\lambda),\ \mathfrak{m}_{\mathcal{R}}-a.a.\) and_
\[\mathcal{M}_{g_{j}\mu}(z),\ |\mathcal{C}_{\epsilon}(g_{j}\mu)(z)|\leq M_{j}< \infty,\ \eta-a.a.\ \text{for}\ j\geq 1, \tag{6.5}\]
_then \(\eta(\mathcal{R})=0.\)_
Proof.: Suppose that \(\gamma(\mathcal{R}\cap\mathcal{N}\mathcal{D}(\eta))>0.\) Then using Lemma 3.1, we find a Lipschitz graph \(\Gamma_{0}\) such that \(\eta(\Gamma_{0}\cap\mathcal{R})>0\). Without loss of generality, we assume that the rotation angle of \(\Gamma_{0}\) is zero. Clearly, \(\eta|_{\Gamma_{0}\cap\mathcal{R}}\) is absolutely continuous with respect to \(\mathcal{H}^{1}|_{\Gamma_{0}}.\) There exists a subset \(\mathcal{Q}\) with \(\gamma(\mathcal{Q})=0\) such that Theorem 3.5
holds and \(\rho(f)\) is \(\gamma\)-continuous at \(\lambda\in\Gamma_{0}\cap\mathcal{R}\setminus\mathcal{Q}\) (Lemma 6.5). Together with the assumption and (2.1), we get
\[v^{+}(\eta,\Gamma_{0})(\lambda)=v^{-}(\eta,\Gamma_{0})(\lambda)=\rho(f)(\lambda)\]
for \(\lambda\in\Gamma_{0}\cap\mathcal{R}\setminus\mathcal{Q}\), which implies \(\eta(\Gamma_{0}\cap\mathcal{R})=0.\) This is a contradiction. Thus,
\[\Theta_{\eta}(\lambda)=0,\ \mathcal{C}(\eta)(\lambda)=\rho(f)(\lambda),\ \gamma|_{ \mathcal{R}}-a.a..\]
Since a zero \(\gamma\) set is also a zero \(\eta\) set (see Lemma 3.10), we get
\[\eta(\mathcal{R}_{1})=0\ \text{and}\ \mathcal{C}(\eta)(\lambda)=\rho(f)( \lambda),\ \eta|_{\mathcal{R}}-a.a.. \tag{6.6}\]
Let \(\mu=h\eta+\mu_{s}\) be the Radon-Nikodym decomposition with respect to \(\eta\), where \(\mu_{s}\perp\eta\). Note that \(\mathcal{C}(\eta)(z)\mathcal{C}(g_{j}\mu)(z)=\rho(f)(z)\mathcal{C}(g_{j}\mu)(z)=\mathcal{C}(fg_{j}\mu)(z),\ \mathfrak{m}-a.a.\), by the assumption, (3.10), and (2.6). Using (6.5) and applying Lemma 6.3 for \(\eta\) and \(\nu=g_{j}\mu\), we have
\[F_{1}g_{j}\mu+F_{2}\eta=fg_{j}\mu.\]
Applying (2.7) and (6.6), we get
\[F_{1}g_{j}h=\mathcal{C}(\eta)g_{j}h=\rho(f)g_{j}h=fg_{j}h,\ \eta|_{\mathcal{R}}- a.a..\]
Therefore,
\[F_{2}(z)=\mathcal{C}(g_{j}\mu)(z)=0,\ \eta|_{\mathcal{R}}-a.a.\]
for \(j\geq 1\). Thus, \(\eta(\mathcal{R})=0\) since \(\eta(\mathcal{R}_{1})=0\) by (6.6) and \(\mathcal{R}_{0}\subset\mathcal{N}\).
The following lemma generalizes [20, Lemma 7.1].
**Lemma 6.7**.: _For \(\lambda\in\mathbb{C}\) and \(\delta>0\), we have_
\[\lim_{N\to\infty}\gamma(\mathbb{D}(\lambda,\delta)\cap\mathcal{E}_{N}) \lesssim\gamma(\mathbb{D}(\lambda,2\delta)\cap\mathcal{F}). \tag{6.7}\]
Proof.: Set
\[\epsilon_{0}=\lim_{N\to\infty}\gamma(\mathbb{D}(\lambda,\delta)\cap\mathcal{E} _{N})(=\inf_{N\geq 1}\gamma(\mathbb{D}(\lambda,\delta)\cap\mathcal{E}_{N})).\]
We assume \(\epsilon_{0}>0\). From Lemma 3.9, there exists a Borel subset \(F\subset\mathbb{D}(\lambda,\delta)\) such that \(\gamma(\mathbb{D}(\lambda,\delta)\setminus F)<\frac{1}{2C_{T}}\epsilon_{0}\), where \(C_{T}\) is the constant used in Theorem 2.1; \(\mathcal{C}_{*}(g_{j}\mu)(z)\leq M_{j}<\infty\) for \(z\in F\); and \(\mathcal{M}_{g_{j}\mu}(z)\leq M_{j}<\infty\) for \(z\in F\).
For \(z\in\overline{F}\), choose \(\lambda_{n}\in F\cap\mathbb{D}(z,\frac{1}{n})\); then
\[\frac{|g_{j}\mu|(\mathbb{D}(z,\delta))}{\delta}\leq\frac{\delta+\frac{1}{n}}{ \delta}\mathcal{M}_{g_{j}\mu}(\lambda_{n})\leq\frac{\delta+\frac{1}{n}}{\delta }M_{j},\]
which implies \(\mathcal{M}_{g_{j}\mu}(z)\leq M_{j}.\) For \(z\in F\), we have, by (6.2),
\[|\tilde{\mathcal{C}}_{\epsilon}(g_{j}\mu)(z)|\leq|\tilde{\mathcal{C}}_{ \epsilon}(g_{j}\mu)(z)-\mathcal{C}_{\epsilon}(g_{j}\mu)(z)|+|\mathcal{C}_{ \epsilon}(g_{j}\mu)(z)|\lesssim M_{j}.\]
Because \(\tilde{\mathcal{C}}_{\epsilon}(g_{j}\mu)(z)\) is continuous on \(\overline{F}\), we have
\[\mathcal{M}_{g_{j}\mu}(z),\ |\mathcal{C}_{\epsilon}(g_{j}\mu)(z)|,\ |\tilde{ \mathcal{C}}_{\epsilon}(g_{j}\mu)(z)|\lesssim M_{j}\ \text{for}\ z\in\overline{F}. \tag{6.8}\]
Using Theorem 2.1 (2), we have
\[\gamma(F\cap\mathcal{E}_{N})\geq \frac{1}{C_{T}}\gamma(\mathbb{D}(\lambda,\delta)\cap\mathcal{E}_{N })-\gamma(\mathbb{D}(\lambda,\delta)\setminus F)\] \[\geq \frac{1}{C_{T}}\gamma(\mathbb{D}(\lambda,\delta)\cap\mathcal{E}_{N })-\frac{1}{2C_{T}}\epsilon_{0}\] \[\geq \frac{1}{2C_{T}}\gamma(\mathbb{D}(\lambda,\delta)\cap\mathcal{E}_ {N}).\]
From Lemma 3.11, we find \(\eta_{N}\in M_{0}^{+}(F\cap\mathcal{E}_{N})\) with 1-linear growth, \(\|\mathcal{C}_{\epsilon}(\eta_{N})\|_{\mathbb{C}}\leq 1\), and \(\gamma(\mathbb{D}(\lambda,\delta)\cap\mathcal{E}_{N})\lesssim\|\eta_{N}\|.\) We may assume that \(\eta_{N}\to\eta\) in \(C(\overline{F})^{*}\) weak-star topology. Clearly, \(\operatorname{spt}(\eta)\subset\overline{F}\), \(\eta\) is of 1-linear growth, \(\lim_{N\to\infty}\|\eta_{N}\|=\|\eta\|,\) and \(\|\mathcal{C}_{\epsilon}(\eta)\|\leq 1\) since \(\mathcal{C}_{\epsilon}(\eta_{N})\) converges to \(\mathcal{C}_{\epsilon}(\eta)\) in \(L^{\infty}(\mathbb{C})\) weak-star topology. Hence,
\[\lim_{N\to\infty}\gamma(\mathbb{D}(\lambda,\delta)\cap\mathcal{E}_{N})\lesssim \|\eta\|. \tag{6.9}\]
Using Lemma 3.11 (3), we conclude that there exists a sequence \(\{\epsilon_{k}\}\) such that \(\mathcal{C}_{\epsilon_{k}}(\eta_{N})\) converges to \(f_{N}\) in \(L^{\infty}(\mu)\) weak-star topology, \(\|f_{N}\|_{L^{\infty}(\mu)}\leq 1,\)
\[\int f_{N}g_{j}d\mu=-\int\mathcal{C}(g_{j}\mu)d\eta_{N}, \tag{6.10}\]
and
\[\int\frac{f_{N}(z)-f_{N}(\lambda)}{z-\lambda}g_{j}(z)d\mu(z)=-\int\mathcal{C} (g_{j}\mu)(z)\frac{d\eta_{N}(z)}{z-\lambda}\text{ for }\lambda\in( \operatorname{spt}\eta_{N})^{c}. \tag{6.11}\]
We may assume that \(f_{N}\) converges to \(f\) in \(L^{\infty}(\mu)\) weak-star topology. By (6.10),
\[\left|\int fg_{j}d\mu\right|=\lim_{N\to\infty}\left|\int f_{N}g_{j}d\mu\right| \leq\lim_{N\to\infty}\int|\mathcal{C}(g_{j}\mu)|d\eta_{N}\leq\lim_{N\to\infty} \frac{\|\eta_{N}\|}{N}=0,\]
which implies \(f\in R^{t,\infty}(K,\mu)\). By passing to a subsequence, we may assume that \(\mathcal{C}(\eta_{N})\) converges to \(H(z)\) in \(L^{\infty}(\mathbb{C})\) weak-star topology. Let \(\varphi\) be a smooth function with compact support. Then \(\mathcal{C}(\varphi\mathfrak{m})\) is continuous and we have
\[\int\varphi Hd\mathfrak{m} =\lim_{N\to\infty}\int\varphi\mathcal{C}\eta_{N}d\mathfrak{m}=- \lim_{N\to\infty}\int\mathcal{C}(\varphi\mathfrak{m})d\eta_{N}\] \[= -\int\mathcal{C}(\varphi\mathfrak{m})d\eta=\int\varphi\mathcal{C }\eta d\mathfrak{m}.\]
Hence, \(H=\mathcal{C}\eta.\) Let \(n<N\) and let \(\phi\) be a bounded Borel function supported in \(\mathbb{C}\setminus\mathcal{E}_{n}.\) Since \(\mathcal{E}_{N}\subset\mathcal{E}_{n},\) by (6.11), we get
\[\left|\int(\mathcal{C}(f_{N}g_{j}\mu)(\lambda)-\mathcal{C}(\eta_ {N})(\lambda)\mathcal{C}(g_{j}\mu)(\lambda))\phi(\lambda)d\mathfrak{m}\right|\] \[= \left|\int\mathcal{C}(\phi\mathfrak{m})(z)\mathcal{C}(g_{j}\mu)( z)d\eta_{N}(z)\right|\] \[\leq \frac{\|\mathcal{C}(\phi\mathfrak{m})\|\|\eta_{N}\|}{N}\to 0, \text{ as }N\to\infty.\]
Clearly, \(\int\mathcal{C}(f_{N}g_{j}\mu)(\lambda)\phi(\lambda)d\mathfrak{m}\to\int\mathcal{C }(fg_{j}\mu)(\lambda)\phi(\lambda)d\mathfrak{m}\) as \(N\to\infty.\) Therefore,
\[\int\mathcal{C}(fg_{j}\mu)(\lambda)\phi(\lambda)d\mathfrak{m}=\int\mathcal{C}( \eta)(\lambda)\mathcal{C}(g_{j}\mu)(\lambda))\phi(\lambda)d\mathfrak{m},\]
which implies
\[\mathcal{C}(fg_{j}\mu)(\lambda)=\mathcal{C}(\eta)(\lambda)\mathcal{C}(g_{j}\mu )(\lambda),\ \mathfrak{m}|_{\mathbb{C}\setminus\mathcal{E}_{n}}-a.a..\]
As \(\mathcal{F}\approx\cap_{n=1}^{\infty}\mathcal{E}_{n},\ \mathfrak{m}-a.a.,\) by (3.10), we infer that
\[\mathcal{C}(fg_{j}\mu)(\lambda)=\mathcal{C}(\eta)(\lambda)\mathcal{C}(g_{j}\mu )(\lambda),\ \mathfrak{m}-a.a.,\]
which implies \(\mathcal{C}(\eta)(\lambda)=\rho(f)(\lambda),\ \mathfrak{m}_{\mathcal{R}}-a.a..\) Thus, by (6.8) and Lemma 6.6, we get \(\eta(\mathcal{R})=0.\) There is an open subset \(O\) such that \(\mathrm{spt}(\eta)\cap\mathcal{R}\subset O\) and \(\eta(O)\leq\frac{1}{4}\|\eta\|\). Using Proposition 2.2 (2), there exists a subset \(A\) such that \(\|\eta\|\leq 2\|\eta|_{A}\|\) and \(N_{2}(\eta|_{A})\lesssim 1.\) Then \(\|\eta\|\leq 4\|\eta|_{A\setminus O}\|\) and \(N_{2}(\eta|_{A\setminus O})\lesssim 1.\) Hence, \(\mathrm{spt}(\eta|_{A\setminus O})\subset\mathrm{spt}(\eta)\cap\mathcal{F}\) and by Proposition 2.2 (3) and Theorem 2.1 (1), we get
\[\|\eta\|\lesssim\gamma(\mathrm{spt}(\eta|_{A\setminus O}))\lesssim\gamma( \mathrm{spt}(\eta)\cap\mathcal{F}).\]
The proof now follows from (6.9).
The proof of the following lemma is the same as that of [20, Lemma 7.2] if [20, Lemma 7.1] is replaced by Lemma 6.7.
**Lemma 6.8**.: _Let \(F\in L^{\infty}(\mathfrak{m}_{\mathcal{R}})\) and \(\|F\|_{L^{\infty}(\mathfrak{m}_{\mathcal{R}})}\leq 1.\) Suppose that for \(\epsilon>0,\) there exists \(A_{\epsilon}\subset\mathcal{R}\) with \(\gamma(A_{\epsilon})<\epsilon\) and there exists \(F_{\epsilon,N}\in R(K_{\epsilon,N})\) (uniform closure of \(\text{Rat}(K_{\epsilon,N})\) in \(C(K_{\epsilon,N})\)), where \(\mathcal{R}_{\epsilon,N}=\mathcal{R}\setminus(A_{\epsilon}\cup\mathcal{E}_{N})\) and \(K_{\epsilon,N}=\overline{\mathcal{R}_{\epsilon,N}},\) such that \(\|F_{\epsilon,N}\|\leq 2\) and_
\[F(z)=F_{\epsilon,N}(z),\ \mathfrak{m}_{\mathcal{R}_{\epsilon,N}}-a.a.. \tag{6.12}\]
_Then for \(\varphi\) a smooth function with support in \(\mathbb{D}(\lambda,\delta)\),_
\[\left|\int F(z)\bar{\partial}\varphi(z)d\mathfrak{m}_{\mathcal{R}}(z)\right| \lesssim\delta\|\bar{\partial}\varphi\|\gamma(\mathbb{D}(\lambda,2\delta) \cap\mathcal{F}).\]
Now we are ready to prove Lemma 6.2 as the following.
Proof.: (Lemma 6.2): Let \(f\in R^{t,\infty}(K,\mu)\) and \(\{r_{n}\}\subset\text{Rat}(K)\) such that \(\|r_{n}-f\|_{L^{t}(\mu)}\to 0\) and \(r_{n}(z)\to f(z),\ \mu-a.a.\) as \(n\to\infty.\) Using Lemma 2.4 (1), we find \(A_{\epsilon}^{1}\) and a subsequence \(\{r_{n,1}\}\) of \(\{r_{n}\}\) such that \(\gamma(A_{\epsilon}^{1})<\frac{\epsilon}{2C_{T}}\) and \(\{\mathcal{C}(r_{n,1}g_{1}\mu)\}\) uniformly converges to \(\mathcal{C}(fg_{1}\mu)\) on \(\mathbb{C}\setminus A_{\epsilon}^{1}\). Then we find \(A_{\epsilon}^{2}\) and a subsequence \(\{r_{n,2}\}\) of \(\{r_{n,1}\}\) such that \(\gamma(A_{\epsilon}^{2})<\frac{\epsilon}{2^{2}C_{T}}\) and \(\{\mathcal{C}(r_{n,2}g_{2}\mu)\}\) uniformly converges to \(\mathcal{C}(fg_{2}\mu)\) on \(\mathbb{C}\setminus A_{\epsilon}^{2}\). Therefore, we have a subsequence \(\{r_{n,n}\}\) such that \(\{\mathcal{C}(r_{n,n}g_{j}\mu)\}\) uniformly converges to \(\mathcal{C}(fg_{j}\mu)\) on \(\mathbb{C}\setminus A_{\epsilon}\) for all \(j\geq 1,\) where \(A_{\epsilon}=\cup_{j}A_{\epsilon}^{j}\) and \(\gamma(A_{\epsilon})<\epsilon\) by Theorem 2.1 (2). From (2.6), we infer that \(\{r_{n,n}\}\) uniformly tends to \(F:=\rho(f)\) on \(\mathcal{R}_{\epsilon,N}:=\mathcal{R}\setminus(A_{\epsilon}\cup\mathcal{E}_{N}).\) Thus, \(\{r_{n,n}\}\) uniformly tends to \(F_{\epsilon,N}\in R(K_{\epsilon,N})\) on \(K_{\epsilon,N}:=\overline{\mathcal{R}_{\epsilon,N}}.\) The proof now follows from Lemma 6.8.
## 7. **Decomposition Theorems for \(R^{t}(K,\mu)\)**
We do not need to assume that \(S_{\mu}\) is pure in this section. In this case, there exists a partition \(\{\Delta_{00},\Delta_{01}\}\) of spt\(\mu\) such that
\[R^{t}(K,\mu)=L^{t}(\mu_{\Delta_{00}})\oplus R^{t}(K,\mu_{\Delta_{01}}) \tag{7.1}\]
and \(S_{\mu_{\Delta_{01}}}\) is pure. If \(g\perp R^{t}(K,\mu),\) then \(g(z)=0,\ \mu_{\Delta_{00}}-a.a..\) Hence, we see that \(\mathcal{F}\) and \(\mathcal{R}\) do not depend on the trivial summand \(L^{t}(\mu_{\Delta_{00}}).\) Therefore, we will not distinguish \(\mathcal{F}\) and \(\mathcal{R}\) between \(R^{t}(K,\mu)\) and \(R^{t}(K,\mu_{\Delta_{01}}).\) We set \(\mathcal{F}=\mathbb{C}\) and \(\mathcal{R}=\emptyset\) if \(\mu_{\Delta_{01}}=0.\) For a Borel subset \(\Delta\) with \(\chi_{\Delta}\in R^{t}(K,\mu),\) let \(\mathcal{F}_{\Delta}\) and \(\mathcal{R}_{\Delta}\) denote the non-removable boundary and removable set for \(R^{t}(K,\mu_{\Delta}),\) respectively.
**Proposition 7.1**.: _If \(S_{\mu}\) on \(R^{t}(K,\mu)\) is pure, then the following properties hold:_
_(1) If \(\Delta\) is a Borel subset and \(\chi_{\Delta}\in R^{t}(K,\mu)\), then \(\rho(\chi_{\Delta})=\chi_{\mathcal{R}_{\Delta}},\ \gamma-a.a.\) and \(R^{t}(K,\mu_{\Delta})=R^{t}(\overline{\mathcal{R}_{\Delta}},\mu_{\Delta}).\)_
_(2) Suppose that for \(i=1,2,\)\(\Delta_{i}\) is a Borel subset and \(\chi_{\Delta_{i}}\in R^{t}(K,\mu).\) Then \(\Delta_{1}\cap\Delta_{2}=\emptyset,\ \mu-a.a.\) if and only if \(\mathcal{R}_{\Delta_{1}}\cap\mathcal{R}_{\Delta_{2}}\approx\emptyset,\ \gamma-a.a.\)._
_(3) If \(\{\Delta_{i}\}_{i=1}^{\infty}\) is a Borel partition of spt\(\mu\) such that \(\chi_{\Delta_{i}}\in R^{t}(K,\mu),\) then_
\[\mathcal{F}\approx\bigcap_{i=1}^{\infty}\mathcal{F}_{\Delta_{i}}\ \text{and}\ \mathcal{R}\approx\bigcup_{i=1}^{\infty}\mathcal{R}_{\Delta_{i}},\ \gamma-a.a..\]
_(4) If \(\Delta\) is a Borel subset and \(\chi_{\Delta}\in R^{t}(K,\mu)\) is a non-trivial characteristic function, then there exists a minimal \(\chi_{\Delta_{0}}\in R^{t}(K,\mu)\) such that \(\Delta_{0}\subset\Delta.\)_
_(5) If \(\mathcal{R}_{0}\subset\mathcal{R}\) is a Borel subset and \(\chi_{\mathcal{R}_{0}}\in H^{\infty}(\mathcal{R}),\) then there exists a Borel subset \(\Delta_{0}\) and \(\mathcal{Q}\subset\mathcal{R}\) with \(\gamma(\mathcal{Q})=0\) such that \(\mathcal{R}_{0}\approx\mathcal{R}_{\Delta_{0}},\ \mathfrak{m}-a.a.\) and \(\mathcal{R}_{\Delta_{0}}\setminus\mathcal{Q}\subset\Delta_{0}\subset \overline{\mathcal{R}_{\Delta_{0}}},\ \mu-a.a..\) In particular, if \(U\) is an open subset and \(U\subset\mathcal{R}_{\Delta_{0}},\ \gamma-a.a.,\) then \(U\subset\Delta_{0},\ \mu-a.a..\)_
Proof.: (1): \(\rho(\chi_{\Delta})=\chi_{\mathcal{R}_{\Delta}},\ \gamma-a.a.\) follows from (2.6). Using Proposition 2.5 (2), we get \(R^{t}(K,\mu_{\Delta})=R^{t}(\overline{\mathcal{R}_{\Delta}},\mu_{\Delta}).\)
(2) is trivial since from (1), we have
\[\rho(\chi_{\Delta_{1}\cap\Delta_{2}})=\rho(\chi_{\Delta_{1}})\rho(\chi_{\Delta_ {2}})=\chi_{\mathcal{R}_{\Delta_{1}}}\chi_{\mathcal{R}_{\Delta_{2}}}=\chi_{ \mathcal{R}_{\Delta_{1}}\cap\mathcal{R}_{\Delta_{2}}}.\]
(3): \(\Lambda_{i}=\{\chi_{\Delta_{i}}g_{j}\}\subset R^{t}(K,\mu_{\Delta_{i}})^{\perp}\) is also a dense subset. It is clear that
\[\mathcal{C}(g_{j}\mu)(z)\approx \sum_{i=0}^{\infty}\mathcal{C}(\chi_{\Delta_{i}}g_{j}\mu)(z),\ \gamma-a.a.;\] \[v^{+}(g_{j}\mu,\Gamma_{n},\beta_{n})(z)\approx \sum_{i=0}^{\infty}v^{+}(\chi_{\Delta_{i}}g_{j}\mu,\Gamma_{n}, \beta_{n})(z),\ \gamma-a.a.;\] \[v^{-}(g_{j}\mu,\Gamma_{n},\beta_{n})(z)\approx \sum_{i=0}^{\infty}v^{-}(\chi_{\Delta_{i}}g_{j}\mu,\Gamma_{n}, \beta_{n})(z),\ \gamma-a.a.;\] \[\mathcal{ZD}(g_{j}\mu)\approx \bigcap_{i=0}^{\infty}\mathcal{ZD}(\chi_{\Delta_{i}}g_{j}\mu),\ \gamma-a.a.;\] \[\mathcal{ND}(g_{j}\mu)\approx \bigcup_{i=0}^{\infty}\mathcal{ND}(\chi_{\Delta_{i}}g_{j}\mu),\ \gamma-a.a....\]
With the above equations, it is straightforward to prove (3).
(4): Clearly, \(\mathfrak{m}(\mathcal{R}_{\Delta})>0\). There exists \(g\perp R^{t}(K,\mu_{\Delta})\) and \(\lambda_{0}\in\mathcal{R}_{\Delta}\) such that
\[\int\frac{1}{|z-\lambda_{0}|}|g(z)|d\mu<\infty\]
and \(\mathcal{C}(g\mu)(\lambda_{0})\neq 0\). Define
\[e_{\lambda_{0}}(f):=\frac{\mathcal{C}(fg\mu)(\lambda_{0})}{\mathcal{C}(g\mu)( \lambda_{0})}=\rho(f)(\lambda_{0}),\ f\in R^{t,\infty}(K,\mu_{\Delta}).\]
It is easy to verify that \(e_{\lambda_{0}}\) is a weak-star continuous multiplicative linear functional on \(R^{t,\infty}(K,\mu_{\Delta})\) such that \(e_{\lambda_{0}}(f)=f(\lambda_{0})\) for each \(f\in\mathrm{Rat}(K)\).
Suppose that \(R^{t,\infty}(K,\mu_{\Delta})\) does not contain a non-trivial minimal characteristic function. Set
\[\mathcal{B}=\{B\subset\Delta:\ \chi_{B}\in R^{t,\infty}(K,\mu_{\Delta}),\ e_{ \lambda_{0}}(\chi_{B})=1\}.\]
Then \(\chi_{\Delta}\in\mathcal{B}\neq\emptyset\). For \(B_{1},B_{2}\in\mathcal{B}\), \(e_{\lambda_{0}}(\chi_{B_{1}\cap B_{2}})=e_{\lambda_{0}}(\chi_{B_{1}})e_{\lambda_{0}}(\chi_{B_{2}})=1\), so \(B_{1}\cap B_{2}\in\mathcal{B}\). We find \(\Delta\supset B_{n}\supset B_{n+1}\) such that
\[\mu(B_{n})\to b=\inf_{B\in\mathcal{B}}\mu(B).\]
Clearly \(B=\cap B_{n}\in\mathcal{B}\) and \(b=\mu(B)>0\). From the assumption, we get that \(\chi_{B}\) is not a minimal characteristic function. Hence, there exists \(B_{0}\subset B\) with \(\mu(B_{0})>0\) and \(\mu(B\setminus B_{0})>0\) such that \(\chi_{B_{0}},\ \chi_{B\setminus B_{0}}\in R^{t,\infty}(K,\mu_{\Delta})\). Since
\[1=e_{\lambda_{0}}(\chi_{B})=e_{\lambda_{0}}(\chi_{B_{0}})+e_{\lambda_{0}}(\chi_ {B\setminus B_{0}})\]
and
\[e_{\lambda_{0}}(\chi_{B_{0}})e_{\lambda_{0}}(\chi_{B\setminus B_{0}})=e_{ \lambda_{0}}(\chi_{B_{0}}\chi_{B\setminus B_{0}})=0,\]
we see that \(B\setminus B_{0}\in\mathcal{B}\) or \(B_{0}\in\mathcal{B}\), which contradicts the definition of \(b\). Therefore, there exists a non-trivial minimal characteristic function in \(R^{t,\infty}(K,\mu_{\Delta})\).
(5): Let \(f_{0}=\chi_{\mathcal{R}_{0}}.\) By Theorem 6.1, if \(\tilde{f}_{0}:=\rho^{-1}(f_{0})\), then
\[\tilde{f}_{0}^{2}=\rho^{-1}(f_{0})\rho^{-1}(f_{0})=\rho^{-1}(f_{0}^{2})=\tilde{ f}_{0}.\]
Hence, there exists a Borel subset \(\Delta_{0}\) such that \(\tilde{f}_{0}=\chi_{\Delta_{0}}.\) From (1) and (2.7), we get \(\mathcal{R}_{\Delta_{0}}\setminus\mathcal{Q}\subset\Delta_{0}\subset\overline{ \mathcal{R}_{\Delta_{0}}},\ \mu-a.a..\) If an open subset \(U\subset\mathcal{R}_{\Delta_{0}},\ \gamma-a.a.,\) then \(\mathcal{C}(\tilde{f}_{0}g\mu)(z)=\mathcal{C}(g\mu)(z),\ \mathfrak{m}_{U}-a.a.\) for \(g\perp R^{t}(K,\mu).\) By (2.3), we have \(U\subset\Delta_{0},\ \mu-a.a.\) since \(S_{\mu}\) is pure.
**Theorem 7.2**.: _Let \(K\) be a compact subset, \(1\leq t<\infty,\) and \(\mu\in M_{0}^{+}(K).\) Then there exists a Borel partition \(\{\Delta_{i}\}_{i\geq 0}\) of \(\text{spt}(\mu)\) and compact subsets \(\{K_{i}\}_{i=1}^{\infty}\) such that \(\Delta_{i}\subset K_{i}\) for \(i\geq 1\),_
\[R^{t}(K,\mu)=L^{t}(\mu|_{\Delta_{0}})\oplus\bigoplus_{i=1}^{\infty}R^{t}(K_{ i},\mu_{\Delta_{i}})\]
_and the following statements are true:_
_(1) If \(i\geq 1\), then \(R^{t}(K_{i},\mu_{\Delta_{i}})\) contains no non-trivial characteristic functions._
_(2) If \(i\geq 1\), then \(K_{i}=\overline{\mathcal{R}_{\Delta_{i}}}\) and \(R^{t}(K,\mu_{\Delta_{i}})=R^{t}(K_{i},\mu_{\Delta_{i}}).\)_
_(3) If \(i\geq 1\), then the map \(\rho_{i}\) is an isometric isomorphism and a weak\({}^{*}\) homeomorphism from \(R^{t,\infty}(K_{i},\mu_{\Delta_{i}})\) onto \(H^{\infty}(\mathcal{R}_{\Delta_{i}})\)._
Proof.: Using [8, Theorem 1.6 on page 279], we find disjoint Borel subsets \(\{\Delta_{i}\}_{i\geq 1}\) such that \(\chi_{\Delta_{i}}\in R^{t,\infty}(K,\mu)\) is a minimal characteristic function. Set \(\Delta_{0}=K\setminus\cup_{i=1}^{\infty}\Delta_{i}.\) Then
\[R^{t}(K,\mu)=R^{t}(K,\mu_{\Delta_{0}})\oplus\bigoplus_{i=1}^{\infty}R^{t}(K, \mu_{\Delta_{i}})\]
and \(R^{t}(K,\mu_{\Delta_{0}})\) has no minimal characteristic functions. Applying Proposition 7.1 (4), we conclude that \(\mathcal{R}_{\Delta_{0}}=\emptyset,\) which implies \(R^{t}(K,\mu_{\Delta_{0}})=L^{t}(\mu_{\Delta_{0}}).\)
(2) follows from Proposition 7.1 (1). (1) is trivial. Theorem 6.1 implies (3).
A point \(z_{0}\in K\) is called a _bounded point evaluation_ for \(R^{t}(K,\mu)\) if \(r\mapsto r(z_{0})\) defines a bounded linear functional for functions in \(\text{Rat}(K)\) with respect to the \(L^{t}(\mu)\) norm. The collection of all such points is denoted \(\text{bpe}(R^{t}(K,\mu)).\) If \(z_{0}\) is in the interior of \(\text{bpe}(R^{t}(K,\mu))\) and there exist positive constants \(\delta\) and \(M\) such that \(|r(z)|\leq M\|r\|_{L^{t}(\mu)},\) whenever \(z\in\mathbb{D}(z_{0},\delta)\) and \(r\in\text{Rat}(K),\) then we say that \(z_{0}\) is an _analytic bounded point evaluation_ for \(R^{t}(K,\mu).\) The collection of all such points is denoted \(\text{abpe}(R^{t}(K,\mu)).\)
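As a standard illustration of these definitions (included here only for orientation): assume \(K=\overline{\mathbb{D}}\) and \(\mu=\mathfrak{m}|_{\overline{\mathbb{D}}}\). For \(\lambda_{0}\in\mathbb{D}\), choose \(\delta\) with \(\mathbb{D}(\lambda_{0},2\delta)\subset\mathbb{D}\); since \(|r|\) is subharmonic for \(r\in\text{Rat}(K)\), for \(\lambda\in\mathbb{D}(\lambda_{0},\delta)\),
\[|r(\lambda)|\leq\frac{1}{\pi\delta^{2}}\int_{\mathbb{D}(\lambda,\delta)}|r|\,d\mathfrak{m}\leq(\pi\delta^{2})^{-\frac{1}{t}}\|r\|_{L^{t}(\mu)},\]
so \(\lambda_{0}\in\text{abpe}(R^{t}(K,\mu))\). On the other hand, no \(\lambda_{0}\in\partial\mathbb{D}\) is an analytic bounded point evaluation, since every disk \(\mathbb{D}(\lambda_{0},\delta)\) contains points of \(\mathbb{C}\setminus K\) where functions in \(\text{Rat}(K)\) may have poles. Hence, \(\text{abpe}(R^{t}(K,\mu))=\mathbb{D}\) in this case.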
In 1991, J. Thomson [16] obtained a celebrated decomposition theorem for \(P^{t}(\mu),\) the closed subspace of \(L^{t}(\mu)\) spanned by the analytic polynomials. J. Conway and N. Elias [9] studied the set of analytic bounded point evaluations for certain \(R^{t}(K,\mu).\) Later, J. Brennan [4] generalized Thomson's theorem to \(R^{t}(K,\mu)\) when the diameters of the components of \(\mathbb{C}\setminus K\) are bounded below. In all the above cases, we will see below that \(\mathcal{R}\) equals the set of analytic bounded point evaluations. However, it may happen that \(R^{t}(K,\mu)\neq L^{t}(\mu),\)\(\text{abpe}(R^{t}(K,\mu))=\emptyset,\) and \(\mathcal{R}\neq\emptyset.\) Examples of this phenomenon can be constructed, where \(K\) is a Swiss cheese set (with empty interior, see [3] and [12]).
**Lemma 7.3**.: _Let \(R^{t}(K,\mu)\) be decomposed as in (7.1). Then \(\text{abpe}(R^{t}(K,\mu_{\Delta_{01}}))=\text{abpe}(R^{t}(K,\mu)).\)_
Proof.: Let \(\mathbb{D}(\lambda_{0},\delta)\subset\text{abpe}(R^{t}(K,\mu))\) such that for \(\lambda\in\mathbb{D}(\lambda_{0},\delta)\) and \(r\in\text{Rat}(K),\)\(|r(\lambda)|\leq M\|r\|_{L^{t}(\mu)}\) for some \(M>0\) and \(r(\lambda)=(r,k_{\lambda}),\) where \(k_{\lambda}\in L^{s}(\mu)\) and \(\|k_{\lambda}\|_{L^{s}(\mu)}\leq M.\) Because \((z-\lambda)\bar{k}_{\lambda}\perp R^{t}(K,\mu),\) we get \(k_{\lambda}(z)=0,\ \mu_{\Delta_{00}}-a.a.\) if \(\mu(\{\lambda\})=0.\) Let \(\{\lambda_{n}\}\) be the set of atoms for \(\mu.\) Then \(|r(\lambda)|\leq M\|r\|_{L^{t}(\mu_{\Delta_{01}})}\) for \(\lambda\in\mathbb{D}(\lambda_{0},\delta)\setminus\{\lambda_{n}\}.\) Now for \(|\lambda-\lambda_{0}|<\frac{\delta}{2},\) we have
\[|r(\lambda)|\lesssim\frac{1}{\pi\delta^{2}}\int_{\mathbb{D}(\lambda_{0}, \delta)\setminus\{\lambda_{n}\}}|r(z)|d\mathfrak{m}\lesssim M\|r\|_{L^{t}( \mu_{\Delta_{01}})}.\]
Thus, \(\lambda_{0}\in\text{abpe}(R^{t}(K,\mu_{\Delta_{01}})).\) The lemma is proved.
From Lemma 7.3, we see that \(\text{abpe}(R^{t}(K,\mu))\) does not depend on the trivial summand \(L^{t}(\mu_{\Delta_{00}}).\)
We let \(\partial_{e}K\) (the exterior boundary of \(K\)) denote the union of the boundaries of all the components of \(\mathbb{C}\setminus K.\) Define
\[\partial_{1}K=\left\{\lambda\in K:\ \overline{\lim_{\delta\to 0}}\frac{ \gamma(\mathbb{D}(\lambda,\delta)\setminus K)}{\delta}>0\right\}. \tag{7.2}\]
Obviously, \(\partial_{e}K\subset\partial_{1}K\subset\partial K.\) If the diameters of the components of \(\mathbb{C}\setminus K\) are bounded below, then there exist \(\epsilon_{0}>0\) and \(\delta_{0}>0\) such that for each \(\lambda\in\partial K,\)
\[\gamma(\mathbb{D}(\lambda,\delta)\setminus K)\geq\epsilon_{0}\delta\ \text{for}\ \delta<\delta_{0}. \tag{7.3}\]
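This follows from the standard estimate \(\gamma(E)\geq\frac{1}{4}\text{diam}(E)\) for a continuum \(E\): if \(d_{0}>0\) is a lower bound for the diameters of the components of \(\mathbb{C}\setminus K\) and \(\delta<\frac{d_{0}}{2}\), then for \(\lambda\in\partial K\) we may pick \(z\in\mathbb{D}(\lambda,\frac{\delta}{2})\setminus K\); the component of \(\mathbb{C}\setminus K\) containing \(z\) has diameter at least \(d_{0}>2\delta\), so the component of its intersection with \(\mathbb{D}(\lambda,\delta)\) containing \(z\) reaches \(\partial\mathbb{D}(\lambda,\delta)\) and hence \(\mathbb{D}(\lambda,\delta)\setminus K\) contains continua of diameter arbitrarily close to \(\frac{\delta}{2}\). Thus, (7.3) holds with any \(\epsilon_{0}<\frac{1}{8}\) and \(\delta_{0}=\frac{d_{0}}{2}\).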
Clearly, if \(K\) satisfies (7.3), then \(\partial K=\partial_{1}K.\) Conversely, it is straightforward to construct a compact subset \(K\) such that \(\partial K=\partial_{1}K\) and \(K\) does not satisfy (7.3).
**Proposition 7.4**.: \(\partial_{1}K\subset\mathcal{F},\ \gamma-a.a..\)_. Consequently, if \(K\) satisfies (7.3), then \(\partial K\subset\mathcal{F},\ \gamma-a.a..\)_
Proof.: The proposition follows from Theorem 3.13 and the fact that for \(\lambda\in\partial_{1}K,\)
\[\mathbb{D}(\lambda,\delta)\setminus K\subset\mathbb{D}(\lambda,\delta)\cap \mathcal{E}_{N}.\]
The following Lemma is from Lemma B in [2].
**Lemma 7.5**.: _There are absolute constants \(\epsilon_{1},C_{1}>0\) with the following property. If \(R>0\) and \(E\subset\overline{\mathbb{D}(0,R)}\) with \(\gamma(E)<R\epsilon_{1}\), then_
\[|p(\lambda)|\leq\frac{C_{1}}{\pi R^{2}}\int_{\overline{\mathbb{D}(0,R)} \setminus E}|p|\,d\mathfrak{m}\]
_for all \(\lambda\) in \(\mathbb{D}(0,\frac{R}{2})\) and all analytic polynomials \(p\)._
The theorem below provides an important relation between \(\text{abpe}(R^{t}(K,\mu))\) and \(\mathcal{R}\).
**Theorem 7.6**.: _The following property holds:_
\[\text{abpe}(R^{t}(K,\mu))\approx\text{int}(K)\cap\mathcal{R},\ \gamma-a.a.. \tag{7.4}\]
_More precisely, the following statements are true:_
_(1) If \(\lambda_{0}\in\text{int}(K)\) and there exists \(N\geq 1\) such that_
\[\lim_{\delta\to 0}\frac{\gamma(\mathcal{E}_{N}\cap\mathbb{D}(\lambda_{0}, \delta))}{\delta}=0, \tag{7.5}\]
_then \(\lambda_{0}\in\text{abpe}(R^{t}(K,\mu)).\)_
_(2)_
\[\text{abpe}(R^{t}(K,\mu))\subset\text{int}(K)\cap\mathcal{R},\ \gamma-a.a..\]
Proof.: From Lemma 7.3, we may assume that \(S_{\mu}\) is pure.
(1): If \(\lambda_{0}\in\text{int}(K)\) satisfies (7.5), then we choose \(\delta>0\) small enough such that \(\mathbb{D}(\lambda_{0},\delta)\subset\text{int}(K)\) and \(\gamma(E:=\mathcal{E}_{N}\cap\mathbb{D}(\lambda_{0},\delta))\leq\epsilon_{1}\delta\), where \(\epsilon_{1}\) is from Lemma 7.5. Hence, using Lemma 7.5, we conclude
\[|r(\lambda)|\lesssim \frac{1}{\pi\delta^{2}}\int_{\mathbb{D}(\lambda_{0},\delta) \setminus E}|r(z)|d\mathfrak{m}(z)\] \[\lesssim \frac{N}{\pi\delta^{2}}\int_{\mathbb{D}(\lambda_{0},\delta)}|r(z )|\max_{1\leq j\leq N}|\mathcal{C}(g_{j}\mu)(z)|d\mathfrak{m}(z)\] \[\lesssim \frac{N}{\pi\delta^{2}}\int_{\mathbb{D}(\lambda_{0},\delta)}\max _{1\leq j\leq N}|\mathcal{C}(rg_{j}\mu)(z)|d\mathfrak{m}(z)\] \[\lesssim \frac{N}{\pi\delta^{2}}\sum_{j=1}^{N}\int_{\mathbb{D}(\lambda_{0},\delta)}|\mathcal{C}(rg_{j}\mu)(z)|d\mathfrak{m}(z)\] \[\lesssim \frac{N}{\pi\delta^{2}}\sum_{j=1}^{N}\int\int_{\mathbb{D}( \lambda_{0},\delta)}\left|\frac{1}{z-w}\right|d\mathfrak{m}(z)|r(w)||g_{j}(w) |d\mu(w)\] \[\lesssim \frac{N}{\delta}\sum_{j=1}^{N}\|g_{j}\|_{L^{s}(\mu)}\|r\|_{L^{t} (\mu)}\]
for all \(\lambda\) in \(\mathbb{D}(\lambda_{0},\frac{\delta}{2})\) and all \(r\in\text{Rat}(K)\). This implies that \(\lambda_{0}\in\text{abpe}(R^{t}(K,\mu))\).
(2): Let \(E\subset G\cap\mathcal{F}\) be a compact subset with \(\gamma(E)>0\), where \(G\) is a connected component of \(abpe(R^{t}(K,\mu))\). By Lemma 5.7, there exists \(f\in R^{t,\infty}(K,\mu)\) that is bounded and analytic on \(E^{c}\) such that
\[\|f\|_{L^{\infty}(\mu)}\lesssim 1,\ f(\infty)=0,\ f^{\prime}(\infty)=\gamma(E),\]
and
\[\frac{f(z)-f(\lambda)}{z-\lambda}\in R^{t,\infty}(K,\mu),\ \lambda\in E^{c}.\]
Let \(r_{n}\in\text{Rat}(K)\) such that \(\|r_{n}-f\|_{L^{t}(\mu)}\to 0\). Hence, \(r_{n}\) uniformly tends to an analytic function \(f_{0}\) on compact subsets of \(G\) and \(\frac{f(z)-f_{0}(\lambda)}{z-\lambda}\in R^{t}(K,\mu)\) for \(\lambda\in G\). Therefore, \(f(z)=f_{0}(z)\) for \(z\in G\setminus E.\) Thus, the function \(f_{0}(z)\) can be analytically extended to \(\mathbb{C}_{\infty}\) and \(f_{0}(\infty)=0\). So \(f_{0}=0.\) This contradicts \(f^{\prime}(\infty)\neq 0.\)
(7.4) now follows from Theorem 3.13.
From Proposition 7.4 and Theorem 7.6, under the assumptions of [16], [9], or [4], we conclude that \(\mathcal{R}\approx\text{abpe}(R^{t}(K,\mu)),\ \gamma-a.a..\)
**Theorem 7.7**.: _Let \(K\subset\mathbb{C}\) be a compact subset, \(1\leq t<\infty,\) and \(\mu\in M_{0}^{+}(K)\). Let \(\text{abpe}(R^{t}(K,\mu))=\cup_{n=1}^{\infty}U_{n}\), where \(U_{n}\) is a connected component. Then the following statements are equivalent._

_(1) \(\partial U_{i}\cap\partial K\subset\mathcal{F},\ \gamma-a.a.\) for all \(i\geq 1.\)_
_(2) There is a Borel partition \(\{\Delta_{i}\}_{i=0}^{\infty}\) of spt\((\mu)\) and compact subsets \(\{K_{i}\}_{i=0}^{\infty}\) such that \(\Delta_{i}\subset K_{i}\) for \(i\geq 0\),_
\[R^{t}(K,\mu)=R^{t}(K_{0},\mu_{\Delta_{0}})\oplus\bigoplus_{i=1}^{\infty}R^{t}(K _{i},\mu_{\Delta_{i}}),\]
_and the following properties hold:_
_(a) \(K_{0}\) is the spectrum of \(S_{\mu_{\Delta_{0}}}\) and \(\text{abpe}(R^{t}(K_{0},\mu_{\Delta_{0}}))=\emptyset\)._
_(b) If \(i\geq 1\), then \(S_{\mu_{\Delta_{i}}}\) on \(R^{t}(K_{i},\mu_{\Delta_{i}})\) is irreducible, that is, \(R^{t}(K_{i},\mu_{\Delta_{i}})\) contains no non-trivial characteristic functions._
_(c) If \(i\geq 1,\) then \(U_{i}=\text{abpe}(R^{t}(K_{i},\mu_{\Delta_{i}}))\) and \(K_{i}=\overline{U_{i}}.\)_
_(d) If \(i\geq 1\), then the evaluation map \(\rho_{i}:f\to f|_{U_{i}}\) is an isometric isomorphism and a weak-star homeomorphism from \(R^{t,\infty}(K_{i},\mu_{\Delta_{i}})\) onto \(H^{\infty}(U_{i})\)._
Proof.: (Theorem 7.7 (1)\(\Rightarrow\)(2)): From (7.1) and Lemma 7.3, we may assume that \(S_{\mu}\) is pure. From (7.4), we see that \(\partial U_{i}\cap\text{int}(K)\subset\mathcal{F},\ \gamma-a.a..\) Hence, the assumption (1) implies \(\partial U_{i}\subset\mathcal{F},\ \gamma-a.a..\) Thus, \(\chi_{U_{i}}\in H^{\infty}(\mathcal{R}).\) Using Proposition 7.1 (5), we infer that there exists a Borel subset \(\Delta_{i}\) such that \(\chi_{\Delta_{i}}\in R^{t}(K,\mu),\)\(U_{i}\approx\mathcal{R}_{\Delta_{i}},\ \mathfrak{m}-a.a.,\) and \(U_{i}\subset\Delta_{i}\subset\overline{U}_{i},\ \mu-a.a..\) From Proposition 7.1 (1)\(\&\)(2), we see that
\[\rho(\chi_{\Delta_{i}}\chi_{\Delta_{m}})=\chi_{\mathcal{R}_{\Delta_{i}}}\chi_ {\mathcal{R}_{\Delta_{m}}}=0\ \text{for}\ i\neq m,\]
which implies \(\Delta_{i}\cap\Delta_{m}=\emptyset,\ \mu-a.a..\) Set \(\Delta_{0}=K\setminus\cup_{i=1}^{\infty}\Delta_{i}.\) Then \(\{\Delta_{i}\}_{i\geq 0}\) is a Borel partition of spt\(\mu\) and
\[R^{t}(K,\mu)=R^{t}(K,\mu_{\Delta_{0}})\oplus\bigoplus_{i=1}^{\infty}R^{t}(K, \mu_{\Delta_{i}}).\]
(c): Set \(K_{i}=\overline{U_{i}}.\) Then \(K_{i}=\overline{\mathcal{R}_{\Delta_{i}}}\) and by Proposition 7.1 (1), \(R^{t}(K,\mu_{\Delta_{i}})=R^{t}(K_{i},\mu_{\Delta_{i}}).\) Clearly, \(\partial U_{i}\subset\mathcal{F}\subset\mathcal{F}_{\Delta_{i}},\ \gamma-a.a.\), so \(\mathcal{R}_{\Delta_{i}}\subset U_{i}\subset\text{int}(K_{i}),\ \gamma-a.a..\) Therefore, by Theorem 7.6, we get \(\mathcal{R}_{\Delta_{i}}\approx\text{abpe}(R^{t}(K_{i},\mu_{\Delta_{i}})),\ \gamma-a.a..\) Hence, \(U_{i}=\text{abpe}(R^{t}(K_{i},\mu_{\Delta_{i}})).\) This proves (c).
(d) follows from Theorem 6.1.
(b) follows from (d) since \(U_{i}\) is connected and \(H^{\infty}(U_{i})\) contains no non-trivial characteristic functions.
(a) follows from Proposition 7.1 (1)\(\&\)(2) and Theorem 7.6.
To prove Theorem 7.7 (2)\(\Rightarrow\)(1), we need the following lemma.
**Lemma 7.8**.: _Let \(U\) be a bounded open connected subset satisfying \(K\subset\overline{U}.\) Suppose that \(S_{\mu}\) on \(R^{t}(K,\mu)\) is pure and \(\mathcal{I}\) is an algebraic homomorphism from \(H^{\infty}(U)\) to \(R^{t,\infty}(K,\mu)\) that sends \(1\) to \(1\) and \(z\) to \(z.\) Then \(\partial U\subset\mathcal{F}.\)\(\gamma-a.a..\)_
Proof.: Let \(W\supset U\) be an open subset. For \(f\in H^{\infty}(W)\) and \(\lambda\in W,\) we have \(f_{\lambda}(z):=\frac{f(z)-f(\lambda)}{z-\lambda}\in H^{\infty}(W).\) Hence,
\[\mathcal{I}(f)(z)-f(\lambda)=(z-\lambda)\mathcal{I}(f_{\lambda})(z),\]
which implies \(\frac{\mathcal{I}(f)(z)-f(\lambda)}{z-\lambda}\in R^{t,\infty}(K,\mu).\) Thus, by Corollary 2.3, we get
\[\mathcal{C}(\mathcal{I}(f)g\mu)(\lambda)=f(\lambda)\mathcal{C}(g\mu)(\lambda), \ \gamma|_{W}-a.a.,\ \text{for}\ g\perp R^{t}(K,\mu). \tag{7.6}\]
Therefore,
\[\rho(\mathcal{I}(f))(\lambda)=f(\lambda),\ \gamma|_{W\cap\mathcal{R}}-a.a.. \tag{7.7}\]
Claim: If \(\Gamma\) is a (rotated) Lipschitz graph, then \(\gamma(\partial U\cap\Gamma\cap\mathcal{R})=0.\)
Without loss of generality, we assume the rotation angle of \(\Gamma\) is zero. Suppose there exists a compact subset \(E\subset\partial U\cap\Gamma\cap\mathcal{R}\) with \(\gamma(E)>0.\) Let \(f\) be an analytic function on \(W:=\mathbb{C}_{\infty}\setminus E\) with \(\|f\|=1,\)\(f(\infty)=0,\) and \(f^{\prime}(\infty)\geq\frac{1}{2}\gamma(E).\) From [19, Proposition 6.5], there exists a Borel function \(w(z)\) on \(E\) with \(0\leq|w(z)|\leq 1\) such that \(f(z)=\mathcal{C}(w\mathcal{H}^{1})(z)\) for \(z\in W.\) For \(\lambda\in E\), by Lemma 6.5, we see that \(\rho(\mathcal{I}(f))\) is \(\gamma\)-continuous at \(\lambda.\) Using (7.7) and Theorem 3.5, we infer that
\[v^{+}(w\mathcal{H}^{1},\Gamma)(z)=v^{-}(w\mathcal{H}^{1},\Gamma)(z)=\rho( \mathcal{I}(f))(z),\ \gamma|_{E}-a.a., \tag{7.8}\]
which implies \(w(z)=0,\ \mathcal{H}^{1}|_{E}-a.a..\) This is a contradiction. The claim is proved.
From the above claim, we only need to prove \(\gamma(\partial U\cap\mathcal{R}_{0})=0.\) Suppose that \(\gamma(\partial U\cap\mathcal{R}_{0})>0.\) By Lemma 3.9, we find a compact subset \(E\subset\partial U\cap\mathcal{R}_{0}\) such that \(\gamma(E)>0,\)
\[|\mathcal{C}_{\epsilon}(g_{j}\mu)(\lambda)|,\ \mathcal{M}_{g_{j}\mu}( \lambda)\leq M_{j}<\infty,\ \lambda\in E\ \text{for}\ j\geq 1.\]
From Proposition 2.2, there exists \(\eta\in M_{0}^{+}(E)\) such that \(\eta\) is of \(1\)-linear growth, \(\|\mathcal{C}_{\epsilon}\eta\|\leq 1,\) and \(\gamma(E)\lesssim\|\eta\|.\) Using Lemma 3.1 and the above claim, we infer that \(\gamma(\mathcal{ND}(\eta))=0.\)
Now applying Lemma 6.3, for given \(j\geq 1,\) then there are two functions \(F_{1}\in L^{\infty}(\mu)\) and \(F_{2}\in L^{\infty}(\eta)\) with
\[F_{1}(z)=\mathcal{C}(\eta)(z),\ |g_{j}|\mu_{\mathbb{C}\setminus\mathcal{ X}_{\eta}}-a.a., \tag{7.9}\]
where \(\mathcal{X}_{\eta}\) is defined before Lemma 6.3, and
\[F_{2}(z)=\mathcal{C}(g_{j}\mu)(z),\ \eta-a.a. \tag{7.10}\]
such that in the sense of distribution,
\[\bar{\partial}(\mathcal{C}(\eta)\mathcal{C}(g_{j}\mu))=-\pi(F_{1}g_{j}\mu+F_{ 2}\eta). \tag{7.11}\]
From (7.6), we have
\[\mathcal{C}\eta(z)\mathcal{C}(g_{j}\mu)(z)=\mathcal{C}(\mathcal{I}(\mathcal{C }\eta)g_{j}\mu)(z),\ \gamma|_{\mathbb{C}\setminus E}-a.a.,\]
which implies \(U\subset D:=\{\mathcal{C}\eta(z)\mathcal{C}(g_{j}\mu)(z)=\mathcal{C}( \mathcal{I}(\mathcal{C}\eta)g_{j}\mu)(z)\}\ \gamma-a.a..\) By Lemma 3.3, \(\mathcal{C}\eta\), \(\mathcal{C}(g_{j}\mu),\) and \(\mathcal{C}(\mathcal{I}(\mathcal{C}\eta)g_{j}\mu)\) are \(\gamma\)-continuous \(\gamma|_{\mathcal{R}_{0}}-a.a..\) So \(\mathcal{C}\eta\mathcal{C}(g_{j}\mu)\) is \(\gamma\)-continuous \(\gamma|_{\mathcal{R}_{0}}-a.a..\) For \(\lambda\in\partial U\cap\mathcal{R}_{0},\) we have \(\varlimsup\limits_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda,\delta)\cap U)}{ \delta}>0\) since \(U\) is connected. Set \(A_{\epsilon}=\{|\mathcal{C}\eta(z)\mathcal{C}(g_{j}\mu)(z)-\mathcal{C}\eta( \lambda)\mathcal{C}(g_{j}\mu)(\lambda)|\leq\epsilon\}\) and \(B_{\epsilon}=\{|\mathcal{C}(\mathcal{I}(\mathcal{C}\eta)g_{j}\mu)(z)-\mathcal{C }(\mathcal{I}(\mathcal{C}\eta)g_{j}\mu)(\lambda)|\leq\epsilon\}.\) Since
\[\mathbb{D}(\lambda,\delta)\cap U\subset(\mathbb{D}(\lambda,\delta)\cap U\cap A _{\epsilon}\cap B_{\epsilon}\cap D)\cup A_{\epsilon}^{c}\cup B_{\epsilon}^{ c},\gamma-a.a.,\]
by Theorem 2.1 (2), we get
\[\varlimsup\limits_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda,\delta)\cap U \cap A_{\epsilon}\cap B_{\epsilon}\cap D)}{\delta}\geq\frac{1}{C_{T}} \varlimsup\limits_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda,\delta)\cap U)}{ \delta}>0.\]
Thus, there exists \(\{\lambda_{n}\}\subset U\) with \(\lambda_{n}\to\lambda\) such that \(\mathcal{C}\eta(\lambda_{n})\mathcal{C}(g_{j}\mu)(\lambda_{n})\to\mathcal{C}\eta( \lambda)\mathcal{C}(g_{j}\mu)(\lambda),\)\(\mathcal{C}(\mathcal{I}(\mathcal{C}\eta)g_{j}\mu)(\lambda_{n})\to\mathcal{C}(\mathcal{I}( \mathcal{C}\eta)g_{j}\mu)(\lambda),\) and \(\mathcal{C}\eta(\lambda_{n})\mathcal{C}(g_{j}\mu)(\lambda_{n})=\mathcal{C}( \mathcal{I}(\mathcal{C}\eta)g_{j}\mu)(\lambda_{n}).\) Hence,
\[\mathcal{C}\eta(z)\mathcal{C}(g_{j}\mu)(z)=\mathcal{C}(\mathcal{I}(\mathcal{C} \eta)g_{j}\mu)(z),\ \gamma-a.a..\]
Using Lemma 2.6, we see that there exists \(\mathcal{Q}\) with \(\gamma(\mathcal{Q})=0\) such that
\[\mathcal{I}(\mathcal{C}\eta)(z)=\mathcal{C}\eta(z),\ \mu_{\mathcal{R}_{0} \setminus\mathcal{Q}}-a.a.. \tag{7.12}\]
Since a \(\gamma\) zero set is also a \(\mathfrak{m}\) zero set, we have
\[\mathcal{C}\eta(z)\mathcal{C}(g_{j}\mu)(z)=\mathcal{C}(\mathcal{I}(\mathcal{C }\eta)g_{j}\mu)(z),\ \mathfrak{m}-a.a..\]
Using (7.11) and (2.3), we have
\[\mathcal{I}(\mathcal{C}\eta)(z)g_{j}(z)\mu=F_{1}(z)g_{j}(z)\mu+F_{2}(z)\eta.\]
From (7.9) and (7.12), we have
\[F_{1}(z)g_{j}(z)=\mathcal{I}(\mathcal{C}\eta)(z)g_{j}(z)=\mathcal{C}\eta(z)g_{ j}(z),\ \mu_{E\setminus(\mathcal{Q}\cup\mathcal{X}_{\eta})}-a.a..\]
Therefore, together with (7.10), we get
\[F_{2}(z)=\mathcal{C}(g_{j}\mu)(z)=0,\ \eta-a.a.\]
since \(\eta(\mathcal{Q}\cup\mathcal{X}_{\eta})=0\). This contradicts \(\eta(E)>0.\) This completes the proof.
Proof.: (Theorem 7.7 (2)\(\Rightarrow\)(1)) By Theorem 7.7 (2) (d), we see that the map \(\mathcal{I}_{i}=\rho_{i}^{-1}\) is an algebraic homomorphism from \(H^{\infty}(U_{i})\) to \(R^{t,\infty}(K_{i},\mu_{\Delta_{i}})\) for \(i\geq 1\). Using Lemma 7.8, we conclude that
\[\partial U_{i}\subset\mathcal{F}_{\Delta_{i}},\ \gamma-a.a.,\ \text{for}\ i\geq 1.\]
For a given \(i\geq 1\) and \(\lambda\in\partial U_{i},\) since \(U_{i}\) is connected, \(\varlimsup_{\delta\to 0}\frac{\gamma(\mathbb{D}(\lambda,\delta)\cap U_{i})}{\delta}>0.\) Hence, by Proposition 7.4,
\[\partial U_{i}\subset\partial_{1}K_{j}\subset\mathcal{F}_{\Delta_{j}},\ \gamma-a.a.,\ \text{for}\ j\neq i.\]
Thus, by Proposition 7.1 (3), we have
\[\partial U_{i}\subset\bigcap_{j=0}^{\infty}\mathcal{F}_{\Delta_{j}}\approx \mathcal{F},\ \gamma-a.a..\]
The theorem is proved.
**Corollary 7.9**.: _Let \(K\) be a compact set, \(1\leq t<\infty,\) and \(\mu\in M_{0}^{+}(K).\) If \(\partial K\subset\mathcal{F},\ \gamma-a.a.,\) then there is a Borel partition \(\{\Delta_{i}\}_{i=0}^{\infty}\) of spt\((\mu)\) and compact subsets \(\{K_{i}\}_{i=1}^{\infty}\) such that \(\Delta_{i}\subset K_{i}\) for \(i\geq 1\),_
\[R^{t}(K,\mu)=L^{t}(\mu_{\Delta_{0}})\oplus\bigoplus_{i=1}^{\infty}R^{t}(K_{i},\mu_{\Delta_{i}}),\]
_and the following statements are true:_
_(a) If \(i\geq 1\), then \(S_{\mu_{\Delta_{i}}}\) on \(R^{t}(K_{i},\mu_{\Delta_{i}})\) is irreducible._
_(b) If \(i\geq 1\) and \(U_{i}:=\text{abpe}(R^{t}(K_{i},\mu_{\Delta_{i}}))\), then \(U_{i}\) is connected and \(K_{i}=\overline{U_{i}}.\)_
_(c) If \(i\geq 1\), then the evaluation map \(\rho_{i}:f\to f|_{U_{i}}\) is an isometric isomorphism and a weak-star homeomorphism from \(R^{t,\infty}(K_{i},\mu_{\Delta_{i}})\) onto \(H^{\infty}(U_{i})\)._
Proof.: Let \(\text{abpe}(R^{t}(K,\mu))=\cup_{i=1}^{\infty}U_{i},\) where \(U_{i}\) is a connected component. By the assumption, we have \(\partial U_{i}\cap\partial K\subset\mathcal{F},\ \gamma-a.a.\) for all \(i\geq 1.\) Therefore, by Theorem 7.7, we see that the decomposition in Theorem 7.7 holds. We only need to show that
\[R^{t}(K,\mu_{\Delta_{0}})=L^{t}(\mu_{\Delta_{0}}).\]
In fact, from (7.4),
\[\text{int}(K)\setminus\mathcal{F}_{\Delta_{0}}\approx\text{abpe}(R^{t}(K,\mu_{\Delta_{0}}))=\emptyset,\ \gamma-a.a..\]
Hence, \(\mathcal{F}_{\Delta_{0}}=\mathbb{C},\ \gamma-a.a.\) since \(\partial K\subset\mathcal{F}\subset\mathcal{F}_{\Delta_{0}},\ \gamma-a.a..\) The proof follows from (3.10).
As an application of Corollary 7.9 and Proposition 7.4, we have the following corollary which extends the results of [16], [9], and [4].
**Corollary 7.10**.: _Let \(K\) be a compact set such that \(\gamma(\partial K\setminus\partial_{1}K)=0\). Suppose that \(1\leq t<\infty\) and \(\mu\in M_{0}^{+}(K).\) Then there is a Borel partition \(\{\Delta_{i}\}_{i=0}^{\infty}\) of \(\text{spt}(\mu)\) and compact subsets \(\{K_{i}\}_{i=1}^{\infty}\) such that \(\Delta_{i}\subset K_{i}\) for \(i\geq 1\),_
\[R^{t}(K,\mu)=L^{t}(\mu_{\Delta_{0}})\oplus\bigoplus_{i=1}^{\infty}R^{t}(K_{i},\mu_{\Delta_{i}}),\]
_and the following statements are true:_
_(a) If \(i\geq 1\), then \(S_{\mu_{\Delta_{i}}}\) on \(R^{t}(K_{i},\mu_{\Delta_{i}})\) is irreducible._
_(b) If \(i\geq 1\) and \(U_{i}:=\text{abpe}(R^{t}(K_{i},\mu_{\Delta_{i}}))\), then \(U_{i}\) is connected and \(K_{i}=\overline{U_{i}}\)._
_(c) If \(i\geq 1\), then the evaluation map \(\rho_{i}:f\to f|_{U_{i}}\) is an isometric isomorphism and a weak-star homeomorphism from \(R^{t,\infty}(K_{i},\mu_{\Delta_{i}})\) onto \(H^{\infty}(U_{i})\)._
**Acknowledgments.** The authors would like to thank Professor John McCarthy for carefully reading through the manuscript and providing many useful comments.
|
2308.01081 | Magnetization control of the nematicity direction and nodal points in a
superconducting doped topological insulator | We study the effects of magnetization on the properties of the doped
topological insulator with nematic superconductivity. We found that the
direction of the in-plane magnetization fixes the direction of the nematicity
in the system. The chiral state is more favorable than the nematic state for
large values of out-of-plane magnetization. Overall, the critical temperature
of the nematic state is resilient against magnetization. We explore the
spectrum of the system with the pinned direction of the nematic order parameter
$\Delta_{y}$ in detail. Without magnetization, there is a full gap in the
spectrum. At strong enough out-of-plane $m_z$ or orthogonal in-plane $m_x$
magnetization, the gap closes at nodal points that are split by the
magnetization. Flat Majorana surface states connect such split bulk nodal
points. Parallel magnetization $m_y$ lifts nodal points and opens a full gap in
the spectrum. We discuss relevant experiments and propose experimental
verifications of our theory. | D. A. Khokhlov, R. S. Akzyanov, A. V. Kapranov | 2023-08-02T11:10:32Z | http://arxiv.org/abs/2308.01081v1 | Magnetization control of the nematicity direction and nodal points in a superconducting doped topological insulator
###### Abstract
We study the effects of magnetization on the properties of the doped topological insulator with nematic superconductivity. We found that the direction of the in-plane magnetization fixes the direction of the nematicity in the system. The chiral state is more favorable than the nematic state for large values of out-of-plane magnetization. Overall, the critical temperature of the nematic state is resilient against magnetization. We explore the spectrum of the system with the pinned direction of the nematic order parameter \(\Delta_{y}\) in detail. Without magnetization, there is a full gap in the spectrum. At strong enough out-of-plane \(m_{z}\) or orthogonal in-plane \(m_{x}\) magnetization, the gap closes at nodal points that are split by the magnetization. Flat Majorana surface states connect such split bulk nodal points. Parallel magnetization \(m_{y}\) lifts nodal points and opens a full gap in the spectrum. We discuss relevant experiments and propose experimental verifications of our theory.
## I Introduction
Superconductivity in doped topological insulators of the Bi\({}_{2}\)Se\({}_{3}\) family has been observed in various experiments [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22]. To achieve a superconducting state, bismuth selenide is doped with Cu, Sr, or Nb [23]. The finite Knight shift in nuclear magnetic resonance shows that Cooper pairs have spin-triplet pairing in such materials [9]. Superconductivity in doped topological insulators lowers the rotational symmetry of the system from \(C_{3}\) in the normal state to \(C_{2}\) in the superconducting state. This striking feature appears in observations of the second critical field [10; 14], magnetic resonance [12], the vortex core shape [13], specific heat measurements with an applied in-plane magnetic field [11], and quasiparticle interference [24; 25].
Symmetry-based theoretical investigations show that the order parameter from the \(E_{u}\) representation of the \(D_{3d}\) crystalline group satisfies both experimental observations: spin-triplet pairing and rotational symmetry breaking [26; 27; 28]. This order parameter is a two-component vector. The real order parameter \(\mathbf{\eta}=\eta(\cos\alpha;\sin\alpha)\) keeps time-reversal symmetry, transforms as a coordinate vector \((x,y)\), and is called a nematic order parameter. The direction of this vector \(\alpha\) defines the direction of the anisotropy of the system and is often called nematicity. Usually, two directions of the nematicity are considered: \(\alpha=0\), which corresponds to \(\Delta_{x}\), and \(\alpha=\pi/2\), which corresponds to \(\Delta_{y}\). A rich variety of different phenomena has been predicted theoretically for topological insulators with nematic superconductivity, such as Majorana surface states [29; 30; 31], vestigial order [32; 33; 34], unconventional Abrikosov vortices [35; 36], spin and spin-mass vortices [37; 38; 39], chiral Higgs modes [40], anomalous Josephson Hall effect [41], and a partial paramagnetic response to the magnetic field [42; 43].
In topological insulators, the Fermi surface is hexagonally deformed due to the \(C_{3}\) rotational crystal symmetry. This deformation arises due to terms in the Hamiltonian that are cubic in momentum and are called hexagonal warping [44; 45]. Such warping significantly affects the properties of the nematic superconductivity. For example, hexagonal warping stabilizes nematic superconductivity as a ground state [46; 47]. Also, warping can open a full gap in the spectrum and fix the direction of the nematicity of the system along the crystal axes [27; 48].
Singlet superconductivity can be destroyed by magnetization since the Zeeman field pulls apart the electrons with opposite spins in a Cooper pair [49; 50]. In the case of spin-triplet superconductivity, the system can stay in the superconducting state even for magnetization \(m\gg\Delta\), since the electrons in a triplet Cooper pair have the same spin [51]. In the theoretical article Ref. [52], the authors investigate s-wave superconductivity in NbSe\({}_{2}\). Due to strong spin-orbit coupling, the Clogston limit for Cooper pair breaking is high, and a magnetic field can close the gap while superconductivity survives. In such a system, a nodal-point superconductor with Majorana fermions can appear. In Ref. [53], the authors investigate a \(p\)-wave topological superconductor in the presence of external magnetization. Superconductivity survives in such a material even for high values of magnetization. Strong magnetization closes the superconducting gap at nodal points connected by flat Majorana bands.
Usually, the direction of the anisotropy in the superconducting state of the doped topological insulators is fixed by some crystal fields that arise due to a finite strain in the normal state [17], but other possibilities for such pinning remain [54]. However, in the recent experiment [22], it was shown that a strong enough magnetic field can change the system's nematicity direction. This work raises the interesting question of whether the nematicity in doped topological insulators can be controlled
by external magnetization.
In this work, we focus on the effects of strong magnetization with an arbitrary direction in the doped topological insulator with nematic superconductivity. We write down the Gor'kov equations linearized in the order parameter for such a system and calculate the critical temperature. For the in-plane magnetization, we find that the highest critical temperature is realized for the nematic order parameter whose direction is orthogonal to the magnetization. In this case, the critical temperature is independent of the value of the magnetization. If the magnetization is collinear with the nematicity direction, then the critical temperature decreases with the increase of the magnetization. This implies that the direction of the magnetization can fix the nematicity direction. Out-of-plane magnetization decreases the critical temperature of the nematic state. The chiral state becomes favorable for a high enough value of the out-of-plane magnetization. Overall, the critical temperature of the nematic and chiral states remains finite even for large magnetization values. We study the spectrum of the nematic state with the pinned direction \(\alpha=\pi/2\) (which is \(\Delta_{y}\)) in detail. Without hexagonal warping, the nematic superconducting state has nodal points at the Fermi energy. In this case, out-of-plane magnetization \(m_{z}\) transforms the nodal points into Fermi surfaces. With hexagonal warping, the spectrum is fully gapped without magnetization. Finite out-of-plane magnetization opens 12 bulk nodal points that come in pairs. Each pair is split by the magnetization. Orthogonal in-plane magnetization \(m_{x}\) opens 4 bulk nodal points in pairs. For mixed magnetization in the \(Oxz\) plane, we have 4 or 12 nodal points, depending on the values of the projections of the magnetization on the \(Oz\) and \(Ox\) axes. Parallel in-plane magnetization \(m_{y}\) lifts the nodal points and opens a full gap in the spectrum. We calculate a tight-binding spectrum and discover that each split pair of nodal points is connected by flat Majorana surface states. We discuss possible experimental verifications of our work.
## II Model
### Normal phase
We describe bulk electrons in a doped topological insulator of Bi\({}_{2}\)Se\({}_{3}\) family by low-energy \(\mathbf{k}\cdot\mathbf{p}\) two-orbital Hamiltonian [45]:
\[\hat{H}_{0}(\mathbf{k})=-\mu+m\sigma_{z}+v_{z}k_{z}\sigma_{y}+v(k_ {x}s_{y}-k_{y}s_{x})\sigma_{x}+ \tag{1}\] \[\lambda(k_{x}^{3}-3k_{x}k_{y}^{2})s_{z}\sigma_{x},\]
where \(\mu\) is the chemical potential, \(2m\) is the single-electron gap at zero chemical potential, the Fermi velocities \(v\) and \(v_{z}\) describe motion in the \((\Gamma K;\Gamma M)\) plane and along the \(\Gamma Z\) direction, respectively, and \(\lambda\) describes the hexagonal warping. In the general model, there is one more term with the hexagonal warping, \(\lambda_{2}(k_{y}^{3}-3k_{x}^{2}k_{y})\sigma_{y}\), see Ref. [45]. This term has the same spin and orbital structure as the term \(v_{z}k_{z}\sigma_{y}\). Thus, adding the hexagonal warping \(\lambda_{2}\) simply transforms the plane \(k_{z}=0\) to a manifold \(v_{z}k_{z}+\lambda_{2}(k_{y}^{3}-3k_{x}^{2}k_{y})=0\). Such a transformation makes the calculation more complicated but does not bring any new physics for our calculations. Therefore, we put \(\lambda_{2}=0\) in our model. The Pauli matrices \(s_{i}\) act in spin space, while the matrices \(\sigma_{i}\) act in the space of the Bi and Se orbitals \(\mathbf{p}=(P^{1},P^{2})\), where \(i=x,y,z\); we set the Planck constant \(\hbar=1\). The Hamiltonian (1) obeys time-reversal symmetry \(\hat{\mathcal{T}}\hat{H}_{0}(\mathbf{k})\hat{\mathcal{T}}^{-1}=\hat{H}_{0}(- \mathbf{k})\), where \(\hat{\mathcal{T}}=is_{y}\hat{K}\), \(\hat{\mathcal{T}}^{2}=-1\), is the time-reversal operator and \(\hat{K}\) provides complex conjugation. Also, this Hamiltonian has inversion symmetry \(\hat{P}\hat{H}_{0}(\mathbf{k})\hat{P}=\hat{H}_{0}(-\mathbf{k})\), where \(\hat{P}=\sigma_{z}\), \(\hat{P}^{2}=1\), is the inversion operator [45]. Note that the hexagonal warping lowers the \(C_{\infty}\) rotational symmetry down to \(C_{3}\).
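As a quick numerical cross-check of Eq. (1) and its symmetries, the following minimal Python sketch builds the \(4\times 4\) Hamiltonian matrix and verifies the time-reversal and inversion relations. The basis ordering (orbital index outer, spin index inner) is our reading of Eq. (2), and all parameter values are placeholders rather than fitted material parameters.

```python
import numpy as np

# Pauli matrices; the same 2x2 set serves the spin (s) and orbital (sigma) spaces.
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Basis ordering read off Eq. (2): (up,P1), (down,P1), (up,P2), (down,P2),
# i.e. the orbital index is the outer one, so s_a sigma_b -> kron(sigma_b, s_a).
def op(spin, orb):
    return np.kron(orb, spin)

def h0(k, mu=2.0, m=1.0, v=1.0, vz=1.0, lam=0.3):
    """Normal-state Hamiltonian of Eq. (1); parameter values are placeholders."""
    kx, ky, kz = k
    return (-mu * op(s0, s0) + m * op(s0, sz) + vz * kz * op(s0, sy)
            + v * (kx * op(sy, sx) - ky * op(sx, sx))
            + lam * (kx**3 - 3 * kx * ky**2) * op(sz, sx))

k = np.array([0.3, -0.2, 0.1])
UT = op(1j * sy, s0)   # unitary part of T = i s_y K
P = op(s0, sz)         # inversion operator sigma_z
assert np.allclose(UT @ h0(k).conj() @ UT.conj().T, h0(-k))  # T H(k) T^{-1} = H(-k)
assert np.allclose(P @ h0(k) @ P, h0(-k))                    # P H(k) P = H(-k)
```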
### Superconducting phase
We describe superconductivity in Nambu-II basis, where the wave function is
\[\Psi_{\mathbf{k}}=(\phi_{\mathbf{k}}^{t},-i\phi_{\mathbf{k}}^{\dagger}s_{y}) ^{t}, \tag{2}\]
with \(\phi_{\mathbf{k}}=(\phi_{\uparrow,1,\mathbf{k}},\phi_{\downarrow,1,\mathbf{k }},\phi_{\uparrow,2,\mathbf{k}},\phi_{\downarrow,2,\mathbf{k}})^{t}\), symbol \(t\) means transposition and symbol \({}^{\dagger}\) means Hermitian conjugation. Operator \(\phi_{\uparrow(\downarrow),\sigma,\mathbf{k}}^{(\dagger)}\) annihilates (creates) electron with up (down) spin on the orbital \(\sigma=(P^{1},P^{2})\) with momentum \(\mathbf{k}\). Superconducting order parameter from E\({}_{u}\) representation of D\({}_{3d}\) crystalline point group has vector structure with two components \(\mathbf{\eta}=(\eta_{x};\eta_{y})\). It has the following matrix structure [48]:
\[\hat{\Delta}=\eta_{x}\hat{\delta}_{x}+\eta_{y}\hat{\delta}_{y}, \tag{3}\]
where \(\hat{\delta}_{x,y}=s_{x,y}\sigma_{y}\). We can write the order parameter as \(\eta_{x}=\eta\cos(\alpha)e^{i\phi_{1}}\) and \(\eta_{y}=\eta\sin(\alpha)e^{i\phi_{2}}\), where \(\phi=\phi_{1}-\phi_{2}\). The order parameter with \(\phi=0\) is called nematic and can have an arbitrary orientation \(\alpha\). The nematic order parameter has the rotational symmetry \(C_{2}\), breaking the \(C_{3}\) crystalline symmetry. This order parameter preserves time-reversal symmetry. When \(\phi\neq 0\), time-reversal symmetry is broken. In particular, the order parameter \(\eta(\frac{1}{\sqrt{2}};\pm\frac{i}{\sqrt{2}})\) is called chiral [48]. We assume that only electrons in the Debye window \(-\omega_{D}<\epsilon_{\mathbf{k}}<\omega_{D}\) participate in the superconductivity, where \(\epsilon_{\mathbf{k}}\) is the band dispersion of the Hamiltonian (1).
The BdG Hamiltonian in Nambu-II basis is [26]:
\[\hat{H}_{BdG}(\mathbf{k})=\tau_{z}\hat{H}_{0}(\mathbf{k})+\hat{\Delta}\frac{ \tau_{x}+i\tau_{y}}{2}+\hat{\Delta}^{*}\frac{\tau_{x}-i\tau_{y}}{2}. \tag{4}\]
Matrices \(\tau_{i}\) act in electron-hole space. The Hamiltonian (4) obeys electron-hole symmetry
\[\hat{\Xi}^{-1}\hat{H}_{BdG}(\mathbf{k})\hat{\Xi}=-\hat{H}_{BdG}(-\mathbf{k}), \tag{5}\]
where \(\hat{\Xi}=s_{y}\tau_{y}K\) and \(K\) is complex conjugation.
In the absence of the hexagonal warping \(\lambda=0\), the system is degenerate with respect to the nematicity direction \(\alpha\). The superconducting gap has two nodal points at the Fermi energy with coordinates \(\mathbf{k}=\pm(\sqrt{\mu^{2}+\eta^{2}-m^{2}}/v)\left(-\sin\alpha;\cos\alpha;0\right)\).
In the presence of the hexagonal warping, the rotational symmetry of the normal phase becomes \(C_{3}\), and the spectrum is sensitive to the mutual orientation of the warping and the order parameter [25; 28]. In particular, the order parameter \(\Delta_{x}\) has nodal points in the spectrum, while the system with \(\Delta_{y}\) has the highest full gap \(2\eta\sqrt{1-m^{2}/\mu^{2}}\) among the possible orientations of the nematicity.
### Magnetization
We consider the Zeeman splitting due to finite magnetization. It can appear due to the proximity effect from magnetic substrate [55; 56] or due to a finite magnetic field. The magnetization can be written as [45]:
\[\hat{H}_{m}=m_{x}s_{x}+m_{y}s_{y}+m_{z}s_{z}, \tag{6}\]
where \(m_{i}\) is the strength of the magnetization along axis \(i\), where \(i=x,y,z\).
The magnetization breaks time-reversal symmetry and lifts the twofold Kramers degeneracy of \(\hat{H}_{BdG}\). The full Hamiltonian \(\hat{H}_{BdG}(\mathbf{k})+\hat{H}_{m}\) obeys the chiral symmetry \(\hat{\Sigma}^{-1}\left(\hat{H}_{BdG}(\mathbf{k})+\hat{H}_{m}\right)\hat{\Sigma} =\hat{H}_{BdG}(-\mathbf{k})+\hat{H}_{m}\), where the operator \(\hat{\Sigma}=\sigma_{z}\tau_{z}\). Thus, each positive eigenvalue \(E_{\mathbf{k}}\) is accompanied by the partner \(-E_{\mathbf{k}}\) [57].
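The same bookkeeping extends to the \(8\times 8\) BdG matrix. The sketch below (restated in full so that it runs on its own) assembles Eq. (4) with a real nematic order parameter, for which \(\hat{\Delta}\) is Hermitian and the pairing term reduces to \(\hat{\Delta}\tau_{x}\), adds the Zeeman term (6), and checks both the electron-hole symmetry (5) and the \((+E_{\mathbf{k}},-E_{\mathbf{k}})\) pairing of eigenvalues at fixed \(\mathbf{k}\). Parameter values are again placeholders.

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
op = lambda spin, orb: np.kron(orb, spin)     # basis ordering of Eq. (2)
nambu = lambda tau, blk: np.kron(tau, blk)    # tau acts on (particle, hole)

def h0(k, mu=2.0, m=1.0, v=1.0, vz=1.0, lam=0.3):
    kx, ky, kz = k
    return (-mu * op(s0, s0) + m * op(s0, sz) + vz * kz * op(s0, sy)
            + v * (kx * op(sy, sx) - ky * op(sx, sx))
            + lam * (kx**3 - 3 * kx * ky**2) * op(sz, sx))

def h_bdg(k, eta=(0.0, 0.2), mvec=(0.0, 0.0, 0.0)):
    """Eq. (4) for a real (nematic) order parameter, Eq. (3), plus the Zeeman
    term, Eq. (6); for real eta the matrix Delta is Hermitian and the pairing
    enters as Delta * tau_x."""
    delta = eta[0] * op(sx, sy) + eta[1] * op(sy, sy)
    zeeman = sum(mi * op(si, s0) for mi, si in zip(mvec, (sx, sy, sz)))
    return nambu(sz, h0(k)) + nambu(sx, delta) + nambu(s0, zeeman)

k = np.array([0.4, 0.1, -0.2])
mvec = (0.0, 0.0, 0.5)
H = h_bdg(k, mvec=mvec)

UX = nambu(sy, op(sy, s0))   # unitary part of Xi = s_y tau_y K, Eq. (5)
assert np.allclose(UX @ H.conj() @ UX.conj().T, -h_bdg(-k, mvec=mvec))

ev = np.linalg.eigvalsh(H)   # magnetization lifts the Kramers degeneracy,
assert np.allclose(np.sort(ev), np.sort(-ev))  # but (+E, -E) pairs remain
```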
## III Critical temperature in the presence of magnetization
We calculate the critical temperature of the nematic superconductor as a function of magnetization. For simplicity, we present results only for systems without hexagonal warping (\(\lambda=0\)); including the warping leads only to an insignificant enhancement of the critical temperature. See Ref. [47] for the case without magnetization.
The Matsubara Green's function of the normal-state electrons is \(\hat{G}_{0,e}(\omega,\mathbf{k})=\left(\omega-\hat{H}_{0}(\mathbf{k})-\hat{H }_{m}\right)^{-1}\), where the fermionic frequency \(\omega=\pi T(2n+1)\). For holes in the normal state we get \(\hat{G}_{0,h}(-\omega,\mathbf{k})=\left(\omega+\hat{\mathcal{T}}^{-1}(\hat{H}_ {0}(\mathbf{k})+\hat{H}_{m})\hat{\mathcal{T}}\right)^{-1}\). The anomalous Green's function in the linear approximation is \(\hat{F}_{\beta}^{(1)}(\omega,\mathbf{k})=\hat{G}_{0,e}(\omega,\mathbf{k})\hat{ \delta}_{\beta}\hat{G}_{0,h}(-\omega,\mathbf{k})\). We introduce \(f_{\alpha\beta}(\omega,\mathbf{k})=\mathrm{Tr}\left(\hat{\delta}_{\alpha}\hat {F}_{\beta}^{(1)}(\omega,\mathbf{k})\right)\), where the trace is taken over spin and orbital degrees of freedom. In the vicinity of the critical temperature the order parameter is small. Then, the gap equation can be written in the linearized form:
\[\eta_{\alpha}=\frac{gT}{4}\sum_{\omega}\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}} f_{\alpha\beta}(\omega,\mathbf{k})\eta_{\beta}, \tag{7}\]
where \(g\) is the coupling constant, and \(\alpha,\beta\) correspond to \(x,y\). Integration is performed over the first Brillouin zone, and the summation runs over fermionic Matsubara frequencies \(\omega=(2n+1)\pi T\). Thus, we obtain two linear equations on \(\eta_{x},\eta_{y}\). Superconductivity appears when the system has a nontrivial solution. Thus, the system (7) should be degenerate, which gives a condition on the critical temperature \(T_{c}\): the determinant of the system (7) is equal to zero.
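For concreteness, the kernel \(f_{\alpha\beta}(\omega,\mathbf{k})\) entering Eq. (7) can be evaluated by direct matrix inversion. The toy sketch below does this at a single Matsubara frequency and momentum; note that we keep the factor \(i\) explicit in the Green's functions (\(\omega\to i\omega\)), which is a convention choice, and the parameters are placeholders. Eq. (7) then sums this kernel over frequencies and integrates it over the Brillouin zone.

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
op = lambda spin, orb: np.kron(orb, spin)

def h0(k, mu=2.0, m=1.0, v=1.0, vz=1.0, lam=0.0):
    kx, ky, kz = k
    return (-mu * op(s0, s0) + m * op(s0, sz) + vz * kz * op(s0, sy)
            + v * (kx * op(sy, sx) - ky * op(sx, sx))
            + lam * (kx**3 - 3 * kx * ky**2) * op(sz, sx))

deltas = {"x": op(sx, sy), "y": op(sy, sy)}   # delta_x, delta_y of Eq. (3)
UT = op(1j * sy, s0)                          # unitary part of T = i s_y K

def f_ab(a, b, w, k, mvec=(0.0, 0.0, 0.1)):
    """Kernel f_{ab}(w,k) = Tr(delta_a G_e(w,k) delta_b G_h(-w,k))."""
    zeeman = sum(mi * op(si, s0) for mi, si in zip(mvec, (sx, sy, sz)))
    He = h0(k) + zeeman
    Hh = (UT.conj().T @ He @ UT).conj()       # matrix of T^{-1} (H_0 + H_m) T
    Ge = np.linalg.inv(1j * w * np.eye(4) - He)
    Gh = np.linalg.inv(1j * w * np.eye(4) + Hh)
    return np.trace(deltas[a] @ Ge @ deltas[b] @ Gh)

w, k = np.pi * 0.1, (1.7, 0.0, 0.0)
print(f_ab("x", "x", w, k), f_ab("x", "y", w, k))
```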
We start with magnetization along the \(Oz\) axis. For this geometry, the critical temperature is defined by \((1-\Phi_{d})^{2}=\Phi_{od}^{2}\), where \(\Phi_{xx}=\Phi_{yy}=\Phi_{d}=\frac{gT}{4}\sum_{\omega}\int\frac{d^{3}\mathbf{k }}{(2\pi)^{3}}f_{\alpha\alpha}(\omega,\mathbf{k})\) and \(i\Phi_{od}=\frac{gT}{4}\sum_{\omega}\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}f_{ xy}(\omega,\mathbf{k})\). The exact solution is the chiral one, \(\Delta_{x}\pm i\Delta_{y}\), for any nonzero \(\Phi_{od}\) induced by the field \(\pm m_{z}\) with \(m_{z}\neq 0\). Further, we consider positive magnetization \(m_{z}>0\). We plot the critical temperature of the chiral phase in Fig. 1 by red dots in panels a) and b).
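For orientation, it is instructive to write the degeneracy condition explicitly. Treating Eq. (7) as the \(2\times 2\) linear system \(\eta_{\alpha}=\Phi_{\alpha\beta}\eta_{\beta}\), the onset of superconductivity requires

\[\det\begin{pmatrix}1-\Phi_{xx}&-\Phi_{xy}\\ -\Phi_{yx}&1-\Phi_{yy}\end{pmatrix}=0.\]

For the out-of-plane field, with \(\Phi_{xx}=\Phi_{yy}=\Phi_{d}\) and \(\Phi_{xy}=-\Phi_{yx}=i\Phi_{od}\) (the relative sign of the off-diagonal components is fixed here by consistency with the chiral solutions quoted above), the determinant reduces to \((1-\Phi_{d})^{2}-\Phi_{od}^{2}=0\), and the corresponding eigenvectors are \(\mathbf{\eta}\propto(1,\pm i)\), i.e., the chiral combinations \(\Delta_{x}\pm i\Delta_{y}\).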
Also, we consider the case when the nematic order \(\Delta_{x}\) or \(\Delta_{y}\) is pinned. In this case, the critical temperature is determined by \(\Phi_{d}=1\). We calculate the critical temperature for the pinned nematic order parameter and show it in Fig. 1b). The difference between the critical temperatures of the nematic and chiral order parameters is small for \(m_{z}\lesssim 10\) - \(20~{}T_{c0}\). The difference becomes larger at higher magnetization.
The critical temperature slowly decreases towards zero for both the chiral and nematic phases as a function of \(m_{z}\). This fundamentally differs from singlet pairing [49; 50], where a field of several \(T_{c}\) destroys the superconductivity. We calculate \(T_{c}\) for our system and plot it by blue dots in Fig. 1b). If we calculate the critical temperature for the topological insulator with the s-wave order parameter, we get that the critical temperature vanishes when the Zeeman field is of magnitude \(\sim 1.6T_{c0}\). Note that the s-wave result is independent of the direction of the magnetization.
Further, we consider the Zeeman field in the \((\Gamma M;\Gamma K)\) plane. We focus on the two orientations along \(Ox\) or \(Oy\). Note that we consider a system without hexagonal warping; thus, the \(C_{\infty}\) rotational symmetry of the normal state is present. In this case, the parameter \(\Phi_{od}=0\) and \(\Phi_{xx}\neq\Phi_{yy}\). Such a situation allows the \(\Delta_{x}\) and \(\Delta_{y}\) order parameters. We calculate the critical temperature for the \(\Delta_{y}\) order parameter at both field orientations. In Fig. 1a), green and blue dots show these two critical temperatures. When the nematicity axis is perpendicular to the field, the critical temperature is insensitive to the magnetization magnitude. When the magnetization is applied along the nematicity axis, it suppresses the critical temperature. The dependence of the two order parameters \(\Delta_{x}\) and \(\Delta_{y}\) on the field \(m_{x}\) in the inset of Fig. 1a) shows that the critical temperature of \(\Delta_{y}\) is independent of \(m_{x}\), while the critical temperature of \(\Delta_{x}\) decreases with the increase of \(m_{x}\).
The state with the higher critical temperature is the most energetically favorable near the critical temperature. So, without warping or pinning fields, the in-plane
magnetization selects the orthogonal direction of the nematicity. This means that for \(m_{x}\) magnetization the most favorable order parameter is \(\Delta_{y}\), while for \(m_{y}\) the most favorable is \(\Delta_{x}\).
## IV Bulk Fermi surface evolution by the magnetization
In the previous section, we have shown that \(\mathrm{E}_{u}\) superconductivity is robust with respect to magnetization with a magnitude of \(\sim 10\) - \(100\,T_{c}\). Now, we investigate how such a magnetization influences the bulk spectrum of the nematic superconductor. We show that strong magnetization changes the topology of the Fermi surface.
For convenience, we focus on the orientation \(\Delta_{y}\) of the order parameter that corresponds to the nematicity \(\alpha=\pi/2\) since it has the lowest free energy among possible order parameters (3) for zero magnetic field [27; 46]. Also, we focus on the plane \(k_{z}=0\) since the most crucial changes in the spectrum occur there.
### Spectrum without warping
We start from the model without the hexagonal warping (i.e., \(\lambda=0\)). Without the hexagonal warping, the system has a \(C_{\infty}\) rotational symmetry: simultaneous rotation of the order parameter and the magnetization does not change physical properties. At zero Zeeman field, the spectrum has two nodal points (Fig. 2a). Turning on \(m_{z}\) transforms the spectrum: the two nodal points transform into a pair of closed nodal lines (Fig. 2b). A further increase in the Zeeman field makes these nodal lines bigger (Fig. 2c). Finally, they become two nested closed circles (Fig. 2d).
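The evolution in Fig. 2 can be reproduced numerically by scanning the \(k_{z}=0\) plane for near-zeros of the BdG gap. The self-contained sketch below counts near-nodal grid points for a few values of \(m_{z}\); this is a coarse diagnostic with an arbitrary tolerance and placeholder parameters, not a precise node count. Setting \(\lambda\neq 0\) and rotating the Zeeman field in the same script should likewise reproduce the nodal structures discussed below.

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
op = lambda spin, orb: np.kron(orb, spin)

def h_bdg(k, mz, eta=0.2, mu=2.0, m=1.0, v=1.0, vz=1.0, lam=0.0):
    kx, ky, kz = k
    hn = (-mu * op(s0, s0) + m * op(s0, sz) + vz * kz * op(s0, sy)
          + v * (kx * op(sy, sx) - ky * op(sx, sx))
          + lam * (kx**3 - 3 * kx * ky**2) * op(sz, sx))
    return (np.kron(sz, hn)                       # tau_z x H_0
            + eta * np.kron(sx, op(sy, sy))       # Delta_y pairing, tau_x
            + mz * np.kron(s0, op(sz, s0)))       # out-of-plane Zeeman, tau_0

def gap(kx, ky, mz):
    return np.min(np.abs(np.linalg.eigvalsh(h_bdg((kx, ky, 0.0), mz))))

ks = np.linspace(-2.0, 2.0, 161)
for mz in (0.0, 0.2, 0.6, 1.0):                   # eta = 0.2 in these units
    hits = sum(1 for kx in ks for ky in ks if gap(kx, ky, mz) < 2.0e-2)
    print(f"m_z = {mz:.1f}: {hits} near-nodal grid points in the k_z = 0 plane")
```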
### Spectrum with warping
Now we include the hexagonal warping in our model. For the order parameter \(\Delta_{y}\), the spectrum is fully gapped at zero magnetization. In contrast with the model without hexagonal warping, magnetization can close the gap only at several nodal points. We consider the model with hexagonal warping and investigate the spectrum for different orientations of the Zeeman field.
We set \(m_{x}=m_{y}=0\) in Eq. (6) and consider only \(m_{z}\) for now. The low-energy band \(E_{\mathbf{k}}\) splits into two bands with energies \(E_{\mathbf{k}}\pm\delta\epsilon(\mathbf{k})\). Low magnetization lifts the degeneracy while keeping the gap open; see the blue curve in Fig. 3a). A stronger field increases the splitting between the bands. Finally, the electron and hole bands cross at the Fermi level, and the gap closes at 12 nodal points (Fig. 3b).
We apply magnetization \(m_{x}\) along the \(Ox\) axis. We show the spectrum along a set of vectors \(\mathbf{k}\) in Fig. 4a). Each vector \(\mathbf{k}\) lies in the \((k_{x};k_{y})\) plane and makes an angle \(\beta\in[-\pi/2;\pi/2]\) with \(k_{x}\). When we apply a strong enough field, the gap closes at \(\beta=0\) at 4 different nodal points. We indicate the nodes by red points at the Fermi surface of the normal phase in Fig. 4b).
Figure 1: Critical temperature \(T_{c}/T_{c0}\) vs magnetization \(m_{z}/T_{c0}\). We set \(\mu=2m\) and \(\lambda=0\). Panel a): We consider three different orientations of the magnetization. For \(m_{z}\), we plot the critical temperature of the chiral phase \(\Delta_{x}+i\Delta_{y}\). For \(m_{x}\) and \(m_{y}\), we fix the order parameter as \(\Delta_{y}\). The inset shows the critical temperature of \(\Delta_{x}\) and \(\Delta_{y}\) as a function of \(m_{x}\). Panel b): Critical temperature for the s-wave, nematic, and chiral \(\Delta_{x}+i\Delta_{y}\) phases as a function of the magnetization \(m_{z}\).
Figure 2: Fermi surface of the nematic superconductor, \(E_{\mathbf{k}}=0\), with the \(\Delta_{y}\) orientation in dimensionless momentum coordinates \((vk_{x}/m,vk_{y}/m)\). The figure is plotted without the hexagonal warping (\(\lambda=0\)). The magnitude of the order parameter is \(\eta\). a) Without magnetization, the Fermi surface consists of two nodal points. b) Small magnetization \(m_{z}=\eta\) transforms each nodal point into a small Fermi surface. c) A larger magnetization value \(m_{z}=3\eta\) increases the area of the Fermi surface. d) At \(m_{z}=5\eta\), the Fermi surfaces from each nodal point merge into two split circles.
Now we turn on the Zeeman field in the \(Oxz\) plane. We again focus on the \((k_{x};k_{y})\) plane in momentum space. When \(m_{z}\) is relatively high, we have 12 nodal points (Fig. 5a), and the system has the same topology as in the case of a pure \(m_{z}\) field (Fig. 3b). Due to the presence of \(m_{x}\), these nodal points are shifted from their symmetric positions at the vertices of the Fermi surface (Fig. 5b).
When \(m_{z}\lesssim m_{x}\), 8 of the 12 nodal points become gapped, and the topology of the Fermi surface becomes equivalent to the case with \(m_{x}\) only, with 4 nodal points (Fig. 4a).
A finite magnetization along the \(Oy\) direction, \(m_{y}\), opens a full gap in the spectrum without any nodal points (Fig. 6).
## V Surface states
In this section, we briefly analyze the properties of the surface states of the nematic superconductor with the magnetization along the z-axis. We rewrite our Hamiltonian in a tight-binding approximation with additional quadratic terms that renormalize the chemical potential, \(\mu\rightarrow\mu+C_{i}k_{i}^{2}\), and the single-electron gap, \(m\to m+B_{i}k_{i}^{2}\), \(i=x,y,z\); see Ref. [45] for details on the values of the parameters. We calculate the spectrum for \(n=200\) layers of the doped topological insulator stacked along the \(Oz\) axis. In this way, we can capture the surface states together with the bulk states of the superconductor. The spectrum of the nematic superconductor along the \(x\) and \(y\) axes is shown in Figs. 7 and 8, respectively.
In Fig. 7, we show a cut that passes through the nodal points. Between the nodal points, the spectrum is similar to the spectrum of the surface states without the magnetization [30; 31]. A flat Majorana nodal line connects the nodal points; this line is flat up to the numerical error of the calculations. In Fig. 8, we see that no such flat lines occur for a slice that does not pass through the bulk nodal points. Such flat lines connecting nodal points occur for all orientations of the magnetic field for which the nodal points exist.
## VI Discussion
We investigated the critical temperature of the spin-triplet E\({}_{u}\) superconductivity with a vector order parameter that appears in doped Bi\({}_{2}\)Se\({}_{3}\). We find that this superconductivity survives for large values of the magnetization, of about 10-100 \(T_{c}\), which is typical for spin-triplet superconductivity.
One of our main results is that the direction of the nematic order parameter can be tuned by the external magnetization, see Fig. 1. In a recent experiment [22], it was shown that for low values of the in-plane magnetic field the direction of the nematicity is fixed, which is consistent with previous studies [17]. However, at large
Figure 3: Panel a): Dimensionless spectrum of the nematic superconductor \(E_{k}/\eta\) vs dimensionless momentum \(vk_{y}/m\). The order parameter of magnitude \(\eta\) has the \(\alpha=\pi/2\) (\(\Delta_{y}\)) orientation. We set \(\mu=2m\) and \(\lambda m^{2}/v^{3}=0.3\). The dashed red line gives the gapped spectrum without the Zeeman field, \(m_{z}=0\). The blue line corresponds to a small Zeeman field of magnitude \(m_{z}=2\eta\) that is too small to close the gap. The green line gives the system in the presence of a strong Zeeman field \(m_{z}=7\eta\) that closes the gap at 12 nodal points. Panel b): Fermi surface of the normal state in the plane \(k_{z}=0\) with nonzero \(m_{z}>0\). Red points indicate the nodal points in the presence of superconductivity.
Figure 4: a): Dimensionless spectrum of the nematic superconductor \(E_{k}/\eta\) vs dimensionless momentum \(vk/m\). The order parameter of magnitude \(\eta\) has the \(\Delta_{y}\) orientation. Different colors correspond to different orientations of the spectrum’s cut. The spectrum is plotted with the following parameters: \(m_{x}=10\eta\), \(\mu=2m\), \(\lambda m^{2}/v^{3}=0.3\). b): Fermi surface in the plane \(k_{z}=0\) with nonzero \(m_{x}\) and without superconductivity. Red points indicate the nodes in the presence of superconductivity.
enough values of the magnetic field, the nematicity orientation is no longer pinned and starts following the direction of the magnetic field. These observations are consistent with our prediction that the in-plane magnetic field chooses the direction of the order parameter, see Fig. 1.
We have shown that magnetization can open nodal points in the bulk spectrum. Such an opening occurs either for
Figure 5: Panel a): Dimensionless spectrum of the nematic superconductor \(E_{\mathbf{k}}/\eta\) vs dimensionless momentum \(vk/m\). The momentum stays in the plane \(Oxy\). Different colors give the angle between \(\mathbf{k}\) and \(k_{x}\). We set \(m_{x}=3\eta\), \(m_{z}=7\eta\), \(\mu=2m\), \(\lambda m^{2}/v^{3}=0.3\). The order parameter of magnitude \(\eta\) has the \(\Delta_{y}\) orientation. The inset shows that the gap closes along three \(\mathbf{k}\) orientations. Panel b): Fermi surface in the plane \(k_{z}=0\) with nonzero \(m_{x}\) and \(m_{z}\) and without superconductivity. Red points indicate the nodes in the presence of superconductivity.
Figure 8: Dimensionless spectrum of nematic superconductor \(E/\eta\) vs dimensionless momentum \(vk_{y}/m\). Order parameter of magnitude \(\eta\) has an orientation \(\alpha=\pi/2\) (\(\Delta_{y}\)). Blue lines are bulk states, and green lines are surface states. There is no flat Majorana nodal line since the cut does not include bulk nodal points. The spectrum is plotted with the following parameters: \(m_{x}=m_{y}=0\), \(m_{z}=3\eta\), \(\mu=2m\), \(\lambda m^{2}/v^{3}=0.3\).
Figure 6: Panel a): Dimensionless spectrum of the nematic superconductor \(E_{\mathbf{k}}/\eta\) vs dimensionless momentum \(vk/m\). The momentum stays in the plane \(Oxy\). Different colors give the angle between \(\mathbf{k}\) and \(k_{x}\). We set \(m_{x}=7\eta\), \(m_{y}=10\eta\), \(m_{z}=10\eta\), \(\mu=2m\), \(\lambda m^{2}/v^{3}=0.3\). The order parameter of magnitude \(\eta\) has \(\Delta_{y}\) orientation. The full gap is opened.
Figure 7: Dimensionless spectrum of the nematic superconductor \(E/\eta\) vs dimensionless momentum \(vk_{x}/m\) along the \(x\) axis. The order parameter of magnitude \(\eta\) has an orientation \(\alpha=\pi/2\) (\(\Delta_{y}\)). Red dots correspond to nodal points, blue lines are bulk states, and green lines are surface states. Flat surface states connect the nodal points. The spectrum is plotted with the following parameters: \(m_{x}=m_{y}=0\), \(m_{z}=3\eta\), \(\mu=2m\), \(\lambda m^{2}/v^{3}=0.3\).
the magnetization along the \(Oz\) direction, see Fig. 3, for the magnetization perpendicular to the nematicity direction, see Fig. 4, or for both, see Fig. 5. However, magnetization parallel to the direction of the order parameter lifts the nodal points, see Fig. 6. These features can be measured experimentally. In the case of a full gap, the variation of the London penetration length as a function of temperature should be exponential, \(\delta\lambda_{L}/\lambda_{L}\propto e^{-\text{gap}/T}\). Nodal points change this law from exponential to quadratic, \(\delta\lambda_{L}/\lambda_{L}\propto T^{2}\) [54]. Thus, different asymptotics of the London penetration length for different orientations of the magnetic field would be an experimental justification of our theory.
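As a quick numerical illustration of the two asymptotics quoted above (a toy Python sketch; the gap value and temperature grid are arbitrary choices, not fitted parameters):

```
import numpy as np

# Toy comparison of the two low-temperature laws for the relative change
# of the London penetration depth; `gap` is an assumed full-gap value
# measured in the same units as the temperature.
T = np.linspace(0.05, 0.5, 10)
gap = 1.0
full_gap_law = np.exp(-gap / T)   # exponential law, fully gapped spectrum
nodal_law = T**2                  # quadratic law, nodal points present

for t, a, b in zip(T, full_gap_law, nodal_law):
    print(f"T = {t:.2f}   exp law = {a:.2e}   T^2 law = {b:.2e}")
```

At the lowest temperatures the exponential law is orders of magnitude below the quadratic one, which is why the two cases are experimentally distinguishable.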
In conclusion, we have shown that nematic superconductivity in doped topological insulators is robust against magnetization. The direction of the nematic order parameter can be tuned by the direction of the in-plane magnetization, while large values of the out-of-plane magnetization favor the chiral superconducting state. In the case of a pinned nematic state, the magnetization can open nodal points in the bulk spectrum; these nodal points are split by the Zeeman field. The split nodal points are connected through flat nodal-line Majorana surface states. Magnetization parallel to the direction of the order parameter lifts the nodal points.
## Acknowledgment
The authors acknowledge support by the Russian Science Foundation under Grant No. 20-72-00030. AVK thanks the partial support from the Foundation for the Advancement of Theoretical Physics and Mathematics "BASIS".
|
2306.06242 | Chiral pair density wave as a precursor of the pseudogap in kagomé
superconductors | Motivated by scanning tunneling microscopy experiments on $A$V$_3$Sb$_5$ ($A$
= Cs, Rb, K) that revealed periodic real-space modulation of electronic states
at low energies, I show using model calculations that a triple-{\bf Q} chiral
pair density wave (CPDW) is generated in the superconducting state by a charge
order of $2a\! \times \!2a$ superlattice periodicity, intertwined with a
time-reversal symmetry breaking orbital loop current. In the presence of such a
charge order and orbital loop current, the superconducting critical field is
enhanced beyond the Chandrasekhar-Clogston limit. The CPDW correlation survives
even when the long-range superconducting phase coherence is diminished by a
magnetic field or temperature, stabilizing an exotic granular superconducting
state above and in the vicinity of the superconducting transition. The
presented results suggest that the CPDW can be regarded as the origin of the
pseudogap observed near the superconducting transition. | Narayan Mohanta | 2023-06-09T20:28:00Z | http://arxiv.org/abs/2306.06242v2 | # Chiral pair density wave as the precursor of pseudogap in kagome superconductors
###### Abstract
Motivated by scanning tunnelling microscopy experiments on AV\({}_{3}\)Sb\({}_{5}\) (A = Cs, Rb, K) that revealed periodic real-space modulation of electronic states at low energies, I show using model calculations that a triple-**Q** chiral pair density wave (CPDW) is generated in the superconducting state by a charge order of \(2a\!\times\!2a\) superlattice periodicity, intertwined with a time-reversal symmetry breaking orbital loop current. The CPDW correlation survives even when the long-range superconducting phase coherence is diminished by a magnetic field or temperature. The superconducting critical field is enhanced beyond the Chandrasekhar-Clogston limit, pointing to a rare quantum state above the superconducting transition. The presented results suggest that the CPDW can be regarded as the origin of the pseudogap observed near the superconducting transition.
Understanding electronic properties arising from coexisting superconductivity and various density-wave orders has remained a central problem in condensed matter physics. It has intrigued the physics community for decades in the context of high-temperature cuprate superconductors; the recently-synthesized kagome metals AV\({}_{3}\)Sb\({}_{5}\) (A = Cs, Rb, K) have revived the interest [1]. The V atoms in these compounds form a kagome lattice, and the Fermi level is predominantly populated by V \(3d\) orbitals. The electronic band structure exhibits Dirac points and nearly-flat, less-dispersive bands [2; 3]. Strong correlation of the nearly-flat bands, topological effects from the Dirac fermions, van Hove singularities, and frustration effects in the kagome geometry are favorable conditions for instabilities towards long-range many-body order to set in. Superconductivity with a gap-to-\(T_{c}\) ratio \(2\Delta_{0}/k_{B}T_{c}\!\approx\!5\) was found below \(T_{c}\!\approx\!2.5\) K [2]. A chiral charge order was found to appear below \(T_{\rm co}\!\approx\!94\) K with broken time-reversal symmetry (TRS) but without any trace of long-range magnetic order, indicating the presence of an intertwined orbital loop current [4; 5; 6]. The absence of an acoustic phonon anomaly at the charge-order wave vector rules out the Peierls instability related to Fermi surface nesting and phonon softening as a possible mechanism, and implies that extended Coulomb interactions at a van Hove filling may be responsible for it [7; 8; 9]. A pressure-driven transition from fully-gapped to partially-gapped superconductivity and the coexistence of the superconductivity with the charge order over a large parameter regime suggest unconventional pairing in these compounds [10]. Alternative scenarios include non-chiral, anisotropic \(s\)-wave superconductivity, supported by recent experimental findings at different pressures [11].
A suppressed electronic density of states at the Fermi level, known as the 'pseudogap', posed an enigmatic problem in the high-temperature cuprate superconductors. A similar pseudogap with a V-shaped density of states was observed in scanning tunnelling microscopy experiments on AV\({}_{3}\)Sb\({}_{5}\), with periodic modulations of both charge density and Cooper pair density of \(2a\!\times\!2a\) superlattice periodicity (\(a\) being the lattice constant) [12; 13; 14]. The findings are usually indicative of a nodal pairing symmetry or ungapped sections of the Fermi surface. The concomitant periodic modulations of both superfluid and normal fluid raised a series of questions including the origin of the pseudogap found in the tunnelling spectra.
In this work, I focus on the observed variation of the density of states in the pseudogap near the superconducting transition and show that a chiral density wave of \(s\)-wave Cooper pairs can account for it. The chiral pair density wave (CPDW) is generated in the superconducting state by the TRS breaking charge order, and it persists above the superconducting transition without long-range superconducting phase coherence. This CPDW state can be described by a pairing gap \(\Delta({\bf r})\!=\!\sum_{a}\Delta_{a}e^{i({\bf Q}_{a}\cdot{\bf r}+\varphi_{a})}\) at a lattice site position \({\bf r}\), \(\Delta_{a}\) and \(\varphi_{a}\) being the magnitude and the relative phase of the pairing amplitude along three characteristic momenta \({\bf Q}_{a}\) (\(a\!=\!1,2,3\)), set by the charge order periodicity. The presented theoretical arguments are based on the calculated density of states \(\rho(E)\) and the Fourier transformed local density of states \(\rho({\bf Q}_{p},E)\) at a CPDW wave vector \({\bf Q}_{p}\). In the vicinity of and above the critical field \(B_{c}\) or critical temperature \(T_{c}\) for the superconducting transition, determined by a vanishing superfluid density \(n_{s}\), \(\rho({\bf Q}_{p},E)\) reveals a particle-hole symmetric density of states around zero energy, thereby ruling out charge-ordered electronic states as a possible origin of the pseudogap. Remarkably, the critical value of the magnetic field, perpendicular to the kagome plane, is found to be enhanced beyond the usual Chandrasekhar-Clogston limit in the presence of the orbital loop current, implying that an exotic quantum state prevails above the superconducting transition.
Two types of charge order were reported--star of David and tri-hexagonal (inverse star of David) patterns, both of \(2a\!\times\!2a\) periodicity [15; 16; 17]. A chiral flux phase, compatible with the symmetry of the kagome lattice and broken TRS, was shown to be energetically favorable [18]. The tri-hexagonal charge order, which has been observed prominently in most compounds, as shown in Fig. 1, is considered in the theoretical model and results presented below. Such an unusual charge-ordered state is also supported by a number of interesting phenomena such as the anomalous Hall effect and Nernst effect [19; 20; 21].
To model the superconducting state and the experimentally-observed pseudogap, the minimal tight-binding Hamiltonian with a mean-field spin-singlet pairing term on the kagome lattice is expressed as
\[\mathcal{H}=-t\sum_{\langle ij\rangle,\sigma}(c_{i\sigma}^{\dagger}c_{j\sigma}+\mathrm{H.c.})-\sum_{i,\sigma}(\mu_{0}+\xi_{i}\mu_{\mathrm{co}})c_{i\sigma}^{\dagger}c_{i\sigma}-\sum_{i}(\Delta_{i}c_{i\uparrow}^{\dagger}c_{i\downarrow}^{\dagger}+\mathrm{H.c.})-it_{\mathrm{lc}}\sum_{\langle ij\rangle,\sigma}(c_{i\sigma}^{\dagger}c_{j\sigma}-\mathrm{H.c.}), \tag{1}\]
where \(t\) is the nearest-neighbor hopping energy, \(\mu_{0}\) is the global chemical potential, \(\mu_{\mathrm{co}}\) is the charge order amplitude, \(\xi_{i}\) is a local variable (\(\pm 1\)) that generates the tri-hexagonal charge order pattern, shown in Fig. 1, \(\Delta_{i}\) is the local spin-singlet pairing gap, and the complex nearest-neighbor hopping \(it_{\mathrm{lc}}\) incorporates the TRS breaking orbital loop current. The Hamiltonian is diagonalized by using the standard unitary transformation of the fermionic fields \(c_{i\sigma}=\sum_{n}u_{ni}^{\sigma}\gamma_{n}+v_{ni}^{\sigma\,*}\gamma_{n}^{\dagger}\), where \(\gamma_{n}\) is an annihilation operator acting on the \(n^{\mathrm{th}}\) eigenstate, and \(u_{ni}^{\sigma}\) (\(v_{ni}^{\sigma}\)) is the corresponding quasiparticle (quasi-hole) amplitude at site \(i\) and spin \(\sigma\). The eigenstates are obtained by solving the Bogoliubov-de Gennes equations \(\sum_{j}\mathcal{H}_{ij}\psi_{nj}=E_{n}\psi_{ni}\), subject to the self-consistent gap equation
\[\Delta(\mathbf{r}_{i})=\frac{\mathcal{U}}{2}\sum_{n}\big{[}u_{ni}^{\uparrow}u_ {ni}^{\downarrow*}-u_{ni}^{\downarrow}u_{ni}^{\uparrow*}\big{]}\tanh\Big{(} \frac{E_{n}}{2k_{B}T}\Big{)}, \tag{2}\]
where \(\psi_{ni}=[u_{ni}^{\uparrow},u_{ni}^{\downarrow},v_{ni}^{\uparrow},v_{ni}^{\downarrow}]^{T}\), \(\mathcal{U}\) is the pair-wise attractive potential, and \(T\) is the temperature. Throughout the presented results, \(\mu_{0}\) is kept at zero, which places the Fermi level close to one of the van Hove singularities, \(t=1\), and \(\mathcal{U}=2\). The relevant energy scale is the maximum pairing gap magnitude, which was found experimentally to be \(\Delta\approx 0.52\) meV [13]; it is taken here as the unit for all energies in what follows.
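For concreteness, the following minimal Python sketch illustrates a self-consistency loop of the type defined by Eq. (2). It is an illustration under simplifying assumptions rather than the implementation behind the presented results: the charge order (\(\mu_{\rm co}\)) and loop-current (\(t_{\rm lc}\)) terms are omitted, the spin-degenerate case is used so that the gap equation reduces to the standard form \(\Delta_{i}=\mathcal{U}\sum_{E_{n}>0}u_{ni}v_{ni}^{*}\tanh\left(E_{n}/2k_{B}T\right)\), and the lattice size, bond list, and damped update are illustrative choices.

```
import numpy as np

# Minimal BdG self-consistency sketch on a kagome lattice (charge order
# and loop-current terms omitted; spin-degenerate s-wave pairing).
L = 4                       # L x L unit cells, 3 sites per cell
N = 3 * L * L
t, U, mu, T = 1.0, 2.0, 0.0, 0.01

def idx(n1, n2, s):
    return 3 * ((n1 % L) * L + (n2 % L)) + s

# nearest-neighbour kagome bonds: (sublattice_1, sublattice_2, cell shift)
bonds = [(0, 1, 0, 0), (0, 2, 0, 0), (1, 2, 0, 0),
         (1, 0, 1, 0), (2, 0, 0, 1), (2, 1, -1, 1)]

H0 = np.zeros((N, N))
for n1 in range(L):
    for n2 in range(L):
        for s1, s2, d1, d2 in bonds:
            i, j = idx(n1, n2, s1), idx(n1 + d1, n2 + d2, s2)
            H0[i, j] = H0[j, i] = -t
H0 -= mu * np.eye(N)

delta = 0.5 * np.ones(N)    # initial guess for the onsite pairing gap
for step in range(200):
    H_bdg = np.block([[H0, np.diag(delta)],
                      [np.diag(delta), -H0]])
    E, psi = np.linalg.eigh(H_bdg)
    u, v = psi[:N, :], psi[N:, :]
    pos = E > 0             # sum over positive-energy quasiparticles
    new = U * np.sum(u[:, pos] * v[:, pos].conj()
                     * np.tanh(E[pos] / (2 * T)), axis=1)
    if np.max(np.abs(new - delta)) < 1e-6:
        break
    delta = 0.5 * (delta + new)   # damped update for numerical stability

print("mean |Delta| =", np.abs(delta).mean())
```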
To keep track of the superconducting transition, the global superconducting phase rigidity, determined by the superfluid density, is calculated from the effective Drude weight, given by [22]
\[n_{s}=\frac{D_{s}}{\pi e^{2}}=-\langle\kappa\rangle+\Pi(Q\to 0,\omega\to 0), \tag{3}\]
where the first term on the right hand side is the diamagnetic response, with the local kinetic energy expressed in terms of the Bogoliubov quasiparticle weights as
\[\kappa_{i}=-t\sum_{\langle j\rangle,n,\sigma}\Big\{\big[u_{ni}^{\sigma}u_{nj}^{\sigma*}+\mathrm{c.c.}\big]f(E_{n})+\big[v_{ni}^{\sigma}v_{nj}^{\sigma*}+\mathrm{c.c.}\big]\big(1-f(E_{n})\big)\Big\}. \tag{4}\]
The second term represents the paramagnetic response, obtained by the transverse current-current correlation function
\[\Pi(Q\to 0,\omega\to 0)=\frac{1}{N}\sum_{i,j,n_{1},n_{2}}^{\sigma,\sigma^{\prime}}\mathcal{A}_{n_{1}n_{2}}^{i\sigma\sigma^{\prime}}\big[\mathcal{A}_{n_{1}n_{2}}^{j\sigma\sigma^{\prime}*}+\mathcal{B}_{n_{1}n_{2}}^{j\sigma\sigma^{\prime}}\big]\frac{f(E_{n_{1}})-f(E_{n_{2}})}{E_{n_{1}}-E_{n_{2}}}, \tag{5}\]
where \(N\) is the total number of lattice sites and
\[\mathcal{A}_{n_{1}n_{2}}^{i\sigma\sigma^{\prime}}=2\big[u_{n_{1}i}^{\sigma^{\prime}*}u_{n_{2}i}^{\sigma}-u_{n_{1}i}^{\sigma*}u_{n_{2}i}^{\sigma^{\prime}}\big],\qquad\mathcal{B}_{n_{1}n_{2}}^{i\sigma\sigma^{\prime}}=2\big[v_{n_{1}i}^{\sigma^{\prime}*}v_{n_{2}i}^{\sigma}-v_{n_{1}i}^{\sigma*}v_{n_{2}i}^{\sigma^{\prime}}\big]. \tag{6}\]
The local density of states, an observable that can be compared with the scanning tunneling microscopy data, is calculated via
\[\rho(\mathbf{r}_{i},E)=\sum_{n,\sigma}\big[|u_{ni}^{\sigma}|^{2}\delta(E-E_{n})+|v_{ni}^{\sigma}|^{2}\delta(E+E_{n})\big]. \tag{7}\]
The total density of states \(\rho(E)\) is obtained by summing over all lattice sites, and the Fourier transformed local density of states at a momentum \(\mathbf{Q}\) is obtained using
\[\rho(\mathbf{Q},E)=\frac{1}{N}\sum_{i}\cos\bigl{(}\mathbf{Q}\cdot\mathbf{r}_{ i}\bigr{)}\rho(\mathbf{r}_{i},E). \tag{8}\]
The local and non-local modulations of the density of states are useful to analyze the presence of the particle-hole symmetry and hence, to differentiate the contributions from the normal fluid and the superfluid, as will be evident from the numerical results presented below.
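To make Eqs. (7)-(8) concrete, the short Python sketch below evaluates the local density of states and its Fourier component at a wave vector, with the delta functions broadened into Gaussians. The eigen-data, site positions, and broadening are random stand-ins rather than the actual model output, so only the structure of the computation is meaningful:

```
import numpy as np

# Eq. (7): Gaussian-broadened local density of states from BdG amplitudes;
# Eq. (8): its Fourier component at a wave vector Q_p.  E, u, v here come
# from a random Hermitian stand-in for the BdG Hamiltonian.
rng = np.random.default_rng(0)
N = 24
H = rng.normal(size=(2 * N, 2 * N))
H = (H + H.T) / 2
E, psi = np.linalg.eigh(H)
u, v = psi[:N, :], psi[N:, :]
pos = rng.uniform(size=(N, 2))            # stand-in site coordinates r_i
eta = 0.1                                 # Gaussian broadening of delta(E)
energies = np.linspace(-3, 3, 301)        # symmetric grid around E = 0

def gauss(x):
    return np.exp(-x**2 / (2 * eta**2)) / (eta * np.sqrt(2 * np.pi))

def ldos(i):                              # Eq. (7) at site i
    return sum(abs(u[i, n])**2 * gauss(energies - E[n])
               + abs(v[i, n])**2 * gauss(energies + E[n])
               for n in range(2 * N))

rho_r = np.array([ldos(i) for i in range(N)])
rho_total = rho_r.sum(axis=0)             # total density of states

Qp = np.array([0.0, 2 * np.pi / np.sqrt(3)])
rho_Q = (np.cos(pos @ Qp)[:, None] * rho_r).mean(axis=0)   # Eq. (8)

# particle-hole symmetry diagnostic: compare rho(Q_p, E) with rho(Q_p, -E)
print("max asymmetry:", np.max(np.abs(rho_Q - rho_Q[::-1])))
print("rho(E=0):", rho_total[150])
```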
The density wave of s-wave bosons is envisaged from the pairing gap \(\Delta(\mathbf{r}_{i})=\Delta_{m}e^{i\theta_{i}}\), whose real and imaginary parts both reveal a \(2a\times 2a\) periodic modulation (the real part is shown on the color scale in Fig. 2(a)). The phase angle \(\theta_{i}\) also shows a periodic structure (arrows in Fig. 2(a)). This intriguing CPDW state is confirmed further by the Fourier transform of the pair-pair correlation function
\[C(\mathbf{Q})=\frac{1}{N}\sum_{i,j}\langle\Delta(\mathbf{r}_{i})\Delta( \mathbf{r}_{j})\rangle e^{-i\mathbf{Q}\cdot\mathbf{r}_{ij}}, \tag{9}\]
which shows three characteristic momenta (see Fig. 2(b)), given by (\(\pm\pi\), \(\pi/\sqrt{3}\)) and (0, \(2\pi/\sqrt{3}\)). It is confirmed that the CPDW state is generated in the superconducting
Figure 1: Charge order configuration with an intertwined orbital loop current on the kagome lattice, analogous to the tri-hexagonal pattern observed in experiments. The yellow and cyan colors represent the modulation in the chemical potential \(\mu_{\mathrm{co}}\), while the arrows represent the loop current propagation direction and the associated flux \(\varphi\).
state due to the interplay between the \(s\)-wave onsite pairing, the charge order, and the TRS breaking orbital loop current. Three types of such triple-\(\mathbf{Q}\) correlation, at different momenta, have been observed in the experiments [12; 13], implying that the density waves are cascaded between the normal fluid and the superfluid, _i.e._, the CPDW can also induce subsequent charge orders.
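A pair-pair correlation of the form of Eq. (9) can be evaluated directly; the Python sketch below uses dummy positions and a single-\(\mathbf{Q}\) toy gap profile (and conjugates \(\Delta_{j}\), a simplification relative to Eq. (9), so that the peak at the CPDW momentum is explicit):

```
import numpy as np

# Toy evaluation of a pair-pair correlation in the spirit of Eq. (9);
# positions and the gap profile are dummy data.
rng = np.random.default_rng(1)
N = 100
pos = rng.uniform(0, 10, size=(N, 2))           # site positions r_i
Qa = np.array([np.pi, np.pi / np.sqrt(3)])      # one characteristic momentum
delta = np.exp(1j * pos @ Qa)                   # single-Q toy gap profile

def C(Q):
    r_ij = pos[:, None, :] - pos[None, :, :]    # all pair separations
    phase = np.exp(-1j * r_ij @ Q)
    return (np.outer(delta, delta.conj()) * phase).sum() / N

print("|C(Q_a)| =", abs(C(Qa)))                 # large peak, of order N
print("|C(0)|   =", abs(C(np.zeros(2))))        # small for random positions
```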
The coexistence of charge order and superconductivity in AV\({}_{3}\)Sb\({}_{5}\) over a large parameter regime raised the natural question of whether there is a cooperation between the two commonly-known competing orders [23]. The present analysis shows that the superconducting gap is suppressed in the presence of the charge order and the orbital loop current at zero temperature and zero magnetic field. However, the average pairing gap \(|\Delta|\) vanishes at a magnetic field and a temperature larger than the critical values \(B_{c}\) and \(T_{c}\), determined by a vanishing superfluid density \(n_{s}\) (Fig. 3(a)-(b)). The magnetic field of amplitude \(B_{z}\) was incorporated via the Zeeman exchange term \(\mathcal{H}_{\mathrm{Z}}=-\mu_{{}_{B}}B_{z}\sum_{i,\sigma,\sigma^{\prime}}\mathbf{\sigma}^{z}_{\sigma\sigma^{\prime}}c^{\dagger}_{i\sigma}c_{i\sigma^{\prime}}\). Remarkably, the critical field at which \(|\Delta|\) drops to zero is enhanced by more than 20% above \(B_{c}\) in the presence of the charge order and the loop current.
It is known that a conventional spin-singlet superconductor, with a gap around the Fermi level, has a vanishing paramagnetic susceptibility at \(T=0\), and hence it cannot lower its free energy indefinitely by spin-polarizing the quasiparticle states in the presence of a Zeeman magnetic field. Consequently, when the Zeeman energy gain is comparable to the superconducting condensation energy, given by \(\mu_{{}_{B}}B_{c0}=\Delta_{0}/\sqrt{2}\approx 0.7\Delta_{0}\), known as the Chandrasekhar-Clogston limit [24; 25], there is a transition to the normal state. Exceptions to this stringent condition occur in the case of Fulde-Ferrell-Larkin-Ovchinnikov type finite-momentum condensates [26; 27] and in thin superconducting films with a large spin-orbit coupling [28]. The enhancement in the critical field for vanishing \(|\Delta|\) in the kagome lattice with a charge order and orbital loop current indicates the formation of an unconventional state, which requires further investigation. However, it can be attributed to the non-zero density of states \(\rho(E)\) within the superconducting gap (Fig. 3(c)) in the pseudogap state. At the critical field \(B_{c}\) for vanishing \(n_{s}\), \(|\Delta|\) exhibits a dip while \(\rho(E=0)\) shows a zero weight (inset in Fig. 3(c)), attesting to the appearance of an unconventional state immediately above \(B_{c}\). The enhancement of the critical magnetic field and the
Figure 3: (a), (b) Variation of the average gap magnitude \(|\Delta|\) (without and with the charge order and the loop current) and superfluid density \(n_{s}\) with magnetic field \(B_{z}\) and temperature \(T\). (c), (d) Density of states \(\rho(E)\) for different \(B_{z}\) and \(T\) near the critical values \(B_{c}\) and \(T_{c}\), determined by vanishing \(n_{s}\). Insets in (c), (d) show the density of states at zero energy \(\rho(0)\) as a function of \(B_{z}\) and \(T\), respectively. Parameters for the charge order and loop current: \(\mu_{\rm{co}}=0.5\) and \(t_{\rm{lc}}=1\). A constant offset has been added to the vertical axis for each curve in (c) and (d) for clarity.
Figure 2: (a) Profile of the pairing gap \(\Delta(\mathbf{r}_{i})=\Delta_{m}e^{i\theta_{i}}\) solution on the considered \(10a\times 10a\) lattice—the color scale shows the real part; the arrows show the phase \(\theta_{i}\). (b) Fourier transform of the pair-pair correlation function \(C(\mathbf{Q})\), showing the three characteristic momenta (six-peak structure), indicative of the CPDW state. The hexagon plotted with dashed lines depicts the Brillouin zone. Parameters used are \(\mu_{\rm{co}}=0.5\) and \(t_{\rm{lc}}=1\).
appearance of the pseudogap in the CPDW state suggest that there is a correlation among these phenomena. The temperature-driven transition to the normal state also reveals a similar pseudogap (Fig. 3(d)), though the variation of \(\rho(E\!=\!0)\) with \(T\) is rather monotonic. Moreover, the V-shaped density of states and the multiple coherence peaks around the gap show similarities with those observed in the tunneling spectra [12; 13; 14].
To gain insights into the origin of the pseudogap, the Fourier-transformed pair-pair correlation function \(C(\mathbf{Q})\) and the Fourier-transformed local density of states \(\rho(\mathbf{Q}_{p},E)\) at \(\mathbf{Q}_{p}\), one of the three characteristic momenta of the CPDW, were examined across the superconducting transition (Fig. 4(a)-(d)). Interestingly, the CPDW correlation survives above the critical magnetic field \(B_{c}\) and the critical temperature \(T_{c}\). The observable \(\rho(\mathbf{Q}_{p},E)\) is particle-hole symmetric, _i.e._, it is symmetric under \(E\rightarrow-E\), above and in the vicinity of \(B_{c}\) and \(T_{c}\), ruling out other possible mechanisms of the pseudogap such as a charge order of electronic states or a modulation due to electron scattering from a periodic potential. From these findings, it can be argued that the modulations of the density of states in the pseudogap, observed in the experiments, are a consequence of the CPDW of \(s\)-wave Cooper pairs without a global phase coherence.
The superconducting state can be influenced by multiple properties of these compounds, such as the TRS breaking loop current, the rotational symmetry-breaking nematic order, Fermi surface nesting, Coulomb interactions, and sublattice interference [29; 30; 31; 32; 33]. Despite the complex nature of the pairing mechanism, there is growing experimental evidence in support of spin-singlet \(s\)-wave pairing, such as the absence of a nodal state while transitioning from an anisotropic full-gap state to an isotropic full-gap state driven by impurity concentration [11], and the appearance of a prominent Hebel-Slichter coherence peak immediately below \(T_{c}\) [34]. The proposed CPDW of \(s\)-wave Cooper pairs, therefore, provides a natural explanation for many paradoxical experimental observations, including the pseudogap in the tunnelling spectra.
To summarize, it is shown that a CPDW state of \(2a\!\times\!2a\) periodicity emerges spontaneously in the kagome lattice due to the interplay of onsite spin-singlet superconductivity with a charge order of the same periodicity and an orbital loop current. The CPDW correlation survives beyond the superconducting transition without a global superconducting phase coherence, producing a V-shaped particle-hole symmetric density of states and pseudogap, which are otherwise indicative of a nodal unconventional pairing symmetry.
_Acknowledgement:_ Numerical calculations were performed at the computing resources of PARAM Ganga at Indian Institute of Technology Roorkee, provided by National Supercomputing Mission, implemented by C-DAC, and supported by the Ministry of Electronics and Information Technology and Department of Science and Technology, Government of India.
|
2307.11417 | Uncomputation in the Qrisp high-level Quantum Programming Framework | Uncomputation is an essential part of reversible computing and plays a vital
role in quantum computing. Using this technique, memory resources can be safely
deallocated without performing a nonreversible deletion process. For the case
of quantum computing, several algorithms depend on this as they require
disentangled states in the course of their execution. Thus, uncomputation is
not only about resource management, but is also required from an algorithmic
point of view. However, synthesizing uncomputation circuits is tedious and can
be automated. In this paper, we describe the interface for automated generation
of uncomputation circuits in our Qrisp framework. Our algorithm for
synthesizing uncomputation circuits in Qrisp is based on an improved version of
"Unqomp", a solution presented by Paradis et. al. Our paper also presents some
improvements to the original algorithm, in order to make it suitable for the
needs of a high-level programming framework. Qrisp itself is a fully
compilable, high-level programming language/framework for gate-based quantum
computers, which abstracts from many of the underlying hardware details.
Qrisp's goal is to support a high-level programming paradigm as known from
classical software development. | Raphael Seidel, Nikolay Tcholtchev, Sebastian Bock, Manfred Hauswirth | 2023-07-21T08:21:03Z | http://arxiv.org/abs/2307.11417v1 | # Uncomputation in the Qrisp high-level Quantum Programming Framework
###### Abstract
Uncomputation is an essential part of reversible computing and plays a vital role in quantum computing. Using this technique, memory resources can be safely deallocated without performing a non-reversible deletion process. For the case of quantum computing, several algorithms depend on this as they require disentangled states in the course of their execution. Thus, uncomputation is not only about resource management, but is also required from an algorithmic point of view. However, synthesizing uncomputation circuits is tedious and can be automated. In this paper, we describe the interface for automated generation of uncomputation circuits in our Qrisp framework. Our algorithm for synthesizing uncomputation circuits in Qrisp is based on an improved version of "Unqomp", a solution presented by Paradis et al. Our paper also presents some improvements to the original algorithm, in order to make it suitable for the needs of a high-level programming framework. Qrisp itself is a fully compilable, high-level programming language/framework for gate-based quantum computers, which abstracts from many of the underlying hardware details. Qrisp's goal is to support a high-level programming paradigm as known from classical software development.
Keywords: Quantum computation · Uncomputation · High-level programming · Qrisp.
## 1 Introduction
While the hardware side of quantum computing has seen steady improvements, significant progress in quantum software development methods is still lacking. This is due to the fact that coding algorithms for the main available physical backends is still done using quantum circuit objects, which are indeed expressive but provide little structure. In order to better support more complex algorithms, which might include a multitude of concepts, a more abstract programming workflow is necessary.
This problem has been identified by the community and two solutions have been proposed: Q# [11] and Silq [2]. Unfortunately, these proposals currently provide no straightforward way of compiling their algorithms into quantum circuits. In previous work on Qrisp [9], we demonstrated several constructs and abstractions, which permit a high-level programming workflow, while still maintaining full platform-independent compilability. The fundamental paradigm behind Qrisp's design has always been the automation of as many of the repetitive steps of low-level programming as possible without losing expressiveness. As uncomputation is a central and re-occurring topic in many quantum algorithms, it is natural to investigate the automation of this procedure and how Qrisp can support it. In the following, we present an interface for automatic uncomputation within Qrisp as well as some adjustments to the underlying algorithm "Unqomp" [7].
The rest of this paper is organized as follows: Section 2 overviews our Qrisp framework for high-level programming of quantum computers. Then Section 3 motivates the role of uncomputation for quantum software development. Sections 4 and 5 discuss the possible methods for implementing uncomputation and present the corresponding Qrisp interface. After that, in Section 6 the improvements to established uncomputation methods, which make those more comfortable to use in the scope of Qrisp, are discussed. The final section summarizes and concludes our paper.
## 2 Brief Overview of Qrisp
The state of the art in programming a quantum computer is currently similar to programming in assembler on a classical computer. Even worse, while assembly programming offers at least some basic abstractions, e.g., commands, registers, loops, etc., which are more abstract than accessing the actual hardware gates through binary codes, in quantum computing, programming directly with gates and qubits is the current standard. Frameworks such as Qiskit [12] or Cirq [4] enable the user to create sub-circuits that can be reused in larger, more complex circuits. However, the handling of the circuits is still quite complicated and tedious.
The Qrisp framework [1] consists of a set of Python modules and language extensions that attempt to overcome the above challenge by abstracting the qubit and gate structure of the underlying circuits as far as possible. This is achieved by conceptually replacing gates and qubits with functions and variables. In this way, it is possible to create much more complex circuits than would be possible with the current low-level approach. It goes without saying that the transition to variables and functions does not mean the end of programming with gates and qubits. The elementary quantum functions must of course still be implemented in the background with the help of gates and qubits.
## 3 The Need for Uncomputation in Quantum Computing
Uncomputation is an important aspect of quantum information processing (and reversible computing in general), because it facilitates the efficient use of quantum resources. In classical computing, resource efficiency can be achieved by deleting information from main memory and reusing the deleted bits for other purposes.3 Deleting or resetting a qubit, however, is not a reversible process and is usually performed by measuring the qubit in question and performing a bit flip based on the outcome. This measurement collapses the superposition of other entangled qubits, which are supposed to be unaffected. In many cases this collapse interferes with the quantum algorithm, such that the resulting state can no longer be used.
Footnote 3: A good correspondence in classical computing to uncomputation in quantum computing is the concept of garbage collection as, e.g., in Java. While classical garbage collection usually simply performs a non-reversible deletion of the collected data, uncomputation in contrast means performing the necessary (reversible) steps to bring the data back into some initial state.
In some situations, uncomputation is not only relevant as a way to manage quantum resources but is actually required, in order for a quantum algorithm to produce a correct result. One such example is Grover's algorithm [3], which utilizes an oracle function and a diffusing process, in order to search for suitable solutions in a given space based on the logic of the above mentioned oracle function. Assume that we have two quantum variables, of which one is in a state of uniform superposition:
\[\ket{\psi_{0}}=\sum_{i=0}^{2^{n}-1}\ket{i}\ket{0}. \tag{1}\]
In Grover's algorithm, an oracle now calculates a boolean value \(f(i)\), which is required to perform a phase tag on the correct solution, for which the algorithm is searching:
\[\ket{\psi_{1}}=\sum_{i=0}^{2^{n}-1}\ket{i}\ket{f(i)}. \tag{2}\]
After performing the phase tag, the state is:
\[\ket{\psi_{2}}=Z_{\ket{f(i)}}\ket{\psi_{1}}=\sum_{i=0}^{2^{n}-1}(-1)^{f(i)} \ket{i}\ket{f(i)}. \tag{3}\]
In order for Grover's diffuser to actually amplify the amplitude of the tagged state, we need the state to be disentangled, i.e.,
\[\left|\psi_{3}\right\rangle=\sum_{i=0}^{2^{n}-1}(-1)^{f(i)}\left|i\right\rangle \left|0\right\rangle \tag{4}\]
Therefore we need to uncompute the variable containing \(f(i)\). This shows clearly why uncomputation is required even within the implementation and execution of one of the most popular quantum algorithms, in order to enable the efficient search of solutions for a particular problem based on an oracle function.
## 4 The Challenge of Implementing Uncomputation
In many cases, uncomputing a set of qubits can be achieved by applying the inverse of the steps required for the computation. While this seems like a simple approach, it can be ambiguous, e.g., because it might not be clear which gates actually contributed to the computation. For instance, consider the Z gate in eq. 3. In any case, it is a tedious amount of extra programming work, which should be automated.
To remedy this problem, an algorithm called "Unqomp" for automatic uncomputation has been devised [7]. An important advantage of Unqomp is, that it does not follow the philosophy of simply reverting the computation, but rather enables the algorithm to skip "un-uncomputation" or recomputation. Recomputation is a phenomenon that happens if one naively reverts the computation process. If that computation process contained an uncomputation itself, the reversed process contains a recomputation. Unqomp enables skipping that recomputation, by inserting the reverted operations at the correct point within the circuit instead of appending them at the end. While skipping recomputation is generally a truly useful feature, it also has a drawback: Due to the insertion of the reverted operations within the circuit, the qubits holding the values that would potentially be recomputed cannot be deallocated until the uncomputation is completed. If we recompute them, these qubits can be used for other purposes between their un- and recomputation.
In Qrisp the developer can choose4 whether they want to perform recomputation: Using the gate_wrap decorator, functions of quantum variables can be packed into self-contained gate objects, which are not dissolved by the Unqomp implementation. Any uncomputed quantum variables inside these objects will be recomputed if required. The advantages and drawbacks of uncomputation with and without recomputation are summarized in Fig. 1.
Footnote 4: An algorithm for the automatic determination of whether a variable should be recomputed has been presented in [6]. This method is, however, not yet implemented within Qrisp.
## 5 Utilizing the Unqomp Method in Qrisp
Unqomp has been implemented in Qrisp and we provide two ways to call this function as described in the following subsections.
### Decorator based Uncomputation in Qrisp
The first option is the auto_uncompute decorator, which automatically uncomputes all local quantum variables, i.e., QuantumVariable class instances, of a function. To demonstrate this functionality, we create a function which returns a QuantumBool instance in Qrisp containing the AND value of the three associated inputs. To do so, this function creates a local QuantumBool, which stores the temporary result of the AND value of the first two inputs.
```
from qrisp import QuantumBool, mcx

def triple_AND(a, b, c):
    local = QuantumBool()
    result = QuantumBool()

    mcx([a, b], local)
    mcx([local, c], result)
    return result

a = QuantumBool()
b = QuantumBool()
c = QuantumBool()

result = triple_AND(a, b, c)
```

Figure 1: Conceptual visualisation of different uncomputation strategies.
Executing this piece of code and visualizing the .qs attribute (the QuantumSession5) of any of the participating QuantumVariables produces the following circuit:
Footnote 5: In Qrisp, a QuantumSession contains all the high-level objects and steers the interaction with the hardware or simulation backend.
[Quantum circuit diagram over the qubits a.0, b.0, local.0, c.0, and result.0.]
We see that the qubit containing the local QuantumBool does not end up in the \(|0\rangle\) state, if a and b are in the \(|1\rangle\) state. Therefore this qubit is still entangled and cannot be reused for other purposes.
We will now rewrite this function with the auto_uncompute decorator:
```
from qrisp import QuantumBool, mcx, auto_uncompute

@auto_uncompute
def triple_AND(a, b, c):
    local = QuantumBool()
    result = QuantumBool()

    mcx([a, b], local)
    mcx([local, c], result)
    return result

a = QuantumBool()
b = QuantumBool()
c = QuantumBool()

result = triple_AND(a, b, c)
```
This snippet produces the following QuantumCircuit:
[Quantum circuit diagram: with the auto_uncompute decorator, the gates computing local.0 are followed by their inverses, so the local qubit is returned to the \(|0\rangle\) state.]
### Method based Uncomputation in Qrisp

The second option is the uncompute method of the QuantumVariable class, which can be called on a local QuantumVariable once it is no longer needed:

```
from qrisp import QuantumBool, mcx

def triple_AND(a, b, c):
    local = QuantumBool()
    result = QuantumBool()

    mcx([a, b], local)
    mcx([local, c], result)

    local.uncompute()
    return result

a = QuantumBool()
b = QuantumBool()
c = QuantumBool()

result = triple_AND(a, b, c)
```
This produces the following quantum circuit:
[Quantum circuit diagram over the qubits a.0, b.0, local.0, c.0, and result.0: the local qubit is uncomputed within the circuit.]
The uncompute method and the auto_uncompute decorator automatically call the delete method after successful uncomputation, which frees the used qubit. If we allocate a new QuantumBool, the compiled quantum circuit will reuse that qubit:
```
from qrisp import cx

d = QuantumBool()
cx(result, d)
```
And the quantum circuit is updated to:
[Quantum circuit diagram: the freed qubit that previously held local now accommodates the new QuantumBool d.]
We can see how the qubit holding the local QuantumBool has been reused to now accommodate the QuantumBool d.
### Case Study: Solving Quadratic Equations using Grover's Algorithm
We want to close this section with an example given in our previous article [9], where we highlighted how Qrisp can be used to solve a quadratic equation using Grover's algorithm. To achieve this in Qrisp, we employed manual uncomputation using the invert environment. Using the auto_uncompute decorator we reformulate the code from [9] as follows:
```
from qrisp import QuantumFloat, h, z, auto_uncompute

@auto_uncompute
def sqrt_oracle(qf):
    z(qf*qf == 0.25)

qf = QuantumFloat(3, -1, signed=True)

n = qf.size
iterations = int((n/2)**0.5) + 1

h(qf)

from qrisp.grover import diffuser

for i in range(iterations):
    sqrt_oracle(qf)
    diffuser(qf)

result = qf.get_measurement(plot=True)
```
The function sqrt_oracle applies a Z gate onto the QuantumBool generated by evaluating the comparison. Note that this QuantumBool and the result of the multiplication qf*qf (a QuantumFloat) are uncomputed automatically.
The histogram of the simulated outcome probabilities is shown in Figure 2, demonstrating the correctness of the quadratic equation solving procedure while uncomputing qubits along the way using the Qrisp infrastructure, which enables the efficient execution of Grover's algorithm.
## 6 Uncomputation beyond Unqomp
Even though the Unqomp algorithm provides a very convenient way of performing automatic uncomputation, it comes with a few restrictions. We will not detail
Figure 2: Histogram of the simulation results of the quadratic solver.
them too deeply here, as they are well documented in the original publication [7]; however, the most important one can be overcome using the Qrisp implementation of the algorithm, which is described in the following section.
### Uncomputing Synthesized Gates
The main restriction Unqomp imposes is that only a certain class of gates can be uncomputed, which the authors of Unqomp call _qfree_: A quantum gate is _qfree_ if it neither introduces nor destroys states of superposition. In more mathematical terms, this implies that the unitary matrix of a _qfree_ gate can only have a single non-zero entry per column.
This is a serious restriction, since many quantum functions make use of non-_qfree_ gates such as the Hadamard, even though their net-effect is _qfree_. An example of such a situation is Fourier arithmetic (of which Qrisp's arithmetic module makes heavy use). Even though the multiplication function
\[U_{mul}\ket{a}\ket{b}\ket{0}=\ket{a}\ket{b}\ket{a\cdot b} \tag{5}\]
itself is _qfree_, it makes use of Hadamard gates, which are not _qfree_. In order to overcome this major restriction, the Qrisp implementation of Unqomp does not decompose gates but instead checks the combined gate for _qfree_-ness.
This feature, in combination with the previously mentioned gate_wrap decorator, can be used to create quantum functions that can be successfully uncomputed even though their inner workings contain non-_qfree_ gates.
We demonstrate this with an implementation of the Margolus gate taken from [5]
```
from qrisp import cx, ry
from numpy import pi

def margolus(control, target):
    ry(pi/4, target[0])
    cx(control[1], target[0])
    ry(-pi/4, target[0])
    cx(control[0], target[0])
    ry(pi/4, target[0])
    cx(control[1], target[0])
    ry(-pi/4, target[0])
```
While the Margolus gate itself is _qfree_, the constituents (to be specific, the RY gates) are not. Therefore the following code results in an error:
```
from qrisp import QuantumVariable

control = QuantumVariable(2)
target = QuantumVariable(1)

margolus(control, target)
target.uncompute()
```
We can circumvent this error by applying the gate_wrap decorator to margolus.
```
from qrisp import gate_wrap

control = QuantumVariable(2)
target = QuantumVariable(1)

margolus_wrapped = gate_wrap(margolus)

margolus_wrapped(control, target)
target.uncompute()
```
Qrisp now automatically checks the combined gate for _qfree_-ness instead of the constituents. Since _qfree_-ness corresponds to the unitary having only a single non-zero entry per column, this property can be checked in linear time if the unitary is known.
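As an illustration of this check (a NumPy sketch of the criterion, not Qrisp's internal code), the single-non-zero-entry-per-column condition can be tested directly on a gate's unitary:

```
import numpy as np

def is_qfree(U, tol=1e-9):
    # qfree <=> at most one non-zero entry per column of the unitary
    return all(np.count_nonzero(np.abs(U[:, k]) > tol) == 1
               for k in range(U.shape[1]))

X = np.array([[0, 1], [1, 0]], dtype=float)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(is_qfree(X), is_qfree(H))   # True, False
```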
### Permeability
_Permeability_ is a concept introduced in the Qrisp implementation of Unqomp that generalizes the notion of a controlled operation. The permeability status of a gate on a certain input qubit \(q\) decides how this gate is treated when \(q\) is uncomputed. We choose this definition because it permits a broader scope of uncomputable circuits than Unqomp [7] and can be decided in linear time if the unitary matrix is available. A gate is called permeable on qubit \(i\) if it commutes with the \(Z\) operator on this qubit.
\[\text{U is permeable on qubit }\text{i}\Leftrightarrow\text{UZ}_{i}=\text{Z}_{i} \text{U} \tag{6}\]
This implies that any controlled gate is permeable on its control qubit because
\[\text{Z}_{0}\text{cU}=\begin{pmatrix}\mathbb{1}&0\\ 0&-\mathbb{1}\end{pmatrix}\begin{pmatrix}\mathbb{1}&0\\ 0&U\end{pmatrix}=\begin{pmatrix}\mathbb{1}&0\\ 0&-U\end{pmatrix}=\begin{pmatrix}\mathbb{1}&0\\ 0&U\end{pmatrix}\begin{pmatrix}\mathbb{1}&0\\ 0&-\mathbb{1}\end{pmatrix}=\text{cUZ}_{0}\]
However, not every permeable unitary is equal to a controlled gate, for example, \(\text{Z}_{0}\text{CX}_{01}\).
Why is this property relevant for the Unqomp algorithm? The defining feature of the DAG (directed acyclic graph) representation of Unqomp is the
fact that multiple "control edges" can be connected to the same node. This is due to the commutative property of control knobs:
[Circuit identity: two gates controlled on the same qubit can be applied in either order.]
The DAG representation of Unqomp no longer contains any information about the order in which controlled gates are applied. It therefore supports a flexible insertion of the inverse "uncomputation" gates, since it is not necessary to specify the concrete position in a sequence of controlled gates. In other words, Unqomp's DAG representation abstracts away equivalence classes of gate sequence permutations based on non-trivial commutation relations.
At this point we need the following theorem, which is proved in the appendix:
Theorem 4.1: _Let \(U\in U(2^{n})\) and \(V\in U(2^{m})\) be \(n\) and \(m\) qubit operators, respectively. If \(U\) is permeable on its last \(p\) qubits and \(V\) is permeable on its first \(p\) qubits, the two operators commute, if they intersect only on these qubits:_
\[(U\otimes\mathbb{1}^{\otimes m-p})(\mathbb{1}^{\otimes n-p}\otimes V)=( \mathbb{1}^{\otimes n-p}\otimes V)(U\otimes\mathbb{1}^{\otimes m-p}) \tag{7}\]
According to Theorem 4.1, it is not only control knobs that possess the above non-trivial commutation relation but the same is also true for two general gates, \(U,V\) if they are both permeable on \(q_{1}\):
[Circuit identity: \(U\) and \(V\) applied in either order yield the same circuit when they overlap only on the qubit \(q_{1}\) on which both are permeable.]
We therefore modify the Unqomp algorithm in such a way that, every time it determines whether a gate is controlled on a certain qubit, we instead return the permeability status on that qubit. This simple modification provides a uniform way of treating the uncomputation of synthesized gates, and it also expands the class of circuits that can be uncomputed. For example, an important class of synthesized gates that is permeable but not controlled is quantum logic synthesis.
As mentioned before, permeability can be determined efficiently. This is due to the fact that, according to Theorem 4.2 (in the appendix), the matrix representation is block diagonal. For instance, if \(p=2\):
\[U=\begin{pmatrix}\tilde{U}_{0}&0&0&0\\ 0&\tilde{U}_{1}&0&0\\ 0&0&\tilde{U}_{2}&0\\ 0&0&0&\tilde{U}_{3}\end{pmatrix} \tag{8}\]
Therefore, permeability can be decided by iteratively checking the off-diagonal blocks for non-zero entries.
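As an illustration (again a NumPy sketch rather than Qrisp's internal implementation), permeability on qubit \(i\) can equivalently be tested through the commutation relation of Eq. (6):

```
import numpy as np

def z_op(i, n):
    # Z on qubit i of an n-qubit register (qubit 0 = most significant)
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, np.diag([1.0, -1.0]) if k == i else np.eye(2))
    return out

def is_permeable(U, i, tol=1e-9):
    n = int(round(np.log2(U.shape[0])))
    Z = z_op(i, n)
    return np.allclose(U @ Z, Z @ U, atol=tol)

CX = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
               [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
print(is_permeable(CX, 0), is_permeable(CX, 1))  # True (control), False (target)
```

Checking the off-diagonal blocks of the matrix in Eq. (8) for non-zero entries avoids the explicit matrix products and yields the same verdict.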
## 7 Summary and Conclusions
In this paper, we gave a short introduction to why uncomputation is necessary in general and how to perform it. We introduced two ways of implementing and using the state-of-the-art algorithm Unqomp [7] (the auto_uncompute decorator and the uncompute method of the QuantumVariable class) in the Qrisp high-level programming framework. Moreover, we gave a short example of how to deploy these techniques, in order to have an even more elegant formulation of solving quadratic equations using Grover's algorithm [3] than in our previous article about Qrisp [9]. Finally, we elaborated on our extension of the Unqomp algorithm, which supports the uncomputation of more general quantum circuits, together with an efficient way of deciding the necessary properties (permeability, _qfree_-ness) required for it to work.
|
2302.12918 | Deep Graph Stream SVDD: Anomaly Detection in Cyber-Physical Systems | Our work focuses on anomaly detection in cyber-physical systems. Prior
literature has three limitations: (1) Failing to capture long-delayed patterns
in system anomalies; (2) Ignoring dynamic changes in sensor connections; (3)
The curse of high-dimensional data samples. These limit the detection
performance and usefulness of existing works. To address them, we propose a new
approach called deep graph stream support vector data description (SVDD) for
anomaly detection. Specifically, we first use a transformer to preserve both
short and long temporal patterns of monitoring data in temporal embeddings.
Then we cluster these embeddings according to sensor type and utilize them to
estimate the change in connectivity between various sensors to construct a new
weighted graph. The temporal embeddings are mapped to the new graph as node
attributes to form weighted attributed graph. We input the graph into a
variational graph auto-encoder model to learn final spatio-temporal
representation. Finally, we learn a hypersphere that encompasses normal
embeddings and predict the system status by calculating the distances between
the hypersphere and data samples. Extensive experiments validate the
superiority of our model, which improves F1-score by 35.87%, AUC by 19.32%,
while being 32 times faster than the best baseline at training and inference. | Ehtesamul Azim, Dongjie Wang, Yanjie Fu | 2023-02-24T22:14:39Z | http://arxiv.org/abs/2302.12918v1 | # Deep Graph Stream SVDD: Anomaly Detection in Cyber-Physical Systems
###### Abstract
Our work focuses on anomaly detection in cyber-physical systems. Prior literature has three limitations: (1) Failing to capture long-delayed patterns in system anomalies; (2) Ignoring dynamic changes in sensor connections; (3) The curse of high-dimensional data samples. These limit the detection performance and usefulness of existing works. To address them, we propose a new approach called deep graph stream support vector data description (SVDD) for anomaly detection. Specifically, we first use a transformer to preserve both short and long temporal patterns of monitoring data in temporal embeddings. Then we cluster these embeddings according to sensor type and utilize them to estimate the change in connectivity between various sensors to construct a new weighted graph. The temporal embeddings are mapped to the new graph as node attributes to form weighted attributed graph. We input the graph into a variational graph auto-encoder model to learn final spatio-temporal representation. Finally, we learn a hypersphere that encompasses normal embeddings and predict the system status by calculating the distances between the hypersphere and data samples. Extensive experiments validate the superiority of our model, which improves F1-score by 35.87%, AUC by 19.32%, while being 32 times faster than the best baseline at training and inference.
## 1 Introduction
Cyber-physical systems (CPS) have been deployed everywhere and play a significant role in the real world, including smart grids, robotics systems, water treatment networks, etc. Due to their complex dependencies and relationships, these systems are vulnerable to abnormal system events (e.g., cyberattacks, system exceptions), which can cause catastrophic failures and expensive costs. In 2021, hackers infiltrated Florida's water treatment plants and boosted the sodium hydroxide level in the water supply to 100 times the normal level [3]. This may endanger the physical health of all Floridians. To maintain stable and safe CPS, considerable research effort has been devoted to effectively detecting anomalies in such systems using sensor monitoring data [19, 16].
Prior literature partially resolves this problem; however, three issues restrict the practicality and detection performance of existing works. **Issue 1: long-delayed patterns.** The malfunctioning effects of abnormal system events often do not
manifest immediately. Kravchik et al. employed LSTM to predict future values based on past values and assessed the system status using prediction errors [5]. However, constrained by the capability of LSTM, it is hard to capture long-delayed patterns, which may lead to suboptimal detection performance. _How can we sufficiently capture such long-delayed patterns?_ **Issue 2: dynamic changes in sensor-sensor influence.** Besides long-delayed patterns, the malfunctioning effects may propagate to other sensors. Wang et al. captured such propagation patterns in water treatment networks by integrating the sensor-sensor connectivity graph for cyber-attack detection [17]. However, the sensor-sensor influence may shift as the time series changes due to system failures. Ignoring such dynamics may result in failing to identify propagation patterns and cause poor detection performance. _How can we consider such dynamic sensor-sensor influence?_ **Issue 3: high-dimensional data samples.** Considering the labeled data sparsity issue in CPS, existing works focus on unsupervised or semi-supervised settings [17; 10]. But traditional models like One-Class SVM are too shallow to fit high-dimensional data samples. They have substantial time costs for feature engineering and model learning. _How can we improve the learning efficiency of anomaly detection in high-dimensional scenarios?_
To address these issues, we aim to effectively capture spatial-temporal dynamics in high-dimensional sensor monitoring data. In CPS, sensors can be viewed as nodes, and their physical connections resemble a graph. Considering that the monitoring data of each sensor changes over time and that the monitoring data of various sensors influences one another, we model them using a graph stream structure. Based on that, we propose a new framework called Deep Graph Stream Support Vector Data Description (**DGS-SVDD**). Specifically, to capture long-delayed patterns, we first develop a temporal embedding module based on the transformer [15]. This module is used to extract these patterns from individual sensor monitoring data and embed them in low-dimensional vectors. Then, to comprehend dynamic changes in sensor-sensor connections, we estimate the influence between sensors using the previously learned temporal embeddings of sensors. The estimated weight matrix is integrated with the sensor-sensor physical connectivity graph to produce an enhanced graph. We map the temporal embeddings to each node in the enhanced graph as its attributes to form a new attributed graph. After that, we input this graph into the variational graph auto-encoder (VGAE) [4] to preserve all information as final spatial-temporal embeddings. Moreover, to effectively detect anomalies in high-dimensional data, we adopt deep learning to learn the hypersphere that encompasses normal embeddings. The distances between the hypersphere and data samples are used as the criterion to predict the system status at each time segment. Finally, we conduct extensive experiments on a real-world dataset to validate the superiority of our work. In particular, compared to the best baseline model, DGS-SVDD improves F1-score by 35.87% and AUC by 19.32%, while accelerating model training and inference by 32 times.
## 2 Preliminaries
### Definitions
Definition 1: Graph Stream. A graph object \(\mathcal{G}_{i}\) describes the monitoring values of the Cyber-Physical System at timestamp \(i\). It can be defined as \(\mathcal{G}_{i}\) = (\(\mathcal{V}\),\(\mathcal{E}\),\(\mathbf{t}_{i}\)) where \(\mathcal{V}\) is the vertex (i.e., sensor) set with a size of \(n\); \(\mathcal{E}\) is the edge set with a size of \(m\), and each edge indicates the physical connectivity between any two sensors; \(\mathbf{t}_{i}\) is a list that contains the monitoring value of \(n\) sensors at the \(i\)-th timestamp. A graph stream is a collection of graph objects over the temporal dimension. The graph stream with the length of \(L_{x}\) at the \(t\)-th time segment can be defined as \(\mathbf{X}_{t}=[\mathcal{G}_{i},\mathcal{G}_{i+1},\cdots\mathcal{G}_{i+L_{x}- 1}]\).
Definition 2: Weighted Attributed Graph. The edge set \(\mathcal{E}\) of each graph object in the graph stream \(\mathbf{X}_{t}\) does not change over time, which is a binary edge set that reflects the physical connectivity between sensors. However, the correlations between different sensors may change as system failures happen. To capture such dynamics, we use \(\tilde{\mathcal{G}}_{t}=(\mathcal{V},\tilde{\mathcal{E}}_{t},\mathbf{U}_{t})\) to denote the weighted attributed graph at the \(t\)-th time segment. In the graph, \(\mathcal{V}\) is the same as the graph object in the graph stream, which is the vertex (i.e., sensor) set with a size of \(n\); \(\tilde{\mathcal{E}}_{t}\) is the weighted edge set, in which each item indicates the weighted influence calculated from the temporal information between two sensors; \(\mathbf{U}_{t}\) is the attributes of each vertex, which is also the temporal embedding of each node at the current time segment. Thus, \(\tilde{\mathcal{G}}_{t}\) contains the spatial-temporal information of the system.
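For concreteness, the two structures of Definitions 1 and 2 could be sketched in code as follows; this is a minimal illustration, and all names (`GraphObject`, `WeightedAttributedGraph`, `graph_stream`) are ours, not part of the paper.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class GraphObject:
    """One snapshot G_i = (V, E, t_i): fixed topology plus sensor readings."""
    adjacency: np.ndarray   # (n, n) binary physical connectivity, shared over time
    values: np.ndarray      # (n,) monitoring value of each sensor at timestamp i

@dataclass
class WeightedAttributedGraph:
    """G~_t = (V, E~_t, U_t): weighted edges plus temporal node attributes."""
    weighted_adjacency: np.ndarray  # (n, n) real-valued sensor-sensor influence
    node_attributes: np.ndarray     # (n, d_model) temporal embedding per sensor

def graph_stream(adjacency: np.ndarray, series: np.ndarray) -> list:
    """Slice an (L_x, n) series into the graph stream X_t = [G_i, ..., G_{i+L_x-1}]."""
    return [GraphObject(adjacency, row) for row in series]
```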
### Problem Statement
Our goal is to detect anomalies in cyber-physical systems at each time segment. Formally, assuming that the graph stream data at the \(t\)-th segment is \(\mathbf{X}_{t}\), the corresponding system status is \(y_{t}\). We aim to find an outlier detection function that learns the mapping relation between \(\mathbf{X}_{t}\) and \(y_{t}\), denoted by \(f(\mathbf{X}_{t})\to y_{t}\). Here, \(y_{t}\) is a binary constant whose value is 1 if the system status is abnormal and 0 otherwise.
## 3 Methodology
In this section, we give an overview of our framework and then describe each technical part in detail.
### Framework Overview
Figure 1 shows an overview of our framework, named DGS-SVDD. Specifically, we start by feeding the DGS-SVDD model the graph stream data for one time segment. In the model, we first analyze the graph stream data by adopting the transformer-based temporal embedding module to extract temporal dependencies. Then, we use the learnt temporal embedding to estimate the dynamics
of sensor-sensor influence and combine it with information about the topological structure of the graph stream data to generate weighted attributed graphs. We then input the graph into the variational graph autoencoder (VGAE)-based spatial embedding module to get the spatial-temporal embeddings. Finally, we estimate the boundary of the embeddings of normal data using deep learning and support vector data description (SVDD), and predict the system status by measuring how far away the embedding sample is from the boundary.
### Embedding temporal patterns of the graph stream data
The temporal patterns of sensors may evolve over time if abnormal system events occur. We create a temporal embedding module that uses a transformer in a predictive manner to capture such patterns for accurate anomaly detection. To illustrate the following calculation process, we use the graph stream data \(\mathbf{X}_{t}\) at the \(t\)-th time segment as an example. We ignore the topological structure of the graph stream data at first during the temporal embedding learning process. Thus, we collect the time series data in \(\mathbf{X}_{t}\) to form a temporal matrix \(\mathbf{T}_{t}=[\mathbf{t}_{1},\mathbf{t}_{2},\cdots,\mathbf{t}_{L_{x}}]\), such that \(\mathbf{T}_{t}\in\mathbb{R}^{n\times L_{x}}\), where \(n\) is the number of sensors and \(L_{x}\) is the length of the time segment.
The temporal embedding module consists of an encoder and a decoder. For the encoder part, we input \(\mathbf{T}_{t}\) into it for learning enhanced temporal embedding \(\mathbf{U}_{t}\). Specifically, we first use the multi-head attention mechanism to calculate the attention matrices between \(\mathbf{T}_{t}\) and itself for enhancing the temporal patterns among different sensors by information sharing. Considering that the calculation process in each head is the same, we take \(\mathit{head}_{1}\) as an example to illustrate. To obtain the self-attention matrix \(\mathrm{Attn}(\mathbf{T}_{t},\mathbf{T}_{t})\), we input \(\mathbf{T}_{t}\) into \(\mathit{head}_{1}\), which can be formulated as follows,
\[\mathrm{Attn}(\mathbf{T}_{t},\mathbf{T}_{t})=\mathit{softmax}(\frac{(\mathbf{ T}_{t}\cdot\mathbf{W}_{t}^{Q})(\mathbf{T}_{t}\cdot\mathbf{W}_{t}^{K})^{\top}}{ \sqrt{L_{x}}})\cdot(\mathbf{T}_{t}\cdot\mathbf{W}_{t}^{V}) \tag{1}\]
Figure 1: An overview of our framework. There are four key components: transformer-based temporal embedding module, weighted attributed graph generator, VGAE-based spatiotemporal embedding module, and SVDD-based outlier detector.
where \(\mathbf{W}_{t}^{K}\in\mathbb{R}^{L_{x}\times d}\), \(\mathbf{W}_{t}^{Q}\in\mathbb{R}^{L_{x}\times d}\), and \(\mathbf{W}_{t}^{V}\in\mathbb{R}^{L_{x}\times d}\) are the weight matrices for the "key", "query", and "value" embeddings; \(\sqrt{L_{x}}\) is the scaling factor. Assuming that we have \(h\) heads, we concatenate the learned attention matrices together in order to capture the temporal patterns of monitoring data from different perspectives. The calculation process can be defined as follows:
\[\mathbf{T}_{t}^{\prime}=\text{Concat}(\text{Attn}_{t}^{1},\text{Attn}_{t}^{2}, \cdots,\text{Attn}_{t}^{h})\cdot\mathbf{W}_{t}^{O} \tag{2}\]
where \(\mathbf{W}_{t}^{O}\in\mathbb{R}^{hd\times d_{\text{model}}}\) is the weight matrix and \(\mathbf{T}_{t}^{\prime}\in\mathbb{R}^{n\times d_{\text{model}}}\). After that, we input \(\mathbf{T}_{t}^{\prime}\) into a fully connected feed-forward network constructed by two linear layers to obtain the enhanced embedding \(\mathbf{U}_{t}\in\mathbb{R}^{n\times d_{\text{model}}}\). The calculation process can be defined as follows:
\[\mathbf{U}_{t}=\mathbf{T}_{t}^{\prime}+\text{Relu}(\mathbf{T}_{t}^{\prime} \cdot\mathbf{W}_{t}^{1}+\mathbf{b}_{t}^{1})\cdot\mathbf{W}_{t}^{2}+\mathbf{b }_{t}^{2} \tag{3}\]
where \(\mathbf{W}_{t}^{1}\) and \(\mathbf{W}_{t}^{2}\) are weight matrices of shape \(\mathbb{R}^{d_{\text{model}}\times d_{\text{model}}}\); \(\mathbf{b}_{t}^{1}\) and \(\mathbf{b}_{t}^{2}\) are bias terms of shape \(\mathbb{R}^{n\times d_{\text{model}}}\).
For the decoder part, we input the learned embedding \(\mathbf{U}_{t}\) into a prediction layer to predict the monitoring value of the future time segment. The prediction process can be defined as follows:
\[\mathbf{\tilde{T}}_{t+1}=\mathbf{U}_{t}\cdot\mathbf{W}_{t}^{p}+\mathbf{b}_{t} ^{p} \tag{4}\]
where \(\mathbf{\tilde{T}}_{t+1}\in\mathbb{R}^{n\times L_{x}}\) is the predicted value of the next time segment; \(\mathbf{W}_{t}^{p}\in\mathbb{R}^{d_{\text{model}}\times L_{x}}\) is the weight matrix and \(\mathbf{b}_{t}^{p}\in\mathbb{R}^{n\times L_{x}}\) is the bias term. During the optimization process, we minimize the difference between the prediction \(\mathbf{\tilde{T}}_{t+1}\) and the real monitoring value \(\mathbf{T}_{t+1}\). The optimization objective can be defined as follows:
\[\min\sum_{t=1}^{T}||\mathbf{T}_{t+1}-\mathbf{\tilde{T}}_{t+1}||^{2} \tag{5}\]
When the model converges, we have preserved temporal patterns of monitoring data in the temporal embedding \(\mathbf{U}_{t}\).
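A minimal PyTorch sketch of this encoder-decoder, following Eqs. (1)-(4), could look as follows; the hidden sizes `d`, `d_model`, and the head count `h` are illustrative assumptions, and the training loop implementing Eq. (5) is omitted.

```python
import torch
import torch.nn as nn

class TemporalEmbedding(nn.Module):
    """Sketch of the temporal embedding module of Sec. 3.2 (Eqs. 1-4)."""
    def __init__(self, L_x: int, d: int = 64, d_model: int = 64, h: int = 4):
        super().__init__()
        # one (Q, K, V) projection triple per attention head
        self.heads = nn.ModuleList(
            [nn.ModuleDict({k: nn.Linear(L_x, d, bias=False)
                            for k in ("Q", "K", "V")}) for _ in range(h)])
        self.W_O = nn.Linear(h * d, d_model, bias=False)   # Eq. (2)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                 nn.Linear(d_model, d_model))  # Eq. (3)
        self.predict = nn.Linear(d_model, L_x)             # decoder, Eq. (4)
        self.scale = L_x ** 0.5                            # scaling in Eq. (1)

    def forward(self, T_t):                                # T_t: (n, L_x)
        attn = []
        for head in self.heads:                            # Eq. (1) per head
            Q, K, V = head["Q"](T_t), head["K"](T_t), head["V"](T_t)
            A = torch.softmax(Q @ K.T / self.scale, dim=-1)
            attn.append(A @ V)
        T_prime = self.W_O(torch.cat(attn, dim=-1))        # Eq. (2)
        U_t = T_prime + self.ffn(T_prime)                  # Eq. (3), residual
        return U_t, self.predict(U_t)                      # embedding, T~_{t+1}
```

In training, the module would be optimized with the squared prediction error of Eq. (5), e.g. `loss = ((T_next - pred) ** 2).sum()` for each consecutive pair of segments.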
### Generating dynamic weighted attributed graphs
In CPS, different sensors connect with each other, which forms a sensor-sensor graph. As a result, the malfunctioning effects of abnormal system events may propagate over time following the graph structure. However, the sensor-sensor influence is not static and may vary as the monitoring data changes due to system anomaly events. To capture such dynamics, we build weighted attributed graphs using sensor-type information and the learned temporal embeddings. For simplicity, we take the graph stream data of the \(t\)-th time segment, \(\mathbf{X}_{t}\), as an example to illustrate the following calculation process.
Specifically, the adjacency matrix of \(\mathbf{X}_{t}\) is \(\mathbf{A}\in\mathbb{R}^{n\times n}\), which reflects the physical connectivity between different sensors. \(\mathbf{A}[i,j]=1\) when sensor \(i\) and \(j\) are
directly connected and \(\mathbf{A}[i,j]=0\) otherwise. From section 3.2, we have obtained the temporal embedding \(\mathbf{U}_{t}\in\mathbb{R}^{n\times d_{model}}\), each row of which represents the temporal embedding for each sensor. We assume that the sensors belonging to the same type have similar changing patterns when confronted with system anomaly events. Thus, we want to capture this characteristic by integrating sensor type information into the adjacency matrix. We calculate the sensor type embedding by averaging the temporal embedding of sensors belonging to the type. After that, we construct a type-type similarity matrix \(\mathbf{C}_{t}\in\mathbb{R}^{k\times k}\) by calculating the cosine similarity between each pair of sensor types, \(k\) being the number of sensor types. Moreover, we construct the similarity matrix \(\tilde{\mathbf{C}}_{t}\in\mathbb{R}^{n\times n}\) by mapping \(\mathbf{C}_{t}\) to each element position of \(\mathbf{A}\). For instance, if sensor 1 belongs to type 2 and sensor 2 belongs to type 3, we update \(\tilde{\mathbf{C}}_{t}[1,2]\) with \(\mathbf{C}_{t}[2,3]\). We then introduce the dynamic property to the adjacency matrix \(\mathbf{A}\) through element-wise multiplication between \(\mathbf{A}\) and \(\tilde{\mathbf{C}}_{t}\). Each temporal embedding of this time segment is mapped to the weighted graph as the node attributes according to sensor information. The obtained weighted attributed graph \(\tilde{\mathcal{G}}_{t}\) contains all spatial-temporal information of CPS for the \(t\)-th time segment. The topological influence of this graph may change over time.
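A possible NumPy sketch of this construction (type averaging, type-type cosine similarity, mapping back to sensor pairs, and element-wise weighting) is given below; the function name and interface are illustrative assumptions.

```python
import numpy as np

def build_weighted_graph(A, U_t, sensor_types):
    """Sketch of Sec. 3.3: fuse temporal embeddings U_t (n, d_model) with the
    binary adjacency A (n, n) via type-type cosine similarity.
    `sensor_types` maps each sensor index to its type id in [0, k)."""
    k = sensor_types.max() + 1
    # type embedding: average of temporal embeddings per sensor type
    type_emb = np.stack([U_t[sensor_types == c].mean(axis=0) for c in range(k)])
    norm = type_emb / np.linalg.norm(type_emb, axis=1, keepdims=True)
    C_t = norm @ norm.T                            # (k, k) type-type cosine similarity
    # map C_t back to sensor pairs: entry (i, j) = C_t[type(i), type(j)]
    C_tilde = C_t[sensor_types][:, sensor_types]   # (n, n)
    A_tilde = A * C_tilde                          # element-wise multiplication
    return A_tilde, U_t                            # weighted edges + node attributes
```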
### Representation learning for weighted attributed graph
To help the outlier detection model comprehend the information of \(\tilde{\mathcal{G}}_{t}\), we develop a representation learning module based on the variational graph autoencoder (VGAE). For simplicity, we use \(\tilde{\mathcal{G}}_{t}\) to illustrate the representation learning process. For \(\tilde{\mathcal{G}}_{t}=(\mathcal{V},\tilde{\mathcal{E}}_{t},\mathbf{U}_{t})\), the adjacency matrix \(\tilde{\mathbf{A}}_{t}\) is constructed from \(\mathcal{V}\) and \(\tilde{\mathcal{E}}_{t}\), and the feature matrix is \(\mathbf{U}_{t}\).
Specifically, this module follows the encoder-decoder paradigm. The encoder includes two Graph Convolutional Network(GCN) layers. The first GCN layer takes \(\mathbf{U}_{t}\) and \(\tilde{\mathbf{A}}_{t}\) as inputs and outputs a lower dimensional feature matrix \(\tilde{\mathbf{U}}_{t}\). The calculation process can be represented as follows:
\[\tilde{\mathbf{U}}_{t}=\text{Relu}(\hat{\mathbf{D}}_{t}^{-1/2}\tilde{\mathbf{A}}_{t}\hat{\mathbf{D}}_{t}^{-1/2}\mathbf{U}_{t}\tilde{\mathbf{W}}_{0}) \tag{6}\]
where \(\hat{\mathbf{D}}_{t}\) is the diagonal degree matrix of \(\tilde{\mathcal{G}}_{t}\) and \(\tilde{\mathbf{W}}_{0}\) is the weight matrix of the first GCN layer. The second GCN layer estimates the distribution of the graph embeddings. Assuming that such embeddings conform to the normal distribution \(\mathcal{N}(\mathbf{\mu}_{t},\mathbf{\delta}_{t})\), we need to estimate the mean \(\mathbf{\mu}_{t}\) and variance \(\mathbf{\delta}_{t}\) of the distribution. Thus, the encoding process of the second GCN layer can be formulated as follows:
\[\mathbf{\mu}_{t},\textit{log}(\mathbf{\delta}_{t}^{2})=\text{Relu}(\hat{\mathbf{D}}_{t}^{-1/2}\tilde{\mathbf{A}}_{t}\hat{\mathbf{D}}_{t}^{-1/2}\tilde{\mathbf{U}}_{t}\tilde{\mathbf{W}}_{1}) \tag{7}\]
where \(\tilde{\mathbf{W}}_{1}\) is the weight matrix of the second GCN layer. Then, we use the reparameterization technique to mimic the sample operation to obtain the graph embedding \(\mathbf{r}_{t}\), which can be represented as follows:
\[\mathbf{r}_{t}=\mathbf{\mu}_{t}+\mathbf{\delta}_{t}\times\mathbf{\epsilon}_{t} \tag{8}\]
where \(\mathbf{\epsilon}_{t}\) is the random variable vector, which is sampled from \(\mathcal{N}(0,I)\). Here, \(\mathcal{N}(0,I)\) represents the high-dimensional standard normal distribution.
The decoder part aims to reconstruct the adjacency matrix of the graph using \(\mathbf{r}_{t}\), which can be defined as follows:
\[\mathbf{\hat{A}}_{t}=\sigma(\mathbf{r}_{t}\mathbf{r}_{t}{}^{\top}) \tag{9}\]
where \(\mathbf{\hat{A}}_{t}\) is the reconstructed adjacency matrix, and each entry of \(\mathbf{r}_{t}\mathbf{r}_{t}{}^{\top}\) is the inner product between the embeddings of the corresponding pair of nodes.
During the optimization process, we aim to minimize two objectives: 1) the divergence between the prior embedding distribution \(\mathcal{N}(0,I)\) and the estimated embedding distribution \(\mathcal{N}(\mathbf{\mu}_{t},\mathbf{\delta}_{t})\); 2) the difference between the adjacency matrix \(\tilde{\mathbf{A}}_{t}\) and the reconstructed adjacency matrix \(\mathbf{\hat{A}}_{t}\). Thus, the optimization objective function is as follows:
\[\min\sum_{t=1}^{T}\underbrace{\textit{KL}[q(\mathbf{r}_{t}|\mathbf{U}_{t}, \tilde{\mathbf{A}}_{t})||p(\mathbf{r}_{t})]}_{\text{KL divergence between }q(.)\text{ and }p(.)}+ \overbrace{||\tilde{\mathbf{A}}_{t}-\mathbf{\hat{A}}_{t}||^{2}}^{\text{Loss between }\tilde{\mathbf{A}}_{t}\text{ and }\mathbf{\hat{A}}_{t}} \tag{10}\]
where _KL_ refers to the Kullback-Leibler divergence; \(q(.|.)\) is the estimated embedding distribution and \(p(.)\) is the prior embedding distribution. When the model converges, the graph embedding \(\mathbf{r}_{t}\in\mathbb{R}^{n\times d_{\text{emb}}}\) contains spatiotemporal patterns of the monitoring data for the \(t\)-th time segment.
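The encoder, reparameterization, decoder, and loss of Eqs. (6)-(10) could be sketched in PyTorch as follows; the layer sizes are illustrative, and we keep the mean/log-variance heads linear (the ReLU of Eq. (7) can be added analogously).

```python
import torch
import torch.nn as nn

class VGAE(nn.Module):
    """Sketch of the representation learning module of Sec. 3.4."""
    def __init__(self, d_in: int, d_hid: int, d_emb: int):
        super().__init__()
        self.W0 = nn.Linear(d_in, d_hid, bias=False)      # first GCN layer, Eq. (6)
        self.W_mu = nn.Linear(d_hid, d_emb, bias=False)   # second GCN layer, Eq. (7)
        self.W_logvar = nn.Linear(d_hid, d_emb, bias=False)

    @staticmethod
    def normalize(A):
        """Symmetric normalization D^{-1/2} A D^{-1/2}."""
        d = A.sum(dim=1).clamp(min=1e-8).pow(-0.5)
        return d.unsqueeze(1) * A * d.unsqueeze(0)

    def forward(self, A_tilde, U_t):
        A_hat = self.normalize(A_tilde)
        H = torch.relu(A_hat @ self.W0(U_t))              # Eq. (6)
        mu, logvar = self.W_mu(A_hat @ H), self.W_logvar(A_hat @ H)
        r_t = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # Eq. (8)
        A_rec = torch.sigmoid(r_t @ r_t.T)                # decoder, Eq. (9)
        return r_t, A_rec, mu, logvar

def vgae_loss(A_tilde, A_rec, mu, logvar):
    """Eq. (10): reconstruction error plus KL to the standard normal prior."""
    recon = ((A_tilde - A_rec) ** 2).sum()
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum()
    return recon + kl
```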
### One-Class Detection with SVDD
Considering the sparsity issue of labeled anomaly data in CPS, anomaly detection is done in an unsupervised setting. Inspired by deep SVDD [14], we aim to learn a hypersphere that encircles most of the normal data, with data samples located beyond it being anomalous. Due to the complex nonlinear relations among the monitoring data, we use deep neural networks to approximate this hypersphere.
Specifically, through the above procedure, we collect the spatiotemporal embeddings of all time segments, denoted by \([\mathbf{r}_{1},\mathbf{r}_{2},\cdots,\mathbf{r}_{T}]\). We input them into a multi-layer neural network to estimate the non-linear hypersphere. Our goal is to minimize the volume of this data-enclosing hypersphere. The optimization objective can be defined as follows:
\[\min_{\mathcal{W}}\underbrace{\frac{1}{T}\sum_{t=1}^{T}||\phi(\mathbf{r}_{t}; \mathcal{W})-c||^{2}}_{\text{Average squared distance to the center over all normal training samples}}+\overbrace{\frac{\lambda}{2}||\mathcal{W}||_{F}^{2}}^{\text{Regularization term}} \tag{11}\]
where \(\mathcal{W}\) is the set of weight matrices of the neural network layers; \(\phi(\mathbf{r}_{t};\mathcal{W})\) maps \(\mathbf{r}_{t}\) to the non-linear hidden representation space; \(c\) is the predefined hypersphere center; \(\lambda\) is the weight decay regularizer. The first term of the equation pulls the representations of the normal samples as close as possible to the center \(c\), yielding the tightest enclosing hypersphere. The second term reduces the complexity of \(\mathcal{W}\), which avoids overfitting. As the model converges, we obtain the network parameters of the trained model, \(\mathcal{W}^{*}\).
During the testing stage, given the embedding of a test sample \(\mathbf{r}_{o}\), we input it into the well-trained neural networks to get the new representation. Then, we calculate the anomaly score of the sample based on the distance between it and the center of the hypersphere. The process can be formulated as follows:
\[s(\mathbf{r}_{o})=||\phi(\mathbf{r}_{o};\mathcal{W}^{*})-c||^{2} \tag{12}\]
After that, we compare the score with our predefined threshold to assess the abnormal status of each time segment in CPS.
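A compact sketch of this one-class objective (Eqs. (11)-(12)) is shown below; the network architecture and the choice of the center \(c\) as the mean of the initial outputs are assumptions (the latter is common Deep SVDD practice [14]), and Adam's weight decay only approximates the Frobenius-norm regularizer.

```python
import torch
import torch.nn as nn

def train_svdd(embeddings, d_emb, d_out=32, lam=1e-3, epochs=100, lr=1e-3):
    """Sketch of Sec. 3.5 / Eq. (11): shrink normal embeddings around a center c.
    `embeddings`: (T, d_emb) per-segment embedding vectors (e.g. pooled r_t)."""
    phi = nn.Sequential(nn.Linear(d_emb, 64), nn.ReLU(),
                        nn.Linear(64, d_out, bias=False))
    with torch.no_grad():            # fix c to the mean of the initial outputs
        c = phi(embeddings).mean(dim=0)
    opt = torch.optim.Adam(phi.parameters(), lr=lr, weight_decay=lam)
    for _ in range(epochs):
        opt.zero_grad()
        # first term of Eq. (11): average squared distance to the center
        loss = ((phi(embeddings) - c) ** 2).sum(dim=1).mean()
        loss.backward()
        opt.step()
    return phi, c

def anomaly_score(phi, c, r_o):
    """Eq. (12): squared distance of a test embedding to the center."""
    with torch.no_grad():
        return ((phi(r_o) - c) ** 2).sum(dim=-1)
```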
## 4 Experiments
We conduct extensive experiments to validate the efficacy and efficiency of our framework (DGS-SVDD) and the necessity of each technical component.
### Experimental Settings
#### 4.1.1 Data Description
We adopt the SWaT dataset [11] from the Singapore University of Technology and Design in our experiments. This dataset was collected from a water treatment testbed that contains 51 sensors and actuators. The collection process continued for 11 days. The system's status was normal for the first 7 days; during the final 4 days, it was attacked by a cyber-attack model. The statistical information of the SWaT dataset is shown in Table 1. Our goal is to detect attack anomalies as precisely as possible. We only use the normal data to train our model. After the training phase, we validate the capability of our model by detecting the status of the testing data, which contains both normal and anomalous data.
#### 4.1.2 Evaluation Metrics
We evaluate the model performance in terms of precision, recall, area under the receiver operating characteristic curve (ROC/AUC), and F1-score. We adopt the point-adjust approach to calculate these metrics. In particular, abnormal observations typically occur in succession and form anomaly segments, and an anomaly alert can be triggered at any point within an actual anomaly window. Therefore, if any observation in an actual anomaly segment is detected as abnormal, we consider the time points of the entire segment to have been accurately detected (a code sketch of this adjustment is given after Table 1).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Data Type & Feature Number & Total Items & Anomaly Number & Normal/Anomaly \\ \hline Normal & 51 & 496800 & 0 & - \\ Anomalous & 51 & 449919 & 53900 & 7:1 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of SWaT Dataset
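The point-adjust protocol can be sketched as follows; this is a minimal illustration where `pred` and `label` are binary arrays over time points, and the function name is ours.

```python
import numpy as np

def point_adjust(pred: np.ndarray, label: np.ndarray) -> np.ndarray:
    """If any point inside a true anomaly segment is flagged, the whole
    segment counts as detected; metrics are then computed on the result."""
    adjusted = pred.copy()
    t = 0
    while t < len(label):
        if label[t] == 1:                       # start of a true anomaly segment
            end = t
            while end < len(label) and label[end] == 1:
                end += 1
            if adjusted[t:end].any():           # at least one alert inside it
                adjusted[t:end] = 1
            t = end
        else:
            t += 1
    return adjusted
```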
#### 4.1.3 Baseline Models
To make the comparison objective, we input the spatial-temporal embedding vector \(\mathbf{r}_{t}\) into baseline models instead of the original data. There are seven baselines in our work:

* **KNN** [12]: calculates the anomaly score of each sample according to the anomaly situation of its K nearest neighbors.
* **Isolation-Forest** [8]: estimates the average path length (anomaly score) from the root node to the terminating node for isolating a data sample using a collection of trees.
* **LODA** [13]: collects a list of weak anomaly detectors to produce a stronger one. LODA can process sequential data flow and is robust to missing data.
* **LOF** [2]: measures the anomalous status of each sample based on its local density. If the density is low, the sample is abnormal; otherwise, it is normal.
* **ABOD** [6]: is an angle-based outlier detector. If a data sample is located in the same direction as more than K data samples, it is an outlier; otherwise, it is normal data.
* **OC-SVM** [9]: finds a hyperplane to divide normal and abnormal data through kernel functions.
* **GANomaly** [1]: utilizes an encoder-decoder-encoder architecture. It evaluates the anomaly status of each sample by calculating the difference between the output embeddings of two encoders.
### Experimental Results
#### 4.2.1 Overall Performance
Table 2 shows experimental results on the SWaT dataset, with the best scores highlighted in **bold**. As can be seen, DGS-SVDD outperforms other baseline models in the majority of evaluation metrics. Compared with the second-best baseline, DGS-SVDD improves precision by 19%, F1-score by 36%, and AUC by 8%. This observation validates that DGS-SVDD is effective at detecting anomalies accurately. The underlying driver for the success of our model is that DGS-SVDD can capture long-delayed temporal patterns and dynamic sensor-sensor influences in CPS. Another interesting observation is that the detection performance of distance-based or angle-based outlier detectors is poor. A possible reason is that these geometrical measurements are vulnerable to high-dimensional data samples.
#### 4.2.2 Ablation Study
To study the individual contribution of each component of DGS-SVDD, we perform ablation studies, the findings of which are summarized in Table 3 where **bold** indicates the best score. We build four variations of the
\begin{table}
\begin{tabular}{c c c c c} \hline Method & Precision (\%) & Recall (\%) & F1-score (\%) & AUC (\%) \\ \hline OC-SVM & 34.11 & 68.23 & 45.48 & 75 \\ Isolation-Forest & 35.42 & 81.67 & 49.42 & 80 \\ LOF & 15.81 & 93.88 & 27.06 & 63 \\ KNN & 15.24 & 96.77 & 26.37 & 61 \\ ABOD & 14.2 & **97.93** & 24.81 & 58 \\ GANomaly & 42.12 & 67.87 & 51.98 & 68.64 \\ LODA & 75.25 & 38.13 & 50.61 & 67.1 \\ DGS-SVDD & **94.17** & 82.33 & **87.85** & **87.96** \\ \hline \end{tabular}
\end{table}
Table 2: Experimental Results on SWaT dataset
DGS-SVDD model: 1) We feed unprocessed raw data into SVDD; 2) We only capture temporal patterns; 3) We capture the dynamics of sensor-sensor influence and spatial patterns in CPS; 4) We capture spatial-temporal patterns in CPS but discard the dynamics of sensor-sensor influence. We find that DGS-SVDD outperforms its variants by a significant margin, which validates that each technical component of our work is indispensable. Another interesting observation is that removing the temporal embedding module dramatically degrades the detection performance, making the temporal embedding module the most significant component. Results from the final variant show that capturing the dynamics of sensor-sensor influence further boosts model performance.
#### 4.2.3 Robustness Check and Parameter Sensitivity
Figure 2 shows the experimental results for the robustness check and parameter sensitivity analysis. To check the model's robustness, we train DGS-SVDD on different percentages of the training data, from 10% to 100%. From Figure 2(a), we can see that DGS-SVDD remains stable across different amounts of training data. Interestingly, DGS-SVDD achieves its best performance when trained on 50% of the training data. In addition, we vary the dimension of the final spatial-temporal embedding to check its impact. From Figures 2(b) and 2(c), we find that DGS-SVDD is barely sensitive to the sliding window length and the dimension of the spatiotemporal embeddings. This observation validates that DGS-SVDD is robust to these dimension parameters. A possible reason is that our representation learning module has sufficiently captured the spatial-temporal patterns of the monitoring data for anomaly detection.
#### 4.2.4 Study of Time Cost
We conduct six-fold cross-validation to evaluate the time costs of different models. Figure 3 illustrates the comparison results. We can find that DGS-SVDD can be trained in a time competitive with simple models like
\begin{table}
\begin{tabular}{c c c|c c c c} \hline \hline Transformer-based Temporal Embedding Module & Weighted Attributed Graph Generator & VGAE-based Spatiotemporal Embedding Module & Precision (\%) & Recall (\%) & F1-score (\%) & AUC (\%) \\ \hline ✗ & ✗ & ✗ & 4.61 & 12.45 & 6.74 & 18.55 \\ ✓ & ✗ & ✗ & 69.98 & 64.75 & 67.26 & 78.14 \\ ✗ & ✓ & ✓ & 12.16 & **99.99** & 21.68 & 18.22 \\ ✓ & ✗ & ✓ & 87.79 & 76.68 & 81.86 & 82.45 \\ ✓ & ✓ & ✓ & **94.17** & 82.33 & **87.75** & **87.96** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation Study of DGS-SVDD
Figure 2: Experimental results for robustness check and parameter sensitivity
OC-SVM or LOF, while outperforming them by a huge margin, as seen in Table 2. This shows that DGS-SVDD effectively learns the representation of each time segment of the graph stream data. Another important observation is that the testing time of DGS-SVDD is comparable to that of the simpler baselines. A potential reason is that the network parameter \(\mathcal{W}^{*}\), as discussed in section 3.5, completely characterizes our one-class classifier. This allows fast testing by simply evaluating the network \(\phi\) with the learnt parameters \(\mathcal{W}^{*}\).
## 5 Related Work
Anomaly Detection in Cyber-Physical Systems. A large body of existing literature has studied the exploitation of temporal and spatial relationships in data streams from CPS to detect anomalous points [5]. For instance, [5, 7] adopt a convolutional layer as the first layer of a Convolutional Neural Network to obtain correlations of multiple sensors in a sliding time window. The extracted features are then fed to subsequent layers to generate output scores. [7] proposed a GAN-based framework to capture the spatial-temporal correlation in multidimensional data, where both the generator and the discriminator are utilized to detect anomalies through reconstruction and discrimination errors.
Outlier detection with Deep SVDD. After being introduced in [14], deep SVDD and its many variants have been used for deep outlier detection. [18] designed _deep structure preservation SVDD_ by integrating deep feature extraction with data structure preservation. [20] proposed _Deep SVDD-VAE_, where a VAE is used to reconstruct the input sequences while a spherical discriminative boundary is simultaneously learned on the latent representations via SVDD. Although these models have been successfully applied to detect anomalies in the domain of computer vision, that domain lacks the temporal and spatial dependencies prevalent in graph stream data generated from CPS.
## 6 Conclusion
We propose DGS-SVDD, a structured anomaly detection framework for cyber-physical systems using graph stream data. To this end, we integrate
Figure 3: Comparison of different models in terms of training and testing time cost
spatiotemporal patterns, modeling of dynamic characteristics, deep representation learning, and one-class detection with SVDD. A transformer-based encoder-decoder architecture is used to preserve the temporal dependencies within a time segment. The temporal embedding and the predefined connectivity of the CPS are then used to generate weighted attributed graphs, from which the fused spatiotemporal embedding is learned by a spatial embedding module. A deep neural network integrated with one-class SVDD is then used to group the normal data points in a hypersphere from the learnt representations. Finally, we conduct extensive experiments on the SWaT dataset to illustrate the superiority of our method, as it delivers 35.87% and 19.32% improvements in F1-score and AUC, respectively. For future work, we wish to integrate a connectivity learning policy into the transformer so that it not only learns the temporal representation but also models the dynamic influence among sensors. The code can be publicly accessed at [https://github.com/ehtesam3154/dgs_svdd](https://github.com/ehtesam3154/dgs_svdd).
|
2303.14098 | Nonlinear Fisher Particle Output Feedback Control and its application to
Terrain Aided Navigation | This paper presents state estimation and stochastic optimal control gathered
in one global optimization problem generating dual effect i.e. the control can
improve the future estimation. As the optimal policy is impossible to compute,
a sub-optimal policy that preserves this coupling is constructed thanks to the
Fisher Information Matrix (FIM) and a Particle Filter. This method has been
applied to the localization and guidance of a drone over a known terrain with
height measurements only. The results show that the new method improves the
estimation accuracy compared to nominal trajectories. | Emilien Flayac, Karim Dahia, Bruno Hérissé, Frédéric Jean | 2023-03-24T16:05:48Z | http://arxiv.org/abs/2303.14098v1 | # Nonlinear Fisher Particle Output Feedback Control and its application to Terrain Aided Navigation
###### Abstract
This paper presents state estimation and stochastic optimal control gathered in one global optimization problem generating dual effect i.e. the control can improve the future estimation. As the optimal policy is impossible to compute, a sub-optimal policy that preserves this coupling is constructed thanks to the Fisher Information Matrix (FIM) and a Particle Filter. This method has been applied to the localization and guidance of a drone over a known terrain with height measurements only. The results show that the new method improves the estimation accuracy compared to nominal trajectories.
## Introduction
Stochastic optimal control problems with imperfect state information arise when an optimal control problem contains uncertainties on the dynamics and when its state is partially observed. These problems have many applications in chemistry [4][14] and in the automotive industry [6] for unmanned vehicles, for example. The Dynamic Programming principle [5] theoretically allows one to find the optimal controls, which are sought as policies due to the randomness of the problem. In addition, in such problems, as one has access to the state of the system only through some observations, a state estimator is also needed as a function of them. In some problems, the observations depend on the control; it is then said that the control has a _dual effect_ [9]. The control has a double role: it guides the system in a standard way and, at the same time, it can also look for more information about the system because it influences the observations [3].
Optimal policies are often impossible to compute directly because of the curse of dimensionality. Thus many sub-optimal policies have been developed to approximate the optimal one. A sub-optimal policy can be designed to keep the property of dual effect. It is mostly done when the control problem is mixed with a _parameter_ estimation problem [9]. Indeed, these methods are applied when learning about an unknown parameter of a system helps guiding it. We present a problem where the dual effect is used to improve _state_ estimation. Particle approximations are then very promising techniques. Indeed, they are very efficient to approximate stochastic optimization problems or to estimate the state of a system, even in presence of high uncertainties, high non-linearities and probability constraints.
Particle approximations are widely used in robust control. In [7], the planned trajectories consider uncertainties, obstacles or other probability constraints. Nevertheless, these methods do not include state estimation and do not compute control policies but control values. In [8], an optimization problem coupling state estimation by a _Moving Horizon Estimation_ (MHE) and control by _Model Predictive Control_ (MPC) is discussed but this problem does not include dual effect. In [12] and [13], a Particle Output MPC policy with a particle filter used for the estimation and inside the optimization problem is presented but, again, there is no coupling between the control and the future estimation. In [10], a dual controller based on a tree representation by particles is proposed. However, in the latter article, the particles inside the optimization problems are introduced by an Ensemble Kalman filter rather than with a Particle Filter. In [4], an implicit dual controller is computed thanks to a particle-based policy iteration approximation but it is extremely costly in practice and is limited to finite control spaces. In [14], an Output feedback method based on Unscented Kalman filter with a tree representation and measurements anticipation is proposed but the conditional probability density of the state is supposed to be gaussian at each time.
In this paper, we propose a particular stochastic optimization problem that merges state estimation and control. This problem makes explicitly appear dual effect additively in its cost which creates a coupling between the controls and the state estimators.
We also propose a sub-optimal policy of our new optimization problem based on two successive approximations. The first one consists in replacing a term by an equivalent and simpler one which maintains the coupling created by the dual effect. The second one is a particle approximation used, both inside the optimization problem to find the control, and outside it to estimate the state.
This paper is organized as follows: in Section I, we describe our new stochastic optimization problem and compare it with classical problems. In Section II, we describe the approximation of our problem and compare it to existing ones. We also give an application of our method with numerical results.
## I Setup of stochastic optimal control
### _Optimization problem coupling control and estimation_
#### I-A1 Stochastic dynamics and observation equation :
We consider a discrete-time stochastic dynamical system whose state is a stochastic process \(\left(X_{k}\right)_{k\in[0,T]}\) valued in \(\mathbb{R}^{n}\) with \(T\in\mathbb{N}^{*}\) which verifies \(\forall k\in[0,T-1]\):
\[X_{k+1} =f_{k}(X_{k},U_{k},\xi_{k}), \tag{1}\] \[X_{0} \sim p_{0},\]
where:
* \(p_{0}\) is a probability density and \(X_{0}\sim p_{0}\) means that \(p_{0}\) is the probability law of \(X_{0}\).
* \((U_{k})_{k\in\llbracket 0,T\rrbracket}\) is a stochastic process such that \(\forall k\in\llbracket 0,T-1\rrbracket\), \(U_{k}\) is valued in \(\mathbb{U}_{k}\subset\mathbb{R}^{m}\).
* \((\xi_{k})_{k\in\llbracket 0,T\rrbracket}\) is a stochastic process valued in \(\mathbb{R}^{d}\) which corresponds to the disturbances on the dynamics. We suppose that \(\forall k\in\llbracket 0,T-1\rrbracket\), \(\xi_{k}\sim p_{\xi_{k}}\), and that \(\xi_{k}\) is independent of \(\xi_{l}\) for \(k\neq l\) and of \(X_{0}\).
* \(\forall k\in\llbracket 0,T-1\rrbracket\), \(f_{k}:\mathbb{R}^{n}\times\mathbb{R}^{m}\times\mathbb{R}^{d}\longrightarrow \mathbb{R}^{n}\).
We also assume that the state of the system is available through some observations represented by a stochastic process \(\left(Z_{k}\right)_{k\in\llbracket 0,T\rrbracket}\) valued in \(\mathbb{R}^{p}\) which verifies, \(\forall k\in\llbracket 0,T\rrbracket\):
\[Z_{k}=h_{k}(X_{k},\eta_{k}), \tag{2}\]
where:
* \(\left(\eta_{k}\right)_{k\in\llbracket 0,T\rrbracket}\) is a stochastic process valued in \(\mathbb{R}^{q}\) which corresponds to the disturbances on the observations. We suppose that \(\forall k\in\llbracket 0,T\rrbracket\), \(\eta_{k}\sim p_{\eta_{k}}\) and that \(\eta_{k}\) is independent of \(\xi_{k}\), \(X_{0}\) and \(\eta_{l}\) for \(k\neq l\).
* \(\forall k\in\llbracket 0,T\rrbracket\), \(h_{k}:\mathbb{R}^{n}\times\mathbb{R}^{q}\longrightarrow\mathbb{R}^{p}\).
For \(k\in\llbracket 0,T\rrbracket\), we define the _information vector_\(I_{k}\) such as:
\[I_{0}=Z_{0},\qquad\qquad I_{k+1}=(I_{k},U_{k},Z_{k+1}). \tag{3}\]
#### I-A2 Presentation of our new optimization problem
As explained in [5] and [4], in stochastic control, one does not seek control _values_ like in deterministic control but _policies_ i.e. functions of a certain random variable. As \(I_{k}\) gathers all the data available for the controller, \(U_{k}\) will be looked for as a function of \(I_{k}\). Moreover, for the same reason about \(I_{k}\), any estimator of \(X_{k}\), denoted by \(\widehat{X}_{k}\), will also be looked for as a function of \(I_{k}\). Starting from this remark, \(\forall k\in\llbracket 0,T-1\rrbracket\), we define a _generalized_ control \(V_{k}=(U_{k},\widehat{X}_{k})\) and \(V_{T}=\widehat{X}_{T}\) that must verify:
\[V_{k} =(U_{k},\widehat{X}_{k})=(\mu_{k}(I_{k}),\pi_{k}(I_{k})), \tag{4}\] \[V_{T} =\widehat{X}_{T}=\pi_{T}(I_{T}),\]
where \(\mu_{k}\) maps an information vector \(I_{k}\) to a control \(U_{k}\) in the control space \(\mathbb{U}_{k}\) and \(\pi_{k}\) maps an information vector \(I_{k}\) to an estimator \(\widehat{X}_{k}\) in \(\mathbb{R}^{n}\). Thus, minimizing over \((V_{0},\ldots,V_{T})\) with the constraints (4) is equivalent to directly minimizing over \((\mu_{0},\ldots,\mu_{T-1})\) and \((\pi_{0},\ldots,\pi_{T})\). Finally, similarly to what is done in [8], we propose a stochastic optimization problem over the generalized control \(V_{k}\) that mixes control and state estimation. In addition, in our proposed approach, \((U_{0},\ldots,U_{T-1})\) and \((\widehat{X}_{0},\ldots,\widehat{X}_{T})\) are coupled, which means that the control \(U_{k}\) can influence the future estimators \((\widehat{X}_{k+1},\ldots,\widehat{X}_{T})\). In order to do this, we define generalized integral costs \(\left(\tilde{g}_{k}\right)_{k\in\llbracket 0,T-1\rrbracket}\) and a generalized final cost \(\tilde{g}_{T}\) such that, \(\forall k\in\llbracket 0,T-1\rrbracket\), each one can be decomposed into two terms as follows:
\[\tilde{g}_{k}(X_{k},V_{k},\xi_{k}) =g_{k}(X_{k},U_{k},\xi_{k})+f_{C}\left(C_{k}\right), \tag{5}\] \[\tilde{g}_{T}(X_{T},V_{T}) =g_{T}(X_{T})+f_{C}\left(C_{T}\right), \tag{6}\]
with
\[C_{k}=\mathbb{E}\left[\left(X_{k}-\widehat{X}_{k}\right)\left(X_{k}-\widehat{X}_{k}\right)^{T}\right], \tag{7}\]
where:
* \(\forall k\in\llbracket 0,T-1\rrbracket\), \(g_{k}\): \(\mathbb{R}^{n}\times\mathbb{R}^{m}\times\mathbb{R}^{d}\longrightarrow \mathbb{R}\) is a standard instantaneous cost and \(g_{T}\): \(\mathbb{R}^{n}\longrightarrow\mathbb{R}\) is a standard final cost. Here, standard means that these costs are a criterion of the system performance that we want to optimize in the first place, like a price or a distance for example.
* \(f_{C}:S_{n}^{++}(\mathbb{R})\longrightarrow\mathbb{R}\) is a cost on the covariance matrix of the estimator, defined in (7). \(f_{C}\) can be seen as a measure of the estimation error.
Therefore, minimizing the costs defined in (5) and (6) over \((V_{0},\ldots,V_{T})\) is equivalent to looking for a compromise between control and state estimation. With (1)-(7), we can define our generalized stochastic optimal control problem \((P_{CE})\) by:
\[\begin{array}{rcl}\underset{\mu_{0},\ldots,\pi_{T}}{\text{min}}&\mathbb{E} \left[\sum_{k=0}^{T-1}\tilde{g}_{k}(X_{k},V_{k},\xi_{k})+\tilde{g}_{T}(X_{T},V_{ T})\right]\\ \text{s.t.}&\forall k&\in\llbracket 0,T-1\rrbracket,\\ X_{k+1}&=&f_{k}(X_{k},U_{k},\xi_{k}),\\ Z_{k}&=&h_{k}(X_{k},\eta_{k}),\\ V_{k}&=&(\mu_{k}(I_{k}),\pi_{k}(I_{k})),\\ Z_{T}&=&h(X_{T},\eta_{T}),\\ V_{T}&=&\pi_{T}(I_{T}).\end{array}\]
With an appropriate choice of \(f_{C}\), the terms \(f_{C}\left(C_{k}\right)\) can force a coupling between \(U_{k-1}\) and \(\widehat{X}_{k}\), and in particular the control \(U_{k-1}\) can force the state \(X_{k}\) to reduce the error made by the estimator \(\widehat{X}_{k}\). Eventually, the sum of those terms creates a coupling between \(U_{k-1}\) and \((\widehat{X}_{k},\ldots,\widehat{X}_{T})\). Still, \((P_{CE})\) is computationally intractable because \((\pi_{0},\ldots,\pi_{T})\) and \((\mu_{0},\ldots,\mu_{T-1})\) are extremely hard to compute due to the curse of dimensionality. Moreover, if \(f_{C}\) is not linear, classical Dynamic Programming cannot be applied. In the following, we show, as in [8], that \((P_{CE})\) is a combination of two types of problems: a classical stochastic optimal control problem without state estimation and a sequence of state estimation problems with an a-priori-fixed control.
### _Link with classical stochastic optimal control_
If one chooses \(f_{C}\) to be constant then only remains the minimization over \((\mu_{0},\ldots,\mu_{T-1})\) and one recovers a stochastic optimal control problem with imperfect state information, denoted by \((P_{C})\):
\[(P_{C}): \underset{\mu_{0},\ldots,\mu_{T-1}}{\text{min}} \mathbb{E}\left[\sum_{k=0}^{T-1}g_{k}(X_{k},U_{k},\xi_{k})+g_{T}(X_{T })\right]\] \[\text{s.t.} X_{k+1} = f_{k}(X_{k},U_{k},\xi_{k}),\] \[Z_{k} = h_{k}(X_{k},\eta_{k}),\] \[U_{k} = \mu_{k}(I_{k}),\ \forall k\in\llbracket 0,T-1\rrbracket.\]
As shown in [5], the optimal policies of \((P_{C})\) can theoretically be found by solving the Bellman equation considering our problem \((P_{C})\) as a perfect state information problem where the new state is \(I_{k}\). If (1) and (2) are linear, and \(g_{k}\) and \(g_{T}\) are quadratic in both the state and the control, the optimal policy is linear and can be computed in closed form. However, in the non-linear case, as the dimension of \(I_{k}\) grows with time, \((P_{C})\) is very often intractable.
### _Link with state estimation_
If one supposes that \((\mu_{0},\ldots,\mu_{T-1})\) are constant, then only remains the minimization over \((\pi_{0},\ldots,\pi_{T})\) which gives a sequence of stochastic optimization problems, denoted by \((P_{E}^{k})_{k\in\llbracket 0,T\rrbracket}\) that correspond to state estimation problems. For \(\text{k}\in\llbracket 0,T\rrbracket\), \((P_{E}^{k})\) is defined by :
\[(P_{E}^{k}):\underset{\pi_{k}}{\text{min}} f_{C}\left(\mathbb{E}\left[(X_{k}-\widehat{X}_{k})(X_{k}-\widehat{X}_{k} )^{T}\right]\right)\] s.t. \[\widehat{X}_{k} = \pi_{k}(I_{k}).\]
If one chooses \(f_{C}(\cdot)=\text{tr}(\cdot)\) then:
\[f_{C}\left(\mathbb{E}\left[(X_{k}-\widehat{X}_{k})(X_{k}-\widehat{X}_{k})^{T} \right]\right)=\mathbb{E}\left[\left\|X_{k}-\widehat{X}_{k}\right\|_{2}^{2} \right],\]
and \((P_{E}^{k})\) becomes the optimal filtering problem described in [1] whose solution is known to be the conditional expectation of \(X_{k}\) with respect to \(I_{k}\) denoted by \(\mathbb{E}[X_{k}|I_{k}]\). If the equations (1) and (2) are linear with independent gaussian disturbances then \(\mathbb{E}[X_{k}|I_{k}]\) can be computed exactly thanks to the recursive equations of the Kalman filter. Otherwise, such exact equations do not exist and the problem becomes very hard. Contrary to the min-max problem described in [8], in our case, when we combine the problems \((P_{C})\) and \((P_{E}^{k})_{k\in\llbracket 0,T\rrbracket}\) to get \((P_{CE})\) the variables \((U_{0},\ldots,U_{T-1})\) and \((\widehat{X}_{0},\ldots,\widehat{X}_{T})\) are interestingly interdependent.
## II Tractable approximations of stochastic optimal control problems
The optimal policy of \((P_{CE})\), denoted by \((\mu_{0}^{*},\ldots,\mu_{T-1}^{*},\pi_{0}^{*},\ldots,\pi_{T}^{*})\), cannot be approximated directly by discretization of the information space because of its high dimension. Therefore, an approximation by a sub-optimal policy is proposed in this paper. First, before describing our approximation, we briefly recall a classification of stochastic control policies introduced in [3] and give some examples from existing methods. Then, we determine in which class our sub-optimal policy must be if we want to preserve the most important feature of \((P_{CE})\), that is to say, the coupling between \(U_{k-1}\) and \((\widehat{X}_{k},\ldots,\widehat{X}_{T})\). Secondly, we explain how our approximation of \((\mu_{0}^{*},\ldots,\mu_{T-1}^{*},\pi_{0}^{*},\ldots,\pi_{T}^{*})\), denoted by \((\mu_{0}^{F},\ldots,\mu_{T-1}^{F},\pi_{0}^{F},\ldots,\pi_{T}^{F})\), is computed.
### _Classification of existing policies_
In [3], four classes of stochastic control policies for fixed-end time are defined according to the quantity of information used and the level of anticipation of the future. These classes of policies are defined as follow:
* _Open Loop_ (OL) policies. In this case the control, \(U_{k}\) for \(k\in\llbracket 0,T-1\rrbracket\), depends only on the initial information \(I_{0}\), the knowledge of the dynamics (1) and of \((p_{\xi_{i}})_{\forall i\in\llbracket 0,T-1\rrbracket}\). The sequence is determined once and for all at time \(k=0\) and never adapts itself to the available information. An application to robust path planning is described in [6].
* _Feedback_ (F) policies. In this class, \(U_{k}\) depends on \(I_{k}\), the dynamics (1), \((p_{\xi_{i}})_{\forall i\in\llbracket 0,T-1\rrbracket}\), the observation equations (2) up to time k and \((p_{\eta_{i}})_{\forall i\in\llbracket 0,k\rrbracket}\). Unlike an OL-policy, an F-policy incorporates the currently available information but never anticipates the fact that observations will be available at instants strictly greater than \(k\). Many sub-optimal policies using _Model Predictive Control_ (MPC) combined with any estimator are F-policies because the fixed-horizon optimization problems are solved with the initial condition being the current state estimate. MPC is used with a particle filter in [13] and [12]. Particle filters were already used to approximate stochastic control problems in [2]. This type of policy is reviewed in [11]. In [8], a policy combining worst-case non-linear MPC and a _Moving Horizon Estimator_ (MHE) into one global min-max problem is discussed. Still, this policy does not explicitly include knowledge of future observations, so it remains an F-policy.
* _m-measurement feedback_ (m-MF). In this class, \(U_{k}\) depends on \(I_{k}\), the dynamics (1), \((p_{\xi_{i}})_{\forall i\in\llbracket 0,T-1\rrbracket}\), the observation equations (2) up to time \(k+m\) and \((p_{\eta_{i}})_{\forall i\in\llbracket 0,k+m\rrbracket}\) with \(m\leq T-k+1\). Similarly to F-controls, m-MF controls can adapt themselves to the current situation and also anticipate new observations up to \(m\) instants after \(k\). For example, Scenario-Based MPC [11] or Adaptive MPC [11] produce m-MF policies. These controllers are said to be dual because, besides guiding the system to its initial goal, they also force the system to gain information about itself through state or parameter estimation. Examples of scenario-based MPC are given in [14] and [10]. Another example of a dual controller using a particle filter and policy iterations is discussed in [4].
* _Closed Loop_ (CL) policies. In this class, \(U_{k}\) depends on \(I_{k}\), the dynamics (1), \((p_{\xi_{i}})_{\forall i\in\llbracket 0,T-1\rrbracket}\), the observation equations (2) up to time T and of \((p_{\eta_{i}})_{\forall i\in\llbracket 0,T\rrbracket}\). This class is the extension of the m-MF class up to the final time T. Optimal policies obtained from Dynamic Programming belong to this class because each policy obtained from the backward Bellman equation minimizes a instantaneous cost plus a cost-to-go including all the future possible observations. We also suppose that \((\mu_{0}^{*},\ldots,\mu_{T-1}^{*},\pi_{0}^{*},\ldots,\pi_{T}^{*})\) belong to this class even if \(f_{C}\) is not linear.
Considering this classification, our sub-optimal policy must belong at least to the m-MF class and ideally to the CL class. Indeed, the goal of our method is to get a control at
time \(k-1\), \(\hat{U}_{k-1}\), that reduces the estimation error made by \((\widehat{X}_{k},\ldots,\widehat{X}_{T})\). We also know from equations (3) and (4) that the estimators \((\widehat{X}_{k},\ldots,\widehat{X}_{T})\) depend on the variables \((Z_{0},\ldots,Z_{T})\). Besides, equations (1) and (2) show that the control \(U_{k-1}\) cannot modify \((Z_{0},\ldots,Z_{k-1})\) but only \(Z_{k}\) and, by recursion, the next observations \((Z_{k+1},\ldots,Z_{T})\). Thus, our design of the control must incorporate the evolution of future observations through equations (1) and (2). If we consider \((Z_{k+1},\ldots,Z_{T})\), then our sub-optimal policy is a CL policy. If we only include the evolution of \((Z_{k+1},\ldots,Z_{k+m})\) for \(m\leq T-k+1\), for computational reasons, then our policy is an m-MF one. In this paper, we describe a version of our method that belongs to the CL class.
### _Proposed particle approximation of the problem mixing control and state estimation_
Our approximation, \((\mu_{0}^{F},\ldots,\mu_{T-1}^{F},\pi_{0}^{F},\ldots,\pi_{T}^{F})\), is computed thanks to two separated ideas. First, we replace the term \(f_{C}\left(C_{k}\right)\) by a term depending only on \(X_{k}\) and \(U_{k}\) removing the minimization over \(\widehat{X}_{k}\). Then, we approach the new problem by a sequence of deterministic problems solved online with a technique similar to the one presented in [12].
#### II-B1 Fisher approximation
As said previously, \((\mu_{0}^{F},\ldots,\mu_{T-1}^{F},\pi_{0}^{F},\ldots,\pi_{T}^{F})\) must keep the coupling effect between the control and the state estimation. We recall from section I-A that the terms in the generalized cost that produce this effect are the terms \(f_{C}\left(C_{k}\right)\). These terms also introduce a minimization over \((\widehat{X}_{0},\ldots,\widehat{X}_{T})\) without any other constraint than being a function of \(I_{k}\), making \((P_{CE})\) impossible to approximate directly by several deterministic problems. The coupling disappears if one does this with an MPC-like technique without modifying the cost. Indeed, if one transforms \((P_{CE})\) into a deterministic problem by fixing, for example, the disturbances to their mean, or with a Monte Carlo approximation, then one does not look for policies anymore but for values, so the constraints (4) disappear. Then, \((\widehat{X}_{0},\ldots,\widehat{X}_{T})\) are unconstrained so, with, for instance, \(f_{C}(\cdot)=\mathrm{tr}(\cdot)\), one finds \(\forall k\in\llbracket 0,T\rrbracket\), \(X_{k}=\widehat{X}_{k}\). The computed value of \(\widehat{X}_{k}\) is useless and the interesting terms also disappear. To avoid this, we replace \(C_{k}\) by \(\left(J_{k}\right)^{-1}\) where \(J_{k}\) is the _Fisher Information Matrix (FIM)_, which only depends on the current and previous states and on the previous controls. Consequently, we have created a new stochastic optimization problem without optimization over \((\widehat{X}_{0},\ldots,\widehat{X}_{T})\). The new integral costs, denoted by \(\tilde{g}_{k}^{F}\), and final cost, denoted by \(\tilde{g}_{T}^{F}\), are then defined as follows, \(\forall k\in\llbracket 0,T-1\rrbracket\):
\[\tilde{g}_{k}^{F}(X_{k},U_{k},\xi_{k}) =g_{k}(X_{k},U_{k},\xi_{k})+f_{C}\left(\left(J_{k}\right)^{-1} \right), \tag{8}\] \[\tilde{g}_{T}^{F}(X_{T}) =g_{T}(X_{T})+f_{C}\left(\left(J_{T}\right)^{-1}\right), \tag{9}\]
where \(\left(J_{k}\right)_{k\in\llbracket 0,T\rrbracket}\) is the FIM computed recursively as in [15]. The new stochastic optimal control problem to solve is
\[(P_{CF}):\underset{\mu_{0}\ldots\mu_{T-1}}{\text{min}} \mathbb{E}\left[\sum_{k=0}^{T-1}\tilde{g}_{k}^{F}(X_{k},U_{k},\xi_ {k})+\tilde{g}_{T}^{F}(X_{T})\right]\] s.t. \[X_{k+1} = f_{k}(X_{k},U_{k},\xi_{k}),\] \[\begin{array}{rcl}Z_{k}&=&h_{k}(X_{k},\eta_{k}),\\ &U_{k}&=&\mu_{k}(I_{k}),\;\forall k\in\llbracket 0,T-1\rrbracket.\end{array}\]
As the estimators are not included in the optimization problem anymore, we suppose that some estimators are computed outside of \((P_{CF})\). Now, we have to justify why the coupling between the control and the state estimation still exists even though the estimators are no longer computed inside the optimization problem. We know from [15] that \(J_{k}\) is invertible and that, for any unbiased estimator \(\widehat{X}_{k}\) of \(X_{k}\), we have:
\[\mathbb{E}\left[\left(X_{k}-\widehat{X}_{k}\right)(X_{k}-\widehat{X}_{k})^{T} \right]\geq J_{k}^{-1},\]
where \(\geq\) corresponds to a positive semi-definite inequality. Moreover, let us assume that we choose an unbiased estimator \(\widehat{X}_{k}\) whose covariance matrix \(C_{k}\) tends to the inverse of the FIM when \(k\rightarrow\infty\). Then, if \(f_{C}\) is continuous, minimizing \(f_{C}\left(\left(J_{k}\right)^{-1}\right)\) is close to minimizing \(f_{C}(C_{k})\) after a certain time. Thus the optimal policy of \((P_{CF})\) gives a control that almost minimizes \(f_{C}(C_{k})\). In other words, the error made by the estimator \(\widehat{X}_{k}\) (in the sense of \(f_{C}\)) when estimating the optimal trajectory of \((P_{CF})\) is close to being minimal. Consequently, the coupling between \(U_{k-1}\) and \(\widehat{X}_{k}\) still exists even if \(\widehat{X}_{k}\) is removed from the optimization problem. This is also true for all the future estimators; one then recovers the full coupling between \(U_{k-1}\) and \((\widehat{X}_{k},\ldots,\widehat{X}_{T})\).
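For illustration, one step of a recursive Monte Carlo approximation of \(J_{k}\) in the special case of additive Gaussian noises could be sketched as follows; this follows the general posterior-FIM recursion of [15] under assumptions made here for concreteness (a constant dynamics Jacobian \(F\), a measurement Jacobian function \(H\), and noise precisions \(Q^{-1}\), \(R^{-1}\)), and the exact recursion used by the authors may differ.

```python
import numpy as np

def fim_step(J_k, particles, F, H_jac, Q_inv, R_inv):
    """One posterior-FIM recursion step, additive-Gaussian case:
    J_{k+1} = D22 - D21 (J_k + D11)^{-1} D12, with the measurement
    expectation approximated over predicted particles (shape (N, n))."""
    D11 = F.T @ Q_inv @ F
    D12 = -F.T @ Q_inv
    # E[H^T R^{-1} H] approximated by a particle average
    HRH = np.mean([H_jac(x).T @ R_inv @ H_jac(x) for x in particles], axis=0)
    D22 = Q_inv + HRH
    return D22 - D12.T @ np.linalg.inv(J_k + D11) @ D12
```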
#### II-B2 Particle approximation
The second idea consists in approximating \((P_{CF})\) by a Monte Carlo method. We use a set of particles and weights coming from a Particle Filter. Therefore, we suppose that, for \(l\in\llbracket 0,T-1\rrbracket\), the conditional density of \(X_{l}\) w.r.t. \(I_{l}\), denoted by \(p(X_{l}|I_{l})\), is represented by a set of N particles \(\left(\tilde{x}_{l}^{(i)}\right)_{i\in\llbracket 1,N\rrbracket}\) and weights \(\left(\omega_{l}^{(i)}\right)_{i\in\llbracket 1,N\rrbracket}\). This approximation is based on the fact that \(p(X_{l}|I_{l})\) is a sufficient statistic for classic problems with imperfect state information ([5, 4]), meaning that the policies can be considered as functions of \(p(X_{l}|I_{l})\) instead of \(I_{l}\). Moreover, for computational reasons, we only use the \(N_{s}<N\) most likely particles from the set \(\left(\tilde{x}_{l}^{(i)}\right)_{i\in\llbracket 1,N\rrbracket}\). We note that, in \((\tilde{P}_{CF}^{l})\), the FIM is approximated with a Monte Carlo method.
Our particle approximation of \((P_{CF})\) consists in solving a sequence of deterministic problems \((\tilde{P}_{CF}^{l})_{l\in\llbracket 0,T-1\rrbracket}\) defined by:
\[\underset{\begin{subarray}{c}u_{l}\cdots u_{T-1}\\ x_{l+1}^{(i)}\cdots x_{T}^{(i)}\end{subarray}}{\text{min}}\quad\sum_{i=1}^{N_{s}}\omega_{l}^{(i)}\left(\sum_{k=l}^{T-1}\tilde{g}_{k}^{F}\left(x_{k}^{(i)},u_{k},\xi_{k}^{(i)}\right)+\tilde{g}_{T}^{F}\left(x_{T}^{(i)}\right)\right)\] s.t. \[\forall k \in\llbracket l,T-1\rrbracket,\] \[\forall i \in\llbracket 1,N_{s}\rrbracket,\] \[\begin{array}{rcl}x_{l}^{(i)}&=&\tilde{x}_{l}^{(i)},\\ x_{k+1}^{(i)}&=&f_{k}(x_{k}^{(i)},u_{k},\xi_{k}^{(i)}).\end{array}\]
Finally, for \(l\in\llbracket 0,T-1\rrbracket\), we define our policy by:
\[\mu_{l}^{F}\left(p(X_{l}|I_{l})\right) =u_{l}^{*}, \tag{10}\] \[\pi_{l}^{F}(p(X_{l}|I_{l})) =\sum\nolimits_{i=1}^{N}\omega_{l}^{(i)}\tilde{x}_{l}^{(i)},\] (11) \[\pi_{T}^{F}(p(X_{T}|I_{T})) =\sum\nolimits_{i=1}^{N}\omega_{T}^{(i)}\tilde{x}_{T}^{(i)}. \tag{12}\]
Equality (10) means that we only apply the first optimal control found by solving \((\tilde{P}_{CF}^{l})\). Equality (11) and (12) mean that our estimator is \(\mathbb{E}[X_{k}|I_{k}]\) computed with a Monte Carlo method. Our feedback algorithm is summed up in Algorithm 1.
### _Application and Results_
#### II-C1 Description of our application
We applied this method to the guidance and localization of a drone by terrain-aided navigation. Our objective is to guide a drone in 3D from the unknown initial condition \(X_{0}\) to a target point \(x_{ta}\). To do so, we only measure the difference between the altitude of the drone and the altitude of the corresponding vertical point on the ground. More formally, at time \(k\), the state \(X_{k}\) is of dimension 6 and is denoted \(X_{k}=(x_{k}^{1},x_{k}^{2},x_{k}^{3},v_{k}^{1},v_{k}^{2},v_{k}^{3})\) where \((x_{k}^{1},x_{k}^{2},x_{k}^{3})\) stands for a 3D position and \((v_{k}^{1},v_{k}^{2},v_{k}^{3})\) for a 3D speed. We suppose that (1) is linear, i.e., \(\forall k\in\llbracket 0,T-1\rrbracket\):
\[X_{k+1}=FX_{k}+BU_{k}+\xi_{k}, \tag{13}\]

where \(F\) and \(B\) represent the discrete-time dynamics of a double integrator with a fixed time step \(dt\).
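The paper does not spell out \(F\) and \(B\); the sketch below assumes one standard zero-order-hold discretization of a double integrator, which is our choice for illustration.

```
import numpy as np

def double_integrator(dt, dim=3):
    # Discrete-time double integrator: state = (position, velocity).
    # One common zero-order-hold discretization, assumed here.
    I = np.eye(dim)
    Z = np.zeros((dim, dim))
    F = np.block([[I, dt * I], [Z, I]])        # position advances by dt * velocity
    B = np.vstack([0.5 * dt**2 * I, dt * I])   # acceleration input on position and velocity
    return F, B

F, B = double_integrator(dt=0.1)
print(F.shape, B.shape)  # (6, 6), (6, 3)
```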
To represent the observations made by the system we introduce \(h_{map}:\mathbb{R}\times\mathbb{R}\longrightarrow\mathbb{R}\), which maps a horizontal position \((x^{1},x^{2})\) to the corresponding height on a terrain map. We suppose that \(h_{map}\) is known, but as it is often constructed from empirical data coming from a real terrain, it is highly nonlinear. Then the observation equation (2) can be rewritten, \(\forall k\in\llbracket 0,T\rrbracket\):
\[Z_{k}=x_{k}^{3}-h_{map}(x_{k}^{1},x_{k}^{2})+\eta_{k}. \tag{14}\]
The challenge of this problem is to reconstruct a 6-dimensional state \(X_{k}\) and, in particular, the horizontal position of the drone \((x_{k}^{1},x_{k}^{2})\), from a 1-dimensional observation. The main issue is that (13) and (14) may not be observable, depending on the area the drone is flying over. Indeed, if the drone flies over a flat area, then one measurement of height on the map corresponds to a whole horizontal area, so the state estimation cannot be accurate. However, if the drone flies over a rough terrain, then one measurement of height matches a much smaller horizontal area and the state estimation can be more accurate. Therefore, the quantity that must be maximized is the gradient of \(h_{map}\). Actually, from [15], one can see that a quadratic term of this gradient appears in \(J_{k}\), so \(J_{k}\) contains useful information to maintain the coupling between control and state estimation, as predicted in the previous part.
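To illustrate (14) and the role of the terrain gradient, here is a small sketch with a hypothetical analytic \(h_{map}\) standing in for the empirical map (the terrain formula and all names are ours):

```
import numpy as np

# Hypothetical smooth terrain standing in for the empirical h_map.
def h_map(x1, x2):
    return 50.0 * np.sin(0.01 * x1) * np.cos(0.015 * x2)

def observe(state, rng):
    # Height-above-terrain measurement (14): z = x3 - h_map(x1, x2) + noise.
    x1, x2, x3 = state[:3]
    return x3 - h_map(x1, x2) + rng.normal(scale=1.0)

def terrain_gradient(x1, x2, eps=1e-3):
    # Finite-difference gradient of h_map; a large magnitude means the
    # measurement is informative about the horizontal position.
    gx = (h_map(x1 + eps, x2) - h_map(x1 - eps, x2)) / (2 * eps)
    gy = (h_map(x1, x2 + eps) - h_map(x1, x2 - eps)) / (2 * eps)
    return np.array([gx, gy])
```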
```
1:Create a sample of \(N\) particles \(\tilde{x}_{0}^{(i)}\) according to the law \(\mathcal{N}(m_{0},P_{0})\) and initialize the weights \(\omega_{0}^{(i)}\)
2:for\(l=0,\cdots,T-1\)do
3: Solve \((\tilde{P}_{CF}^{l})\) starting from the set \(\tilde{x}_{l}^{(i)}\) and the weights \(\omega_{l}^{(i)}\).
4: Get a sequence of optimal control \(u_{l}^{*},\cdots,u_{T-1}^{*}\).
5: Draw realizations of \(\xi_{l}\), denoted by \(\xi_{l}^{(i)}\).
6: Compute the _a priori_ set at time \(l\), \(\left(\tilde{x}_{l}^{(p,i)}\right)_{i\in\llbracket 1,N\rrbracket}\), applying the dynamics (1) with control \(u_{l}^{*}\) i.e: \(\tilde{x}_{l}^{(p,i)}=f_{k}(\tilde{x}_{l}^{(i)},u_{l}^{*},\xi_{l}^{(i)})\).
7: Get the new observation \(y_{l+1}\).
8: Compute the new weights \(\left(\omega_{l+1}^{(i)}\right)_{i\in\llbracket 1,N\rrbracket}\).
9: Compute the _a posteriori_ set \(\left(x_{l+1}^{(i)}\right)_{i\in\llbracket 1,N\rrbracket}\) by re-sampling the _a priori_ set \(\left(\tilde{x}_{l}^{(p,i)}\right)_{i\in\llbracket 1,N\rrbracket}\).
```
**Algorithm 1** Fisher Feedback Control
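A compact Python sketch of Algorithm 1; `solve_pcf`, `dynamics`, `likelihood` and the measurement sequence are user-supplied placeholders, since the paper does not fix a particular numerical solver for \((\tilde{P}_{CF}^{l})\):

```
import numpy as np

def fisher_feedback_control(x0_mean, x0_cov, T, N, solve_pcf,
                            dynamics, measurements, likelihood, rng):
    # Minimal sketch of Algorithm 1; solve_pcf abstracts a solver of (P~_CF^l).
    particles = rng.multivariate_normal(x0_mean, x0_cov, size=N)   # step 1
    weights = np.full(N, 1.0 / N)
    applied = []
    for l in range(T):
        u_seq = solve_pcf(particles, weights, l)       # steps 3-4
        u = u_seq[0]                                   # policy (10): apply the first control only
        applied.append(u)
        noise = rng.standard_normal(particles.shape)   # step 5: disturbance realizations
        particles = dynamics(particles, u, noise)      # step 6: a priori set
        y = measurements[l]                            # step 7: new observation
        weights = weights * likelihood(y, particles)   # step 8: reweight
        weights /= weights.sum()
        idx = rng.choice(N, size=N, p=weights)         # step 9: resample (a posteriori set)
        particles, weights = particles[idx], np.full(N, 1.0 / N)
    return applied
```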
The online goal of our method in this application is to estimate the state of the system and simultaneously design controls that force the drone to fly over rough terrain so that the future estimation error diminishes. We also want the system to be guided precisely to the target \(x_{ta}\), eventually. Without the state estimation improvement, we would like the drone to go in a straight line to the target, so we define the standard integral and final costs, \(\forall k\in\llbracket 0,T-1\rrbracket\), as follows:
\[g_{k}(X_{k},U_{k},\xi_{k}) =\alpha\|U_{k}\|_{2}^{2},\quad g_{T}(X_{T})=\gamma\|X_{T}-x_{ta} \|_{2}^{2},\]
where \(\alpha>0,\gamma>0\).

Fig. 1: Plot of one trajectory obtained by Fisher particle control and of the particles from the Particle Filter.

To generate the estimation improvement, we choose the coupling cost as follows:
\[f_{C}\left(\left(J_{k}\right)^{-1}\right)=\frac{\beta}{\operatorname{tr}(J_{k})}, \forall k\in\llbracket 0,T\rrbracket, \tag{15}\]
where \(\beta>0\). In Section I-C, we recalled that the natural cost would be \(f_{C}\left(\left(J_{k}\right)^{-1}\right)=\operatorname{tr}\left(\left(J_{k} \right)^{-1}\right)\). However, in order to avoid matrix inverses in the resolution of \((\tilde{P}_{CF}^{l})\), we rather chose the cost defined in (15), which has the same monotonicity as the natural one in the matrix sense. The parameters \((\alpha,\beta,\gamma)\) allow one to modify the behaviour of the system. If one wants to go faster to the target, one can increase \(\alpha\); on the contrary, if one can afford to lose time and wants a more precise estimation, then one can increase \(\beta\). We have so far applied our method only to an artificial analytical map, but the final desired application is to use it on real maps.
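Both the cost (15) and the natural cost decrease as the information matrix grows, which is the monotonicity used above; a toy numerical check (arbitrary matrices, our illustration):

```
import numpy as np

def coupling_cost(J, beta=1.0):
    # Cost (15): beta / tr(J_k), chosen to avoid matrix inverses.
    return beta / np.trace(J)

for J in (np.eye(2), 10.0 * np.eye(2)):   # little vs. more information
    print(coupling_cost(J), np.trace(np.linalg.inv(J)))
```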
#### Iv-B2 Results
Figure 1 represents a simulated trajectory (black) in 2D of our drone, computed with the controls found by Algorithm 1 for one realization of the initial condition and of the disturbances. The figure also shows the particles (red) used to estimate the state along the trajectory. One can see that the set of particles tightens around the black trajectory. Other simulations have shown that this is not the case with a straight trajectory. Figure 2 compares the _Root Mean Square Error_ (RMSE) in \(x^{1}\) and \(x^{2}\) in the case of straight trajectories (\(\beta=0\)) to the case of curved trajectories (large \(\beta\)) that create coupling, for 50 runs of our algorithm. One can see that making a detour over the hills greatly reduces the error made on the horizontal position of the drone compared to a standard trajectory designed to go as fast as possible to the target. One can remark that on our example map the RMSE in \(x^{1}\) increases in both cases at the end of the runs. This is due to the ambiguity of our artificial terrain. One can also remark that our method allows the drone to avoid flat areas, but not areas that would be non-flat and periodic.
## Conclusion
This paper considers a stochastic optimal control problem combining state estimation and standard control, designed to create a dual effect. As this problem is intractable, a new approximation of the optimal control policy based on the FIM and a Particle Filter is proposed. Numerical results are given and show the efficiency of the whole method compared to the one without dual effect. In future works, from a theoretical point of view, we would like to evaluate the error made by solving \((P_{CF})\) with a fixed estimator instead of \((P_{CE})\). From an application point of view, we would like to apply the method on real maps, implement it in a receding-horizon way, and use a better Particle Filter to decrease the number of particles needed and speed up the computations.
|
2306.09171 | How are the people in the photos judged? Analysis of brain activity when
assessing levels of trust and attractiveness | Trust is the foundation of every area of life. Without it, it is difficult to
build lasting relationships. Unfortunately, in recent years, trust has been
severely damaged by the spread of fake news and disinformation, which has
become a serious social problem. In addition to trust, the factor influencing
interpersonal relationships is perceived attractiveness, which is currently
created to a large extent by digital media. Understanding the principles of
judging others can be helpful in fighting prejudice and rebuilding trust in
society. One way to learn about people's choices is to record their brain
activity as they make choices. The article presents an experiment in which the
faces of different people were presented, and the participants' task was to
assess how much they can trust a given person and how attractive they are.
During the study, the EEG signal was recorded, which was used to build models
of logistic regression classifiers. In addition, the most active areas of the
brain that participate in the assessment of trust and attractiveness of the
face were indicated. | Bernadetta Bartosik, Grzegorz M. Wojcik, Andrzej Kawiak, Aneta Brzezicka | 2023-06-15T14:49:54Z | http://arxiv.org/abs/2306.09171v1 | How are the people in the photos judged? Analysis of brain activity when assessing levels of trust and attractiveness
###### Abstract
Trust is the foundation of every area of life. Without it, it is difficult to build lasting relationships. Unfortunately, in recent years, trust has been severely damaged by the spread of fake news and disinformation, which has become a serious social problem. In addition to trust, the factor influencing interpersonal relationships is perceived attractiveness, which is currently created to a large extent by digital media. Understanding the principles of judging others can be helpful in fighting prejudice and rebuilding trust in society. One way to learn about people's choices is to record their brain activity as they make choices. The article presents an experiment in which the faces of different people were presented, and the participants' task was to assess how much they can trust a given person and how attractive they are. During the study, the EEG signal was recorded, which was used to build models of logistic regression classifiers. In addition, the most active areas of the brain that participate in the assessment of trust and attractiveness of the face were indicated.
Keywords: attractiveness, trust, EEG, source localisation, logistic regression
## 1 Introduction
Since time immemorial, humans have been building first impressions about others based on appearance, and mainly on the face. In addition to basic information such as gender, age, emotions or ethnicity, more complex attributes such as intelligence, attractiveness, and trustworthiness can also be read from the face (Koscinski, 2007; Oosterhof and Todorov, 2009). The first judgment does not always reflect the true face of the other person, but in many situations the first impression can be decisive. One of the primary factors at play in social relationships is the relationship between trustworthiness and beauty. Following the stereotype that "what is beautiful is good", it has been proven that attractive people are more likely to be considered trustworthy (Shinners, 2009). Another important factor is the emotions expressed. Individuals who exude positive energy and whose faces show joy will be judged as more trustworthy compared to those expressing sadness or anger at a given time (Sutherland et al., 2017). Additionally, gender is a factor that can affect trust. According to research, women and people with feminine or childlike facial features are given higher trust (Zebrowitz et al., 2015).
In everyday life, trust in another person plays a very important role during interpersonal interactions. The simplest example can be found in various social groups, in which people with trustworthy faces gain more approval (Tracy et al., 2020). Similar correlations can be observed with respect to people with attractive faces to whom it is easier to get along
in a group and have greater support (Dion et al., 1972). The behavior of the business and financial world also correlates with the findings on trust. It has been shown that during trust games, participants are willing to invest more money if they play with a person who looks trustworthy (Van't Wout and Sanfey, 2008; Chang et al., 2010). Comparable observations apply to the online sales market. A seller who includes a trustworthy photo next to his listing is more likely to make a sale, even if he doesn't have reviews (Ert et al., 2016). Those who appear trustworthy are more likely to get a positive response to submitted credit applications (Duarte et al., 2012). Inferences drawn from facial appearance form the basis of social judgments. Offenders with untrustworthy faces experienced more severe punishments compared to those judged trustworthy (Wilson and Rule, 2015; Ancans and Austers, 2018). A similar relationship was indicated when evidence of the crime committed was insufficient (Porter et al., 2010). Attractive criminals have been shown to receive more lenient sentences compared to unattractive ones (Umukoro, Egwuonwu, 2014; Beaver et al., 2019).
The past few years have been quite a challenge for society. During the pandemic, all sorts of restrictions and obligations were imposed. One of them was the order to wear masks in public spaces. Partially covering the face makes it more difficult to read some of the visual information normally read from the face, and thus can lead to feelings of insecurity (Hall et al., 2007), which in turn can contribute to reduced levels of trust (Acar-Burkay et al., 2014). The results of the study did not show a decrease in trust in people wearing a face mask (Grundmann et al., 2021). There is even evidence indicating an increase in perceived trustworthiness towards strangers (Cartaud et al., 2020).
From a very young age, humans have the ability to recognize faces (De Heering, Rossion, Maurer, 2012; Jessen, Grossmann, 2019; Mondloch, Gerada, Proietti, Nelson, 2019), and the time needed to detect a face (depending on the situation) is just over 100 ms (Crouzet, Kirchner, Thorpe, 2010; Martin, Davis, Riesenhuber, Thorpe, 2018). It has been proven that the brain is capable of recognizing different features in stages. It first recognizes external appearance features, such as gender, and then analyzes complex features describing personality, attractiveness, or emotional state (di Oleggio Castello, Gobbini, 2015; Dobs, Isik, Pantazis, Kanwisher, 2019).
During face viewing, different areas of the brain, responsible for processing particular features, are activated. In credibility assessment tasks, elevated neuronal activity is most often recorded from the amygdala, the ventromedial prefrontal cortex and the anterior insula. The amygdala is considered one of the main areas associated with the analysis of emotional and social stimuli, including credibility assessment (Costafreda et al., 2008). For credible-looking faces, amygdala activity decreases, while it increases for unreliable faces (Haas et al., 2015). Damage to the amygdala results in impaired assessment of face credibility (Adolphs et al., 1998).
The issue of trust and attractiveness discussed has an impact on relationships between people and on decision-making. The subject of the study is the brain activity during the evaluation of the level of trust and attractiveness of people depicted in pictures. This article presents an experiment to identify the areas of the brain that are most active in the process of assessing the level of trust and attractiveness.
## 2 Design of the experiment
During the study, participants' brain activity was recorded while they answered questions about their trust in and the attractiveness of the people shown in the photos.
### Photo database
The experiment was prepared based on photos of the faces of men and women. Of the many sets, only those containing photos of real people were included. Artificially generated photo bases were omitted, as they can affect the level of reliability (Balas, Pacela, 2017). Photos were downloaded from online databases available for scientific research. The first is the Developmental Emotional Faces Stimulus Set (DEFSS) database (Meuwissen et al., 2017). The collection consists of 404 photos of men and women up to the age of 30. The photos show faces expressing various emotions and natural facial expressions. The photos were vetted for the emotions depicted before publication, which is an added benefit. The second dataset is the Multi-Racial Mega-Resolution (MR2) dataset (Strohminger et al., 2016). The MR2 database contains photos of the faces of 74 people who do not express emotions. The collection contains photos of people of different origins: a distinction can be made between persons of European, Asian and African descent. The photos have been verified by age, gender and race, among other factors.
For the purposes of the study, only photos showing neutral facial expressions that do not suggest any emotion were selected. One of the criteria for the selection of photos was to present the face from the front, so that the entire face is visible. Taking into account the indicated criteria, 100 photos of faces were selected. Maintaining gender balance, half of the photos present women's faces and the other half present men's faces. Figure 1 shows some of the selected images for the experimental base.
### Pilot study
During the pilot study (Bartosik et al., 2021), a survey was conducted in which participants rated the level of trust and attractiveness of the people in the photos. The survey was conducted using the aforementioned facial photos. As a result of statistical analysis, four groups of photos were identified: attractive and trustworthy, attractive and untrustworthy, unattractive and trustworthy, unattractive and untrustworthy. Each group consists of the six photos with the highest ratings; racial and gender balance was taken into account.
### Experiment
The task of the participants in the experiment was to indicate the level of confidence and attractiveness of the person depicted in the photo. To design the experiment, 26 photos were selected, which were extracted based on statistical analysis in a pilot study. The design work was carried out using OpenSesame software. Figure 2 shows the scheme of the experiment. The participant in the study went through the stages one by one, starting with a black screen, a fixation point, a question card and a card with the photo to be evaluated. The display time varied randomly between 100 and 1200 ms for the fixation point and between 150 and 20000 ms for the photo. The study participant answered two questions: "How much are you able to trust the person in the photo?" and "How attractive is the person in the photo?" using a five-point Likert scale, where a value of five means very much and a value of one means very little. The participant answered the questions for each picture presented.
### Participants in the experiment
The study involved 61 young people, students of computer science at the Maria Curie-Sklodowska University in Lublin. People aged 18-23 were invited. Registration for the study took place via an online form, in which head circumference and time availability had to be provided. To ensure the confidentiality of personal information, each participant was assigned a randomly generated ID number. In order to create the most representative research group, students were recruited according to specific criteria. It was assumed that right-handed people with short hair could participate in the study, because long hair causes more noise in the signal. Due to the low number of short-haired women in computer science, only men were recruited for the study. Only people without permanent or serious health problems within a year that would hinder the conduct of the study or affect the quality of the collected data could participate in the study. As an element of preparation for the study, participants were asked not to consume alcohol for at least three days before the planned study.
Figure 1: A collection of images included in the study.
### EEG signal recording
High-quality equipment distributed by Electrical Geodesics, Inc. (EGI) was used to record the EEG signal. The measurement laboratory was equipped with an amplifier that allows signal recording through 256 channels (HydroCel GSN 130 Geodesic Sensor Net) at a frequency of up to 500 Hz. Signal preprocessing, including removal of artifacts such as eye blinks and eye movements, was carried out using EGI system software scripts. The prepared EEG signal was subjected to further calculations. During the experiment, the signal from 256 electrodes was collected. The ERP signal was determined for each electrode. Based on the determined ERP signal and source localization techniques, the mean electrical charge (MEC) flowing through the Brodmann areas (BA) under the electrodes during the cognitive processing time interval (CPTI) was estimated. Using the determined ERP signals, a sLORETA source localization analysis was performed.
## 3 Results of the experiment
### Statistical method
Analysis of the collected data was carried out using a logistic regression model, which conveniently describes the relationship between a dichotomous dependent variable and the explanatory variables. Two questions were used in the experiment, on trust and on attractiveness, so two sets of data were obtained. Using the selected model, an analysis was performed for both sets. In the first set, the dependent variable is whether the research participant trusted or distrusted, and the descriptor variables are the different brain areas. In the second set of data, the describing variables remain the same; the dependent variable changes and takes two states: attractive, not attractive.
The logistic regression model was built in Python (version 3.9.9) using the scikit-learn library (version 1.2.2).
### Data preparation
Figure 2: The course of the experiment.

Before classifying the data, the data sets had to be prepared. The first step was the selection of the time interval. The average activity of brain areas over time was determined. The Desikan-Killiany atlas was used in the sets. Based on the time course of the signal, the sections with higher brain activity were selected. In both data sets, the highest brain activity was observed in the interval between 250 and 350 ms after the stimulus onset. The second data preparation step was to determine the average electrical charge over the obtained time intervals. The final set contained the MEC values in the time interval for each brain area. The next step was to divide the data into learning and test sets. Sixty subjects participated in the experiment. For each person, two events were determined for each dependent variable, that is, for the trust variable: trusted, distrusted, and for the attractiveness variable: attractive, unattractive. This yielded about 120 results for each dependent variable (in the case of the trust variable, several events were omitted due to insufficient quality). A standard 80:20 split was used, where 80% of the data formed the learning set and 20% the validation set. The final step was to reduce the number of independent variables so as to pinpoint the most important decision-making areas in the brain. The Desikan-Killiany atlas divides the brain into 34 regions, and each region is split between the two hemispheres, so the data initially contained 68 descriptor variables. Optimization of the number of variables was carried out using the Recursive Feature Elimination with Cross-Validation (RFECV) method. This is an algorithm that recursively eliminates the least significant features, using cross-validation to select the best subset.
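A minimal sketch of this pipeline with scikit-learn on synthetic placeholder data; the real features are the MEC values per brain region, and all variable names below are ours:

```
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import RFECV
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 68))     # 68 Desikan-Killiany regions (placeholder MEC values)
y = rng.integers(0, 2, size=120)   # placeholder labels: trusted / distrusted

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Recursive feature elimination with cross-validation around logistic regression.
selector = RFECV(LogisticRegression(max_iter=1000), cv=5)
selector.fit(X_tr, y_tr)
print("selected regions:", selector.n_features_)

y_pred = selector.predict(X_te)
print(accuracy_score(y_te, y_pred), precision_score(y_te, y_pred),
      recall_score(y_te, y_pred), f1_score(y_te, y_pred))
```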
### Data classification
In the process of eliminating descriptive variables, it was possible to identify the most relevant brain areas influencing the participant's decision. In the case of the model based on the dependent variable "trust", from the original 68 detailed areas distinguishing the right and left hemispheres (34 general areas), it was possible to reduce the number of variables to 10 detailed areas: bankssts L, bankssts R, frontalpole L, fusiform R, lateral orbitofrontal R, medial orbitofrontal L, medial orbitofrontal R, middle temporal L, pars opercularis, rostral anterior cingulate (Figure 3).
Based on these 10 areas, a classifier was built that makes it possible to predict the participant's decision with satisfactory efficiency. The accuracy of the obtained model is 0.78. Below (Figures 4 and 5) are graphs describing the model for the "trust" variable. The classification quality measures presented in Table 1 confirm that a good-quality classifier was successfully built to predict trust ratings based on brain activity.
Figure 3: The most significant brain areas for the “Trust” model.
Table 1: Classification measures that represent classifier quality for trust

| Accuracy | Precision | Recall | F1-score |
| --- | --- | --- | --- |
| 0.78 | 0.92 | 0.73 | 0.81 |
Figure 4: ROC curve for the ”Trust” model.
Figure 5: Confusion matrix for the ”Trust” model.
During feature elimination for the model based on the dependent variable "attractiveness", it was possible to extract 8 detailed areas: bankssts R, cuneus R, entorhinal L, fusiform L, inferior parietal R, inferior temporal L, lateral occipital L, supramarginal R (8 general areas: bankssts, cuneus, entorhinal, fusiform, inferior parietal, inferior temporal, lateral occipital, supramarginal) (Figure 6).
The classifier, built on the basis of the most relevant descriptive variables, obtained an accuracy of 0.76. Figures 7 and 8 show the characteristics of the built model. The classification quality measures presented in Table 2 confirm that a good-quality classifier was successfully built to predict attractiveness ratings based on brain activity.
Table 2: Classification measures that represent classifier quality for attractiveness

| Accuracy | Precision | Recall | F1-score |
| --- | --- | --- | --- |
| 0.78 | 0.92 | 0.73 | 0.81 |
Figure 6: The most significant brain areas for the ”Attractiveness” model.
In the process of data preparation, the most significant descriptor variables were determined for each dependent variable. Based on these variables, it can be determined which brain regions are most active during each decision. The bankssts and fusiform regions appear in both models, indicating that for both trust and attraction decisions, these are active areas.
## 4 Discussion
The most relevant brain regions that are involved in decision-making were verified. The study included the collection of information on trust and attraction, qualities that come into play on a daily basis and often determine the success of a continuing relationship. Stimulus presentation activates different areas of the brain to process the information. In the experiment, participants were stimulated by displaying pictures of men and women. The participant's task was to indicate the level of confidence in the depicted face and determine whether the face was attractive or not. The recorded brain activity was used to build classifiers predicting confidence ratings and attractiveness ratings. The feature elimination algorithm listed the brain regions of greatest importance during stimulus analysis for each of the two dependent variables. The bankssts, frontalpole, fusiform, lateral orbitofrontal, medial orbitofrontal, middle temporal, pars opercularis, and rostral anterior cingulate regions have the greatest influence on the trusted/untrusted classification. The attractive/unattractive classification was based on the bankssts, cuneus, entorhinal, fusiform, inferior parietal, inferior temporal, lateral occipital, and supramarginal regions. Most of these regions are responsible for processing social signals, processing faces, and evaluating emotions.

Figure 7: ROC curve for the "Attractiveness" model.

Figure 8: Confusion matrix for the "Attractiveness" model.
Bankssts is an area encompassing the superior temporal sulcus (STS), which is responsible for, among other things, processing social signals such as faces, credibility, and intentions (Ethofer et al., 2006). The fusiform area, also known as the fusiform gyrus, is a region of the brain located in the temporal lobe, near the occipital lobe. It is primarily responsible for visual processing and recognition of faces and other complex visual stimuli such as objects, animals, and words (Rangarajan et al., 2014). The occipital area, also known as the occipital lobe, is a region of the brain located at the back of the head, behind the parietal and temporal lobes. It is primarily responsible for visual processing and perception, including the interpretation of color, shape, movement, and depth (Nagy et al., 2012). The first two areas are among the variables describing the "trust" model, while the last two are among the variables of the "attractiveness" model. All of these areas are involved in the processing of visual stimuli. According to Haxby et al., the STS, OFA (occipital face area) and FFA (fusiform face area) areas, which are part of the above-described regions, constitute the perceptual face processing system.
The frontal pole, also known as the rostral prefrontal cortex, is a region of the brain located in the front of the frontal lobe, at the very top of the brain. It is believed to play a key role in executive functions, such as planning, decision-making, working memory. Damage to the vmPFC area, which is included in this region, results in deficits in social function (Moretti et al., 2009). The middle temporal area, also known as the middle temporal gyrus, is a region of the brain located in the temporal lobe, just above the fusiform gyrus. It is primarily responsible for visual motion processing and object recognition. The inferior temporal region, also known as the inferior temporal lobe, is an area of the brain located in the temporal lobe, below the medial temporal lobe. It is primarily responsible for high-level visual processing, including object and face recognition (Perrett et al., 1982), categorization and visual memory. The medial orbitofrontal cortex, also known as the medial prefrontal cortex, is a region of the brain located in the frontal lobe, just above the eyes. It is involved in a variety of cognitive and emotional processes, including decision-making, reward processing, social behavior, and emotion regulation. The rostral anterior cingulate cortex, also known as the dorsal anterior cingulate cortex, is a region of the brain located in the frontal lobe, just behind the medial prefrontal cortex. It is involved in a variety of cognitive and emotional processes, including attentional control, conflict monitoring, decision-making, and emotion regulation. The supramarginal gyrus is a part of the parietal lobe of the brain located in the posterior portion of the lateral sulcus, also known as the Sylvian fissure. It participates in social cognition, including emotion recognition and empathy, but this is not its main function. The cuneus is a brain region located in the occipital lobe, which is situated at the back of the brain. The cuneus plays an important role in visual processing, particularly in the processing of visual information from the eyes. It is involved in the early stages of visual processing, such as the recognition of basic features like lines and edges, and the detection of visual motion.
## 5 Conclusions
The above study documents that the use of source localization algorithms (sLORETA) and machine learning classifiers makes it possible to predict trust ratings and attractiveness ratings with fairly high accuracy. It was verified which areas are significant depending on the dependent variable. A previous study (Bartosik et al., 2021) focused on presenting the most important personality traits in the process of trust and attractiveness ratings. In the future, it is planned to test whether and which personality traits influence decision-making based on brain activity.
|
2301.12290 | Shot-down stable processes | The shot-down process is a strong Markov process which is annihilated, or
shot down, when jumping over or to the complement of a given open subset of a
vector space. Due to specific features of the shot-down time, such processes
suggest new type of boundary conditions for nonlocal differential equations. In
this work we construct the shot-down process for the fractional Laplacian in
Euclidean space. For smooth bounded sets $D$, we study its transition density
and characterize Dirichlet form. We show that the corresponding Green function
is comparable to that of the fractional Laplacian with Dirichlet conditions on
$D$. However, for nonconvex $D$, the transition density of the shot-down stable
process is incomparable with the Dirichlet heat kernel of the fractional
Laplacian for $D$. Furthermore, Harnack inequality in general fails for
harmonic functions of the shot-down process. | Krzysztof Bogdan, Kajetan Jastrzȩbski, Moritz Kassmann, Michał Kijaczko, Paweł Popławski | 2023-01-28T20:08:30Z | http://arxiv.org/abs/2301.12290v1 | # Shot-down stable processes
###### Abstract.
The shot-down process is a strong Markov process which is annihilated, or shot down, when _jumping over_ or to the complement of a given open subset of a vector space. Due to specific features of the shot-down time, such processes suggest new type of boundary conditions for nonlocal differential equations. In this work we construct the shot-down process for the fractional Laplacian in Euclidean space. For smooth bounded sets \(D\), we study its transition density and characterize Dirichlet form. We show that the corresponding Green function is comparable to that of the fractional Laplacian with Dirichlet conditions on \(D\). However, for nonconvex \(D\), the transition density of the shot-down stable process is incomparable with the Dirichlet heat kernel of the fractional Laplacian for \(D\). Furthermore, Harnack inequality in general fails for harmonic functions of the shot-down process.
Key words and phrases: shot-down process, fractional Laplacian, Dirichlet form, Green function, Harnack inequality. 2010 Mathematics Subject Classification: 35R09, 31C25 (primary), 60J35, 60J75 (secondary).
_Data sharing:_ not applicable as no data were generated or analysed during the study. K. Bogdan was partially supported by NCN grant 2017/27/B/ST1/01339. P. Popławski and M. Kijaczko were partially supported by the NCN grant 2014/14/M/ST1/00600. M. Kassmann gratefully acknowledges support by (a) the Foundation for Polish Science in the form of an Alexander von Humboldt Polish Honorary Research Fellowship and (b) the German Science Foundation via CRC 1283.
## Introduction
Let \(D\) be a nonempty open subset of \(\mathbb{R}^{d}\). The study of diffusions in \(D\) naturally leads to the question what happens when the process reaches the boundary, \(\partial D\). There are various options and nomenclatures, including _killing_, _absorption_, _censoring_, _resetting_, _resurrection_, _reflection_, _diffusion along the boundary_, etc., which lead to diversified boundary value problems. The situation is similar for general Markov (jump) processes \(X=(X(t),t\geq 0)\). Analysis of these problems has generated a lot of research in partial differential equations, potential theory and stochastic analysis, see, e.g., [29]. The simplest option is to _kill_ the Markov process at the first exit time of \(D\),
\[\tau_{D}:=\inf\,\{t>0:X(t)\in D^{c}\},\]
where \(D^{c}=\mathbb{R}^{d}\setminus D\). The _killed process_ on \(D\) is then defined by
\[X^{D}(t):=\begin{cases}X(t)&\text{for }t\in[0,\tau_{D}),\\ \partial&\text{for }t\in[\tau_{D},\infty).\end{cases}\]
Here \(\partial\) is an arbitrary isolated point attached to \(\mathbb{R}^{d}\), called _cemetery_.
The aim of this work is to introduce and to study a new variant of killing. To this end we define the _shot-down time_,
\[\sigma_{D}=\inf\,\{t>0:[X(t^{-}),X(t)]\cap D^{c}\neq\emptyset\}\,.\]
Here, as usual, \(X(t^{-})=\lim_{s\to t^{-}}X_{s}\) and \([v,w]\) is the line segment between \(v\in\mathbb{R}^{d}\) and \(w\in\mathbb{R}^{d}\). Trivially, \(\sigma_{B}\leq\sigma_{D}\) if \(B\subset D\), \(\sigma_{D}\leq\tau_{D}\), and \(\sigma_{D}=\tau_{D}\) if \(D\) is convex. Further, if \(D^{\prime}\) is a connected component of \(D\) containing \(X_{0}\), then \(\sigma_{D}=\sigma_{D^{\prime}}\), therefore below we sometimes assume that \(D\) is a _domain_, i.e., a nonempty connected open subset of \(\mathbb{R}^{d}\). Given \(x\in D\), we define \(D_{x}=\{y\in D:[x,y]\subset D\}\). Only jumps from \(x\in D\) to \(D_{x}\) are possible without shooting-down.
Analogous to the killed process \(X^{D}\), the _shot-down process_\(\hat{X}^{D}\) is defined by
\[\hat{X}^{D}(t):=\begin{cases}X(t)&\text{for }t\in[0,\sigma_{D}),\\ \partial&\text{for }t\in[\sigma_{D},\infty).\end{cases}\]
Figure 1. \(D\) is an annulus in \(\mathbb{R}^{2}\). The process \(X\) is shot down when \([X(t^{-}),X(t)]\) intersects the complement of \(D\).
In particular, \(\hat{X}^{D}(t)\) can neither visit \(D^{c}\) nor \(D\setminus D_{X(t-)}\), for \(t\geq 0\).
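For intuition, the visibility condition \(y\in D_{x}\) and the shot-down rule can be checked numerically. The sketch below uses the annulus of Figure 1 and a discrete path; it only illustrates the definitions and is not used in any proof.

```
import numpy as np

def in_D(p, r_in=1.0, r_out=2.0):
    # The annulus D of Figure 1.
    r = np.linalg.norm(p)
    return r_in < r < r_out

def visible(x, y, m=200):
    # Numerical check of y in D_x, i.e. [x, y] contained in D, by sampling the segment.
    for t in np.linspace(0.0, 1.0, m):
        if not in_D((1 - t) * np.asarray(x) + t * np.asarray(y)):
            return False
    return True

def shot_down_index(path):
    # First step k at which [path[k-1], path[k]] meets the complement of D:
    # a discrete stand-in for the shot-down time sigma_D.
    for k in range(1, len(path)):
        if not visible(path[k - 1], path[k]):
            return k
    return None

# Both endpoints lie in D, yet the jump crosses the hole of the annulus:
print(visible([1.5, 0.0], [-1.5, 0.0]))  # False: such a jump shoots the process down
print(visible([1.5, 0.0], [1.2, 0.8]))   # True: this segment stays in D
```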
So far we have not specified to which jump processes \(X\) we apply the shooting-down procedure. In principle the construction is very general, but in this work we focus on the isotropic \(\alpha\)-stable Levy process [27]. The process is associated with the fractional Laplacian \(\Delta^{\alpha/2}:=-(-\Delta)^{\alpha/2}\) as generator, see, e.g., [26]. The resulting shot-down process will sometimes be called _shot-down stable_. For many results, we also restrict ourselves to bounded smooth (\(C^{1,1}\)) sets \(D\), see Definition 1.1. We leave largely open the case of less regular open sets, even Lipschitz, and more general Markov processes killed at the shot-down time.
The paper is organized as follows. In the Section 1 we present necessary definitions and elementary facts concerning the shot-down process. In Section 2 we define the heat kernel of the process and discuss the relationship between the heat kernels of the killed process and the shot-down process. In Section 3 we compare the killing measures of the two processes. In Section 4 we compute the corresponding Dirichlet form. Sharp estimates of the Green function of the shot-down process are provided in Section 5. A counterexample for the Harnack inequality in Section 6 ends the paper.
Let us describe the results in more detail. One important aim of our work is to study the heat kernel \(\hat{p}_{D}\) of the shot-down process. To this end, in Theorem 1.6 we first assert that \(\sigma_{D}\) is a stopping time for general open sets \(D\). In Theorem 2.8 we show that \(\hat{p}_{D}\) is symmetric in the space variables. In Theorem 2.21 we prove for bounded smooth \(D\) that \(\hat{p}_{D}\) is comparable to the heat kernel \(p_{D}\) of the killed process if and only if \(D\) is convex.
Then we study the quadratic form of the shot-down process for bounded \(C^{1,1}\) domains \(D\). For this, we first estimate the intensity of shooting down. Recall that, given \(x\in D\), \(D_{x}\) consists of all the points where the process can possibly jump from \(x\) without being shot down. Denote
\[\nu(z)=\mathcal{A}_{d,\alpha}|z|^{-d-\alpha}\,,\quad z\in\mathbb{R}^{d}\,, \tag{1}\]
where \(\mathcal{A}_{d,\alpha}=2^{\alpha}\Gamma((d+\alpha)/2)\pi^{-d/2}/|\Gamma(- \alpha/2)|\). Since \(\nu(y-x)\), \(x,y\in\mathbb{R}^{d}\), is the jump intensity of the process \(X\) from \(x\) to \(y\), the shooting-down intensity for \(D\) is defined by
\[\iota_{D}(x):=\int\limits_{\mathbb{R}^{d}\setminus D_{x}}\nu(y-x)\,dy,\,\,\,x \in D.\]
In Theorem 3.3 we prove a bound for the difference between the shooting-down intensity and the standard killing intensity for \(D\),
\[\kappa_{D}(x):=\int\limits_{\mathbb{R}^{d}\setminus D}\nu(y-x)\,dy,\,\,\,x \in D.\]
The result is used to establish a representation of the corresponding Dirichlet form in Theorem 4.2 and to give a precise description of the domain of the form.
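For illustration, \(\iota_{D}(x)\) and \(\kappa_{D}(x)\) can be estimated by Monte Carlo. The sketch below treats the planar annulus \(D=\{1<|y|<2\}\) and samples jumps from the normalized Levy density outside a small ball \(B(x,r_{0})\); since \(B(x,r_{0})\subset D_{x}\) when \(r_{0}<\operatorname{dist}(x,\partial D)\), the omitted short jumps contribute nothing to either intensity. This is our numerical illustration, not part of the proofs.

```
import numpy as np
from scipy.special import gamma

def seg_in_D(x, y, m=64):
    # Does the segment [x, y] stay inside the annulus D = {1 < |p| < 2}?
    t = np.linspace(0.0, 1.0, m)[:, None]
    p = (1 - t) * np.asarray(x) + t * np.asarray(y)
    r = np.linalg.norm(p, axis=1)
    return bool(np.all((r > 1.0) & (r < 2.0)))

def intensities(x, alpha=1.0, r0=0.1, n=20_000, seed=0):
    # Monte Carlo estimates of (iota_D(x), kappa_D(x)) for d = 2;
    # valid as long as r0 < dist(x, boundary of D).
    rng = np.random.default_rng(seed)
    A = 2**alpha * gamma((2 + alpha) / 2) / (np.pi * abs(gamma(-alpha / 2)))  # A_{2,alpha} in (1)
    r = r0 * rng.uniform(size=n) ** (-1.0 / alpha)   # Pareto radii, density ~ r^(-1-alpha)
    th = rng.uniform(0.0, 2 * np.pi, size=n)
    y = np.asarray(x) + np.column_stack([r * np.cos(th), r * np.sin(th)])
    ry = np.linalg.norm(y, axis=1)
    in_D = (ry > 1.0) & (ry < 2.0)
    in_Dx = np.array([seg_in_D(x, yi) if ok else False for yi, ok in zip(y, in_D)])
    mass = 2 * np.pi * A / (alpha * r0**alpha)       # nu-mass of {|z| > r0}
    return mass * np.mean(~in_Dx), mass * np.mean(~in_D)

iota, kappa = intensities([1.5, 0.0])
print(iota, kappa)   # iota >= kappa: the gap is the intensity of "invisible" jumps within D
```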
We then move on to the Green function. Here the shot-down stable process behaves like the killed stable process and the two respective Green functions are comparable for every bounded smooth domain \(D\), which is proved in Theorem 5.10. As mentioned
before, such comparability in general fails for the corresponding heat kernels (Theorem 2.21); yet both results build on the study of \(\hat{p}_{D}\) in Section 2. The elliptic Harnack inequality, too, turns out to be sensitive to the shooting-down mechanism--we show in Section 6 how Harnack inequality may fail for positive harmonic functions of the shot-down process. Given the comparability of the Green functions, this is a rather unexpected result, which may be attributed to irregularity of the mapping \(x\mapsto D_{x}\).
As a motivation, we mention that there is considerable interest in applied sciences on modeling physical motion by _Levy flights_. For some microscopic and macroscopic objects, e.g., bacteria or sharks, it is observed that they migrate by trajectories that have straight stretches. The notion of shot-down processes should be relevant for such studies. To our best knowledge, the mathematical model of shot-down processes is new, but a related concept of _visibility constrained jump process_ was introduced in [24]. As in our work, the visibility constrained jump process can jump from \(x\in D\) only to \(D_{x}\subset D\). However, the emphasis of [24] is on functional inequalities rather than stochastic processes and the process is _censored_ rather then killed, should the line segment \([X(t^{-}),X(t)]\) intersect the complement of \(D\). Namely, the process in [24] is _restarted_ at \(X(t-)\), so the _censoring_ is meant in the sense of [8].
Concluding the Introduction, we like to mention various approaches in literature to define _reflection_ from \(D^{c}\) for jump processes and nonlocal operators, because their geometric setting, namely, the essential use of both \(X(t^{-})\) and \(X(t)\), inspired our study of the shot-down processes. Here we refer the reader to [3], [18], [30] and [11]. In particular, the Introduction of [11] gives a general perspective on reflections, _Neumann conditions_ and _concatenation_ of Markov processes.
## 1. Preliminaries
In what follows \(\mathbb{R}^{d}\) is the Euclidean space of dimension \(d\geq 1\), \(x\cdot y\) is the Euclidean scalar product of \(x,y\in\mathbb{R}^{d}\), and \(|y|\) is the length of \(y\). For \(x\in\mathbb{R}^{d}\) and \(A,B\subset\mathbb{R}^{d}\), we let \(\operatorname{dist}(x,A)=\inf\{|x-y|:y\in A\}\) and \(\operatorname{dist}(A,B)=\inf\{|y-z|:y\in A,z\in B\}\). We define \(A^{c}=\mathbb{R}^{d}\setminus A\) and, for \(r>0\), \(B_{r}(x)=B(x,r)=\{z\in\mathbb{R}^{d}:|z-x|<r\}\).
**Definition 1.1**.: _An open set \(D\subset\mathbb{R}^{d}\) is \(C^{1,1}\) at scale \(r>0\) if for every \(Q\in\partial D\) there are balls \(I=B(x^{\prime},r)\subset D\) and \(O=B(x^{\prime\prime},r)\subset D^{c}\) tangent at \(Q\). We call \(I\) and \(O\) the inner and outer ball, respectively and we call \(r\) localization radius._
A bounded open set \(D\subset\mathbb{R}^{d}\) is \(C^{1,1}\) if and only if its boundary can be represented locally as the graph of a function whose gradient is Lipschitz continuous [1, Def. 1.1 and Lemma 2.2]. Similarly, an open set \(D\subset\mathbb{R}^{d}\) is Lipschitz if its boundary is locally equal to the graph of a Lipschitz function, etc.
All our functions, measures and sets are Borel either by construction or assumptions. As usual, \(dy\), \(dx\) etc. stand for the Lebesgue measure on \(\mathbb{R}^{d}\), and the considered integrals are assumed to be well defined, to wit, nonnegative or absolutely convergent.
Let \(0<\alpha<2\) and let \(X=(X_{t},t\geq 0)\) be the standard isotropic \(\alpha\)-stable Levy process in \(\mathbb{R}^{d}\)[27, 26]. In particular, \(X\) is right continuous with left limits. The process is determined by the jump (Levy) measure with the density function (1). The constant
in (1) is so chosen that
\[\int\limits_{\mathbb{R}^{d}}[1-\cos(\xi\cdot z)]\nu(z)\,dz=|\xi|^{\alpha},\quad \xi\in\mathbb{R}^{d}.\]
For every \(t>0\), we consider the continuous probability density \(p_{t}:\mathbb{R}^{d}\to(0,\infty)\) with
\[\int\limits_{\mathbb{R}^{d}}p_{t}(x)e^{ix\cdot\xi}\,dx=e^{-t|\xi|^{\alpha}}, \quad\xi\in\mathbb{R}^{d}.\]
Namely, we let
\[p_{t}(x)=(2\pi)^{-d}\int\limits_{\mathbb{R}^{d}}e^{-ix\cdot\xi}e^{-t|\xi|^{ \alpha}}\,d\xi,\quad t>0,\ x\in\mathbb{R}^{d}.\]
The function has the _scaling_ property:
\[p_{t}(x)=t^{-d/\alpha}p_{1}(t^{-1/\alpha}x),\quad\text{ or }\quad p_{r^{\alpha}t} (rx)=r^{-d}p_{t}(x),\quad t>0,\ r>0,\ x\in\mathbb{R}^{d}.\]
According to the Levy-Khinchine formula, the measures \(p_{t}(x)dx\) form a probability convolution semigroup on \(\mathbb{R}^{d}\), and \(p_{t}(x)dx\) is the distribution of \(X_{t}\). It is well known that for \(\alpha=1\),
\[p_{t}(x)=\frac{C_{d}t}{(t^{2}+|x|^{2})^{(d+1)/2}},\quad t>0,\ x\in\mathbb{R}^ {d},\]
where \(C_{d}=\Gamma((d+1)/2)/\pi^{(d+1)/2}\). It is also well known that
\[p_{t}(x)\approx t^{-d/\alpha}\wedge\frac{t}{|x|^{d+\alpha}},\quad t>0,\ x\in \mathbb{R}^{d}. \tag{2}\]
Here \(\wedge\) stands for the minimum and (2) means that there is a number \(c\in(0,\infty)\) (i.e., a _constant_, depending on \(d\) and \(\alpha\)) such that \(c^{-1}p_{t}(x)\leq t^{-d/\alpha}\wedge\frac{t}{|x|^{d+\alpha}}\leq cp_{t}(x)\). This notation for _comparability_ is used throughout the paper. We denote
\[p(t,x,y)=p_{t}(x,y)=p_{t}(y-x).\]
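In the explicit case \(d=1\), \(\alpha=1\), the two-sided bound (2) can be sanity-checked numerically (an illustration only):

```
import numpy as np

# Cauchy kernel p_t(x) = t / (pi (t^2 + x^2)) against the bound min(t^{-1}, t |x|^{-2}).
t = np.logspace(-3, 3, 61)[:, None]
x = np.logspace(-3, 3, 61)[None, :]
p = t / (np.pi * (t**2 + x**2))
bound = np.minimum(1.0 / t, t / x**2)
ratio = p / bound
print(ratio.min(), ratio.max())  # stays between 1/(2 pi) and 1/pi
```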
The process \(X_{t}\) is Markovian with the time-homogeneous transition probability
\[(t,x,A)\mapsto\int\limits_{A}p(t,x,y)\,dy,\quad t>0,\,x\in\mathbb{R}^{d},A \subset\mathbb{R}^{d}.\]
We let \(\mathbb{P}_{x}\) and \(\mathbb{E}_{x}\) be the law and expectation for the process starting at \(x\in\mathbb{R}^{d}\), respectively. We consider the operator semigroup \(\{P_{t},t\geq 0\}\) on bounded or nonnegative function \(f\). Thus, for \(x\in\mathbb{R}^{d}\), \(P_{t}f(x)=\mathbb{E}_{x}f(X_{t})\) and
\[P_{t}f(x)=\int\limits_{\mathbb{R}^{d}}p(t,x,y)f(y)\,dy,\quad t>0,\]
see [28, Section 2] for a direct construction of the semigroup. We will be interested in semigroups obtained by _killing_\(X\) at suitable stopping times, chiefly at the shot-down time for open sets \(D\subset\mathbb{R}^{d}\). Recall that \(X\) is defined on the measurable space \(\Omega\) of cadlag paths: \([0,\infty)\to\mathbb{R}^{d}\)[27], so that \(X_{t}=X_{t}(\omega)\) for \(t>0,\omega\in\Omega\). As usual, we
suppress \(\omega\) from the notation. For \(t>0\), we also denote by \(D_{t}\) the space of cadlag paths: \([0,t]\to\mathbb{R}^{d}\).
In the Introduction we defined \(\tau_{D}\), the first exit time from \(D\), \(X^{D}\), the process killed upon exiting \(D\), \(\sigma_{D}\), the shot-down time, and \(\hat{X}^{D}\), the shot-down process. We now introduce the corresponding operator semigroups,
\[P_{t}^{D}f(x)=\mathbb{E}_{x}[t<\tau_{D};f(X_{t})]\]
and
\[\hat{P}_{t}^{D}f(x)=\mathbb{E}_{x}[t<\sigma_{D};f(X_{t})],\quad t\geq 0,\,x\in \mathbb{R}^{d}.\]
Here, as usual, \(f\) is nonnegative or absolutely integrable and \(\mathbb{E}[A;F]=\int_{A}Fd\mathbb{P}\).
### Measurability
We will prove that the first time a cadlag process flies over the complement of an open set, that is the shot-down time, is a stopping time. As we shall see, the setting of this subsection is more general than in the rest of the paper.
We first list auxiliary definitions and facts. Most of them can be found in [16, Chapter 1]. Let \((\Omega,\mathcal{F},(\mathcal{F}_{t})_{t\in\mathbb{R}_{+}},\mathbb{P})\) be a filtered probability space.
**Definition 1.2**.: _A set \(H\subseteq[0,\infty)\times\Omega\) is progressively measurable if for all \(t\in[0,\infty)\),_
\[H\cap([0,t]\times\Omega)\in\mathcal{B}\left([0,t]\right)\otimes\mathcal{F}_{t}.\]
Here, as usual, \(\mathcal{B}(T)\) denotes the Borel subsets of \(T\). We consider a topological space \(\mathcal{S}\) and a function \(X:[0,\infty)\times\Omega\to\mathcal{S}\) which is \(\mathcal{F}\otimes\mathcal{B}([0,\infty))\)-measurable, i.e., a stochastic process in \(\mathcal{S}\). For \(t>0\), by \(X^{t}\) we denote the restriction of \(X\) to the set \([0,t]\times\Omega\), i.e., we define \(X^{t}(s,\omega)=X(s,\omega)\) for \(0\leq s\leq t\), \(\omega\in\Omega\). For convenience, we write \(Y\in\mathcal{D}/\mathcal{A}\) when \(Y^{-1}[A]\in\mathcal{D}\) for all \(A\in\mathcal{A}\).
**Definition 1.3**.: _Process \(X\) is progressively measurable if for all \(t\in[0,\infty)\),_
\[X^{t}\in\left(\mathcal{B}\left([0,t]\right)\otimes\mathcal{F}_{t}\right)/ \mathcal{B}(\mathcal{S}).\]
**Lemma 1.4**.: Set \(H\) is progressively measurable if and only if the process \(\mathbb{1}_{H}\) is progressively measurable.
Proof.: According to the definition of progressive measurability of a stochastic process, \(\mathbb{1}_{H}\) is progressively measurable if and only if for all \(t\in[0,\infty)\), \(\mathbb{1}_{H}^{t}\) is measurable with respect to \(\mathcal{B}\left([0,t]\right)\otimes\mathcal{F}_{t}\). The latter is equivalent to \(H\cap([0,t]\times\Omega)\in\mathcal{B}\left([0,t]\right)\otimes\mathcal{F}_{t}\).
**Lemma 1.5**.: For a stochastic process \(X:[0,\infty)\times\Omega\to\mathcal{S}\), the following conditions are equivalent:
* For every \(B\in\mathcal{B}(\mathcal{S})\), the set \(X^{-1}[B]\) is progressively measurable.
* \(X\) is progressively measurable.
Proof.: Assume the first condition. Fix \(B\in\mathcal{B}(\mathcal{S})\) and \(t\in[0,\infty)\). By Lemma 1.4 we have \(X^{-1}[B]\cap([0,t]\times\Omega)\in\mathcal{B}\left([0,t]\right)\otimes\mathcal{F}_{t}\), which is exactly the definition of progressive measurability of \(X\). Conversely, suppose \(X\) is progressively measurable. Then for all \(t\in[0,\infty)\), we have \(X^{t}\in\left(\mathcal{B}\left([0,t]\right)\otimes\mathcal{F}_{t}\right)/\mathcal{B}(\mathcal{S})\), so \(X^{-1}[B]\) is progressively measurable for every \(B\in\mathcal{B}(\mathcal{S})\).
We now assume that \(\mathcal{S}\) is a metric vector space, e.g., \(\mathcal{S}=\mathbb{R}^{d}\). For \(x,y\in\mathcal{S}\), let \([x,y]=\{(1-\lambda)x+\lambda y,\lambda\in[0,1]\}\), the interval with endpoints \(x,y\). Let us consider an open set \(D\subseteq\mathcal{S}\) and let \(F(D)=\{(x,y)\in\mathcal{S}\times\mathcal{S}:[x,y]\cap D^{c}\neq\emptyset\}\).
As usual, a right-continuous stochastic process with left limits is called a càdlàg process. A left-continuous stochastic process with right limits is called càglàd. Given the setting of this subsection, the following result is slightly more general than needed in the rest of the paper.
**Theorem 1.6**.: \(\sigma_{D}=\inf\left\{t\geq 0:[X(t^{-}),X(t)]\cap D^{c}\neq\emptyset\right\}\) _is a stopping time if the process \(X\) is càdlàg and adapted._
To prove the result, we consider the usual product topology on \(\mathcal{S}\times\mathcal{S}\).
**Lemma 1.7**.: The set \(F(D)=\{(x,y)\in\mathcal{S}\times\mathcal{S}:[x,y]\cap D^{c}\neq\emptyset\}\) is closed in \(\mathcal{S}\times\mathcal{S}\).
Proof.: We take an arbitrary convergent sequence \((x_{n},y_{n})\in\mathcal{S}\times\mathcal{S}\), such that \(x_{n}\to x\), \(y_{n}\to y\) and \([x_{n},y_{n}]\cap D^{c}\neq\emptyset\). Consider a sequence \((z_{n})_{n=1}^{\infty}\), such that \(z_{n}\in[x_{n},y_{n}]\cap D^{c}\) for every \(n\). Then \(z_{n}=\lambda_{n}x_{n}+(1-\lambda_{n})y_{n}\) for some \(\lambda_{n}\in[0,1]\). By the Bolzano\(-\)Weierstrass theorem, we can pick a convergent subsequence \((\lambda_{n_{k}})\). Then \((x_{n_{k}})\,,(y_{n_{k}})\,,(\lambda_{n_{k}})\) are convergent sequences, so the sequence \(z_{n_{k}}=\lambda_{n_{k}}x_{n_{k}}+(1-\lambda_{n_{k}})y_{n_{k}}\) is also convergent, \(z_{n_{k}}\to z\) as \(k\to\infty\). Since \(z_{n_{k}}\in[x_{n_{k}},y_{n_{k}}]\) for every \(n_{k}\), we have that \(z\in[x,y]\) and since \(D^{c}\) is closed, we also have \(z\in D^{c}\). Therefore \([x,y]\cap D^{c}\neq\emptyset\), hence \((x,y)\in F(D)\).
**Lemma 1.8**.: If \(X_{t}\) is càdlàg, then \(X_{t^{-}}\) is càglàd.
Proof.: Let \(\rho(x,y)\) be the metric of \(S\). We let \(Y_{t}=X_{t^{-}},Y_{0}=X_{0}\). We will show that:
1. \(Y_{t}\) is left-continuous: \(\lim_{s\nearrow t}Y_{s}=Y_{t}\), i.e., \(\lim_{s\nearrow t}\lim_{u\nearrow s}X_{u}=\lim_{s\nearrow t}X_{s}\) for \(t\geq 0\).
2. \(Y_{t}\) has right limits: \(\lim_{s\searrow t}Y_{s}\) exists, i.e., \(\lim_{s\searrow t}\lim_{u\nearrow s}X_{u}\) exists for \(t\geq 0\).
We use Heine's definition of continuity to show the first condition. Fix any time \(t_{0}>0\) and pick any sequence \(s_{n}<t_{0}\) which converges to \(t_{0}\). Since \(Y_{s_{n}}=X_{s_{n}^{-}}\) and \(X\) has left limits, we can pick a sequence \(s_{n}^{\prime}<t_{0}\), also convergent to \(t_{0}\), such that \(\rho(X_{s_{n}^{\prime}},Y_{s_{n}})\leq\frac{1}{n}\) for all \(n\).

For each \(\varepsilon>0\), there exists \(k\in\mathbb{N}\) such that \(\varepsilon>\frac{1}{k}\). By the definition of \(Y_{t_{0}}\), there exists \(N>0\) such that for every \(m\geq\max\{N,k\}\), we have \(\rho(Y_{t_{0}},X_{s_{m}^{\prime}})<\varepsilon-\frac{1}{k}\), hence

\[0\leq\rho(Y_{s_{m}},Y_{t_{0}})\leq\rho(Y_{s_{m}},X_{s_{m}^{\prime}})+\rho(X_{s_{m}^{\prime}},Y_{t_{0}})<\frac{1}{m}+\varepsilon-\frac{1}{k}<\frac{1}{k}+\varepsilon-\frac{1}{k}=\varepsilon.\]
Therefore \(Y_{s_{m}}\xrightarrow[m\to\infty]{}Y_{t_{0}}\). The second condition can be proven in a similar way.
**Lemma 1.9** ([16], Theorem 1, p. 38).: If process \(X\) is adapted and right- or left-continuous, then it is progressively measurable.
**Lemma 1.10** ([4], Theorem 2.7).: If \(X\) is progressively measurable with values in \(\mathcal{S}\) and \(B\) is a Borel subset of \(\mathcal{S}\), then \(U_{B}=\inf\left\{t\geq 0:X_{t}\in B\right\}\) is a stopping time.
Proof of Theorem 1.6.: By assumption, \(X\) is càdlàg, so by Lemma 1.8, \(X_{t^{-}}\) is càglàd. By Lemma 1.9, both \(X_{t}\) and \(X_{t^{-}}\) are progressively measurable. By the definition of the product \(\sigma\)-field, \((X(t),X(t^{-}))\) is progressively measurable with respect to \(\mathcal{B}(\mathcal{S}\times\mathcal{S})\). By Lemma 1.7, we know that \(F(D)\) is closed, so Borel. The result follows from Lemma 1.10.
### Ikeda-Watanabe formula
For the rest of the paper, we return to the setting of the fractional Laplacian in the Euclidean space \(\mathbb{R}^{d}\). Let \(D\subset\mathbb{R}^{d}\) be open (we will make additional assumptions on \(D\) later on). In this section we prove the Ikeda-Watanabe-type formula (4) for the shot-down stable processes. To this end we use the so-called Levy system for \(X\) and follow the presentation for the killed process in [12, Section 4.2]. Note that \(\hat{P}_{t}^{D}\mathbb{1}_{A}(x)\leq P_{t}^{D}\mathbb{1}_{A}(x)\) for all \(A\subset\mathbb{R}^{d}\). By the Radon-Nikodym theorem, for all \(x\in\mathbb{R}^{d},\ t>0\) and (Borel functions) \(f\geq 0\), we can write
\[\hat{P}_{t}^{D}f(x)=\int\limits_{\mathbb{R}^{d}}\hat{p}_{D}(t,x,y)f(y)\,dy,\]
where \(\hat{p}_{D}(t,x,y)\) is defined for almost all \(y\) and \(\hat{p}_{D}(t,x,y)\leq p_{D}(t,x,y)\leq p_{t}(x,y)\). Here \(p_{D}\) is the Dirichlet heat kernel of \(D\) for \(\Delta^{\alpha/2}\), discussed in Section 2. Then,
\[\mathbb{E}_{x}\int_{0}^{\sigma_{D}}f(t,X_{t})\,dt=\int_{0}^{\infty}\int_{ \mathbb{R}^{d}}\hat{p}_{D}(t,x,y)f(t,y)\,dy\,dt. \tag{3}\]
**Theorem 1.11**.: _The joint distribution of \((\sigma_{D},X_{\sigma_{D}^{-}},X_{\sigma_{D}})\) restricted to \(\{X_{\sigma_{D}^{-}}\in D\}\) and calculated under \(\mathbb{P}_{x}\) for \(x\in D\), satisfies for \(A\subset D,B\subset(\overline{D})^{c},I\subset[0,\infty)\),_
\[\mathbb{P}_{x}[\sigma_{D}\in I,X_{\sigma_{D}^{-}}\in A,X_{\sigma_{D}}\in B]= \int\limits_{I}\int\limits_{A}\int\limits_{B}\nu(w-y)\hat{p}_{D}(t,x,y)\,dw\, dy\,dt. \tag{4}\]
Proof.: For a bounded interval \(I\subset[0,\infty)\), we let \(F(u,y,w)=\mathbb{1}_{I}(u)\mathbb{1}_{A}(y)\mathbb{1}_{B}(w)\) and
\[M(t)=\sum_{\begin{subarray}{c}0<u\leq t\\ |\Delta X_{u}|\neq 0\end{subarray}}F(u,X_{u^{-}},X_{u})-\int_{0}^{t}\int_{ \mathbb{R}^{d}}F(u,X_{u},X_{u}+z)\nu(z)\,dz\,du,\ 0\leq t<\infty.\]
By the Levy system [12, Lemma 4.1],
\[\mathbb{E}_{x}\sum_{\begin{subarray}{c}0<u\leq\infty\\ |\Delta X_{u}|\neq 0\end{subarray}}F(u,X_{u^{-}},X_{u})=\mathbb{E}_{x}\int_{0}^{ \infty}\int_{\mathbb{R}^{d}}F(u,X_{u},X_{u}+z)\nu(z)\,dz\,du.\]
Thus, \(\mathbb{E}_{x}M(t)=0\). In fact, \(M(t)\) is a martingale. Indeed, let \(0\leq s\leq t\). By considering the Levy process \(u\mapsto X_{s+u}-X_{s}\), independent of \(\{X_{r},\,0\leq r\leq s\}\), we calculate the conditional expectation:
\[\mathbb{E}_{x}\bigg{[}\sum_{\begin{subarray}{c}s\leq u\leq t\\ |\Delta X_{u}|\neq 0\end{subarray}}F(u,X_{u^{-}},X_{u})-\int_{s}^{t}\int_{ \mathbb{R}^{d}}F(u,X_{u},X_{u}+z)\nu(z)\,dz\,du\ \bigg{|}\ X_{r},0\leq r\leq s\bigg{]}=0.\]
Since
\[|M(t)|\leq\sum_{\begin{subarray}{c}0<u<\infty\\ |\Delta X_{u}|\neq 0\end{subarray}}F(u,X_{u^{-}},X_{u})+\int_{0}^{\infty}\int_{ \mathbb{R}^{d}}F(u,X_{u},X_{u}+z)\nu(z)\,dz\,du,\]
and the right hand side has expectation not bigger than \(2|I|\nu(\{|z|\geq\operatorname{dist}(A,B)\})<\infty\), we see that \(M\) is a uniformly integrable càdlàg martingale. By stopping the martingale
at \(\sigma_{D}\), we obtain [21, Section 12.5]
\[\mathbb{E}_{x}\sum_{\begin{subarray}{c}0<u\leq\sigma_{D}\\ |\Delta X_{u}|\neq 0\end{subarray}}F(u,X_{u^{-}},X_{u})=\mathbb{E}_{x}\int_{0}^{\sigma_{D}}\int_{\mathbb{R}^{d}}F(u,X_{u},X_{u}+z)\nu(z)\,dz\,du.\]

Since \(A\subset D\) and \(B\subset(\overline{D})^{c}\), the only jump counted on the left-hand side occurs at \(u=\sigma_{D}\), so the left-hand side equals \(\mathbb{P}_{x}[\sigma_{D}\in I,X_{\sigma_{D}^{-}}\in A,X_{\sigma_{D}}\in B]\), while by (3) the right-hand side equals the triple integral in (4). This proves (4).
The following theorem is an analog of the first part of [17, Theorem 2.4]. It can be proven in much the same way as in [17], but we give the proof for convenience and completeness.
**Theorem 2.2**.: \(\mathbb{P}_{x}(t<\sigma_{D};X_{t}\in A)=\int\limits_{A}\hat{p}_{D}(t,x,y)\,dy\) _for \(t>0\), \(x\in\mathbb{R}^{d}\), \(A\subset\mathbb{R}^{d}\)._
Proof.: We note that for fixed \(t>0\), \(\mathbb{P}_{x}(\sigma_{D}=t)=0\). Indeed, \(\mathbb{P}_{x}(X_{t}\in\partial D)=0\) and by [27, (1.10)], \(\mathbb{P}_{x}\)-a.s. \(X\) is continuous at \(t\), which implies that \(\mathbb{P}_{x}(\sigma_{D}\neq t)=1\). Therefore,
\[\begin{split}\mathbb{P}_{x}(t<\sigma_{D};X_{t}\in A)& =\mathbb{P}_{x}(X_{t}\in A)-\mathbb{P}_{x}(\sigma_{D}\leq t;X_{t} \in A)\\ &=\int\limits_{A}p(t,x,y)\,dy-\mathbb{P}_{x}(\sigma_{D}<t;X_{t} \in A).\end{split} \tag{7}\]
Let \(0<u<t\), \(n\geq 1\) and \(1\leq k\leq 2^{n}\). We define stopping times
\[S_{n}=\begin{cases}ku2^{-n},&\text{if }(k-1)u2^{-n}\leq\sigma_{D}<ku2^{-n},\\ \infty,&\text{if }\sigma_{D}\geq u.\end{cases}\]
By the Markov property [16],
\[\begin{split}\mathbb{P}_{x}(\sigma_{D}<u;X_{t}\in A)&= \sum\limits_{k=1}^{2^{n}}\mathbb{P}_{x}\bigg{(}\frac{(k-1)u}{2^{n}}\leq\sigma_ {D}<\frac{ku}{2^{n}};X_{t}\in A\bigg{)}\\ &=\sum\limits_{k=1}^{2^{n}}\mathbb{E}_{x}\bigg{[}\frac{(k-1)u}{2^{n }}\leq\sigma_{D}<\frac{ku}{2^{n}},\mathbb{P}_{X_{ku2^{-n}}}(X_{t-ku2^{-n}}\in A )\bigg{]}\\ &=\mathbb{E}_{x}\bigg{[}\sigma_{D}<u;\int\limits_{A}p(t-S_{n},X_{S_{ n}},y)\,dy\bigg{]}.\end{split} \tag{8}\]
If \(\sigma_{D}<u\), then \(S_{n}\leq u\) and \(t-S_{n}\geq t-u>0\); moreover, \(S_{n}\downarrow\sigma_{D}\) as \(n\to\infty\). We first let \(n\to\infty\) and then \(u\uparrow t\) in (8). By the right continuity of the process, the joint continuity of \(p(t,x,y)\), dominated convergence, Fubini-Tonelli and monotone convergence,
\[\begin{split}\mathbb{P}_{x}(\sigma_{D}<t;X_{t}\in A)& =\lim\limits_{u\uparrow t}\mathbb{P}_{x}(\sigma_{D}<u;X_{t}\in A )\\ &=\lim\limits_{u\uparrow t}\mathbb{E}_{x}\left[\sigma_{D}<u;\int \limits_{A}p(t-\sigma_{D},X_{\sigma_{D}},y)\,dy\right]\\ &=\mathbb{E}_{x}\left[\sigma_{D}<t;\int\limits_{A}p(t-\sigma_{D}, X_{\sigma_{D}},y)\,dy\right]\\ &=\int\limits_{A}\mathbb{E}_{x}\bigg{[}\sigma_{D}<t;p(t-\sigma_{D},X_{\sigma_{D}},y)\bigg{]}\,dy.\end{split}\]
By (7),
\[\mathbb{P}_{x}(t<\sigma_{D};X_{t}\in A) =\int\limits_{A}p(t,x,y)\,dy-\int\limits_{A}\mathbb{E}_{x}\bigg{[} \sigma_{D}<t;p(t-\sigma_{D},X_{\sigma_{D}},y)\bigg{]}\,dy\] \[=\int\limits_{A}\hat{p}_{D}(t,x,y)\,dy.\qed\]
The heat kernel of the killed process enjoys the scaling \(r^{d}p_{rD}(r^{\alpha}t,rx,ry)=p_{D}(t,x,y)\). Here \(r>0\), \(x,y\in\mathbb{R}^{d}\), \(rD=\{rx:x\in D\}\). We will prove a similar equality for \(\hat{p}_{D}\).
**Lemma 2.3**.: \(\mathcal{L}_{rx}(\sigma_{rD})=\mathcal{L}_{x}(r^{\alpha}\sigma_{D})\) for \(r>0\).
Proof.: We have:
\[\sigma_{rD} = \inf\left\{t>0:[X(t-),X(t)]\cap(rD)^{c}\neq\emptyset\right\},\] \[= \inf\left\{t>0:[X(t-)/r,X(t)/r]\cap D^{c}\neq\emptyset\right\}.\]
The distribution of \(X(t)/r\) when \(X\) starts at \(rx\) is the same as the distribution of \(X\left(t/r^{\alpha}\right)\) when \(X\) starts at \(x\). In the same sense, \(X(t-)/r\) and \(X\left(t/r^{\alpha}-\right)\) are equal in distribution, and the last infimum above equals in distribution to
\[\inf\left\{t>0:[X\left(t/r^{\alpha}-\right),X\left(t/r^{\alpha}\right)]\cap D ^{c}\neq\emptyset\right\}=r^{\alpha}\sigma_{D}.\qed\]
**Lemma 2.4**.: \(r^{d}\hat{p}_{rD}(r^{\alpha}t,rx,ry)=\hat{p}_{D}(t,x,y)\) for \(r>0\), \(x\in\mathbb{R}^{d}\).
Proof.: By Hunt's formula (2.1) and Lemma 2.3,
\[r^{d}\hat{p}_{rD}(r^{\alpha}t,rx,ry) =r^{d}p_{rD}(r^{\alpha}t,rx,ry)-r^{d}\mathbb{E}_{rx}[\sigma_{rD} <r^{\alpha}t,p(r^{\alpha}t-\sigma_{rD},X_{\sigma_{rD}},ry)]\] \[=p_{D}(t,x,y)-r^{d}\mathbb{E}_{x}[r^{\alpha}\sigma_{D}<r^{\alpha }t;p(r^{\alpha}(t-\sigma_{D}),X_{r^{\alpha}\sigma_{D}},ry)]\] \[=p_{D}(t,x,y)-\mathbb{E}_{x}[\sigma_{D}<t;r^{d}p(r^{\alpha}(t- \sigma_{D}),rX_{\sigma_{D}},ry)]\] \[=p_{D}(t,x,y)-\mathbb{E}_{x}[\sigma_{D}<t;p(t-\sigma_{D},X_{ \sigma_{D}},y)]=\hat{p}_{D}(t,x,y).\qed\]
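Lemma 2.3 can be checked by simulation: by the identity of laws, \(\mathbb{P}_{rx}(\sigma_{rD}>r^{\alpha}t)=\mathbb{P}_{x}(\sigma_{D}>t)\). Below is a minimal sketch under the same illustrative assumptions as before (dimension one, \(D=(-1,0)\cup(0,1)\), Euler time-stepping, parameters of our choosing); the two printed survival frequencies should agree up to Monte Carlo and discretization error.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(1)
alpha, r, t0, dt, n_paths = 1.5, 2.0, 0.3, 1e-3, 1000

def survives(x0, scale_D, horizon, step):
    # True iff the path is not shot down in scale_D*((-1,0) U (0,1)) by horizon
    incs = levy_stable.rvs(alpha, 0.0, scale=step ** (1 / alpha),
                           size=int(horizon / step), random_state=rng)
    x = x0
    for dx in incs:
        y = x + dx
        lo, hi = min(x, y), max(x, y)
        if not (-scale_D < lo and hi < scale_D and not (lo <= 0.0 <= hi)):
            return False
        x = y
    return True

p1 = np.mean([survives(-0.5, 1.0, t0, dt) for _ in range(n_paths)])
# Lemma 2.3: sigma_{rD} under P_{rx} has the law of r^alpha*sigma_D under P_x,
# so the two survival frequencies should agree up to simulation error.
p2 = np.mean([survives(-0.5 * r, r, r ** alpha * t0, r ** alpha * dt)
              for _ in range(n_paths)])
print(p1, p2)
```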
### The Chapman-Kolmogorov equation
In this section we prove the Chapman-Kolmogorov equation for \(\hat{p}_{D}(t,x,y)\). Let \(\hat{\nu}(x,y):=\nu(y-x)\mathbb{1}_{D_{x}^{c}}(y)\), where \(D_{x}=\{y\in D:[x,y]\subset D\}\). Of course, \(\hat{\nu}(x,y)=\hat{\nu}(y,x)\). We start with a regularity result.
**Lemma 2.5**.: The function \(y\mapsto\hat{p}_{D}(t,x,y)\) is continuous on \(D\) for all \(t>0\), \(x\in D\).
Proof.: Fix \(t>0\), \(x\in D\). By Hunt's formula, we have
\[\hat{p}_{D}(t,x,y)=p_{D}(t,x,y)-\int_{0}^{t}\int_{D}\int_{D}\ \hat{p}_{D}(s,x,w)\hat{\nu}(w,z)p_{D}(t-s,z,y)\,dz\,dw\,ds.\]
Since \(p_{D}(t,x,\cdot)\) is continuous, it is enough to prove the continuity of the function
\[y\mapsto\int_{0}^{t}\int_{D}\int_{D}\ \hat{p}_{D}(s,x,w)\hat{\nu}(w,z)p_{D}(t-s,z,y) \,dz\,dw\,ds.\]
To this end, take \(D\ni y_{n}\to y\in D\). Let \(\delta=\inf_{n}\operatorname{dist}(y_{n},D^{c})>0\), and \(A=\{z\in D:\operatorname{dist}(z,D^{c})<\delta/2\}\).
If \(z\in A\), then \(|z-y_{n}|\geq\delta/2\) for every \(n\), so
\[\hat{p}_{D}(s,x,w)\hat{\nu}(w,z)p_{D}(t-s,z,y_{n})\mathbb{1}_{A}(z)\leq C2^{d+ \alpha}t\delta^{-d-\alpha}\hat{p}_{D}(s,x,w)\hat{\nu}(w,z)\mathbb{1}_{A}(z).\]
This majorant is integrable by the Ikeda-Watanabe formula:
\[\int_{0}^{t}\int_{D}\int_{A}\hat{p}_{D}(s,x,w)\hat{\nu}(w,z)\,dz\,dw\,ds=\mathbb{P }_{x}(\sigma_{D}<t,X_{\sigma_{D}}\in A)\leq 1<\infty,\]
so our integrand converges by the Lebesgue dominated convergence theorem for \(n\to\infty\). Now, assume \(z\in D\setminus A\). We have \(B(z,\delta/2)\subset D_{z}\), thus
\[\hat{p}_{D}(s,x,w)\hat{\nu}(w,z)p_{D}(t-s,z,y_{n})\mathbb{1}_{D \setminus A}(z)\] \[\leq\mathcal{A}2^{d+\alpha}\delta^{-d-\alpha}\hat{p}_{D}(s,x,w)p _{D}(t-s,z,y_{n})\mathbb{1}_{D\setminus A}(z)\] \[\leq C\mathcal{A}2^{d+\alpha}\delta^{-d-\alpha}\hat{p}_{D}(s,x,w )\left((t-s)^{-d/\alpha}\wedge\frac{t-s}{|z-y_{n}|^{d+\alpha}}\right)\mathbb{1} _{D\setminus A}(z).\]
Let \(\lambda=\sup_{n}|y-y_{n}|\). Then \(\inf_{n}|z-y_{n}|\geq|z-y|-\lambda\), so
\[\hat{p}_{D}(s,x,w)\hat{\nu}(w,z)p_{D}(t-s,z,y_{n})\mathbb{1}_{D \setminus A}(z)\] \[\leq\frac{C\mathcal{A}2^{d+\alpha}}{\delta^{d+\alpha}}\hat{p}_{D }(s,x,w)\left((t-s)^{-d/\alpha}\mathbb{1}_{B(y,2\lambda)}(z)+\frac{2^{d+\alpha }(t-s)}{|z-y|^{d+\alpha}}\mathbb{1}_{B(y,2\lambda)^{c}}(z)\right)\mathbb{1}_{D \setminus A}(z).\]
This function is integrable with respect to \(dw\,dz\). By the Lebesgue dominated convergence theorem, we get
\[\int_{D}\int_{D\setminus A}\hat{p}_{D}(s,x,w)\hat{\nu}(w,z)p_{D}( t-s,z,y_{n})\,dz\,dw\] \[\to\int_{D}\int_{D\setminus A}\hat{p}_{D}(s,x,w)\hat{\nu}(w,z)p_{ D}(t-s,z,y)\,dz\,dw.\]
Further, by the symmetry of \(p_{D}\), we can write
\[\int_{D}\int_{D\setminus A}\hat{p}_{D}(s,x,w)\hat{\nu}(w,z)p_{D}( t-s,z,y_{n})\,dz\,dw\] \[\leq\mathcal{A}2^{d+\alpha}\delta^{-d-\alpha}\int_{D}\int_{D \setminus A}\hat{p}_{D}(s,x,w)p_{D}(t-s,y_{n},z)\,dz\,dw\leq\mathcal{A}2^{d+ \alpha}\delta^{-d-\alpha},\]
and, applying the Lebesgue dominated convergence theorem, we get
\[\int_{0}^{t}\int_{D}\int_{D\setminus A}\hat{p}_{D}(s,x,w)\hat{ \nu}(w,z)p_{D}(t-s,z,y_{n})\,dz\,dw\,ds\] \[\to\int_{0}^{t}\int_{D}\int_{D\setminus A}\hat{p}_{D}(s,x,w)\hat{ \nu}(w,z)p_{D}(t-s,z,y)\,dz\,dw\,ds,\]
as \(n\to\infty\). Thus, \(D\ni y\mapsto\hat{p}_{D}(t,x,y)\) is continuous.
The proof of the following theorem is based on the proof of [17, Theorem 2.4].
**Theorem 2.6** (Chapman-Kolmogorov equation).: _For all \(t>s>0,x,y\in D\),_
\[\hat{p}_{D}(t,x,y)=\int_{D}\hat{p}_{D}(s,x,z)\hat{p}_{D}(t-s,z,y)\,dz.\]
Proof.: Fix \(t>s>0,x\in D\). Let \(A\subset D\). By the Markov property,
\[\int_{A}\hat{p}_{D}(t,x,y)\,dy=\mathbb{P}_{x}\left(t<\sigma_{D},X_{t} \in A\right)=\mathbb{E}_{x}\left[s<\sigma_{D};\mathbb{P}_{X_{s}}\left(t-s< \sigma_{D},X_{t-s}\in A\right)\right]\] \[\quad=\mathbb{E}_{x}\left[s<\sigma_{D};\int_{A}\hat{p}_{D}(t-s,X_ {t-s},y)\,dy\right]=\int_{A}\int_{D}\hat{p}_{D}(s,x,z)\hat{p}_{D}(t-s,z,y)\,dz \,dy.\]
Thus \(\hat{p}_{D}(t,x,y)=\int_{D}dz\ \hat{p}_{D}(s,x,z)\hat{p}_{D}(t-s,z,y)\) for almost every \(y\in D\). The identity actually holds for all \(y\in D\). Indeed, the function on the left-hand side is continuous in \(y\). Since \(\hat{p}_{D}(t-s,z,y)\leq C(t-s)^{-d/\alpha}\), by the Lebesgue dominated convergence theorem the right-hand side is also continuous. The proof is complete.
**Lemma 2.7**.: \(\hat{p}_{D}(t,x,y)>0\) for \(t>0\) and \(x,y\) in the same connected component of \(D\).
Proof.: Since \(x\) and \(y\) lie in the same component of \(D\), there is \(n\geq 2\) and a sequence of balls \(\{B(x_{i},r_{i})\}_{i=1}^{n}\) in \(D\) such that \(x_{1}=x\), \(x_{n}=y\) and \(B(x_{i},r_{i})\cap B(x_{i+1},r_{i+1})\neq\emptyset\) for \(i=1,\ldots,n-1\). By the Chapman-Kolmogorov equation and the domain monotonicity of the heat kernel
\[\hat{p}_{D}(t,x,y) =\int_{D}\cdots\int_{D}\hat{p}_{D}(t/n,x,z_{1})\cdots\hat{p}_{D}( t/n,z_{n-1},y)\,dz_{1}\cdots\,dz_{n-1}\] \[\geq\int_{B_{1}}\cdots\int_{B_{n-1}}\hat{p}_{B_{1}}(t/n,x,z_{1}) \cdots\hat{p}_{B_{n}}(t/n,z_{n-1},y)\,dz_{1}\cdots\,dz_{n-1}.\]
Since the balls are convex sets, the right-hand side equals
\[\int_{B_{1}}\cdots\int_{B_{n-1}}p_{B_{1}}(t/n,x,z_{1})\cdots p_{B_{n}}(t/n,z_{n -1},y)\,dz_{1}\cdots\,dz_{n-1},\]
which is positive by the strict positivity of the heat kernel of the killed process; see [14] or [10].
### Symmetry
The goal of this section is to prove the following result.
**Theorem 2.8**.: _For all \(x,y\in\mathbb{R}^{d},\ t>0\), we have \(\hat{p}_{D}(t,x,y)=\hat{p}_{D}(t,y,x)\)._
The proof is given at the end of the section, after several auxiliary results. Fix \(x,y\in\mathbb{R}^{d},\ t>0\). We will construct, in Lemma 2.11, the so-called bridge between \(x\) and \(y\) in time \(t\). We begin by defining the finite-dimensional distributions of the bridge.
**Definition 2.9**.: _For \(n\in\mathbb{N},\ s_{1},\ldots,s_{n}\in(0,t),\ s_{1}<\cdots<s_{n}\), we define measure \(\pi_{s_{1},\ldots,s_{n}}\) on \((\mathbb{R}^{d})^{n}\) by_
\[\pi_{s_{1},\ldots,s_{n}}(A_{1}\times\cdots\times A_{n})=\int_{A_{n}}\cdots\int _{A_{1}}\frac{p(s_{1},x,z_{1})p(s_{2}-s_{1},z_{1},z_{2})\ldots p(t-s_{n},z_{n},y)}{p(t,x,y)}dz_{1}\ldots dz_{n}\]
_for \(A_{1},\ldots,A_{n}\subset\mathbb{R}^{d}\). We also define_
\[\pi_{0,s_{1},\ldots,s_{n}}(A_{0}\times A_{1}\times\cdots\times A_{n}) =\delta_{x}(A_{0})\pi_{s_{1},\ldots,s_{n}}(A_{1}\times\cdots \times A_{n}),\] \[\pi_{s_{1},\ldots,s_{n},t}(A_{1}\times\cdots\times A_{n}\times A_{ n+1}) =\pi_{s_{1},\ldots,s_{n}}(A_{1}\times\cdots\times A_{n})\delta_{y}(A_{n+1}),\] \[\pi_{0,s_{1},\ldots,s_{n},t}(A_{0}\times A_{1}\times\cdots\times A _{n}\times A_{n+1}) =\delta_{x}(A_{0})\pi_{s_{1},\ldots,s_{n}}(A_{1}\times\cdots \times A_{n})\delta_{y}(A_{n+1}).\]
_As usual, we extend the definition to \(0\leq s_{1}\leq s_{n}\leq t\) by skipping the repeated \(s_{i}\)'s, e.g.,_
\[\pi_{s,s}(A_{1}\times A_{2})=\pi_{s}(A_{1}\cap A_{2}).\]
**Lemma 2.10**.: There is a constant \(C>0\) such that for all \(s\in(0,t)\) and \(\varepsilon>0\),
\[\pi_{s}(z:|z-y|\geq\varepsilon)\leq\frac{C(t-s)}{p(t,x,y)\varepsilon^{d+\alpha}},\]
and for \(0<s_{1}<s_{2}<t,\varepsilon>0\),
\[\pi_{s_{1},s_{2}}((z_{1},z_{2})\in\mathbb{R}^{d}\times\mathbb{R}^{d}:|z_{1}-z_ {2}|\geq\varepsilon)\leq\frac{C(s_{2}-s_{1})}{p(t,x,y)\varepsilon^{d+\alpha}}.\]
Proof.: By (2),
\[p(s,z,w)\approx s^{-d/\alpha}\wedge\frac{s}{|z-w|^{d+\alpha}}.\]
To prove the first inequality we write
\[\pi_{s}(z:|z-y|\geq\varepsilon) =\int_{B_{\varepsilon}(y)^{c}}\frac{p(s,x,z)p(t-s,z,y)}{p(t,x,y)}\,dz\] \[\leq\frac{C(t-s)}{p(t,x,y)\varepsilon^{d+\alpha}}\int_{B_{ \varepsilon}(y)^{c}}p(s,x,z)\,dz\] \[\leq\frac{C(t-s)}{p(t,x,y)\varepsilon^{d+\alpha}}.\]
To prove the second inequality we write
\[\pi_{s_{1},s_{2}}((z_{1},z_{2}):|z_{1}-z_{2}|\geq\varepsilon)=\int _{\mathbb{R}^{d}}\int_{B_{\varepsilon}(z_{1})^{c}}\frac{p(s_{1},x,z_{1})p(s_{2 }-s_{1},z_{1},z_{2})p(t-s_{2},z_{2},y)}{p(t,x,y)}\,dz_{2}\,dz_{1}\] \[\leq\frac{C(s_{2}-s_{1})}{\varepsilon^{d+\alpha}}\int_{\mathbb{R }^{d}}\int_{B_{\varepsilon}(z_{1})^{c}}\frac{p(s_{1},x,z_{1})p(t-s_{2},z_{2}, y)}{p(t,x,y)}\,dz_{2}\,dz_{1}\leq\frac{C(s_{2}-s_{1})}{p(t,x,y)\varepsilon^{d+ \alpha}}.\qed\]
**Lemma 2.11**.: There exists a probability measure \(\mathbb{P}_{x}^{t,y}\) on \(\Omega\), whose finite-dimensional distributions are given by \(\pi_{s_{1},\ldots,s_{n}}\), \(0\leq s_{1}\leq\cdots\leq s_{n}\leq t\).
Proof.: By the Chapman-Kolmogorov equation for \(p\), \(\pi_{s_{1},\ldots,s_{n}}\) are consistent probability measures and so we can use the Kolmogorov existence theorem. We will verify the remaining conditions of [5, Theorem 13.6], namely
\[\pi_{s_{1},s,s_{2}}((z_{1},z,z_{2}):|z_{1}-z|\wedge|z_{2}-z|\geq\lambda)\leq \frac{1}{\lambda^{\beta}}(F(s_{2})-F(s_{1}))^{\gamma},0\leq s_{1}\leq s\leq s _{2}\leq t, \tag{9}\]
with \(F\) a suitable nondecreasing, continuous function on \([0,t]\), \(\lambda>0\), \(\beta\geq 0\), \(\gamma>1\), and
\[\lim_{h\downarrow 0}\pi_{s,s+h}((z_{1},z_{2}):|z_{1}-z_{2}|\geq\varepsilon)=0,0 \leq s<t. \tag{10}\]
To verify (9) we write
\[\pi_{s_{1},s,s_{2}}((z_{1},z,z_{2}):|z_{1}-z|\wedge|z_{2}-z|\geq\lambda)\] \[=\int_{\mathbb{R}^{d}}\int_{B_{\lambda}(z_{1})^{c}}\int_{B_{ \lambda}(z)^{c}}\frac{p(s_{1},x,z_{1})p(s-s_{1},z_{1},z)p(s_{2}-s,z,z_{2})p(t-s _{2},z_{2},y)}{p(t,x,y)}\,dz_{2}\,dz\,dz_{1}\] \[\leq C\int_{\mathbb{R}^{d}}\int_{B_{\lambda}(z_{1})^{c}}\int_{B_{ \lambda}(z)^{c}}p(s_{1},x,z_{1})\frac{s-s_{1}}{|z_{1}-z|^{d+\alpha}}\frac{s_{2 }-s}{\lambda^{d+\alpha}}p(t-s_{2},z_{2},y)\,dz_{2}\,dz\,dz_{1}\] \[\leq C\frac{1}{\lambda^{d+\alpha}}(s_{2}-s)(s-s_{1})\] \[\leq C\frac{1}{\lambda^{d+\alpha}}(s_{2}-s_{1})^{2}.\]
Thus the first condition holds with \(\beta=d+\alpha\), \(\gamma=2\), \(F(x)=\sqrt{C}x\).
To see that (10) is also satisfied, for \(0<s<s+h<t\) we use Lemma 2.10, to get
\[\pi_{s,s+h}((z_{1},z_{2}):|z_{1}-z_{2}|\geq\varepsilon)\leq\frac{Ch}{p(t,x,y) \varepsilon^{d+\alpha}}\to 0\]
as \(h\to 0\).
**Corollary 2.12**.: Under \(\mathbb{P}^{t,y}_{x}\), \(X\) is stochastically continuous on \([0,t]\).
Proof.: First, we will prove the stochastic continuity on \([0,t)\). To this end fix \(s\in[0,t)\), \(\varepsilon>0\) and take \(r\neq s\). Then by Lemma 2.10,
\[\mathbb{P}^{t,y}_{x}(|X_{s}-X_{r}|\geq\varepsilon)\leq\frac{C|s-r|}{p(t,x,y) \varepsilon^{d+\alpha}}\to 0\]
as \(r\to s\). To see stochastic continuity at \(t\), by using Lemma 2.10, for \(s<t\) we get
\[\mathbb{P}^{t,y}_{x}(|X_{s}-y|\geq\varepsilon)\leq\frac{C(t-s)}{p(t,x,y) \varepsilon^{d+\alpha}}\to 0\quad\text{ as }s\to t.\qed\]
We denote \(X_{0-}=X_{0}\).
**Corollary 2.13**.: For every \(s\in[0,t]\), we have \(\mathbb{P}^{t,y}_{x}(X_{s-}=X_{s})=1\).
Proof.: There is nothing to prove for \(s=0\), so let \(s\in(0,t]\). We have
\[1-\mathbb{P}^{t,y}_{x}(X_{s-}=X_{s})=\mathbb{P}^{t,y}_{x}\left(\exists_{k>0}|X _{s-}-X_{s}|>\frac{1}{k}\right)\leq\sum_{k>0}\mathbb{P}^{t,y}_{x}\left(|X_{s-} -X_{s}|>\frac{1}{k}\right),\]
so it suffices to show that \(\mathbb{P}^{t,y}_{x}(|X_{s-}-X_{s}|>\frac{1}{k})=0\) for any \(k>0\). To this end fix \(0<s_{n}\uparrow s\). Since \(X\) has càdlàg trajectories and is stochastically continuous,
\[\mathbb{P}^{t,y}_{x}\left(|X_{s-}-X_{s}|>\frac{1}{k}\right)\leq \mathbb{P}^{t,y}_{x}\left(|X_{s-}-X_{s_{n}}|+|X_{s_{n}}-X_{s}|>\frac{1}{k}\right)\] \[\leq\mathbb{P}^{t,y}_{x}\left(|X_{s-}-X_{s_{n}}|>\frac{1}{2k} \right)+\mathbb{P}^{t,y}_{x}\left(|X_{s_{n}}-X_{s}|>\frac{1}{2k}\right)\to 0\]
as \(n\to\infty\), which finishes the proof.
We define the time-reversed process \(X^{\prime}\) by \(X^{\prime}_{s}=X_{(t-s)-}\) for \(s\in[0,t)\), \(X^{\prime}_{t}=X_{0}\). Then \(X^{\prime}\) also has càdlàg trajectories. We are ready to prove the following result.
**Lemma 2.14**.: For all measurable \(A\subset D_{t}\), we have \(\mathbb{P}^{t,x}_{y}(X^{\prime}\in A)=\mathbb{P}^{t,y}_{x}(X\in A)\).
Proof.: It is enough to compare the finite-dimensional distributions of \(X\) and \(X^{\prime}\), since they uniquely define the law of a stochastic process. To this end, fix measurable sets \(A_{1},\ldots,A_{n}\subset\mathbb{R}^{d}\) and \(0<s_{1}<\cdots<s_{n}<t\). Using Corollary 2.13, the symmetry of \(p\) and the fact that \((t-s_{i-1})-(t-s_{i})=s_{i}-s_{i-1}\), for \(i=n,n-1,\ldots,2\), we get
\[\mathbb{P}^{t,x}_{y}(X^{\prime}_{s_{1}}\in A_{1},\ldots,X^{\prime }_{s_{n}}\in A_{n})=\mathbb{P}^{t,x}_{y}(X_{(t-s_{1})-}\in A_{1},\ldots,X_{(t- s_{n})-}\in A_{n})\] \[=\mathbb{P}^{t,x}_{y}(X_{(t-s_{1})-}\in A_{1},\ldots,X_{(t-s_{n}) -}\in A_{n},X_{(t-s_{1})-}=X_{t-s_{1}},\ldots,X_{(t-s_{n})-}=X_{t-s_{n}})\] \[=\mathbb{P}^{t,x}_{y}(X_{t-s_{1}}\in A_{1},\ldots,X_{t-s_{n}}\in A _{n},X_{(t-s_{1})-}=X_{t-s_{1}},\ldots,X_{(t-s_{n})-}=X_{t-s_{n}})\] \[=\mathbb{P}^{t,x}_{y}(X_{t-s_{1}}\in A_{1},\ldots,X_{t-s_{n}}\in A _{n})=\mathbb{P}^{t,x}_{y}(X_{t-s_{n}}\in A_{n},\ldots,X_{t-s_{1}}\in A_{1})\] \[=\int_{A_{1}}\ldots\int_{A_{n}}\frac{p(t-s_{n},y,z_{n})p(s_{n}-s_{ n-1},z_{n},z_{n-1})\ldots p(s_{2}-s_{1},z_{2},z_{1})p(s_{1},z_{1},x)}{p(t,y,x)}\,dz_{n} \ldots dz_{1}\] \[=\int_{A_{n}}\ldots\int_{A_{1}}\frac{p(s_{1},x,z_{1})p(s_{2}-s_{1 },z_{1},z_{2})\ldots p(s_{n}-s_{n-1},z_{n-1},z_{n})p(t-s_{n},z_{n},y)}{p(t,x,y)} \,dz_{1}\ldots dz_{n}\] \[=\mathbb{P}^{t,y}_{x}(X_{s_{1}}\in A_{1},\ldots,X_{s_{n}}\in A_{n }).\]
**Proposition 2.15**.: For all \(x,y\in\mathbb{R}^{d}\), \(t>0\), we have \(\mathbb{P}^{t,y}_{x}(\sigma_{D}\geq t)=\mathbb{P}^{t,x}_{y}(\sigma_{D}\geq t)\).
Proof.: Clearly, \(\{\sigma_{D}\geq t\}=\{[X_{s-},X_{s}]\subset D,0<s<t\}\). By Lemma 2.14,
\[\mathbb{P}^{t,y}_{x}(\sigma_{D}(X)\geq t) =\mathbb{P}^{t,y}_{x}([X_{s-},X_{s}]\subset D,0<s<t)=\mathbb{P}^{ t,y}_{x}([X^{\prime}_{s},X^{\prime}_{s-}]\subset D,0<s<t)\] \[=\mathbb{P}^{t,y}_{x}(\sigma_{D}(X^{\prime})\geq t)=\mathbb{P}^{ t,x}_{y}(\sigma_{D}(X)\geq t).\qed\]
**Lemma 2.16**.: For all \(x,y\in\mathbb{R}^{d}\), \(t>0\), we have \(\hat{p}_{D}(t,x,y)=\mathbb{P}^{t,y}_{x}(\sigma_{D}\geq t)p(t,x,y)\).
Proof.: Dividing both sides of Hunt's formula by \(p(t,x,y)\) we get
\[\frac{\hat{p}_{D}(t,x,y)}{p(t,x,y)}=1-\frac{\mathbb{E}_{x}\left[\sigma_{D}<t;p (t-\sigma_{D},X_{\sigma_{D}},y)\right]}{p(t,x,y)}=1-\mathbb{P}^{t,y}_{x}(\sigma _{D}<t)=\mathbb{P}^{t,y}_{x}(\sigma_{D}\geq t).\qed\]
**Corollary 2.17**.: \(0\leq\hat{p}_{D}(t,x,y)\leq p_{D}(t,x,y)\leq p(t,x,y)\) for all \(x,y\in\mathbb{R}^{d}\), \(t>0\).
Proof.: Since \(\sigma_{D}\leq\tau_{D}\), Lemma 2.16 and its analogue for \(p_{D}\) and \(\tau_{D}\) yield the result.
Proof of Theorem 2.8.: Use Proposition 2.15, Lemma 2.16 and symmetry of \(p(t,\cdot,\cdot)\).
### Perturbation formula
We note the following extension of Hunt's formula.
**Lemma 2.18**.: Let \(U\) be open, \(U\subset D\). Then, for \(t>0\) and \(x,y\in\mathbb{R}^{d}\) we have
\[\hat{p}_{U}(t,x,y)=\hat{p}_{D}(t,x,y)-\mathbb{E}_{x}[\sigma_{U}<t;[X_{\sigma_{ U}-},X_{\sigma_{U}}]\subset D;\hat{p}_{D}(t-\sigma_{U},X_{\sigma_{U}},y)].\]
Proof.: We have
\[p(t,x,y)-\hat{p}_{D}(t,x,y)=\mathbb{E}_{x}[\sigma_{D}<t;p(t-\sigma_{D},X_{ \sigma_{D}},y)]\]
\[p(t,x,y)-\hat{p}_{U}(t,x,y)=\mathbb{E}_{x}[\sigma_{U}<t;p(t-\sigma_{U},X_{\sigma_{ U}},y)].\]
Subtracting we get
\[\hat{p}_{D}(t,x,y)-\hat{p}_{U}(t,x,y) =\mathbb{E}_{x}\left[\sigma_{U}<t;p(t-\sigma_{U},X_{\sigma_{U}},y )\right]-\mathbb{E}_{x}\left[\sigma_{D}<t;p(t-\sigma_{D},X_{\sigma_{D}},y)\right]\] \[=\mathbb{E}_{x}\left[\sigma_{U}<t,\sigma_{U}<\sigma_{D};p(t- \sigma_{U},X_{\sigma_{U}},y)\right]\] \[\quad-\mathbb{E}_{x}\left[\sigma_{D}<t,\sigma_{U}<\sigma_{D};p(t- \sigma_{D},X_{\sigma_{D}},y)\right]\] \[=\mathbb{E}_{x}\left[\sigma_{U}<t,[X_{\sigma_{U}-},X_{\sigma_{U}} ]\subset D;p(t-\sigma_{U},X_{\sigma_{U}},y)\right]\] \[\quad-\mathbb{E}_{x}\left[\sigma_{D}<t,[X_{\sigma_{U}-},X_{\sigma _{U}}]\subset D;p(t-\sigma_{D},X_{\sigma_{D}},y)\right],\]
where the penultimate equality follows since \(\sigma_{U}\leq\sigma_{D}\) and both integrands are equal on the set \(\sigma_{U}=\sigma_{D}\). Denote the first expectation by \(I_{1}\) and the second by \(I_{2}\). Let \(\mathcal{F}_{\sigma_{U}}\) be the usual \(\sigma\)-field of pre-\(\sigma_{U}\) events and \(\theta_{\sigma_{U}}\) be the usual shift, e.g., \(\theta_{\sigma_{U}}X_{t}=X_{t+\sigma_{U}}\). By the strong Markov property, since \(\sigma_{D}=\sigma_{U}+\sigma_{D}\circ\theta_{\sigma_{U}}\), we get
\[I_{2} =\mathbb{E}_{x}\left[\sigma_{D}<t,\sigma_{U}<t,[X_{\sigma_{U}-},X _{\sigma_{U}}]\subset D;p(t-\sigma_{D},X_{\sigma_{D}},y)\right]\] \[=\mathbb{E}_{x}\left[\sigma_{U}<t,[X_{\sigma_{U}-},X_{\sigma_{U} }]\subset D;\mathbb{E}\left[\mathbb{1}_{\sigma_{D}<t}\ p(t-\sigma_{D},X_{ \sigma_{D}},y)|\mathcal{F}_{\sigma_{U}}]\right]\] \[=\mathbb{E}_{x}\left[\sigma_{U}<t,[X_{\sigma_{U}-},X_{\sigma_{U} }]\subset D;\mathbb{E}\left[\mathbb{1}_{\sigma_{U}+\sigma_{D}\circ\theta_{ \sigma_{U}}<t}\ p(t-\sigma_{U}-\sigma_{D}\circ\theta_{\sigma_{U}},X_{\sigma_{ D}}\circ\theta_{\sigma_{U}},y)|\mathcal{F}_{\sigma_{U}}\right]\right]\] \[=\mathbb{E}_{x}\left[\sigma_{U}<t,[X_{\sigma_{U}-},X_{\sigma_{U} }]\subset D;\mathbb{E}_{X_{s}}\left[\sigma_{D}<t-s;p(t-s-\sigma_{D},X_{\sigma _{D}},y)\right]|_{s=\sigma_{U}}\right]\] \[=\mathbb{E}_{x}\left[\sigma_{U}<t,[X_{\sigma_{U}-},X_{\sigma_{U} }]\subset D;p(t-\sigma_{U},X_{\sigma_{U}},y)-\hat{p}_{D}(t-\sigma_{U},X_{\sigma _{U}},y)\right],\]
so
\[I_{1}-I_{2}=\mathbb{E}_{x}[\sigma_{U}<t,[X_{\sigma_{U}-},X_{\sigma_{U}}] \subset D;\hat{p}_{D}(t-\sigma_{U},X_{\sigma_{U}},y)].\qed\]
The Hunt formula defines the kernel of the killed process and the kernel of the shot-down process in terms of \(p\). Here is a direct relationship between \(p_{D}\) and \(\hat{p}_{D}\).
**Lemma 2.19**.: For \(t>0\) and \(x,y\in\mathbb{R}^{d}\), the following equality holds
\[\hat{p}_{D}(t,x,y)=p_{D}(t,x,y)-\mathbb{E}_{x}\left[\sigma_{D}<t,X_{\sigma_{D} }\in D\setminus D_{X_{\sigma_{D}-}};p_{D}(t-\sigma_{D},X_{\sigma_{D}},y)\right].\]
Proof.: Combining definitions of \(p_{D}\) and \(\hat{p}_{D}\) for \(t>0\), \(x,y\in\mathbb{R}^{d}\), we get
\[p_{D}(t,x,y)-\hat{p}_{D}(t,x,y)=\mathbb{E}_{x}[\sigma_{D}<t;p(t-\sigma_{D},X_{ \sigma_{D}},y)]-\mathbb{E}_{x}[\tau_{D}<t;p(t-\tau_{D},X_{\tau_{D}},y)].\]
We have \(\sigma_{D}\leq\tau_{D}\) so the last expression equals
\[\mathbb{E}_{x}[\sigma_{D}<t,\sigma_{D}<\tau_{D};p(t-\sigma_{D},X_{\sigma_{D}},y )]-\mathbb{E}_{x}[\tau_{D}<t,\sigma_{D}<\tau_{D};p(t-\tau_{D},X_{\tau_{D}},y)].\]
We write \(\hat{r}^{D}(t,x,y)=\mathbb{E}_{x}[\tau_{D}<t;p(t-\tau_{D},X_{\tau_{D}},y)]\) to simplify the notation. Making use of the main part of the proof of [2, Proposition 2.3], we will show below that
\[\mathbb{E}_{x}[\tau_{D}<t,\sigma_{D}<\tau_{D};p(t-\tau_{D},X_{\tau_{D}},y)]= \mathbb{E}_{x}[\sigma_{D}<t,\sigma_{D}<\tau_{D};\hat{r}^{D}(t-\sigma_{D},X_{ \sigma_{D}},y)]. \tag{11}\]
Indeed, the strong Markov property yields
\[\mathbb{E}_{x} [\sigma_{D}<t,\sigma_{D}<\tau_{D};\hat{r}^{D}(t-\sigma_{D},X_{\sigma_{D}},y)]\] \[=\mathbb{E}_{x}\bigg{[}\sigma_{D}<t,\sigma_{D}<\tau_{D};\mathbb{E}_{X_{\sigma_{D}}}[\tau_{D}<t-s;p(t-s-\tau_{D},X_{\tau_{D}},y)]\big{|}_{s=\sigma_{D}}\bigg{]}\] \[=\mathbb{E}_{x}\bigg{[}\sigma_{D}<t,\,\sigma_{D}<\tau_{D},\tau_{D}\circ\Theta_{\sigma_{D}}+\sigma_{D}<t;p(t-\tau_{D}\circ\Theta_{\sigma_{D}}-\sigma_{D},X_{\tau_{D}}\circ\Theta_{\sigma_{D}},y)\bigg{]}.\]
On the set \(\sigma_{D}<\tau_{D}\) we have \(\tau_{D}\circ\Theta_{\sigma_{D}}+\sigma_{D}=\tau_{D}\) and \(X_{\tau_{D}}\circ\Theta_{\sigma_{D}}=X_{\tau_{D}}\), so simplifying the last expression we get
\[\mathbb{E}_{x}\left[\sigma_{D}<t,\,\sigma_{D}<\tau_{D};p(t-\tau_{D},X_{\tau_{D}},y)\right],\]
which ends the proof of (11). From (11) we obtain
\[p_{D}(t,x,y)-\hat{p}_{D}(t,x,y)=\mathbb{E}_{x}\left[\sigma_{D}<t,\,\sigma_{D} <\tau_{D};p_{D}(t-\sigma_{D},X_{\sigma_{D}},y)\right].\qed\]
Note that the condition \(\sigma_{D}<\tau_{D}\) can be written as \(X_{\sigma_{D}}\in D\setminus D_{X_{\sigma_{D}-}}\). The next (Duhamel) formula shows \(\hat{p}_{D}\) as a _nonlocal perturbation_ of \(p_{D}\), in the sense of [13].
**Corollary 2.20**.: For Lipschitz \(D\), \(x,y\in\mathbb{R}^{d}\) and \(t>0\), the following equality holds
\[\hat{p}_{D}(t,x,y)=p_{D}(t,x,y)-\int\limits_{0}^{t}\int\limits_{D}\int\limits _{D\setminus D_{w}}\hat{p}_{D}(s,x,w)\nu(z-w)p_{D}(t-s,z,y)\,dz\,dw\,ds.\]
Proof.: Remark 1.13 gives the joint probability distribution of \(\sigma_{D}\), \(X_{\sigma_{D}-}\) and \(X_{\sigma_{D}}\). Substituting this in Lemma 2.19, we obtain the statement.
### Incomparability of the heat kernels \(\hat{p}_{D}\) and \(p_{D}\)
For the remainder of the paper, we assume that \(D\) is a nonempty open bounded \(C^{1,1}\) set.
Recall that \(\hat{p}_{D}\leq p_{D}\). Sharp estimates of the Dirichlet heat kernel \(p_{D}\) are well known [14], see also [10]. It was our initial naive conjecture that \(\hat{p}_{D}\) is comparable to \(p_{D}\) in short time if \(D\) is connected. However, the next result shows that this is true only if \(D\) is convex; for nonconvex \(D\), there exist points \(x,y\in D\) such that \(\hat{p}_{D}(t,x,y)\) is much smaller than \(p_{D}(t,x,y)\) in short time.
**Theorem 2.21**.: \(\hat{p}_{D}\) _and \(p_{D}\) are comparable if and only if \(D\) is convex._
Proof.: If \(D\) is convex, then \(\tau_{D}=\sigma_{D}\) and \(p_{D}=\hat{p}_{D}\), and there is nothing to prove. If \(D\) is nonconvex, then we can find \(x,y\in D\) and \(Q\in D^{c}\) such that \(Q\in[x,y]\), i.e., \(Q=\lambda x+(1-\lambda)y\) for some \(0\leq\lambda\leq 1\). If \(\operatorname{dist}(Q,D)>0\), then \(r:=\operatorname{dist}(Q,D)\wedge\operatorname{dist}(x,D^{c})\wedge\operatorname{dist}(y,D^{c})>0\), and we consider the balls \(B(x,r),B(y,r)\subset D\). For all points \(z\in B(x,r)\), \(w\in B(y,r)\), we have \(\lambda z+(1-\lambda)w\in B(Q,r)\). The latter is needed below, but before we proceed, we also consider the case of \(Q\in\partial D\). In this case we translate \(Q\) slightly in the direction of the center of the exterior ball tangent to \(\partial D\) at \(Q\) (see Definition 1.1), and we translate \(x\) and \(y\) by the same vector, which reduces this case to the first one. To summarize: for a nonconvex \(C^{1,1}\) set \(D\) there are balls \(B(x,r),B(y,r)\subset D\) such that for all \(z\in B(x,r)\) and \(w\in B(y,r)\), the line segment \([z,w]\) intersects the interior of \(D^{c}\); in particular \(\operatorname{dist}(B(x,r),B(y,r))>0\), see Figure 2. Denote \(B=B(x,r)\). By Lemma 2.18,
\[\hat{p}_{D}(t,x,y)=\mathbb{E}_{x}[\tau_{B}<t,[X_{\tau_{B-}},X_{\tau_{B}}]\subset D; \hat{p}_{D}(t-\tau_{B},X_{\tau_{B}},y)]+p_{B}(t,x,y).\]
The second term above equals \(0\), because \(y\notin B\). The condition \([X_{\tau_{B}-},X_{\tau_{B}}]\subset D\) implies that \(X_{\tau_{B}}\notin B(y,r)\), i.e., \(|X_{\tau_{B}}-y|\geq r\). Thus,
\[\hat{p}_{D}(t,x,y)\leq\mathbb{P}_{x}(\tau_{B}<t)\sup_{\begin{subarray}{c}s<t\\ |z|>r\end{subarray}}p_{s}(z).\]
If \(0<s<t\leq 1\) and \(|z|>r\), then
\[p_{s}(z)\approx s^{-d/\alpha}\wedge\frac{s}{|z|^{d+\alpha}}\leq cs\leq ct.\]
Since \(\mathbb{P}_{x}(\tau_{B}<t)\) converges to \(0\) as \(t\to 0\), we get \(\hat{p}_{D}(t,x,y)=o(t)\). But for small \(t>0\) we have \(p_{D}(t,x,y)\approx t|x-y|^{-d-\alpha}\)[14, Theorem 1.1].
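Theorem 2.21 can be visualized with a crude finite-difference sketch, again ours and only qualitative. We take the annulus \(D=B(0,1)\setminus\overline{B}(0,1/2)\) in \(d=2\), a connected nonconvex \(C^{1,1}\) domain, put a square grid on it, and build two generator matrices: the killed process may jump between any two grid points of \(D\), while the shot-down process may use only visible pairs, and the exterior and invisible \(\nu\)-mass goes to the diagonal as killing. We truncate \(\nu\) at the grid scale and drop the constant \(\mathcal{A}_{d,-\alpha}\). The printed ratio at two points separated by the hole should decay as \(t\to 0\), in line with \(\hat{p}_{D}(t,x,y)=o(t)\) and \(p_{D}(t,x,y)\approx t|x-y|^{-d-\alpha}\).

```python
import numpy as np
from scipy.linalg import expm

alpha, h, Ri, Ro = 1.2, 0.08, 0.5, 1.0
grid = np.arange(-Ro + h / 2, Ro, h)
P = np.array([(a, b) for a in grid for b in grid if Ri < np.hypot(a, b) < Ro])
n = len(P)

# visibility: the segment [P_i, P_j] must miss the closed hole B(0, Ri)
diff = P[None, :, :] - P[:, None, :]
dist = np.linalg.norm(diff, axis=2)
np.fill_diagonal(dist, np.inf)
tpar = np.clip(-np.einsum("ik,ijk->ij", P, diff) / dist ** 2, 0.0, 1.0)
closest = np.linalg.norm(P[:, None, :] + tpar[..., None] * diff, axis=2)
visible = closest > Ri

T = lambda b: b ** (-alpha) / alpha         # tail integral of s^(-1-alpha)
m = 256
ang = 2 * np.pi * np.arange(m) / m
U = np.stack([np.cos(ang), np.sin(ang)], axis=1)

def rates(x):
    # kappa(x) and iota(x) by ray integrals (constant A_{2,-alpha} dropped)
    xu = U @ x
    d_in = xu ** 2 - (x @ x - Ri ** 2)
    root = np.sqrt(np.maximum(d_in, 0.0))
    s1, s2 = -xu - root, -xu + root          # entry/exit chord of the hole
    s_out = -xu + np.sqrt(xu ** 2 + Ro ** 2 - x @ x)
    chord = (d_in > 0) & (s1 > 0)
    s1s, s2s = np.where(chord, s1, 1.0), np.where(chord, s2, 1.0)
    kap = T(s_out) + np.where(chord, T(s1s) - T(s2s), 0.0)
    iot = np.where(chord, T(s1s), T(s_out))  # first exit from the visible set
    return kap.sum() * 2 * np.pi / m, iot.sum() * 2 * np.pi / m

nu = dist ** (-2 - alpha) * h * h
ki = np.array([rates(x) for x in P])
L_kill = nu - np.diag(nu.sum(axis=1) + ki[:, 0])
L_shot = np.where(visible, nu, 0.0)
L_shot -= np.diag(L_shot.sum(axis=1) + ki[:, 1])

i = int(np.argmin(np.linalg.norm(P - [0.75, 0.0], axis=1)))
j = int(np.argmin(np.linalg.norm(P - [-0.75, 0.0], axis=1)))
for t in (0.4, 0.1, 0.025):
    ratio = expm(t * L_shot)[i, j] / expm(t * L_kill)[i, j]
    print(f"t = {t:5.3f}   p_hat/p_D across the hole ~ {ratio:.2e}")
```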
## 3. The killing measures
Recall the killing intensities \(\kappa_{D}\) and \(\iota_{D}\), defined in the Introduction.
**Definition 3.1**.: _The killing intensity for \(D\) is \(\kappa_{D}(x)=\int\limits_{D^{c}}\nu(y-x)\,dy,\,\,\,x\in D.\)_
Given \(x\in D\), recall the definition \(D_{x}=\{y\in D:[x,y]\subset D\}\); we regard \(D_{x}\) as the set of points that are "visible in \(D\) from \(x\)": it contains precisely those points to which the process can jump from \(x\) without being shot down.
**Definition 3.2**.: _The shooting-down intensity for \(D\) is \(\iota_{D}(x)=\int\limits_{D_{x}^{c}}\nu(y-x)\,dy,\,\,\,x\in D.\)_
Clearly, \(\iota_{D}\geq\kappa_{D}>0\) and \(\iota_{D},\kappa_{D}\) are continuous on \(D\). Let
\[\delta_{D}(x)=\operatorname{dist}(x,D^{c}),\,\,\,\,\,x\in\mathbb{R}^{d}.\]
We will estimate the difference \(\iota_{D}-\kappa_{D}\) for \(C^{1,1}\) open sets \(D\) in terms of \(\delta_{D}\).
Let \(D\) be an open set in \(\mathbb{R}^{d}\) which is \(C^{1,1}\) at scale \(r>0\), and let \(x\in D\). Let \(Q\in\partial D\) be such that \(\delta_{D}(x)=|x-Q|\). Let \(I_{x}=B(x^{\prime},r)\) and \(O_{x}=B(x^{\prime\prime},r)\) be the inner and outer balls tangent to \(\partial D\) at \(Q\), see Figure 3. We have
\[\kappa_{D}(x)\leq\int\limits_{|y-x|>\delta_{D}(x)}\mathcal{A}|y-x|^{-d-\alpha}\,dy=c\delta_{D}(x)^{-\alpha},\]
Figure 2. Situation in the proof of Theorem 2.21
and
\[\kappa_{D}(x)\geq\int\limits_{O_{x}}\mathcal{A}|y-x|^{-d-\alpha}\,dy\geq c\delta_{D}(x)^{-\alpha}\quad\text{ if }\delta_{D}(x)\leq r.\]
Since \(D^{c}\subset D^{c}_{x}\), we have \(\kappa_{D}\leq\iota_{D}\), as noted before. Furthermore,
\[D^{c}_{x}\setminus D^{c}\subset I^{c}_{x}\setminus D^{c}=D\setminus I_{x} \subset O^{c}_{x}\setminus I_{x}=\mathbb{R}^{d}\setminus(O_{x}\cup I_{x}),\]
hence
(12) \[0\leq\iota_{D}(x)-\kappa_{D}(x)=\int\limits_{D_{x}^{c}\setminus D^{c}}\nu(y-x)\,dy\leq\int\limits_{\mathbb{R}^{d}\setminus(O_{x}\cup I_{x})}\mathcal{A}|y-x|^{-d-\alpha}\,dy,\]
and, choosing coordinates so that \(Q=0\), the inner normal at \(Q\) is \(e_{d}\), and \(x=(\tilde{0},x_{d})\) with \(x_{d}=\delta_{D}(x)\leq r\), it is clear that

\[\int\limits_{(\mathbb{R}^{d}\setminus(O_{x}\cup I_{x}))\cap\{y\in\mathbb{R}^{d}:|y|\geq r\sqrt{3}/2\}}\mathcal{A}|y-x|^{-d-\alpha}\,dy\leq cr^{-\alpha},\]

because \(|y-x|\geq cr\) on this set.
We get
\[\int\limits_{(\mathbb{R}^{d}\setminus(O_{x}\cup I_{x}))\cap\{y\in\mathbb{R}^{d}:|y|<r\sqrt{3}/2\}}\mathcal{A}|y-x|^{-d-\alpha}\,dy\leq c\int\limits_{\{\tilde{y}\in\mathbb{R}^{d-1}:|\tilde{y}|<r\}}\int_{0}^{|\tilde{y}|^{2}}\left(|\tilde{y}|^{2}+y_{d}^{2}+x_{d}^{2}\right)^{\frac{-d-\alpha}{2}}dy_{d}\,d\tilde{y},\]

because this part of \(\mathbb{R}^{d}\setminus(O_{x}\cup I_{x})\) lies within distance comparable to \(|\tilde{y}|^{2}/r\) of the tangent hyperplane \(\{y_{d}=0\}\), and \(|y-x|^{2}\geq c\left(|\tilde{y}|^{2}+y_{d}^{2}+x_{d}^{2}\right)\) there.
Further, by cylindrical coordinates,
\[\int\limits_{\{\tilde{y}\in\mathbb{R}^{d-1}:|\tilde{y}|<r\}}\int_{0}^{|\tilde{y}|^{2}}\left(|\tilde{y}|^{2}+y_{d}^{2}+x_{d}^{2}\right)^{\frac{-d-\alpha}{2}}dy_{d}\,d\tilde{y} =c\int_{0}^{r}\int_{0}^{\rho^{2}}\rho^{d-2}\left(\rho^{2}+y_{d}^{2}+x_{d}^{2}\right)^{\frac{-d-\alpha}{2}}dy_{d}\,d\rho\] \[\leq c\int_{0}^{r}\rho^{d-2}\rho^{2}\left(\rho^{2}+x_{d}^{2}\right)^{\frac{-d-\alpha}{2}}d\rho =c\int_{0}^{x_{d}}\rho^{d}\left(\rho^{2}+x_{d}^{2}\right)^{\frac{-d-\alpha}{2}}d\rho+c\int_{x_{d}}^{r}\rho^{d}\left(\rho^{2}+x_{d}^{2}\right)^{\frac{-d-\alpha}{2}}d\rho.\]
For \(\alpha\neq 1\), we estimate the first integral by \(\int_{0}^{x_{d}}\rho^{d}(x_{d}^{2})^{\frac{-d-\alpha}{2}}d\rho=cx_{d}^{1-\alpha}\), and the second integral by \(\int_{x_{d}}^{r}\rho^{d}(\rho^{2})^{\frac{-d-\alpha}{2}}d\rho=c\int_{x_{d}}^{r}\rho^{-\alpha}d\rho\), which is at most \(cx_{d}^{1-\alpha}\) for \(\alpha>1\) and at most \(cr^{1-\alpha}\) for \(\alpha<1\). Similarly, for \(\alpha=1\), the first integral may be bounded by \(\int_{0}^{x_{d}}\rho^{d}(x_{d}^{2})^{\frac{-d-\alpha}{2}}d\rho\leq c\), and the second integral by \(\int_{x_{d}}^{r}\rho^{d}(\rho^{2})^{\frac{-d-\alpha}{2}}d\rho=c(\log r-\log x_{d})\). Therefore, again,
\[\iota_{D}(x)-\kappa_{D}(x)\leq c\begin{cases}\delta_{D}(x)^{-\alpha+1},&\alpha>1,\\ \log\left(e+\frac{1}{\delta_{D}(x)}\right),&\alpha=1,\\ 1,&\alpha<1.\end{cases} \tag{15}\]

Since \(\kappa_{D}(x)\approx\delta_{D}(x)^{-\alpha}\) for \(\delta_{D}(x)\leq r\), the right-hand side of (15) is dominated by \(\kappa_{D}(x)\) near the boundary. For \(\delta_{D}(x)>r\) we note that \(B(x,\delta_{D}(x))\subset D_{x}\), so \(\iota_{D}(x)\leq cr^{-\alpha}\), while \(\kappa_{D}\) is bounded below on \(D\) because \(D\) is bounded. This yields the comparability used repeatedly in what follows.

**Theorem 3.3**.: _Let \(D\) be a bounded \(C^{1,1}\) open set in \(\mathbb{R}^{d}\). Then \(\kappa_{D}(x)\leq\iota_{D}(x)\leq C\kappa_{D}(x)\) for \(x\in D\)._
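The bound (15) can also be probed numerically. In the deterministic sketch below, ours again, we take the annulus \(B(0,1)\setminus\overline{B}(0,1/2)\) in \(d=2\) and integrate \(\nu(y-x)\) over the invisible set \(D\setminus D_{x}=D_{x}^{c}\setminus D^{c}\) in polar coordinates around \(x\), dropping \(\mathcal{A}_{d,-\alpha}\), for \(x\) approaching the inner boundary. Since (15) is only an upper bound, the printed ratio \((\iota_{D}-\kappa_{D})/\delta_{D}^{1-\alpha}\) should stay bounded as \(\delta_{D}\downarrow 0\); it need not stay bounded away from zero for this particular domain.

```python
import numpy as np

alpha, Ri, Ro = 1.5, 0.5, 1.0
m, ds = 720, 0.002
ang = 2 * np.pi * np.arange(m) / m
U = np.stack([np.cos(ang), np.sin(ang)], axis=1)   # ray directions
s = np.arange(ds / 2, 2 * Ro, ds)                  # radii around x

for delta in (0.2, 0.1, 0.05, 0.025):
    x = np.array([Ri + delta, 0.0])                # delta_D(x) = delta here
    total = 0.0
    for u in U:
        y = x[None, :] + s[:, None] * u[None, :]
        r = np.linalg.norm(y, axis=1)
        in_D = (r > Ri) & (r < Ro)
        tt = np.clip(-(x @ u), 0.0, s)             # closest-point parameter
        blocked = np.linalg.norm(x[None, :] + tt[:, None] * u[None, :],
                                 axis=1) <= Ri     # segment [x, y] meets hole
        total += np.sum((in_D & blocked) * s ** (-1 - alpha)) * ds
    total *= 2 * np.pi / m
    print(f"delta = {delta:5.3f}   iota-kappa ~ {total:7.3f}   "
          f"ratio to delta^(1-alpha): {total * delta ** (alpha - 1):6.3f}")
```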
## 4. Quadratic forms
From now on we assume that \(D\) is a bounded \(C^{1,1}\) set in \(\mathbb{R}^{d}\). As usual, we let \(P_{t}^{D}f(x)=\int_{\mathbb{R}^{d}}p_{D}(t,x,y)f(y)\,dy\) and \(\hat{P}_{t}^{D}f(x)=\int_{\mathbb{R}^{d}}\hat{p}_{D}(t,x,y)f(y)\,dy\), \(t>0\), \(x\in\mathbb{R}^{d}\). It is well known that \(\{P_{t}^{D}\}_{t>0}\) is a strongly continuous contraction semigroup on \(L^{2}(D)\). Let \(f\in L^{2}(D)\), that is \(f\in L^{2}(\mathbb{R}^{d})\) and \(f=0\) on \(D^{c}\). For \(t>0\), we define as usual,
\[\mathcal{E}_{t}^{D}[f]=\frac{1}{t}\langle f-P_{t}^{D}f,f\rangle=\frac{1}{t} \langle f,f\rangle-\frac{1}{t}\langle P_{t/2}^{D}f,P_{t/2}^{D}f\rangle. \tag{16}\]
Clearly, \(\mathcal{E}_{t}^{D}[f]\geq 0\). Let \(E_{\lambda}\) be the spectral family of projections corresponding to (the generator of) \(P_{t}^{D}\)[20]. Then
\[\mathcal{E}_{t}^{D}[f]=\int_{[0,\infty)}\frac{1}{t}(1-e^{-\lambda t})\,d[E_{ \lambda}f,f]=\int_{[0,\infty)}\int_{0}^{\lambda}e^{-\mu t}d\mu\,d[E_{\lambda}f,f]\]
is finite and nonincreasing in \(t\), as is well known, see [20, Lemma 1.3.4]. We define
\[\mathcal{E}^{D}[f]=\sup_{t>0}\mathcal{E}^{D}_{t}[f]=\lim_{t\to 0^{+}} \mathcal{E}^{D}_{t}[f],\]
the Dirichlet form of the killed process. It is well known that \(\mathcal{E}^{D}[f]=\mathcal{E}^{\mathbb{R}^{d}}[f]\). Since \(p(t,x,y)=p(t,y,x)\), \(\int p(t,x,y)\,dy=1\) and \(p(t,x,y)/t\leq c\nu(y-x)\) by (2), we get
\[\mathcal{E}^{D}[f] =\mathcal{E}^{\mathbb{R}^{d}}[f]=\lim_{t\to 0^{+}}\frac{1}{t}\int_{ \mathbb{R}^{d}}\int_{\mathbb{R}^{d}}p(t,x,y)f(x)[f(x)-f(y)]\,dy\,dx\] \[=\lim_{t\to 0^{+}}\frac{1}{2}\int_{\mathbb{R}^{d}}\int_{ \mathbb{R}^{d}}[f(y)-f(x)]^{2}p(t,x,y)/t\,dy\,dx \tag{17}\] \[=\frac{1}{2}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}[f(y)-f(x) ]^{2}\nu(y-x)\,dy\,dx.\]
Since \(f=0\) on \(D^{c}\), we get the following Hardy-type inequality
\[\mathcal{E}^{D}[f]\geq 2\frac{1}{2}\int_{D}\int_{D^{c}}f(x)^{2}\nu(y-x)\,dy\, dx=\int_{D}f(x)^{2}\kappa_{D}(x)\,dx,\quad f\in L^{2}(D). \tag{18}\]
**Lemma 4.1**.: \(\{\hat{P}^{D}_{t}\}_{t>0}\) is a strongly continuous contraction semigroup on \(L^{2}(D)\).
Proof.: The contractivity is trivial: by Corollary 2.17 and Jensen's inequality, for \(f\geq 0\),
\[\int_{\mathbb{R}^{d}}\left(\int_{\mathbb{R}^{d}}\hat{p}_{D}(t,x,y )f(y)\,dy\right)^{2}dx \leq\int_{\mathbb{R}^{d}}\left(\int_{\mathbb{R}^{d}}p(t,x,y)f(y) \,dy\right)^{2}dx\] \[\leq\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}p(t,x,y)f(y)^{2}\, dy\,dx=\|f\|_{L^{2}(D)}^{2}.\]
For general \(f\in L^{2}(D)\), we write \(f=f_{+}-f_{-}\), where \(f_{+}=f\lor 0\) and \(f_{-}=(-f)\lor 0\), and we note that \(\|f\|_{L^{2}(D)}^{2}=\|f_{+}\|_{L^{2}(D)}^{2}+\|f_{-}\|_{L^{2}(D)}^{2}=\|\,|f| \,\|_{L^{2}(D)}^{2}\), while \(\left|\int_{\mathbb{R}^{d}}\hat{p}_{D}(t,x,y)f(y)\,dy\right|\leq\int_{\mathbb{ R}^{d}}\hat{p}_{D}(t,x,y)|f(y)|\,dy\). This extends the contractivity to arbitrary (signed) \(f\in L^{2}(D)\). Such extensions will be used tacitly in what follows. The semigroup property follows from Chapman-Kolmogorov equations for \(\hat{p}_{D}\).
For \(t>0\) and nonnegative \(f,g\in L^{2}(D)\), by Corollary 2.20, Theorem 2.8, Theorem 3.3, the inequality \(ab\leq\frac{1}{2}\left(a^{2}+b^{2}\right)\), Hardy inequality (18), and strong continuity
of the semigroup \(P_{t}^{D}\), we obtain
\[0\leq\langle P_{t}^{D}f-\hat{P}_{t}^{D}f,g\rangle_{L^{2}(D)}\] \[=\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\int_{0}^{t}\int_{D} \int_{D\setminus D_{w}}\hat{p}_{D}(s,x,w)\nu(z-w)p_{D}(t-s,z,y)f(y)g(x)\,dz\,dw \,ds\,dx\,dy\] \[=\int_{0}^{t}\int_{D}\int_{D\setminus D_{w}}\hat{P}_{s}^{D}g(w) \nu(z-w)P_{t-s}^{D}f(z)\,dz\,dw\,ds\] \[\leq\frac{1}{2}\int_{0}^{t}\int_{D}\int_{D\setminus D_{w}}\left( \hat{P}_{s}^{D}g(w)^{2}+P_{t-s}^{D}f(z)^{2}\right)\nu(z-w)\,dz\,dw\,ds\] \[\leq c\int_{0}^{t}\int_{D}P_{s}^{D}g(w)^{2}\kappa_{D}(w)\,dw\,ds+c \int_{0}^{t}\int_{D}P_{t-s}^{D}f(z)^{2}\kappa_{D}(z)\,dz\,ds\] \[\leq c\int_{0}^{t}\mathcal{E}^{D}[P_{s}^{D}g]\,ds+c\int_{0}^{t} \mathcal{E}^{D}[P_{s}^{D}f]\,ds\] \[=c\left(\|f\|_{L^{2}(D)}^{2}-\|P_{t}^{D}f\|_{L^{2}(D)}^{2}+\|g\|_ {L^{2}(D)}^{2}-\|P_{t}^{D}g\|_{L^{2}(D)}^{2}\right)\to 0\quad\text{as}\quad t\to 0.\]
It follows that \(\hat{P}_{t}^{D}=P_{t}^{D}-(P_{t}^{D}-\hat{P}_{t}^{D})\) is weakly, hence strongly continuous on \(L^{2}(D)\), see [19, Theorem 1.6].
By Section 2.5 the heat kernels of \(X\) and \(\hat{X}\) are not comparable in general. In this section, however, we will show that their quadratic forms are comparable.
The Dirichlet form of the shot-down process is defined in the usual fashion: we let
\[\hat{\mathcal{E}}_{t}^{D}[f]=\frac{1}{t}\langle f-\hat{P}_{t}^{D}f,f\rangle= \frac{1}{t}\langle f,f\rangle-\frac{1}{t}\langle\hat{P}_{t/2}^{D}f,\hat{P}_{t/ 2}^{D}f\rangle \tag{19}\]
and consider the (monotone) limit
\[\hat{\mathcal{E}}^{D}[f]=\lim_{t\to 0^{+}}\hat{\mathcal{E}}_{t}^{D}[f]=\sup_{t>0} \hat{\mathcal{E}}_{t}^{D}[f].\]
The domains of the considered forms \(\mathcal{E}^{D}\) and \(\hat{\mathcal{E}}^{D}\) are, respectively,
\[\mathcal{D}(\mathcal{E}^{D})=\{f\in L^{2}(D):\mathcal{E}^{D}[f]<\infty\}, \qquad\mathcal{D}(\hat{\mathcal{E}}^{D})=\{f\in L^{2}(D):\hat{\mathcal{E}}^{D} [f]<\infty\}.\]
They are linear subspaces of \(L^{2}(D)\) because of the Cauchy-Schwarz inequality, e.g.,
\[\hat{\mathcal{E}}_{t}^{D}(f,g):=\frac{1}{t}\langle f-\hat{P}_{t}^{D}f,g\rangle \leq\hat{\mathcal{E}}_{t}^{D}[f]^{1/2}\hat{\mathcal{E}}_{t}^{D}[g]^{1/2},\quad t >0,\quad f,g\in L^{2}(D). \tag{20}\]
Then, \(\mathcal{D}(\hat{\mathcal{E}}^{D})\subset\mathcal{D}(\mathcal{E}^{D})\). Indeed, if \(f\in\mathcal{D}(\hat{\mathcal{E}}^{D})\), then the following expression is bounded in \(t>0\):
\[\hat{\mathcal{E}}_{t}^{D}[f] =\frac{1}{t}\left(\langle f,f\rangle-\langle\hat{P}_{t}^{D}f,f \rangle\right)\] \[=\frac{1}{t}\big{(}\langle f_{+},f_{+}\rangle-\langle\hat{P}_{t}^ {D}f_{+},f_{+}\rangle+\langle f_{-},f_{-}\rangle-\langle\hat{P}_{t}^{D}f_{-},f_ {-}\rangle+\langle\hat{P}_{t}^{D}f_{+},f_{-}\rangle+\langle\hat{P}_{t}^{D}f_{- },f_{+}\rangle\big{)},\]
Since the last two terms are nonnegative, \(\hat{\mathcal{E}}_{t}^{D}[f_{+}]\) is bounded in \(t>0\). Thus \(f_{+}\in\mathcal{D}(\hat{\mathcal{E}}^{D})\). Since \(f_{+}\geq 0\) and \(\hat{p}_{D}\leq p_{D}\), by (16) and (19) we get \(\hat{\mathcal{E}}_{t}^{D}[f_{+}]\geq\mathcal{E}_{t}^{D}[f_{+}]\), so \(f_{+}\in\mathcal{D}(\mathcal{E}^{D})\). Similarly, \(f_{-}\in\mathcal{D}(\mathcal{E}^{D})\), hence \(f\in\mathcal{D}(\mathcal{E}^{D})\), as needed. Here is the main result of this section.
**Theorem 4.2**.: _Let \(D\subset\mathbb{R}^{d}\) be an open bounded \(C^{1,1}\) set, \(\alpha\in(0,2)\), and \(\nu\) as in (1). For \(f\in\mathcal{D}(\mathcal{E}^{D})\),_
\[\hat{\mathcal{E}}^{D}[f]=\mathcal{E}^{D}[f]+\int_{D}\int_{D\setminus D_{w}}f(w )\nu(z-w)f(z)\,dz\,dw. \tag{21}\]
_Furthermore, \(\int_{D}\int_{D\setminus D_{w}}|f(w)\nu(z-w)f(z)|\,dz\,dw\leq c\,\mathcal{E}^{D}[f]<\infty\) and \(\mathcal{D}(\hat{\mathcal{E}}^{D})=\mathcal{D}(\mathcal{E}^{D})\)._
Proof.: Let \(f\in\mathcal{D}(\mathcal{E}^{D})\) and \(t>0\). We have \(\hat{\mathcal{E}}^{D}_{t}[f]=\mathcal{E}^{D}_{t}[f]+\frac{1}{t}\langle P_{t}f- \hat{P}_{t}f,f\rangle\). For now, we omit the factor \(\frac{1}{t}\). Then the second term of the above sum and Corollary 2.20 yield
\[\langle P^{D}_{t}f-\hat{P}^{D}_{t}f,f\rangle=\int_{\mathbb{R}^{d}}\int_{ \mathbb{R}^{d}}\left(p_{D}(t,x,y)-\hat{p}_{D}(t,x,y)\right)f(y)f(x)\,dy\,dx:=I _{1}-I_{2},\]
where
\[I_{1} =\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\int_{0}^{t}\int_{D} \int_{D\setminus D_{w}}p_{D}(s,x,w)\nu(z-w)p_{D}(t-s,z,y)f(y)f(x)\,dz\,dw\,ds \,dx\,dy\] \[=\int_{0}^{t}\int_{D}\int_{D\setminus D_{w}}\nu(z-w)P^{D}_{s}f(w )P^{D}_{t-s}f(z)\,dz\,dw\,ds,\]
and
\[I_{2} =\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}\int_{0}^{t}\int_{D} \int_{D\setminus D_{w}}\int_{0}^{s}\int_{D}\int_{D\setminus D_{w^{\prime}}} \hat{p}_{D}(s^{\prime},x,w^{\prime})\nu(z^{\prime}-w^{\prime})p_{D}(s-s^{ \prime},z^{\prime},w)\nu(z-w)\] \[\quad\times p_{D}(t-s,z,y)f(y)f(x)\,dz^{\prime}\,dw^{\prime}\,ds ^{\prime}\,dz\,dw\,ds\,dx\,dy.\]
Since \(\hat{p}_{D}(t,u,v)\) and \(p_{D}(t,u,v)\) are symmetric in the space variables,
\[I_{2} =\int_{0}^{t}\int_{D}\int_{D\setminus D_{w}}\int_{0}^{s}\int_{D} \int_{D\setminus D_{w^{\prime}}}\nu(z^{\prime}-w^{\prime})p_{D}(s-s^{\prime}, z^{\prime},w)\nu(z-w)\] \[\quad\times\hat{P}^{D}_{s^{\prime}}f(w^{\prime})P^{D}_{t-s}f(z)\, dz^{\prime}\,dw^{\prime}\,ds^{\prime}\,dz\,dw\,ds.\]
Then,
\[2|I_{2}| \leq\int_{0}^{t}\int_{D}\int_{D\setminus D_{w}}\int_{0}^{s}\int_{ D}\int_{D\setminus D_{w^{\prime}}}\nu(z^{\prime}-w^{\prime})p_{D}(s-s^{\prime}, z^{\prime},w)\nu(z-w)\] \[\quad\times\left(P^{D}_{s^{\prime}}f(w^{\prime})^{2}+P^{D}_{t-s}f (z)^{2}\right)\,dz^{\prime}\,dw^{\prime}\,ds^{\prime}\,dz\,dw\,ds=K_{1}+K_{2} =2K_{1}.\]
To justify the latter equality, namely \(K_{1}=K_{2}\), we change variables \(t-s=u^{\prime}\), \(t-s^{\prime}=u\), use the symmetry of \(\nu\), the symmetry of \(p_{D}\) in the space variables and the equivalence: \(z\in D\setminus D_{w}\) if and only if \(w\in D\setminus D_{z}\).
Since \(|P^{D}_{t}f|\leq P^{D}_{t}|f|\), \(|\hat{P}^{D}_{t}f|\leq\hat{P}^{D}_{t}|f|\) and \(|f|\in\mathcal{D}(\mathcal{E}^{D})\) by (17), to show that \(K_{1}\) is small, we may and do assume that \(f\geq 0\). Denote \(F_{D}(w)=\iota_{D}(w)-\kappa_{D}(w)\) and consider the following factor of the above integral
\[L=L(s-s^{\prime},z^{\prime}):=\int_{D}\int_{D\setminus D_{w}}p_{D}(s-s^{ \prime},z^{\prime},w)\nu(z-w)\,dz\,dw=\int_{D}p_{D}(s-s^{\prime},z^{\prime},w )F_{D}(w)\,dw.\]
It is well known [14, Theorem 1.1] that for any \(T>0\) and all \(t\leq T,x,y\in D\),
\[p_{D}(t,x,y)\approx\left(1\wedge\frac{\delta_{D}(x)^{\alpha/2}}{\sqrt{t}}\right) \left(1\wedge\frac{\delta_{D}(y)^{\alpha/2}}{\sqrt{t}}\right)\left(t^{-d/ \alpha}\wedge\frac{t}{|x-y|^{d+\alpha}}\right)\]
with comparability constant depending only on \(\alpha,\,D\) and \(T\). Thus, for small \(t\), \(L/\left(1\wedge\frac{\delta_{D}(z^{\prime})^{\alpha/2}}{\sqrt{s-s^{\prime}}}\right)\) is comparable to
\[I_{3} :=\int_{D}\left(1\wedge\frac{\delta_{D}(w)^{\alpha/2}}{\sqrt{s-s^{ \prime}}}\right)\left((s-s^{\prime})^{-d/\alpha}\wedge\frac{s-s^{\prime}}{|z^{ \prime}-w|^{d+\alpha}}\right)F_{D}(w)\,dw\] \[=\int_{\delta_{D}(w)>(s-s^{\prime})^{1/\alpha}}\left((s-s^{ \prime})^{-d/\alpha}\wedge\frac{s-s^{\prime}}{|z^{\prime}-w|^{d+\alpha}} \right)F_{D}(w)\,dw\] \[\quad+\int_{\delta_{D}(w)\leq(s-s^{\prime})^{1/\alpha}}\frac{ \delta_{D}(w)^{\alpha/2}}{\sqrt{s-s^{\prime}}}\left((s-s^{\prime})^{-d/\alpha }\wedge\frac{s-s^{\prime}}{|z^{\prime}-w|^{d+\alpha}}\right)F_{D}(w)\,dw\] \[=\int_{\delta_{D}(w)>(s-s^{\prime})^{1/\alpha},\,|z^{\prime}-w|>( s-s^{\prime})^{1/\alpha}}\frac{s-s^{\prime}}{|z^{\prime}-w|^{d+\alpha}}F_{D}(w) \,dw\] \[\quad+\int_{\delta_{D}(w)>(s-s^{\prime})^{1/\alpha},\,|z^{\prime}- w|\leq(s-s^{\prime})^{1/\alpha}}(s-s^{\prime})^{-d/\alpha}F_{D}(w)\,dw\] \[\quad+\int_{\delta_{D}(w)\leq(s-s^{\prime})^{1/\alpha},\,|z^{ \prime}-w|>(s-s^{\prime})^{1/\alpha}}\frac{\delta_{D}(w)^{\alpha/2}}{\sqrt{s-s ^{\prime}}}\frac{s-s^{\prime}}{|z^{\prime}-w|^{d+\alpha}}F_{D}(w)\,dw\] \[\quad+\int_{\delta_{D}(w)\leq(s-s^{\prime})^{1/\alpha},\,|z^{ \prime}-w|\leq(s-s^{\prime})^{1/\alpha}}\frac{\delta_{D}(w)^{\alpha/2}}{\sqrt{ s-s^{\prime}}}(s-s^{\prime})^{-d/\alpha}F_{D}(w)\,dw.\]
Let \(\alpha>1\). Then, by (15), \(F_{D}(w)\lesssim\delta_{D}(w)^{1-\alpha}\), hence, \(I_{3}\) does not exceed a multiple of
\[\int_{\delta_{D}(w)>(s-s^{\prime})^{1/\alpha},\,|z^{\prime}-w|>( s-s^{\prime})^{1/\alpha}}\frac{(s-s^{\prime})^{1/\alpha}}{|z^{\prime}-w|^{d+ \alpha}}\,dw\] \[+\int_{\delta_{D}(w)>(s-s^{\prime})^{1/\alpha},\,|z^{\prime}-w| \leq(s-s^{\prime})^{1/\alpha}}(s-s^{\prime})^{1/\alpha-1-d/\alpha}\,dw\] \[+\int_{\delta_{D}(w)\leq(s-s^{\prime})^{1/\alpha},\,|z^{\prime}- w|>(s-s^{\prime})^{1/\alpha}}\frac{(s-s^{\prime})^{1/\alpha}}{|z^{\prime}-w|^{d+ \alpha}}\,dw\] \[+\int_{\delta_{D}(w)\leq(s-s^{\prime})^{1/\alpha},\,|z^{\prime}- w|\leq(s-s^{\prime})^{1/\alpha}}(s-s^{\prime})^{1/\alpha-1-d/\alpha}\,dw\lesssim(s-s^{ \prime})^{1/\alpha-1}.\]
If \(\alpha=1\), then by (15), we similarly obtain that \(I_{3}\lesssim\log\left(e+1/(s-s^{\prime})\right)\), and if \(\alpha<1\), then \(I_{3}\) is bounded, because \(F_{D}\) is bounded and \(p_{D}\) is a subprobability density.
Summarizing,
\[I_{3}\leq cg(s-s^{\prime}), \tag{22}\]
where, for \(s>0\),
\[g(s):=\begin{cases}s^{1/\alpha-1},\,\alpha>1,\\ \log\left(e+\frac{1}{s}\right),\,\alpha=1,\\ 1,\,\alpha<1.\end{cases}\]
By (15), Hardy inequality (18) and (22) we have
\[K_{1} \lesssim\int_{0}^{t}\int_{0}^{s}\int_{D}\int_{D\setminus D_{w^{ \prime}}}g(s-s^{\prime})\left(1\wedge\frac{\delta_{D}(z^{\prime})^{\alpha/2}}{ \sqrt{s-s^{\prime}}}\right)\nu(z^{\prime}-w^{\prime})P_{s^{\prime}}^{D}f(w^{ \prime})^{2}\,dz^{\prime}\,dw^{\prime}\,ds^{\prime}\,ds\] \[\leq\int_{0}^{t}\int_{0}^{s}\int_{D}\int_{D\setminus D_{w^{ \prime}}}g(s-s^{\prime})\nu(z^{\prime}-w^{\prime})P_{s^{\prime}}^{D}f(w^{ \prime})^{2}\,dz^{\prime}\,dw^{\prime}\,ds^{\prime}\,ds\] \[\lesssim\int_{0}^{t}\int_{0}^{s}\int_{D}g(s-s^{\prime})\kappa_{D }(w^{\prime})P_{s^{\prime}}^{D}f(w^{\prime})^{2}\,dw^{\prime}\,ds^{\prime}\,ds\] \[\leq\int_{0}^{t}\int_{0}^{s}g(s-s^{\prime})\mathcal{E}^{D}[P_{s^ {\prime}}^{D}f]\,ds^{\prime}\,ds.\]
We easily see that the function \(G(s):=\int_{0}^{s}g(s^{\prime})\,ds^{\prime}\) is finite, nonnegative and bounded for small \(s\), with \(G(0)=0\), hence, by the monotonicity of \(s\mapsto\mathcal{E}^{D}[P_{s}^{D}f]\), we get
\[K_{1}\leq\int_{0}^{t}G(s)\sup_{s^{\prime}\in[0,s]}\mathcal{E}^{D}[P_{s^{\prime }}^{D}f]\,ds=\mathcal{E}^{D}[f]\int_{0}^{t}G(s)\,ds=o(t),\,\text{as}\,\,\,t\to 0^{+}.\]
We conclude that \(\frac{1}{t}I_{2}\to 0\), as \(t\to 0^{+}\).
We now let \(f\in\mathcal{D}(\mathcal{E}^{D})\) be arbitrary, that is, not necessarily nonnegative, and focus on the integral
\[I_{1}=\int_{0}^{t}\int_{D}\int_{D\setminus D_{w}}P_{s}^{D}f(w)\nu(z-w)P_{t-s} ^{D}f(z)\,dz\,dw\,ds.\]
It is finite for nonnegative, hence arbitrary \(f\in\mathcal{D}(\mathcal{E}^{D})\). Indeed, by Theorem 3.3, Cauchy-Schwarz inequality, Hardy inequality (18) and spectral theorem,
\[I_{1} \leq\sqrt{\int_{0}^{t}\int_{D}\int_{D\setminus D_{w}}P_{s}^{D}f(w )^{2}\nu(z-w)\,dz\,dw\,ds}\sqrt{\int_{0}^{t}\int_{D}\int_{D\setminus D_{w}}P_ {t-s}^{D}f(z)^{2}\nu(z-w)\,dz\,dw\,ds}\] \[\lesssim\sqrt{\int_{0}^{t}\int_{D}P_{s}^{D}f(w)^{2}\kappa_{D}(w) \,dw\,ds}\sqrt{\int_{0}^{t}\int_{D}P_{t-s}^{D}f(z)^{2}\kappa_{D}(z)\,dz\,ds}\] \[\leq\int_{0}^{t}\mathcal{E}^{D}[P_{s}^{D}f]\,ds=\|f\|_{L^{2}(D)}^ {2}-\|P_{t}^{D}f\|_{L^{2}(D)}^{2}<\infty.\]
Moreover, \(\{P_{t}^{D}\}_{t>0}\) is a semigroup of contractions, so by (18),
\[\left|\frac{1}{t}I_{1}-\int_{D}\int_{D\setminus D_{w}}f(w)\nu(z-w )f(z)\,dz\,dw\right|\] \[\leq\frac{1}{t}\int_{0}^{t}\int_{D}\int_{D\setminus D_{w}}\nu(z- w)\left|P_{s}^{D}f(w)P_{t-s}^{D}f(z)-f(w)f(z)\right|\,dz\,dw\,ds\] \[\leq\frac{1}{t}\int_{0}^{t}\int_{D}\int_{D\setminus D_{w}}\nu(z -w)\left|P_{s}^{D}f(w)-f(w)\right|\left|P_{t-s}^{D}f(z)\right|dz\,dw\,ds\]
\[+\frac{1}{t}\int_{0}^{t}\int_{D}\int_{D\setminus D_{w}}\nu(z-w) \left|P_{t-s}^{D}f(z)-f(z)\right|\left|f(w)\right|dz\,dw\,ds\] \[\leq\frac{1}{t}\int_{0}^{t}\sqrt{\int_{D}\int_{D\setminus D_{w}} \hskip-14.226378pt\nu(z-w)\left|P_{s}^{D}f(w)-f(w)\right|^{2}\,dz\,dw}\sqrt{ \int_{D}\int_{D\setminus D_{w}}\hskip-14.226378pt\nu(z-w)|P_{t-s}^{D}f(z)|^{2} \,dz\,dw}\,ds\] \[\quad+\frac{1}{t}\int_{0}^{t}\sqrt{\int_{D}\int_{D\setminus D_{w} }\hskip-14.226378pt\nu(z-w)\left|P_{t-s}^{D}f(z)-f(z)\right|^{2}\,dz\,dw}\sqrt{ \int_{D}\int_{D\setminus D_{w}}\hskip-14.226378pt\nu(z-w)|f(w)|^{2}\,dz\,dw}\,ds\] \[\lesssim\frac{1}{t}\int_{0}^{t}\sqrt{\mathcal{E}^{D}[P_{s}^{D}f- f]\mathcal{E}^{D}[P_{t-s}f]}\,ds+\frac{1}{t}\int_{0}^{t}\sqrt{\mathcal{E}^{D}[P _{t-s}^{D}f-f]\mathcal{E}^{D}[f]}\,ds\to 0,\]
as \(t\to 0^{+}\), because \(s\mapsto\mathcal{E}^{D}[P_{s}^{D}f-f]\) is continuous and vanishes at \(s=0\), while \(\mathcal{E}^{D}[P_{t-s}^{D}f]\leq\mathcal{E}^{D}[f]\).
This proves (21). We also note that by Theorem 3.3 and the Hardy inequality (18),
\[\int_{D}\int_{D\setminus D_{w}}\left|f(w)\nu(z-w)f(z)\right|dz\,dw \leq\frac{1}{2}\int_{D}\int_{D\setminus D_{w}}(f(w)^{2}+f(z)^{2})\nu(z-w)\,dz \,dw\] \[\leq c\int_{D}f(w)^{2}\kappa_{D}(w)\,dw\leq c\mathcal{E}^{D}[f]<\infty.\qed \tag{23}\]
Here and below, for \(w\in D\) and \(A\subset\mathbb{R}^{d}\), we let \(\nu(w,A)=\int_{A}\nu(z-w)dz\). We next propose and verify the following alternative to (21):
\[\hat{\mathcal{E}}^{D}[f]=\frac{1}{2}\iint\limits_{[z,w]\subset D}(f(w)-f(z))^{2}\nu(z-w)\,dw\,dz+\int_{D}f^{2}(w)\nu(w,D_{w}^{c})\,dw. \tag{24}\]
The first term on the right-hand side of (24) can be thought of as coming from jumps, and the second is due to shooting down. To prove (24) we argue as follows. Since \(\mathbb{R}^{d}=D\cup D^{c}\) and \(f=0\) on \(D^{c}\),
\[\mathcal{E}^{D}[f]=\frac{1}{2}\int_{D}\int_{D}(f(w)-f(z))^{2}\nu(z-w)\,dw\,dz+ \int_{D}f^{2}(w)\nu(w,D^{c})\,dw.\]
Therefore, by (21),
\[\hat{\mathcal{E}}^{D}[f] =\frac{1}{2}\int_{D}\int_{D}(f(w)-f(z))^{2}\nu(z-w)\,dw\,dz+\int_ {D}f^{2}(w)\nu(w,D^{c})\,dw\] \[\quad+\int_{D}\int_{D\setminus D_{w}}f(w)f(z)\nu(z-w)\,dz\,dw\] \[=\frac{1}{2}\int_{D}\int_{D_{w}}(f(w)-f(z))^{2}\nu(z-w)\,dw\,dz+ \frac{1}{2}\int_{D}\int_{D\setminus D_{w}}(f(w)-f(z))^{2}\nu(z-w)\,dw\,dz\] \[\quad+\int_{D}f^{2}(w)\nu(w,D^{c})\,dw+\int_{D}\int_{D\setminus D _{w}}f(w)f(z)\nu(z-w)\,dz\,dw\]
\[=\frac{1}{2}\iint\limits_{[z,w]\subset D}(f(w)-f(z))^{2}\nu(z-w)\,dw\,dz+\frac{1}{2}\int_{D}\int_{D\setminus D_{w}}\left(f^{2}(w)+f^{2}(z)\right)\nu(z-w)\,dw\,dz\] \[\quad+\int_{D}f^{2}(w)\nu(w,D^{c})\,dw,\]
which yields (24). The steps above are justified by the absolute convergence of the integrals \(\int_{D}\int_{D\setminus D_{w}}f(w)f(z)\nu(z-w)\,dw\,dz\) and \(\int_{D}\int_{D\setminus D_{w}}f^{2}(w)\nu(z-w)\,dw\,dz\), see (23).
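The algebra behind (24) can be verified mechanically. In the sketch below (ours; dimension one, \(D=(-1,0)\cup(0,1)\), \(\nu\) truncated at the grid scale and the constant \(\mathcal{A}\) dropped), the discrete analogues of the right-hand sides of (21) and (24) agree up to rounding, because the identity \((a-b)^{2}+2ab=a^{2}+b^{2}\) holds summand by summand.

```python
import numpy as np

alpha, h = 1.5, 0.01
xs = np.concatenate([np.arange(-1 + h, 0, h), np.arange(h, 1, h)])
f = np.sin(np.pi * xs)                      # any test function supported in D

X, Y = np.meshgrid(xs, xs, indexing="ij")
nu = np.where(X == Y, 0.0, np.abs(X - Y) ** (-1 - alpha))
same = X * Y > 0                            # visible pairs: [x, y] subset D
kappa = ((1 + xs) ** (-alpha) + (1 - xs) ** (-alpha)) / alpha   # nu(x, D^c)

Fi, Fj = np.meshgrid(f, f, indexing="ij")
E = 0.5 * np.sum((Fi - Fj) ** 2 * nu) * h * h + np.sum(f ** 2 * kappa) * h
corr = np.sum(np.where(~same, Fi * Fj * nu, 0.0)) * h * h
via_21 = E + corr                           # right-hand side of (21)

iota = kappa + np.sum(np.where(~same, nu, 0.0), axis=1) * h     # nu(x, D_x^c)
via_24 = (0.5 * np.sum(np.where(same, (Fi - Fj) ** 2 * nu, 0.0)) * h * h
          + np.sum(f ** 2 * iota) * h)      # right-hand side of (24)
print(via_21, via_24)                       # equal up to rounding
```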
## 5. The Green function
We recall the definition of the Green function for the killed process [17, Section 2.3]:
\[G_{D}(x,y)=\int\limits_{0}^{\infty}p_{D}(t,x,y)\,dt,\ x,y\in\mathbb{R}^{d}.\]
Analogously we define the Green function for the shot-down process.
**Definition 5.1**.: \(\hat{G}_{D}(x,y)=\int\limits_{0}^{\infty}\hat{p}_{D}(t,x,y)\,dt\)_, \(x,y\in\mathbb{R}^{d}\)._
From this definition, one can derive a perturbation formula and a Hunt formula as well as scaling and positivity properties.
**Lemma 5.2**.: \[\hat{G}_{D}(x,y)=G_{D}(x,y)-\int_{D}\int_{D\setminus D_{w}}\hat{G}_{D}(x,w) \nu(z-w)G_{D}(z,y)\,dz\,dw,\quad x,y\in\mathbb{R}^{d}.\]
Proof.: By Corollary 2.20, Fubini-Tonelli, and the above definitions,
\[G_{D}(x,y) =\hat{G}_{D}(x,y)-\int_{0}^{\infty}\int_{D}\int_{D\setminus D_{w} }\int_{0}^{t}\hat{p}_{D}(s,x,w)\nu(z-w)p_{D}(t-s,z,y)\,ds\,dz\,dw\,dt\] \[=\hat{G}_{D}(x,y)-\int_{D}\int_{D\setminus D_{w}}\int_{0}^{ \infty}\hat{p}_{D}(s,x,w)\nu(z-w)\,ds\int_{0}^{\infty}\,p_{D}(t-s,z,y)\,du\,dz \,dw\] \[=G_{D}(x,y)-\int_{D}\int_{D\setminus D_{w}}\hat{G}_{D}(x,w)\nu(z- w)G_{D}(z,y)\,dz\,dw.\qed\]
**Lemma 5.3**.: For open \(U\subset D\),
\[\hat{G}_{D}(x,y)=\hat{G}_{U}(x,y)+\mathbb{E}_{x}\left[[X_{\sigma_{U}-},X_{ \sigma_{U}}]\subset D;\hat{G}_{D}(X_{\sigma_{U}},y)\right].\]
Proof.: By Lemma 2.18,
\[\hat{G}_{D}(x,y) =\hat{G}_{U}(x,y)+\int_{0}^{\infty}\mathbb{E}_{x}\left[\sigma_{U} <t,[X_{\sigma_{U}-},X_{\sigma_{U}}]\subset D;\hat{p}_{D}(t-\sigma_{U},X_{ \sigma_{U}},y)\right]\,dt\] \[=\hat{G}_{U}(x,y)+\mathbb{E}_{x}\left[[X_{\sigma_{U}-},X_{\sigma _{U}}]\subset D;\hat{G}_{D}(X_{\sigma_{U}},y)\right].\qed\]
**Lemma 5.4**.: For \(x,y\in\mathbb{R}^{d},\ r>0,\)
\[\hat{G}_{rD}(rx,ry)=r^{\alpha-d}\int_{0}^{\infty}\hat{p}_{D}(s,x,y)\,ds=r^{\alpha -d}\hat{G}_{D}(x,y).\]
Proof.: By Lemma 2.4,
\[\hat{G}_{rD}(rx,ry) =\int_{0}^{\infty}\hat{p}_{rD}(t,rx,ry)\,dt=r^{\alpha}\int_{0}^{ \infty}\hat{p}_{rD}(r^{\alpha}s,rx,ry)\,ds\] \[=r^{\alpha-d}\int_{0}^{\infty}\hat{p}_{D}(s,x,y)\,ds=r^{\alpha-d }\hat{G}_{D}(x,y).\qed\]
**Lemma 5.5**.: _\(\hat{G}_{D}(x,y)>0\) for \(x,y\) in the same connected component of \(D\)._
Proof.: The result follows from Lemma 2.7.
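Definition 5.1 also invites a direct computation: for a finite-state approximation with generator matrix \(\mathbf{L}\), we have \(\int_{0}^{\infty}e^{t\mathbf{L}}\,dt=(-\mathbf{L})^{-1}\), so the grid Green function is a matrix inverse. The sketch below (ours, with the same illustrative one-dimensional grid and dropped constants as before, and \(\alpha=0.7\) so that \(d=1>\alpha\)) exhibits symmetry and, consistently with Lemmas 2.7 and 5.5, strict positivity within a component; it also shows that \(\hat{G}_{D}\) vanishes across the components of a disconnected \(D\), which is one reason why Theorem 5.10 below concerns domains.

```python
import numpy as np

alpha, h = 0.7, 0.01                 # d = 1 > alpha, as in Theorem 5.10
xs = np.concatenate([np.arange(-1 + h, 0, h), np.arange(h, 1, h)])
X, Y = np.meshgrid(xs, xs, indexing="ij")
nu = np.where(X == Y, 0.0, np.abs(X - Y) ** (-1 - alpha)) * h
same = X * Y > 0                     # visible pairs: same component of D
kappa = ((1 + xs) ** (-alpha) + (1 - xs) ** (-alpha)) / alpha
iota = kappa + np.sum(np.where(~same, nu, 0.0), axis=1)

L_kill = nu - np.diag(nu.sum(axis=1) + kappa)
L_shot = np.where(same, nu, 0.0)
L_shot -= np.diag(L_shot.sum(axis=1) + iota)

G_kill = np.linalg.inv(-L_kill) / h  # approximates G_D(x_i, x_j)
G_shot = np.linalg.inv(-L_shot) / h  # approximates G^hat_D(x_i, x_j)

i = int(np.argmin(np.abs(xs - 0.3)))
j = int(np.argmin(np.abs(xs - 0.7)))
k = int(np.argmin(np.abs(xs + 0.5)))
right = xs > 0
print("max symmetry defect:", np.abs(G_shot - G_shot.T).max())
print("min of G^hat within a component:", G_shot[np.ix_(right, right)].min())
print("same component  G^hat/G:", G_shot[i, j] / G_kill[i, j])
print("across the gap  G^hat/G:", G_shot[k, j] / G_kill[k, j])  # exactly 0
```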
### Harnack lower bound
**Definition 5.6**.: _Consider open \(U\subset D\). We say that \(u:D\to[0,\infty]\) has the mean value property (\(mvp\)) for \(\hat{X}_{D}\) inside \(U\) if_
\[u(x)=\mathbb{E}_{x}\{[X_{\sigma_{B}-},X_{\sigma_{B}}]\subset D;u(X_{\sigma_{B }})\},\quad x\in B\subset\subset U,\]
_and we say that \(u\) has the supermean value property (\(smvp\)) if_
\[u(x)\geq\mathbb{E}_{x}\{[X_{\sigma_{B}-},X_{\sigma_{B}}]\subset D;u(X_{\sigma_ {B}})\},\quad x\in B\subset\subset U.\]
Without losing generality, in Definition 5.6 we may let \(u=0\) on \(D^{c}\).
**Lemma 5.7**.: \(u(x)=\hat{G}_{D}(x,y)\) has \(smvp\) for \(\hat{X}_{D}\) on \(D\) and \(mvp\) for \(\hat{X}_{D}\) on \(D\setminus\{y\}\).
Proof.: Let \(y\in\mathbb{R}^{d}\). From Lemma 5.3 we get the \(smvp\) of \(u\). Then, for \(B\subset\subset D\setminus\{y\}\), we get \(\hat{G}_{B}(x,y)\leq G_{B}(x,y)=0\), because \(\operatorname{dist}(y,B)>0\). By Lemma 5.3, \(\hat{G}_{D}(x,y)=\mathbb{E}_{x}\left[[X_{\sigma_{B}-},X_{\sigma_{B}}]\subset D;\hat{G}_{D}(X_{\sigma_{B}},y)\right]\).
For a ball \(B=B(z,r)\) and \(\varepsilon>0\), we let \(\varepsilon B=B(z,\varepsilon r)\). The following variant of the Harnack inequality is somewhat unusual because infima appear on both sides of (25); the inequality cannot hold with the supremum on the right-hand side, see Section 6.
**Proposition 5.8** (Harnack lower bound).: Let \(n>1\). Let \(\mathbb{L}=[x_{1},x_{2}]\cup[x_{2},x_{3}]\cup\ldots\cup[x_{n-1},x_{n}]\subset D\) be a polygonal chain in \(D\). Let \(\rho\in(0,\operatorname{dist}(\mathbb{L},D^{c}))\). Let \(u\geq 0\) have the \(smvp\) for \(\hat{X}_{D}\) inside \(B(x_{i},\rho)\) for each \(i=2,3,...,n\). Let \(\varepsilon\in(0,1)\). Then there is a constant \(C=C(d,\alpha,\mathbb{L},\rho,\varepsilon)\) such that
\[\inf_{y\in B(x_{n},(1-\varepsilon)\rho)}u(y)\geq C\inf_{x\in B(x_{1}, \varepsilon\rho)}u(x). \tag{25}\]
The proof of Proposition 5.8 is given after the following auxiliary result.
**Lemma 5.9**.: Let \(d\in\{1,2,...\},\alpha\in(0,2),\rho\in(0,\infty),l\in[0,\infty),\varepsilon \in(0,1)\). Let \(D\subset\mathbb{R}^{d}\) be a (nonempty) domain. Let \(x_{1},x_{2}\in D\) be such that the line segment \(L=[x_{1},x_{2}]\) is contained in \(D\), \(\operatorname{dist}(L,D^{c})\geq\rho\) and \(|x_{1}-x_{2}|=l\). Assume that \(u:D\to[0,\infty]\) has \(smvp\) inside \(B(x_{2},\rho)\) for the shot-down process \(\hat{X}^{D}\) on \(D\). Then,
\[\inf_{y\in B(x_{2},(1-\varepsilon)\rho)}u(y)\geq C\inf_{x\in B(x_{1}, \varepsilon\rho)}u(x), \tag{26}\]
where \(C=C(d,\alpha,l/\rho,\varepsilon)\in(0,1)\) may be chosen as nonincreasing in \(l/\rho\).
Proof.: Balls are convex, so \(\sigma_{B}=\tau_{B}\) for every ball \(B\), and by our assumptions,

\[u(y)\geq\mathbb{E}_{y}\{[X_{\tau_{B}-},X_{\tau_{B}}]\subset D;u(X_{\tau_{B}})\}\text{ for }y\in B\subset\subset B(x_{2},\rho). \tag{27}\]
The set \(K=\{z\in\mathbb{R}^{d}:\operatorname{dist}(z,L)<\rho\}\) is convex (see Figure 4). Furthermore, \(B(x_{1},\rho),B(x_{2},\rho)\subset K\) and \(K\subset D\). These will be important in applications of (27). Let \(y\in B(x_{2},(1-\varepsilon)\rho)\).
We only need to prove that \(u(y)\geq C\inf_{x\in B(x_{1},\varepsilon\rho)}u(x)\) for some appropriate constant \(C\). To this end, we consider \(B=B(y,\varepsilon\rho/4)\). Let \(x^{\prime}\) be the midpoint of the radius of \(B(x_{1},\varepsilon\rho)\) antipodal to \(y\), see Figure 4. We define \(B^{\prime}=B(x^{\prime},\varepsilon\rho/4)\). Thus, \(B^{\prime}\subset B(x_{1},\varepsilon\rho)\) and \(B^{\prime}\cap B=\emptyset\). Clearly, \(B,B^{\prime}\subset K\subset D\). Then,
\[u(y) \geq\mathbb{E}_{y}\{[X_{\tau_{B}-},X_{\tau_{B}}]\subset D;u(X_{\tau_{B}})\}\geq\mathbb{E}_{y}\{X_{\tau_{B}}\in B^{\prime};u(X_{\tau_{B}})\}\] \[\geq\mathbb{P}_{y}(X_{\tau_{B}}\in B^{\prime})\inf_{x\in B(x_{1},\varepsilon\rho)}u(x).\]
By Riesz' formula,
\[\mathbb{P}_{y}(X_{\tau_{B}}\in B^{\prime})=C_{\alpha}^{d}\int\limits_{|x-x^{ \prime}|<\varepsilon\rho/4}\frac{(\varepsilon\rho/4)^{\alpha}}{(|x-y|^{2}-( \varepsilon\rho/4)^{2})^{\alpha/2}}|x-y|^{-d}\,dx.\]
We have \(|x-y|\leq 2\rho+l\) under the integral, hence
\[\mathbb{P}_{y}(X_{\tau_{B}}\in B^{\prime})\geq C_{\alpha}^{d}(2\rho+l)^{-d- \alpha}(\varepsilon\rho/4)^{d+\alpha}|B(0,1)|=C_{\alpha}^{d}\frac{\omega_{d}} {d}\left(\frac{\varepsilon}{4}\right)^{\alpha+d}\left(2+\frac{l}{\rho}\right) ^{-d-\alpha}.\qed\]
Figure 4. Situation in the proof of Lemma 5.9
Proof of Proposition 5.8.: Let \(l_{i}=|x_{i}-x_{i-1}|,i=2,3,...,n\). Assume that \(\varepsilon\in(0,\frac{1}{2}]\). By Lemma 5.9 and induction,
\[\inf_{y\in B(x_{n},(1-\varepsilon)\rho)}u(y) \geq C\left(d,\alpha,\frac{l_{n}}{\rho},\varepsilon\right)\inf_{x\in B(x_{n-1},\varepsilon\rho)}u(x)\geq C\left(d,\alpha,\frac{l_{n}}{\rho},\varepsilon\right)\inf_{x\in B(x_{n-1},(1-\varepsilon)\rho)}u(x)\] \[\geq C\left(d,\alpha,\frac{l_{n}}{\rho},\varepsilon\right)\cdot\ldots\cdot C\left(d,\alpha,\frac{l_{2}}{\rho},\varepsilon\right)\inf_{x\in B(x_{1},\varepsilon\rho)}u(x).\]
This proves (25) for \(\varepsilon\in(0,\frac{1}{2}]\). For \(\varepsilon\in(\frac{1}{2},1)\), we trivially have
\[\inf_{y\in B(x_{n},(1-\varepsilon)\rho)}u(y) \geq\inf_{x\in B(x_{n},\rho/2)}u(x)\] \[\geq C\left(d,\alpha,\frac{l_{n}}{\rho},\frac{1}{2}\right)\cdot \ldots\cdot C\left(d,\alpha,\frac{l_{2}}{\rho},\frac{1}{2}\right)\inf_{x\in B (x_{1},\rho/2)}u(x)\] \[\geq C\left(d,\alpha,\frac{l_{n}}{\rho},\frac{1}{2}\right)\cdot \ldots\cdot C\left(d,\alpha,\frac{l_{2}}{\rho},\frac{1}{2}\right)\inf_{x\in B (x_{1},\varepsilon\rho)}u(x).\]
The dependence of the estimate on the geometry of \(\mathbb{L}\) and \(D\) may be detailed as follows,
\[C\left(d,\alpha,\mathbb{L},\rho,\varepsilon\right)=\left[C_{\alpha}^{d}\frac{\omega_{d}}{d}\left(\frac{\varepsilon\wedge\frac{1}{2}}{4}\right)^{\alpha+d}\right]^{n-1}\prod_{i=2}^{n}\left(2+\frac{l_{i}}{\rho}\right)^{-d-\alpha},\]
see the proof of Lemma 5.9.
### Sharp estimates of the Green function
**Theorem 5.10**.: _Let \(d\in\{1,2,...\},\alpha\in(0,2)\) and \(d>\alpha\). Let \(D\neq\emptyset\) be a domain in \(\mathbb{R}^{d}\). Let \(\hat{G}_{D}\) be the Green function of the shot-down process \(\hat{X}_{D}\). Then,_
\[\hat{G}_{D}(x,y)\approx\frac{\delta_{D}(x)^{\alpha/2}\delta_{D}(y)^{\alpha/2}} {r(x,y)^{\alpha}}|x-y|^{\alpha-d},\quad x,y\in D, \tag{28}\]
_where \(\delta_{D}(x)=\operatorname{dist}(x,D^{c})\), \(r(x,y)=\delta_{D}(x)\vee|x-y|\vee\delta_{D}(y)\)._
We remark that (28) is analogous to the corresponding estimate for the Green function for the killed \(\alpha\)-stable process in bounded smooth domains ([25], [15]), which has been extended to bounded Lipschitz domains in [22]:
\[G_{D}(x,y)\approx\frac{\delta_{D}(x)^{\alpha/2}\delta_{D}(y)^{\alpha/2}}{r(x,y )^{\alpha}}|x-y|^{\alpha-d},\quad x,y\in D. \tag{29}\]
Therefore Theorem 5.10 in fact asserts the comparability:
\[\hat{G}_{D}(x,y)\approx G_{D}(x,y),\quad x,y\in\mathbb{R}^{d}.\]
We emphasize that connectedness plays an important role in the potential theory of \(\hat{X}^{D}\). Accordingly, our proof uses the same old-fashioned Harnack chain argument as for classical harmonic functions [7], whereas the Harnack inequality for \(\alpha\)-harmonic functions has a more flexible statement; see the proof of (29) in [22].
Proof of Theorem 5.10.: We have \(\hat{G}_{D}(x,y)\leq G_{D}(x,y)\) for \(x,y\in\mathbb{R}^{d}\), hence we only need to prove the lower bound:
\[\hat{G}_{D}(x,y)\geq c\frac{\delta_{D}(x)^{\alpha/2}\delta_{D}(y)^{\alpha/2}}{r (x,y)^{\alpha}}|x-y|^{\alpha-d},\quad x,y\in D. \tag{30}\]
Let \(r_{0}>0\) be a localization radius of \(D\). For \(x\in D\), there is \(\underline{x}\in\partial D\) such that \(\delta_{D}(x)=|x-\underline{x}|\), and there is a ball \(B(\bar{x},r_{0}/2)\subset D\) tangent to \(\partial D\) at \(\underline{x}\), see Figure 5.
Let us consider several cases.
**Case \(\mathbf{1}:\)** Assume \(\varepsilon\in(0,1)\) and there are \(r>0,\;z\in D\) such that \(B=B(z,r)\subset D\) and \(x,y\in\varepsilon B\). Then,
\[\hat{G}_{D}(x,y)\geq\hat{G}_{B}(x,y)=G_{B}(x,y)\approx|x-y|^{\alpha-d}.\]
This follows from (29), because \(\delta_{D}(x)\approx\delta_{D}(y)\approx r(x,y)\), and so (30) also holds in the case considered.
**Case \(\mathbf{2}:\)** Let \(K=\{x\in D:\delta_{D}(x)\geq r_{0}/8\}\). The set \(K\) is connected, and by the Harnack lower bound (Proposition 5.8), Lemma 5.7 and \(Case\ 1\) we get
\[\hat{G}_{D}(x,y)\geq c,\quad x,y\in K.\]
**Case \(\mathbf{3}:\)** Let \(\delta_{D}(x)<r_{0}/8,\delta_{D}(y)>r_{0}/4\). We consider the ball \(B_{x}\subset D\) of radius \(r_{0}/4\) tangent to \(\partial D\) at \(\underline{x}\), and the ball \(B=B(\tilde{x},r_{0}/8)\subset B_{x}\) tangent to \(\partial D\) at \(\underline{x}\). Note that \(\underline{x}\in\partial B\), and \(y\notin B\). Furthermore, we consider a ball \(B^{\prime}\subset B_{x}\) of radius \(r_{0}/8\) tangent to \(\partial B_{x}\) at \(\bar{x}\). The situation is shown in Figure 6.
Figure 5. Situation in the proof of Theorem 5.10
By \(Case\ 2\), for \(z\in B^{\prime}\) we have \(\hat{G}_{D}(z,y)\geq c\). Since \(B_{x}\) is a convex subset of \(D\),
\[\hat{G}_{D}(x,y) =\mathbb{E}_{x}\big{\{}\hat{G}_{D}(X_{\tau_{B}},y);[X_{\tau_{B}-},X_{\tau_{B}}]\subset D\big{\}}\geq\mathbb{E}_{x}\big{\{}\hat{G}_{D}(X_{\tau_{B}},y);X_{\tau_{B}}\in B^{\prime}\big{\}}\] \[\geq c\ \mathbb{P}_{x}\big{\{}X_{\tau_{B}}\in B^{\prime}\big{\}} \geq c^{\prime}\int\limits_{B^{\prime}}\frac{\big{(}(r_{0}/8)^{2}-|x-\tilde{x}|^{2}\big{)}^{\alpha/2}}{\big{(}|z-x|^{2}-(r_{0}/8)^{2}\big{)}^{\alpha/2}}|z-x|^{-d}\,dz\] \[\geq c^{\prime\prime}(\delta_{D}(x)/r_{0})^{\alpha/2}\approx \delta_{D}(x)^{\alpha/2},\]
which proves (30) in this case.
**Case \(\mathbf{4:}\)** If \(\delta_{D}(y)<r_{0}/8\), \(\delta_{D}(x)>r_{0}/4\), then we recall that \(\hat{G}_{D}(x,y)=\hat{G}_{D}(y,x)\), and (30) follows from \(Case\ 3\).
**Case \(\mathbf{5:}\)** Let \(\delta_{D}(x)<r_{0}/8\), \(\delta_{D}(y)<r_{0}/8\), and \(|x-y|>r_{0}\). As in \(Case\ 3\) we consider a ball \(B\subset D\) of radius \(r_{0}/8\) and center \(\tilde{x}\) tangent to \(\partial D\) at \(\underline{x}\). We also consider the ball \(B^{\prime}\) from \(Case\ 3\), see Figure 6. We denote them by \(B(x)\) and \(B^{\prime}(x)\), respectively, and construct analogous balls \(B(y)\) and \(B^{\prime}(y)\) for \(y\). Note that \(y\notin B_{x}\). By convexity of \(B_{x}\),
\[\hat{G}_{D}(x,y)=\mathbb{E}_{x}\big{\{}\hat{G}_{D}(X_{\tau_{B(x)}},y);[X_{\tau _{B(x)-}},X_{\tau_{B(x)}}]\subset D\big{\}}\geq\mathbb{P}_{x}\big{\{}X_{\tau_ {B(x)}}\in B^{\prime}(x)\big{\}}\!\!\!\!\inf_{z\in B^{\prime}(x)}\hat{G}_{D}(z, y).\]
For \(z\in B^{\prime}(x)\), we have \(z\notin B(y)\), and we get
\[\hat{G}_{D}(z,y)=\mathbb{E}_{y}\big{\{}\hat{G}_{D}(z,X_{\tau_{B(y)}});[X_{\tau _{B(y)-}},X_{\tau_{B(y)}}]\subset D\big{\}}\geq\mathbb{P}_{y}\big{\{}X_{\tau_ {B(y)}}\in B^{\prime}(y)\big{\}}\!\!\!\inf_{w\in B^{\prime}(y)}\hat{G}_{D}(z,w).\]
Therefore \(\hat{G}_{D}(x,y)\geq c\delta_{D}(x)^{\alpha/2}\delta_{D}(y)^{\alpha/2}\), see \(Case\ 3\).
Figure 6. Proof of Theorem 5.10: \(Case\ 3\)
**Case \(\mathbf{6}:\)** Let \(\delta_{D}(x)<r_{0}/8,\delta_{D}(y)<r_{0}/8,|x-y|<r_{0}\). This case locally repeats the arguments of \(Cases\ 3-5\), but to handle it we need the following Harnack chain of balls.
If \(\varepsilon>0\) and \(x_{1},x_{2}\in D\) satisfy \(\delta_{D}(x_{j})>\varepsilon\) and \(|x_{1}-x_{2}|<2^{k}\varepsilon\), then there is a Harnack chain from \(x_{1}\) to \(x_{2}\) of length \(Mk\). Furthermore, for each ball \(B\) in the chain, its radius is not smaller than \(M^{-1}\min(\operatorname{dist}(x_{1},B),\operatorname{dist}(x_{2},B))\) (see [23] for details).
By the Harnack lower bound, if \(x_{1},x_{2}\) are as above, then every nonnegative function with \(smvp\) in \(D\) satisfies:
\[\inf_{y\in(1-\varepsilon)B_{2}}u(y)\geq c^{k}\inf_{x\in\varepsilon B_{1}}u(x).\]
We now sketch the argument for \(Case\ 6\): If \(|x-y|\leq N[\delta_{D}(x)\vee\delta_{D}(y)]\), then by the Harnack chain and \(Case\ 1\) with radii \(s\approx\delta_{D}(x)\vee\delta_{D}(y)\), we have \(\hat{G}_{D}(x,y)\geq c|x-y|^{\alpha-d}\). Take large \(N\) and \(r_{0}>|x-y|>N[\delta_{D}(x)\vee\delta_{D}(y)]\). Assume \(\delta_{D}(x)\leq\delta_{D}(y)\) and take \(\tilde{x}\in D\) on the ray through \(\underline{x},x\) such that \(\delta_{D}(\tilde{x})=\frac{1}{3}|x-y|\). Similarly, define \(\tilde{y}\). Then \(|\tilde{x}-\tilde{y}|,\delta_{D}(\tilde{x}),\delta_{D}(\tilde{y})\approx r(x,y)=r\). By the Harnack lower bound and by considering the tangent balls \(B_{x},B_{y}\) as in \(Case\ 5\) and \(Case\ 4\), we have:
\[\hat{G}_{D}(x,y) \geq c\left(\frac{\delta_{D}(x)}{r}\right)^{\alpha/2}\hat{G}_{D} (\tilde{x},y)\geq c\left(\frac{\delta_{D}(x)}{r}\right)^{\alpha/2}\left(\frac {\delta_{D}(y)}{r}\right)^{\alpha/2}\hat{G}_{D}(\tilde{x},\tilde{y})\] \[\geq c\frac{\delta_{D}(x)^{\alpha/2}\delta_{D}(y)^{\alpha/2}}{r( x,y)^{\alpha}}r(x,y)^{\alpha-d}.\qed\]
## 6. Failure of Harnack inequality
The killed \(\alpha\)-stable Lévy process \(X^{D}\) satisfies the classical Harnack inequality: Given a bounded open set \(D\subset\mathbb{R}^{d}\), there is a constant \(c\geq 1\) such that for every nonnegative function \(u\) harmonic with respect to the process \(X^{D}\), and for all \(x,y\in D\) with \(\operatorname{dist}(x,D^{c})\geq\frac{1}{2}\operatorname{diam}(D)\) and \(\operatorname{dist}(y,D^{c})\geq\frac{1}{2}\operatorname{diam}(D)\), the following inequality holds true:
\[c^{-1}u(x)\leq u(y)\leq cu(x)\,.\]
In this section, we provide an example that shows that the Harnack inequality fails for the shot-down process \(\hat{X}^{D}\). To this end, we fix \(d=2\), \(\alpha<1\), and we let \(\varepsilon>0\) be small. We define balls \(B=B((0,0),1),B^{\prime}=B((2.5,-1),1),B^{\prime\prime}=B((5,1),1)\), set \(D=B((0,0),9)\setminus(B^{\prime}\cup B^{\prime\prime})\) and consider the geometric situation shown in Figures 7 and 8. Let \(|A|\) be the (planar) measure of the set \(A\) therein.
Let \(f=\frac{1}{|A|}\mathbf{1}_{A}\), and
\[u(x)=\mathbb{E}_{x}\big{[}f(X_{\tau_{B}});[X_{\tau_{B}-},X_{\tau_{B}}]\subset D \big{]},\quad x\in\mathbb{R}^{2}.\]
By the Ikeda–Watanabe formula, for \(x\in B\) we have
\[u(x)=\int\limits_{[z,y]\subset D}G_{B}(x,y)\nu(z-y)f(z)\,dy\,dz.\]
Therefore by (29) and Figure 8,
\[u(0)=\int\limits_{A}\int\limits_{V_{\varepsilon}}G_{B}(0,y)\nu(z-y)f(z)\,dy\,dz\geq c\int\limits_{V_{\varepsilon}}G_{B}(0,y)\,dy\geq c_{1}\int\limits_{|y|<c_{2}\varepsilon}|y|^{\alpha-2}\,dy=c_{3}\varepsilon^{\alpha},\]
where \(c_{3}>0\) does not depend on \(\varepsilon\). Then we consider \(x=(0,x_{2})\) with \(-1<x_{2}<0\). By (29) and Figure 8,
\[u(x)\leq\int\limits_{A}\int\limits_{V_{2\varepsilon}}G_{B}(x,y)\nu(z-y)f(z)\,dy\,dz\leq c\int\limits_{V_{2\varepsilon}}G_{B}(x,y)\,dy\leq c_{4}(-x_{2})^{\alpha-2}|V_{2\varepsilon}|\leq c_{5}|x_{2}|^{\alpha-2}\varepsilon,\]
where \(c_{5}>0\) does not depend on \(\varepsilon\).
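Explicitly, combining the two displays above (a one-line check):
\[\frac{u(x)}{u(0)}\leq\frac{c_{5}}{c_{3}}\,|x_{2}|^{\alpha-2}\,\varepsilon^{1-\alpha}\longrightarrow 0\quad\text{as }\varepsilon\to 0,\]
for each fixed \(x_{2}\in(-1,0)\), because \(\alpha<1\).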
Thus \(u(x)\) is much smaller than \(u(0)\) as \(\varepsilon\to 0\). Recall that \(A\) and \(u\) in fact depend on \(\varepsilon\). We have therefore found a sequence of functions \(u_{\varepsilon}\), harmonic on \(D\) with respect to \(\hat{X}^{D}\), such that the ratio \(u_{\varepsilon}(0)/u_{\varepsilon}(x)\) is unbounded as \(\varepsilon\to 0\). The Harnack inequality fails. |
2305.17179 | Tokenization Impacts Multilingual Language Modeling: Assessing
Vocabulary Allocation and Overlap Across Languages | Multilingual language models have recently gained attention as a promising
solution for representing multiple languages in a single model. In this paper,
we propose new criteria to evaluate the quality of lexical representation and
vocabulary overlap observed in sub-word tokenizers. Our findings show that the
overlap of vocabulary across languages can be actually detrimental to certain
downstream tasks (POS, dependency tree labeling). In contrast, NER and
sentence-level tasks (cross-lingual retrieval, NLI) benefit from sharing
vocabulary. We also observe that the coverage of the language-specific tokens
in the multilingual vocabulary significantly impacts the word-level tasks. Our
study offers a deeper understanding of the role of tokenizers in multilingual
language models and guidelines for future model developers to choose the most
suitable tokenizer for their specific application before undertaking costly
model pre-training | Tomasz Limisiewicz, Jiří Balhar, David Mareček | 2023-05-26T18:06:49Z | http://arxiv.org/abs/2305.17179v1 | # Tokenization Impacts Multilingual Language Modeling: Assessing Vocabulary Allocation and Overlap Across Languages
###### Abstract
Multilingual language models have recently gained attention as a promising solution for representing multiple languages in a single model. In this paper, we propose new criteria to evaluate the quality of lexical representation and vocabulary overlap observed in sub-word tokenizers. Our findings show that the overlap of vocabulary across languages can be actually detrimental to certain downstream tasks (POS, dependency tree labeling). In contrast, NER and sentence-level tasks (cross-lingual retrieval, NLI) benefit from sharing vocabulary. We also observe that the coverage of the language-specific tokens in the multilingual vocabulary significantly impacts the word-level tasks. Our study offers a deeper understanding of the role of tokenizers in multilingual language models and guidelines for future model developers to choose the most suitable tokenizer for their specific application before undertaking costly model pre-training.1
Footnote 1: The code is available at: github.com/tomlimi/entangled_in_scripts.
## 1 Introduction
Multilingual language models perform surprisingly well in a variety of NLP tasks for diverse languages Devlin et al. (2019); Conneau and Lample (2019); Conneau et al. (2019). It has been observed that the representation of the input sequence has a significant effect on their effectiveness Mielke et al. (2021). In the widely used Transformer Vaswani et al. (2017) models, which achieve state-of-the-art results across diverse tasks, a large fraction of parameters is allocated to the input encoding layer.2 The popular language-independent approach to representing the input texts is to learn a vocabulary of frequently appearing strings that may consist of words or parts of words Sennrich et al. (2016); Song et al. (2021); Kudo and Richardson (2018).
Footnote 2: For instance, in XLM-Roberta\({}_{\text{Base}}\), 192M out of 270M parameters are in the input embedding layer (approximately 70%).
In this work, we focus on the characteristics of subword tokenization methods in a multilingual setting. Our main contribution is the introduction of the methods for measuring whether tokenizers effectively represent meaningful language-specific tokens in the vocabulary (_vocabulary allocation_) and whether the units they learn are shared across languages (_vocabulary overlap_). We posit the following questions:
Figure 1: Mapping the impact of _vocabulary allocation_ and _vocabulary overlap_ on language model performance. The location of points corresponds to Spearman’s correlation between vocabulary measures and the task score (see the details in Tables 3 and 5). High _vocabulary overlap_ benefits NER and sentence-level tasks (NLI, sentence retrieval) and hinders POS and dependency labeling performance. High _vocabulary allocation_ improves word-level tasks but leads to a decrease in masked language modeling scores. Masked language modeling is measured only in-language; thus it is unaffected by _vocabulary overlap_. Analogously, sentence retrieval is solely cross-lingual and unaffected by _vocabulary allocation_.
**(Q1) How do sub-word tokenizers differ in _overlap_ and _allocation_ of learned vocabularies?** To answer this question, we apply the metrics to tokenizers obtained with two widely used algorithms: SentencePiece Unigram LM (Kudo and Richardson, 2018), and BPE (Sennrich et al., 2016). Furthermore, we propose two methods of learning tokenizers on monolingual corpora and then combining them to allow the tokenization of multilingual texts.
**(Q2) Which properties of multilingual tokenizers affect the LM's representation quality?** We address this question by training small language models utilizing different tokenization methods. We evaluate the models on masked word prediction and a diverse set of downstream tasks: POS, NER tagging, dependency tree labeling, NLI, and cross-lingual sentence retrieval.
The proposed evaluation scheme offers a good prediction of language models' performance. Notably, we show that the system results significantly improve when tokenizers allocate more vocabulary units for specific languages. Our investigation shows that this aspect has a bigger influence than the _vocabulary overlap_ for word-level tasks (see Figure 1). To the best of our knowledge, the interactions between multilingual _vocabulary allocation_ and _vocabulary overlap_ have not been investigated in past research.
## 2 Multilingual Subword Tokenization
The majority of the currently deployed models use subword tokenization as a way to pre-process the input texts. The input is represented as a sequence of units from a finite vocabulary, which can be translated into numeric representation by an input embedding layer.
The benefits of subword tokenization are the ability to obtain numeric representations for meaningful words that appear frequently in the training resources, while handling less frequent words by splitting them into subwords. The latter property mitigates the problem of out-of-vocabulary (OOV) words by breaking them down into smaller parts (sub-words) already present in the vocabulary. It is crucial in handling multilingual texts, especially in languages with large vocabularies and complex morphology.
In the following section, we describe two widely used algorithms of subword tokenization:
### Background: Subword Tokenization
**Byte-pair encoding BPE:**(Sennrich et al., 2016) is a subword tokenization method that iteratively replaces the most frequent pair of vocabulary units in the input text with a single unit. The process starts with taking unique characters of the training text as the initial vocabulary. Subsequently, we take the most frequent pair of vocabulary units, merge the pair, and add it as a new unit to the vocabulary. This process is repeated until a pre-set vocabulary size \(N\) is reached.
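For concreteness, a minimal sketch of this merge loop is given below; production tokenizers (e.g., the HuggingFace `tokenizers` library) add pre-tokenization, byte-level fallback, and efficiency optimizations on top of it:

```python
from collections import Counter

def train_bpe(corpus, num_merges):
    """Learn BPE merges by repeatedly merging the most frequent adjacent pair."""
    # Start from characters: each word is a tuple of symbols.
    words = Counter(tuple(w) for w in corpus.split())
    vocab = {c for w in words for c in w}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for w, f in words.items():
            for a, b in zip(w, w[1:]):
                pairs[(a, b)] += f
        if not pairs:
            break  # every word is a single symbol; nothing left to merge
        (a, b), _ = pairs.most_common(1)[0]
        merges.append((a, b))
        vocab.add(a + b)
        merged = Counter()
        for w, f in words.items():
            out, i = [], 0
            while i < len(w):
                if i + 1 < len(w) and (w[i], w[i + 1]) == (a, b):
                    out.append(a + b)  # replace the best pair by one symbol
                    i += 2
                else:
                    out.append(w[i])
                    i += 1
            merged[tuple(out)] += f
        words = merged
    return vocab, merges

vocab, merges = train_bpe("low low lower lowest new newer", num_merges=8)
```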
**Unigram LM:** (Kudo, 2018) is the method of obtaining subword vocabulary that was first introduced as the underlying tokenizer of the SentencePiece algorithm (Kudo and Richardson, 2018). The prerequisite is obtaining an extensive vocabulary, e.g., consisting of all strings present in the data with at most a predefined number of characters. The expectation-maximization algorithm is used to estimate the probability of vocabulary units. After EM convergence, the portion of units with the lowest contribution to the likelihood of the training corpus is removed from the vocabulary. The procedure is repeated until the pre-set vocabulary size is obtained.
### Combining Monolingual Tokenizers
Rust et al. (2021) observed that subword tokenizers trained on monolingual data outperform multilingual ones. The latter can overrepresent the subwords specific to languages constituting a large portion of the training corpora (e.g., English). Moreover, their vocabulary is less likely to contain morphemes important in modeling low-resource languages and instead prioritizes less meaningful character sequences appearing across languages.
To alleviate this issue, we suggest utilizing monolingual tokenizers for multilingual tokenization. First, the Unigram LM tokenizers are trained on separate monolingual corpora. The tokenizers are then combined to create a tokenizer suitable for multilingual data. We propose two methods for combining monolingual tokenizers:
**Language-specific Tokenization NoOverlap:** We train Unigram tokenizers for each of the \(L\) considered languages with the same vocabulary size \(\frac{N}{L}\) per language. In multilingual tokenization, we apply the tokenizer for a specific language separately and produce tokens with language identification.3 The vocabulary consists of
segments of total size \(N\). Naturally, the tokenized texts in different languages will consist of tokens from distinct vocabulary segments. Noticeably, the same character sequence in different languages can be assigned different token ids.
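A minimal sketch of the resulting id assignment — each monolingual vocabulary occupies a disjoint id range, so the same string receives different ids in different languages. The `vocab_size`/`encode` interface below is an assumed, illustrative one, not the paper's actual code:

```python
def make_nooverlap_encoder(tokenizers):
    """`tokenizers`: dict lang -> monolingual tokenizer with `.vocab_size`
    and `.encode(text) -> list of local ids` (assumed interface)."""
    offsets, offset = {}, 0
    for lang, tok in tokenizers.items():
        offsets[lang] = offset          # disjoint id range per language
        offset += tok.vocab_size
    def encode(text, lang):
        # Shift local ids into this language's segment of the joint vocabulary.
        return [offsets[lang] + i for i in tokenizers[lang].encode(text)]
    return encode
```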
**Language-Mixed Tokenization TokMix:** We train Unigram LM tokenizers for each of the \(L\) languages. Subsequently, we average vocabulary unit probabilities across tokenizers, sort them, and trim the vocabulary to the pre-set vocabulary size \(N\), keeping the units with the highest probability.4
Footnote 4: To account for possible overlaps between language-specific vocabularies, we set their sizes above \(\frac{N}{L}\). It assures that joint vocabulary will have at least \(N\) tokens.
\[\hat{\theta}=\sum_{i=1}^{L}w_{i}\theta_{i} \tag{1}\]
\(w_{i}\) are weights assigned to each language. By default, we set the weights to be uniform and equal to \(\frac{1}{L}\). Unlike NoOverlap, the same vocabulary units coming from distinct monolingual tokenizers are merged into one unit with averaged probability.
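A sketch of this merge step (Eq. 1), assuming each monolingual Unigram tokenizer is summarized as a dict from vocabulary units to probabilities; this is illustrative only, not the released implementation:

```python
def tokmix(vocab_probs, target_size, weights=None):
    """Average unit probabilities across languages (Eq. 1), merging duplicate
    units, then keep the `target_size` most probable units."""
    langs = list(vocab_probs)
    weights = weights or {l: 1.0 / len(langs) for l in langs}
    merged = {}
    for lang, probs in vocab_probs.items():
        for unit, p in probs.items():
            merged[unit] = merged.get(unit, 0.0) + weights[lang] * p
    ranked = sorted(merged.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:target_size])
```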
### Tokenizer and Model Training Setting
We initially focused on a group of 6 languages varying both in the script and language family: Arabic, Chinese, Greek, Turkish, Spanish, and English. In subsequent experiments, we extend the method to 20 languages.
We download \(10\%\) of the CC-100 corpus available at [https://data.statmt.org/cc-100/](https://data.statmt.org/cc-100/). Following the methodology in Conneau and Lample (2019), we subsample each language's data to ensure that the training corpus is well-balanced across languages. The following equation defines the sample size \(c_{l}\) for language \(l\):
\[c_{l,\alpha}=c_{\min}\cdot\left(\frac{|C_{l}|}{c_{\min}}\right)^{\alpha} \tag{2}\]
where \(c_{\min}\) is the minimal sample size (defined by the smallest language), \(C_{l}\) is all data available for a language, and \(\alpha\) is the so-called "balancing parameter". In our experiments, we set \(c_{\min}\) to 10 M characters; \(|C_{l}|\) is, e.g., 8.8 B characters for English. We set \(\alpha\) to \(0.25\), which corresponds to the balancing factor picked for XLM-Roberta Conneau et al. (2019). The training data for the tokenizer and the model are the same. The vocabulary size \(N\) was set to 120,000. Appendix A contains technical details about our approach.
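With the stated constants, Eq. (2) downsamples, e.g., the 8.8 B English characters to roughly 54 M; a one-line illustration (the exact sampling code is an assumption):

```python
def sample_size(corpus_chars, c_min=10_000_000, alpha=0.25):
    # Eq. (2): c_l = c_min * (|C_l| / c_min) ** alpha
    return int(c_min * (corpus_chars / c_min) ** alpha)

print(sample_size(8_800_000_000))  # ~54.5M characters for English
```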
## 3 Measuring Tokenizer Properties
This section presents our in-depth analytical approach to evaluate different aspects of multilingual tokenization. We introduce non-parametric measures that describe the key properties of multilingual tokenizers: quality of vocabulary representation for particular languages and lexical overlap across languages.
We base our analysis on the empirical probability distribution of vocabulary units \(v\in\mathcal{V}\) computed on training corpus for each language \(l\):
\[d_{l,\mathcal{V}}(v)=\frac{f(v,C_{l})}{\sum_{v\in\mathcal{V}}f(v,C_{l})} \tag{3}\]
Function \(f(v,C_{l})\) is the number of occurrences of a vocabulary unit \(v\) in monolingual training corpus \(C_{l}\).
### Vocabulary Allocation
We aim to quantify how well multilingual vocabulary represents meaningful lexical units of particular languages. Our intuition is that a good lexical representation is obtained when: 1. It uses a vast portion of multilingual vocabulary, and thus a larger part of the embedding layer is devoted to the language; 2. The text in the language is split into longer and potentially more meaningful tokens.
#### Vocabulary Allocation: Average Rank

To measure the number of vocabulary units available for modeling specific languages, we propose an estimation of the average rank of vocabulary units in the distribution over a monolingual corpus.5 This measure denotes how many tokens are typically considered by a language model that has access to language identity information but no context (a probabilistic unigram LM).
Footnote 5: In this context, rank is the position of unit \(v\) in the vocabulary \(\mathcal{V}\) sorted in descending order by the probability distribution \(d_{l,\mathcal{V}}\)
\[\mathrm{AR}_{l,\mathcal{V}}=\sum_{v\in\mathcal{V}}\mathrm{rank}(v,d_{l, \mathcal{V}})d_{l,\mathcal{V}}(v) \tag{4}\]
Our intuition is that model will have better information about the language's lexicon when vocabulary is distributed over a larger number of tokens as more parameters of the input embedding layer would be allocated to represent language-specific features. Moreover, larger vocabularies tend to cover longer and more meaningful units.
#### Vocabulary Allocation: Characters per Token
In line with the previous intuition, longer tokens have a more meaningful representation. Therefore, we measure text fragmentation by computing the average number of characters per vocabulary unit in the monolingual corpus \(C_{l}\):
\[\mathrm{CPT}_{l,\mathcal{V}}=\frac{|C_{l}|}{|T_{\mathcal{V}}(C_{l})|} \tag{5}\]
\(T_{\mathcal{V}}(C_{l})\) is the tokenization of the corpus with vocabulary \(\mathcal{V}\); \(|C_{l}|\) is the size of the corpus measured as the number of characters. We choose the number of characters as the unit to relate to because it is not susceptible to cross-lingual differences regarding word boundaries and the average length of words. Still, the amount of information conveyed by a single character varies largely with the writing system, e.g., texts written in logographic scripts (e.g., Chinese, Japanese) tend to be shorter in the number of letters than similarly informative ones in a phonetic script (e.g., Latin) Perfetti and Liu (2005).
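Both allocation measures follow directly from token frequencies. A minimal sketch, assuming `token_counts` maps each vocabulary unit to its frequency in the tokenized corpus \(T_{\mathcal{V}}(C_{l})\) and `num_chars` is \(|C_{l}|\):

```python
import numpy as np

def vocab_allocation(token_counts, num_chars):
    """Average rank (Eq. 4) and characters per token (Eq. 5) for one language."""
    freqs = np.sort(np.array(list(token_counts.values()), dtype=float))[::-1]
    d = freqs / freqs.sum()              # empirical distribution, Eq. (3)
    ranks = np.arange(1, len(d) + 1)     # rank 1 = most frequent unit
    avg_rank = float(np.sum(ranks * d))  # AR, Eq. (4)
    cpt = num_chars / freqs.sum()        # CPT, Eq. (5): chars per token
    return avg_rank, cpt
```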
### Vocabulary Overlap
Another important property of multilingual vocabulary is sharing lexical units across languages. Previous works claimed that vocabulary overlap improves cross-lingual transfer for learning downstream tasks Pires et al. (2019); Wu and Dredze (2019). We measure overlap as the divergence between corpora distributions \(d_{l}\) (defined in equation 3). We use the Jensen-Shannon divergence.6 We apply JSD because it is symmetric and applicable for distribution with different supports. The latter is often the case when distributions are estimated for languages with distinct writing systems.
Footnote 6: In NLP literature, JSD is also known as “information radius” Manning and Schütze (2001).
\[\mathrm{JSD}(d_{l1,\mathcal{V}}||d_{l2,\mathcal{V}})=\\ =\frac{1}{2}\sum_{v\in\mathcal{V}}d_{l1,\mathcal{V}}(v)\log_{2} \frac{d_{l1,\mathcal{V}}(v)}{m_{l1,l2,\mathcal{V}}(v)}+\\ +\frac{1}{2}\sum_{v\in\mathcal{V}}d_{l2,\mathcal{V}}(v)\log_{2} \frac{d_{l2,\mathcal{V}}(v)}{m_{l1,l2,\mathcal{V}}(v)} \tag{6}\]
where:
\[m_{l1,l2,\mathcal{V}}=\frac{1}{2}d_{l1,\mathcal{V}}+\frac{1}{2}d_{l2,\mathcal{ V}} \tag{7}\]
JSD is bounded in the range \(0\) to \(1\). The lower the value, the larger the overlap across corpora.
Another possibility to quantify overlap is to count unique vocabulary units appearing in tokenized texts across languages. The advantage of divergence is that it reflects the frequency of shared tokens across corpora. It is also less affected by the choice of the data size used for estimating empirical probability distributions (\(d_{l}\)).
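A direct transcription of Eqs. (6)–(7), assuming the two distributions are given as unit-to-probability dicts (an illustrative sketch, not the released implementation):

```python
import numpy as np

def jensen_shannon(d1, d2):
    """JSD (Eq. 6) between two vocabulary-unit distributions."""
    support = sorted(set(d1) | set(d2))
    p = np.array([d1.get(v, 0.0) for v in support])
    q = np.array([d2.get(v, 0.0) for v in support])
    m = 0.5 * (p + q)  # mixture distribution, Eq. (7)

    def kl2(a, b):
        nz = a > 0  # convention: 0 * log2(0 / x) = 0
        return float(np.sum(a[nz] * np.log2(a[nz] / b[nz])))

    return 0.5 * kl2(p, m) + 0.5 * kl2(q, m)
```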
## 4 Evaluating Language Modeling and Downstream Tasks
In this section, we present the tasks and measures for evaluation of multilingual language models trained with different tokenizers.
### Language Modeling
We evaluate the masked language modeling performance with mean reciprocal rank:
\[\mathrm{MRR} = \frac{1}{N}\sum_{i=1}^{N}\frac{1}{\mathrm{rank}(x_{i},\hat{P}( \cdot|X\setminus x_{i}))} \tag{8}\]
where \(\hat{P}(\cdot|X\setminus x_{i})\) is the probability over vocabulary of predicting token \(x_{i}\) by the model given its context: \(X\setminus x_{i}\).
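Given model scores over the vocabulary for each masked position, Eq. (8) can be computed as follows (a sketch; the array shapes are assumptions for the example):

```python
import numpy as np

def masked_lm_mrr(logits, targets):
    """Mean reciprocal rank (Eq. 8). logits: (N, |V|) scores for N masked
    positions; targets: (N,) ids of the gold tokens."""
    gold = logits[np.arange(len(targets)), targets]
    ranks = 1 + (logits > gold[:, None]).sum(axis=1)  # rank 1 = top prediction
    return float(np.mean(1.0 / ranks))
```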
### Downstream Evaluation
The downstream tasks are taken from XTREME Hu et al. (2020), which is a collection of diverse datasets with predefined splits used to evaluate multilingual models' representations.
We probe the models' output representation to evaluate how useful the learned representation is for the downstream tasks. Only an additional linear layer is trained for the task, while the base model representation is frozen. The approach is suitable for evaluating how well the pre-trained model encodes linguistic phenomena as it does not change parameters learned in pre-training in contrast to regular fine-tuning Conneau et al. (2018); Belinkov (2022).
#### Word-level Tasks
The first set of tasks covers classification on a single-word or word-pair level. The probe is a linear layer taking word representations on input and outputting one of the classes. For word representations, we take the model's output embedding of the first subword of each word. We evaluate the results with an F1 score averaged across classes (macro-average).
We test syntactic tasks: **Part of Speech** and **Dependency labeling** on Universal Dependencies de Marneffe et al. (2021) and **Named Entity Recognition** on Wikiann dataset Pan et al. (2017). In dependency labeling, we use edge probe Tenney et al. (2019) on top of the representation of two words connected by the dependency arc.
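The probes are deliberately simple. A minimal sketch of the linear (edge) probe, with `hidden_dim` and `num_classes` as placeholder values (the actual probe-training details are in Appendix A.3):

```python
import torch
import torch.nn as nn

class LinearProbe(nn.Module):
    """Linear probe on frozen word representations; for dependency labeling
    (edge probing) the two vectors of an arc are concatenated."""

    def __init__(self, hidden_dim, num_classes, pair=False):
        super().__init__()
        self.clf = nn.Linear(hidden_dim * (2 if pair else 1), num_classes)

    def forward(self, head, dep=None):
        x = head if dep is None else torch.cat([head, dep], dim=-1)
        return self.clf(x)
```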
#### Sentence-level Tasks

In this set of tasks, we examine whether the model learns sentence-level representations that capture sentence semantics and can be transferred across languages. To obtain the sentence embedding, we average the model's output representation across all the tokens in the sentence.
We evaluate **Natural Language Inference** on XNLI dataset Conneau et al. (2018) and **Sentence Retrieval** on Tatoeba bitext corpus Artetxe and Schwenk (2019). For NLI, we use edge probing. Sentence retrieval is solved by an unsupervised algorithm matching sentences based on their cosine similarity. In Appendix A.3, we provide details of the datasets and probe training.
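Sentence retrieval then reduces to a cosine-similarity argmax over the mean-pooled embeddings; a minimal illustrative sketch (array names are ours, not the paper's code):

```python
import numpy as np

def retrieve(src_embs, tgt_embs):
    """Match each source sentence to the target sentence with the highest
    cosine similarity between mean-pooled sentence embeddings."""
    s = src_embs / np.linalg.norm(src_embs, axis=1, keepdims=True)
    t = tgt_embs / np.linalg.norm(tgt_embs, axis=1, keepdims=True)
    return (s @ t.T).argmax(axis=1)  # index of the best target per source
```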
#### 4.2.1 In-language vs. Cross-lingual Transfer
For all the downstream tasks, except sentence retrieval, we compute in-language performance by training the probe and evaluating it on held-out test data in the same language. We quantify cross-lingual transfer by training a probe on one language (source) and evaluating it on the test set for another language (target).
## 5 Experiments and Results
We train four tokenizers for the smaller set of 6 diverse languages (en, es, tr, el, zh, ar): two with existing methods (Unigram, BPE) and two with our methods for merging monolingual tokenizers (NoOverlap, TokMix). Using these tokenizers, we then train four models7 following the settings of XLM-Roberta Conneau et al. (2019), which we then use for the probing experiments.
Footnote 7: Details about the pretraining and probing procedures are described in Appendix A.2
In Section 5.1, we analyze the distribution of learned vocabulary units and compute _vocabulary allocation_ and _vocabulary overlap_ measures described in Section 3. Then in Section 5.2, we evaluate the models' performance measures introduced in Section 4 and compare them with the measures for tokenizers.
Subsequently, we repeat the analysis for the broader set of 20 diverse languages (including six mentioned earlier and: he, ka, ur, hi, mr, th, ta, te, bg, ru, sw, vi, fr, de) with three tokenization methods used in three pre-trained models. In this setting, we do not use NoOverlap tokenizer, which cannot be trained effectively due to the necessity of constraining vocabulary for each language to \(\frac{N}{L}=6,000\).
### Evaluation of Tokenizers' Properties
**Vocabulary allocation varies largely across languages and tokenization methods.** Table 1 shows that the average rank noticeably differs across languages. The highest AR is observed for Chinese, which is caused by the fact that logographic scripts require an extensive vocabulary capacity to encode all characters.
Multilingual _vocabulary allocation_ is highly dependent on the tokenization method used. Vocabulary learned with Unigram underperforms BPE and
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline & & ar & tr & zh & el & es & en \\ \hline \multirow{4}{*}{AR} & Unigram & 2129 & 2719 & **5919** & 2070 & 1439 & 1513 \\ & BPE & 2972 & 3226 & 4294 & **2907** & **2220** & **2143** \\ & NoOverlap & 2537 & 2653 & 2909 & 2065 & 1661 & 1597 \\ & TokMix & **3485** & **4167** & 3961 & 2639 & 1999 & 1898 \\ \hline \multirow{4}{*}{CPT} & Unigram & 3.16 & 4.01 & 1.84 & 3.5 & 3.88 & 3.91 \\ & BPE & **3.7** & 4.19 & **2.03** & **3.97** & **4.34** & **4.22** \\ & NoOverlap & 3.53 & 4.19 & 1.56 & 3.81 & 4.15 & 4.15 \\ & TokMix & **3.7** & **4.45** & 1.73 & 3.9 & 4.24 & 4.18 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Values of _vocabulary allocation_ measures for 4 tokenizers trained on the small language set. The highest values for each language are bolded.
Figure 2: _Vocabulary overlap_ measure: Jensen-Shannon divergence for four tokenization methods. The orange square in the bottom right groups the languages with the same script (Latin).
TokMix in both average rank and characters per token. Table 7 presented in the Appendix shows that this trend holds across languages except for Chinese. This suggests that our vanilla Unigram is a suboptimal multilingual vocabulary learner.
It is important to note that NoOverlap scores even lower than Unigram in the _vocabulary allocation_ measures due to the limited vocabulary size for each language and disallowing overlap. However, as shown in the next sections, LM trained with this tokenizer can achieve good results on some tasks.
**The choice of tokenization method affects _vocabulary overlap_.** Figure 2 shows Jensen-Shannon divergences between the vocabularies of six languages. We observe that the highest cross-lingual overlaps appear in the vocabulary obtained by Unigram, followed by TokMix, and BPE. Expectedly, we do not observe overlaps for NoOverlap's setting (\(\mathrm{JSD}=1\)).
Jensen-Shannon divergence is a good predictor of whether languages share a script. For all tokenization methods, the divergence is significantly smaller in the bottom-right square grouping the languages using Latin script. This effect is even more visible in the visualization of JSD computed for twenty languages (Figure 8 in Appendix C).
### Tokenizer Properties Impact Language Model's Performance
**High _vocabulary allocation_ improves downstream results for word-level tasks.** In Table 1(a), we observe that the choice of the tokenization method significantly impacts the results for POS, dependency labeling, and NER. We presume this results from learning good lexical representations throughout languages, e.g., by BPE and TokMix. The higher _vocabulary allocation_ is especially beneficial for word-level tasks, whereas its influence on the sentence-level task (NLI) is minimal.
Notably, the model instance with the NoOverlap tokenizer achieves the best F1 in POS and dependency labeling despite underperforming in _vocabulary allocation_. This results from learning language-specific token representations that are especially useful for syntactic tasks.
**High _vocabulary allocation_ deteriorates masked language modeling.** A larger number of candidate vocabulary units makes masked word prediction harder. At the same time, a high average rank means that the vocabulary is broader and contains lexical units important for downstream tasks.
Again, this trend does not hold for the NoOverlap setting, in which the search space for the masked-word problem is limited to language-specific tokens, leading to the best performance in MLM and syntactic tasks (POS and dependency label prediction).
In Table 3, we show that the strong relationship between _vocabulary allocation_ (avg. rank and CPT) and LM performance (MRR) is statistically supported. The length of token units has a strong positive influence on POS, dependency labeling, and NER results (\(r>0.65\)) and a negative influence on MRR (\(r<-0.9\)), while it does not significantly affect NLI results. The correlation between the average rank and MRR, NER scores is weaker but still significant. Moreover, it is significantly correlated with XNLI accuracy with a medium coefficient \(r=0.56\), even though the changes in XNLI are low across tokenizers.
**Impact of _vocabulary overlap_ on cross-lingual transfer varies across tasks.** We observed that the NoOverlap approach obtains competitive results for POS tagging. Surprisingly, disallowing vocabulary sharing also improves cross-lingual transfer in this task among languages with Latin script (shown in Table 4(a) and Figure 3(b)). We think that the reason behind the strength of the NoOverlap approach is that some tokens have different meanings across languages, e.g., the word "a" is an indefinite article in English and a preposition in Spanish.
Nevertheless, vocabulary overlap is crucial to cross-lingual transfer in some tasks, especially NER within same-script languages (Figure 3(a)) and the sentence-level tasks. For these tasks, NoOverlap significantly underperforms the other tokenization methods. The drop within Latin-script languages is in the range \(6.8\)–\(11.3\%\) for NER and \(12.7\)–\(15.9\%\) for sentence retrieval. In these cases, usage of the same tokens can indicate that texts refer to the same entities across languages, e.g., names are usually the same strings in languages sharing a writing system.
Table 4: Averaged results of the evaluation for cross-language overlaps and transfers. Each probing result is an average of 5 random seeds (for 6 languages) and 3 random seeds (for 20 languages). The best value in each metric is underlined, and bolded results are closer than the sum of standard deviations from the optimal value.
Table 5 presents the correlations of cross-lingual transfer scores with JSD measuring _vocabulary overlap_. The coefficients support our previous observation that lower overlap (thus higher JSD) improves transfer for POS tagging and dependency labeling and deteriorates it for other tasks, although the correlation for NER is not significant. The _vocabulary allocations_ of the source and target languages significantly influence the cross-lingual transfers. Similarly to the in-language correlations, the influence of characters per token is more substantial on word-level tasks, while Average Rank affects sentence-level tasks to a larger extent. This observation underlines the importance of allocating a sufficient portion of vocabulary to low-resource languages for better cross-lingual transfer.8
Footnote 8: We describe the correlation analysis in detail in Appendix C.3.
**Results generalize to the larger set of languages.** The key observations from the six-language set hold for the model trained on twenty languages. Table 1(b) shows that BPE and TokMix obtain better _vocabulary allocation_ than Unigram, leading to improved results for word-level downstream tasks (NER, POS, dependency labeling). Due to the smaller vocabulary-size-to-language-number ratio, average ranks decrease for all methods.
We observe in Table 1(b) that the cross-language
Figure 3: Cross-lingual transfer for POS and NER tasks. The absolute values are presented for the Unigram tokenizer. For other tokenization methods, the color scheme shows the difference from the Unigram algorithm. In the case of NER, we observe a drop in cross-lingual transfer for NoOverlap tokenization, especially for same-script pairs, suggesting that lexical overlap is an important aspect contributing to cross-lingual transfer for NER. We do not see a similar drop in the case of Part-of-Speech tagging.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & **V. Overlap** & **V. Allocation SRC** & **V. Allocation TGT** \\ & (JSD) & (AR) & (CPT) & (AR) & (CPT) \\ \hline NER & -0.111 & **0.249** & **0.33** & 0.209 & **0.28** \\ POS & **0.395** & **0.365** & **0.547** & **0.489** & **0.653** \\ Dep l. & **0.463** & 0.19 & **0.425** & **0.249** & **0.44** \\ NLI & **-0.516** & **0.421** & 0.203 & **0.297** & 0.103 \\ Retrieval & **-0.648** & **0.235** & 0.082 & **0.238** & 0.085 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Spearman correlations between cross-lingual transfer results and tokenization measures. _vocabulary overlap_ is measured by JSD, we also measure the correlation with _vocabulary allocations_ of source and target language of the transfer directions. Statistically significant correlations (\(p<0.01\)) are bolded. Computed for six languages.
vocabulary overlap is the highest for Unigram and lowest for BPE, similar to the six-language setting. However, the association between _vocabulary overlap_ and the cross-lingual transfers is less pronounced.
## 6 Related Work
**Importance of _vocabulary overlap_.** Wu and Dredze (2019); Pires et al. (2019) claimed that multilingual overlap benefits cross-lingual transfer. In contrast to this work, they compare overlaps for different language pairs with only one tokenizer. We think that their observations may be confounded by the typological similarity between languages. In following works, Conneau et al. (2020) found that sharing parameters in the top layers is more important for multilinguality than shared token embeddings. Similar results were demonstrated by Wang et al. (2021); Dufter and Schutze (2020), who show that in bilingual models, artificially removing _vocabulary overlap_ (similarly to our NoOverlap) does not deteriorate cross-lingual transfer. In contrast to many previous approaches, we used probing for evaluation because this method offers better insight into the representation learned in pre-training. Similarly to our results, Malkin et al. (2022); Limisiewicz et al. (2022) observed that differences in scripts can, in some cases, improve the cross-lingual transfer in masked language modeling and for downstream tasks.
**Importance of _vocabulary allocation_.** The effect of _vocabulary allocation_ on model performance has been studied to a lesser extent. Zheng et al. (2021) observed that the limited vocabulary capacity allocated to specific languages impedes downstream task performance and thus proposed a method to obtain a more balanced _vocabulary allocation_ across languages. For the same purpose, Chung et al. (2020) proposed a novel approach to generating multilingual vocabulary based on clustering the target languages and merging separate vocabularies. Recently, Liang et al. (2023) built on elements of both approaches and increased the vocabulary size to train the XLM-V model, achieving better results than its predecessor (XLM-Roberta Conneau et al. (2019)).
In a monolingual setting, Bostrom and Durrett (2020) argued that Unigram tokenization produces subword tokens that are more aligned with morphological units, which brings improvements for downstream tasks. This contrasts with our finding of Unigram's underperformance when applied to a multilingual corpus.
**Improving multilingual sub-word tokenization.** Patil et al. (2022) proposed a modification to the BPE algorithm that increases overlap between similar languages and benefits cross-lingual transfer. Rust et al. (2021) observed that models with dedicated monolingual tokenizers outperform multilingual ones. This observation can be utilized by adapting the embedding layer of the model for a target language Pfeiffer et al. (2020); Artetxe et al. (2020); Minixhofer et al. (2022). However, these approaches require language-specific modification of the model, limiting its multilingual aspect.
**Alternatives to sub-word tokenization.** There are multiple alternative approaches for inputting text into deep models, such as character-based representation Clark et al. (2022), byte input Xue et al. (2022), or representing the input text as images Salesky et al. (2021). Mielke et al. (2021) summarize a wide range of methods and point out that they offer trade-offs and may be better suited for certain tasks or languages.
## 7 Conclusions
We introduced a new framework for the evaluation of multilingual subword tokenizers. We show that _vocabulary allocation_ is a crucial aspect affecting the results of many downstream tasks. Specifically, we have observed the following trends: 1. Including longer and more diverse vocabulary units (higher _vocabulary allocation_) improves in-language results and cross-lingual transfer for word-level tasks; 2. _Vocabulary overlap_ is beneficial for cross-lingual transfer in sentence-level tasks; 3. Among languages with the same script, _vocabulary overlap_ improves transfer for NER and deteriorates it for POS and dependency labeling. Our conclusions are in line with the observation of Mielke et al. (2021) that there is no "silver bullet solution" tokenizer suiting all purposes.
We release the code for measuring tokenizer properties: github.com/tomlimi/entangled_in_scripts. We believe that it will be a useful evaluation tool for the developers of models who can get a better insight into the tokenization method before computationally expensive model training.
### Limitations
To achieve robust, unbiased results, we decided to train first on a smaller number of languages, fix our methodology, and then confirm our findings on the full set of languages. This meant that two rounds of pre-training needed to be done; because of that, we scaled our models down for computational efficiency reasons.
Another limitation of our methodology is the choice to train linear probes on top of the contextualized word representations instead of the more common finetuning approach. Nevertheless, we think that probing gives better insight into the pre-trained model's representation.
## Ethics Statement
We do not identify ethical risks connected to this work.
## Acknowledgements
We thank Jindrich Libovicky, Martin Popel, Gabriel Stanovsky, and anonymous ACL reviewers for their valuable comments and suggestions for improvement. This work has been supported by grant 338521 of the Charles University Grant Agency. We have been using language resources and tools developed, stored, and distributed by the LINDAT/CLARIAH-CZ project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2018101).
|
2309.00908 | MagicProp: Diffusion-based Video Editing via Motion-aware Appearance
Propagation | This paper addresses the issue of modifying the visual appearance of videos
while preserving their motion. A novel framework, named MagicProp, is proposed,
which disentangles the video editing process into two stages: appearance
editing and motion-aware appearance propagation. In the first stage, MagicProp
selects a single frame from the input video and applies image-editing
techniques to modify the content and/or style of the frame. The flexibility of
these techniques enables the editing of arbitrary regions within the frame. In
the second stage, MagicProp employs the edited frame as an appearance reference
and generates the remaining frames using an autoregressive rendering approach.
To achieve this, a diffusion-based conditional generation model, called
PropDPM, is developed, which synthesizes the target frame by conditioning on
the reference appearance, the target motion, and its previous appearance. The
autoregressive editing approach ensures temporal consistency in the resulting
videos. Overall, MagicProp combines the flexibility of image-editing techniques
with the superior temporal consistency of autoregressive modeling, enabling
flexible editing of object types and aesthetic styles in arbitrary regions of
input videos while maintaining good temporal consistency across frames.
Extensive experiments in various video editing scenarios demonstrate the
effectiveness of MagicProp. | Hanshu Yan, Jun Hao Liew, Long Mai, Shanchuan Lin, Jiashi Feng | 2023-09-02T11:13:29Z | http://arxiv.org/abs/2309.00908v1 | # MagicProp: Diffusion-based Video Editing via Motion-aware Appearance Propagation
###### Abstract
This paper addresses the issue of modifying the visual appearance of videos while preserving their motion. A novel framework, named MagicProp, is proposed, which disentangles the video editing process into two stages: appearance editing and motion-aware appearance propagation. In the first stage, MagicProp selects a single frame from the input video and applies image-editing techniques to modify the content and/or style of the frame. The flexibility of these techniques enables the editing of arbitrary regions within the frame. In the second stage, MagicProp employs the edited frame as an appearance reference and generates the remaining frames using an autoregressive rendering approach. To achieve this, a diffusion-based conditional generation model, called PropDPM, is developed, which synthesizes the target frame by conditioning on the reference appearance, the target motion, and its previous appearance. The autoregressive editing approach ensures temporal consistency in the resulting videos. Overall, MagicProp combines the flexibility of image-editing techniques with the superior temporal consistency of autoregressive modeling, enabling flexible editing of object types and aesthetic styles in arbitrary regions of input videos while maintaining good temporal consistency across frames. Extensive experiments in various video editing scenarios demonstrate the effectiveness of MagicProp.
## 1 Introduction
Content creation often involves video editing, which includes modifying the appearance or adjusting the motion of raw videos (Wu et al., 2023; Kasten et al., 2021; Zhao et al., 2023; Wang et al., 2023). Filmmakers may need to adjust the exposure, saturation, and contrast of raw videos for better aesthetic
Figure 1: Video editing via MagicProp: global, background, and foreground editing are all supported.
quality, while advertisers may want to transform realistic videos into certain fascinating styles to impress target audiences. This paper addresses the problem of editing videos' appearance, including changing the content or style locally in a certain spatial region or globally throughout the entire video.
Existing works attempt to solve this problem mainly from two perspectives: editing each frame individually via image generation models [Qi et al., 2023; Ceylan et al., 2023; Yang et al., 2023; Khachatryan et al., 2023; Geyer et al., 2023] or modeling the entire video sequence for appearance changing [Ni et al., 2023; Molad et al., 2023; Karras et al., 2023; Kasten et al., 2021; Esser et al., 2023]. Methods based on image models, such as Stable Diffusion [Rombach et al., 2022] and ControlNet [Zhang and Agrawala, 2023], can flexibly modify the content or style of any arbitrary region, but it is challenging to ensure temporal consistency across adjacent frames. To alleviate this issue, some use structure-guided models and cross-frame attention to align color and layout across frames [Zhang and Agrawala, 2023; Qi et al., 2023; Ceylan et al., 2023]. Other methods exploit inter-frame correspondence, such as optical flow, to warp the features of edited frames [Yang et al., 2023; Geyer et al., 2023]. However, the temporal consistency of the edited video is still suboptimal. Instead of using image-based models, researchers have developed many sequence-based models for video generation and editing [Esser et al., 2023; Couairon et al., 2023]. Neural Layered Atlas (NLA) overfits a video first and then edits the learned corresponding Atlas to change the foreground or background [Kasten et al., 2021; Bar-Tal et al., 2022]. NLA-based methods can effectively edit the appearance of videos, but test-time optimization is time- and resource-consuming. Recently, many diffusion-based models have been proposed for structure-aware video generation, such as Gen-1 [Esser et al., 2023], ControlVideo [Zhao et al., 2023; Chen et al., 2023], and VideoComposer [Wang et al., 2023]. These methods synthesize videos by conditioning on layout sequences such as depth or sketch maps, so that the motion coherence in the resultant video can be ensured. However, the editability and flexibility will be compromised due to the limitation of textual descriptions and the difficulty of user interaction. For instance, when editing a certain part of a given video, text prompts may not precisely localize the region of interest across all frames, and it may be challenging for users to prepare masks for all frames. The trade-off between temporal consistency and editing flexibility inspires us to explore other alternative frameworks for video editing.
Motivated by the fact that frames within a video usually share a similar scene, we propose a novel framework, MagicProp, which disentangles video editing into two stages, namely, appearance editing and motion-aware appearance propagation. MagicProp first selects one frame from the given video and edits its appearance. The edited frame is used as the appearance reference in the second stage. Then, MagicProp autoregressively renders the remaining frames by conditioning on the reference frame and the motion sequence (_e.g._, depth maps of the given video). MagicProp models videos in an autoregressive manner, which guarantees the temporal consistency of the output videos. Additionally, MagicProp uses powerful image diffusion models (optionally with additional masks) for reference editing, allowing for flexible modification of the contents of a local region or the entire video.
The most crucial component of MagicProp is an autoregressive conditional image diffusion model that synthesizes the target image under the control of its previous frame, the target depth, and the reference appearance. We design a lightweight adapter to merge and inject the semantic-level and pixel-level information of the reference frame into the image generation process, ensuring that the appearance of the resultant frames aligns well with the reference. During training, we follow the strategy of zero terminal signal-to-noise ratio (SNR) [Lin et al., 2023], which bridges the gap between the noise schedules during training and inference, resulting in better matching of the color and style of generated frames with the reference. We conducted extensive experiments in several video editing scenarios, including local object/background editing and global stylization. The results demonstrate the effectiveness and flexibility of MagicProp. The contributions of MagicProp are three-fold:
* We proposed a novel framework, MagicProp, that decouples video editing into appearance editing and motion-aware appearance propagation.
* We devised a lightweight adapter to inject class- and pixel-level features into the diffusion model. We also applied the zero-terminal SNR strategy for training. These techniques facilitate the alignment of the appearance.
* Extensive experiments demonstrate that MagicProp can flexibly edit any arbitrary region of the given video and generate high-quality results.
Related Works and Preliminaries
In this section, we first review recent related works on the appearance editing of videos. We categorize them into two groups, _i.e._, editing a video frame by frame via image models, and modeling the whole frame sequence for editing. Then, we introduce the preliminaries about diffusion probabilistic models and the notation for video editing.
### Related Works
**Frame-by-frame Editing.** Diffusion-based image generation models have achieved great success in image generation and editing tasks (Ho et al., 2020, 2022; Rombach et al., 2022; Blattmann et al., 2023). The simplest method for video editing is to edit each frame individually (Meng et al., 2022; Liew et al., 2022; Hertz et al., 2022). Although it is flexible to edit each frame and the resultant frames have a good aesthetic quality, the temporal consistency of the whole video is usually inferior. Some methods use layout-conditioned generation to edit each frame (Zhang and Agrawala, 2023; Huang et al., 2023b). For example, ControlNet (Zhang and Agrawala, 2023) synthesizes images with the conditioning of a text description and an additional layout map, such as a depth map or an edge map, so that the spatial layout of the edited frame matches that of the original frame. Whilst these methods can guarantee the layout consistency of the edited videos, the appearance of frames (_e.g._, identity, texture, and color) still changes apparently across frames. To alleviate the issue of temporal consistency, a line of methods rely on cross-frame attention to fuse the latents of edited frames and those of their previous frames (or other reference frames) (Qi et al., 2023; Hertz et al., 2022; Khachatryan et al., 2023; Ceylan et al., 2023), so that the consistency of shape and style can be improved. Another line of methods exploits the correspondence between frames in the original video and uses it to warp the latents or attention maps when generating future frames (Yang et al., 2023; Geyer et al., 2023). Correspondence-based warping may fail due to occlusion in consecutive frames. In general, methods based on per-frame editing still suffer from poor temporal consistency across frames.
**Editing via Sequential Modeling.** Videos are naturally sequential data, and therefore using sequential models for video generation and editing intrinsically benefits temporal consistency. Neural Layered Atlas (NLA) (Kasten et al., 2021; Bar-Tal et al., 2022; Huang et al., 2023a) represents a video through several 2D maps and 2D-to-color atlases. The appearance of objects and backgrounds can be easily edited by modifying the corresponding atlases. However, NLA needs to perform test-time optimization for each video to learn its representations, which is very time-consuming. Recently, diffusion models have been proven effective in modeling sequential data like videos. Many methods use video diffusion models, or flatten image diffusion models into video models, for video editing (Ho et al., 2022; Blattmann et al., 2023; Zhou et al., 2023; Wang et al., 2023). Dreamix (Molad et al., 2023) and Tune-A-Video (Wu et al., 2023) fine-tune the video model on the provided video first and then generate a new video by conditioning on the textual prompt of the editing instruction. Fine-tuning on the given video cannot sufficiently guarantee that the motion (layout sequence) in the edited video aligns well with the original. To ameliorate this issue, motion-conditioned video diffusion models have been proposed, including Gen-1 (Esser et al., 2023), ControlVideo (Zhao et al., 2023; Chen et al., 2023), and VideoComposer (Wang et al., 2023). These methods generate video with the condition of a layout sequence, such as depth or edge maps. When editing, one can extract the layout sequence from the given video first and then generate a new video by conditioning on the layout sequence and an editing text prompt. Overall, editing methods based on video models can effectively synthesize temporally consistent videos, but their editability and image quality are not as good as those of the image-based models at the current stage, due to the limitation of textual description and the difficulty of training a good video model. Textual prompts can only provide a high-level semantic description of the desired appearance. It is challenging to locate a specific local editing region of a video based on textual prompts.
In contrast, MagicProp disentangles appearance editing and appearance propagation. It can flexibly edit the appearance using powerful image editing methods that can incorporate textual descriptions and localization masks. In addition, synthesizing future frames with an autoregressive model ensures temporal consistency across frames.
### Preliminaries
Denoising Diffusion Probabilistic Model. Denoising diffusion probabilistic models (DDPM) are a family of latent generative models that approximate the probability density of training data by reversing a Markovian Gaussian diffusion process (Sohl-Dickstein et al., 2015; Ho et al., 2020). For a data distribution \(q(\mathbf{x})\), DDPM models the density as the marginal of a joint distribution between \(\mathbf{x}\) and a series of latent variables \(\mathbf{x}_{1:T}\), _i.e._, \(p_{\theta}(\mathbf{x})=\int p_{\theta}(\mathbf{x}_{0:T})d\mathbf{x}_{1:T}\) with \(\mathbf{x}=\mathbf{x}_{0}\). The joint distribution is defined as a Markov chain with learned Gaussian transitions starting from the standard normal distribution, _i.e._,
\[p_{\theta}(\mathbf{x}_{T})=\mathcal{N}(\mathbf{x}_{T};\mathbf{0},\mathbf{I}) \tag{1}\] \[p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})=\mathcal{N}(\mathbf{x}_{t-1}; \mathbf{\mu}_{\theta}(\mathbf{x}_{t},t),\Sigma_{\theta}(\mathbf{x}_{t},t)) \tag{2}\]
To perform likelihood maximization of the parameterized marginal \(p_{\theta}(\cdot)\), DDPM uses a fixed Markov Gaussian diffusion process, \(q(\mathbf{x}_{1:T}|\mathbf{x}_{0})\), to approximate the posterior \(p_{\theta}(\mathbf{x}_{1:T}|\mathbf{x}_{0})\). Specifically, two series, \(\alpha_{0:T}\) and \(\sigma_{0:T}^{2}\), are defined, where \(1=\alpha_{0}>\alpha_{1}>\cdots>\alpha_{T}\geq 0\) and \(0=\sigma_{0}^{2}<\sigma_{1}^{2}<\cdots<\sigma_{T}^{2}\). For any \(t>s\geq 0\), \(q(\mathbf{x}_{t}|\mathbf{x}_{s})=\mathcal{N}(\mathbf{x}_{t};\alpha_{t|s}\mathbf{x}_{s},\sigma_{t|s}^{2}\mathbf{I})\), where \(\alpha_{t|s}=\alpha_{t}/\alpha_{s}\) and \(\sigma_{t|s}^{2}=\sigma_{t}^{2}-\alpha_{t|s}^{2}\sigma_{s}^{2}\). Usually, we set \(\alpha_{t}^{2}+\sigma_{t}^{2}=1\); thus,
\[q(\mathbf{x}_{t}|\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{t}|\alpha_{t}\mathbf{x}_{0},(1-\alpha _{t}^{2})\mathbf{I}). \tag{3}\]
We use deep neural networks to parameterize the expectation function \(\mu_{\theta}(\mathbf{x}_{t},t)\) of the sampling process, or the denoising function \(\epsilon_{\theta}(\mathbf{x}_{t},t)\), from which the expectation can be recovered via \(\mu_{\theta}(\mathbf{x}_{t},t)=\frac{1}{\alpha_{t|t-1}}\left(\mathbf{x}_{t}-\frac{1-\alpha_{t|t-1}^{2}}{\sqrt{1-\alpha_{t}^{2}}}\epsilon_{\theta}(\mathbf{x}_{t},t)\right)\). When performing conditional generation tasks, the network takes additional control signals \(\mathbf{y}\) as input, _i.e._, \(\epsilon_{\theta}(\mathbf{x}_{t},t,\mathbf{y})\). The parameterized reversed process \(p_{\theta}\) can be optimized by maximizing the associated evidence lower bound (ELBO). Plugging the Gaussian parameterization into the KL-divergence terms, the ELBO optimization reduces to the noise-estimation objective in Eqn (4), where \(\lambda(t)\) is a weighting function. After training, we can sample new data via the Markov chain defined in Eqn (2). Alternatively, we can use deterministic samplers, such as DDIM, to generate new data. For a given starting noise \(\mathbf{x}_{T}\sim\mathcal{N}(\mathbf{x}_{T};\mathbf{0},\mathbf{I})\), the mapping from \(\mathbf{x}_{T}\) to the generated datum \(\mathbf{x}_{0}\) through a deterministic sampler is denoted by \(\Phi(\mathbf{x}_{T},\mathbf{y})\).
\[L=\mathbb{E}_{\mathbf{x}_{0},t,\epsilon}[\lambda(t)\|\epsilon_{\theta}(\mathbf{x}_{t})-\epsilon\|_{2}^{2}]. \tag{4}\]
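To make the objective concrete, here is a minimal PyTorch sketch of one Monte Carlo estimate of the noise-estimation loss in Eqn (4); the `eps_model` interface and the discretization of \(\alpha_{t}\) are our assumptions for illustration, not the actual training code.

```python
import torch

def ddpm_loss(eps_model, x0, alphas, weights=None):
    """One Monte Carlo estimate of the noise-estimation loss in Eqn (4)."""
    B, T = x0.shape[0], alphas.shape[0] - 1
    t = torch.randint(1, T + 1, (B,), device=x0.device)
    a_t = alphas[t].view(B, *([1] * (x0.dim() - 1)))
    eps = torch.randn_like(x0)
    # Sample x_t ~ q(x_t | x_0) = N(alpha_t x_0, (1 - alpha_t^2) I), as in Eqn (3)
    x_t = a_t * x0 + torch.sqrt(1.0 - a_t ** 2) * eps
    pred = eps_model(x_t, t)                    # assumed network interface
    w = 1.0 if weights is None else weights[t].view_as(a_t)
    return (w * (pred - eps) ** 2).mean()
```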
Notation for Video Editing. We denote a video by \(\mathbf{x}=[\mathbf{x}^{1},...,\mathbf{x}^{K}]\), where \(\mathbf{x}^{i}\) represents the \(i^{\text{th}}\) frame in the sequence and, for each \(i\in[1,\ldots,K]\), \(\mathbf{x}^{i}\in[-1,1]^{C\times H\times W}\). To reduce the computational overhead of modeling videos, we use a variational auto-encoder (VAE), denoted by \(\{\mathcal{E}(\cdot),\mathcal{D}(\cdot)\}\), to map videos from the RGB space to a lower-dimensional latent space. The video frames are transformed one by one, _i.e._, \(\mathbf{z}=[\mathbf{z}^{1},...,\mathbf{z}^{K}]\) with \(\mathbf{z}^{i}=\mathcal{E}(\mathbf{x}^{i})\). We follow Stable Diffusion, which uses an encoder to downsample \(\mathbf{x}\) into a spatially \(8\times\) smaller space. The generated latent codes can be decoded to videos by \(\mathcal{D}(\cdot)\). The editing operations require users to provide extra information describing the desired appearance of the target video. We denote the instruction information by \(\mathbf{y}\); it could be a textual description, an extra localization mask, or another visual reference. We use CLIP, denoted by \(\tau(\cdot)\), to encode the text prompt or reference image, and the embedding is denoted \(\tau(\mathbf{y})\). To preserve the motion of the original video, we use a depth estimation model, such as TCMonoDepth, to extract the sequence of depth maps for representing the motion. We denote \(\mathcal{M}(\cdot)\) as the depth model and \(\mathbf{m}=[\mathbf{m}^{1},\ldots,\mathbf{m}^{K}]\) with \(\mathbf{m}^{i}=\mathcal{M}(\mathbf{x}^{i})\) as the depth sequence.
## 3 Method
This paper addresses the problem of motion-preserving video editing, where we aim to alter the appearance of a given video while retaining the original motion. Typically, frames in a short video have similar scenes, with main objects and backgrounds appearing consistently throughout. It is natural to disentangle the video editing problem into two sub-tasks, _viz._, editing the appearance of the main objects and/or the background first and then propagating the edited content to all other frames based on the original motion.
In this section, we elucidate the pipeline of MagicProp \(\mathcal{V}(\cdot)\), which performs video editing in two stages sequentially, _i.e._, appearance editing \(\Phi^{1}(\cdot)\) and motion-aware appearance propagation \(\Phi^{2}(\cdot)\).
MagicProp can flexibly edit the appearance of a given video according to users' instructions. It supports changing the contents (_e.g._, object type and image style) in any specific region, either locally or globally. Formally, MagicProp takes input as the source video \(\mathbf{x}\), a textual prompt \(\mathbf{y}\), and optionally a localization mask \(\mathbf{w}\). This mask can be provided by users or easily obtained by a powerful segmentation model. After the two-stage processing, MagicProp generates an edited video \(\hat{\mathbf{x}}\) whose motion remains unchanged.
### Appearance Editing
The first stage of MagicProp is to manipulate the appearance of the source video. We select one frame as the appearance reference. Thanks to many effective image-editing methods, we can flexibly edit any arbitrary region of the reference frame, including changing object types or visual styles.
Specifically, we select a frame \(\mathbf{x}^{\#}\) from the input video \(\mathbf{x}\) as the appearance reference. Existing image editing methods, such as Text-to-Image (T2I) models, offer rich possibilities for manipulating image contents (Meng et al., 2022; Liew et al., 2022; Zhang and Agrawala, 2023). Here, we use ControlNet, optionally with a segmentation mask \(\mathbf{w}\), to change the main objects and/or the background. Conditioned on the depth map of \(\mathbf{x}^{\#}\) and a textual prompt \(\mathbf{y}\), ControlNet generates a new image \(\hat{\mathbf{x}}^{\#}\) whose layout matches the original and whose semantics align with the text description. In comparison to existing Text-to-Video (T2V) models, T2I models, such as Stable Diffusion, have a clear advantage in per-frame quality. Thus, the resultant frame edited by ControlNet contains rich details and enjoys high aesthetic quality. Moreover, T2I diffusion models allow us to use localization masks to precisely control the edited regions in images, so it is easy to edit either a local region or the whole image. In brief, stage one chooses and edits a certain frame, and the edited frame is used as the appearance reference for video synthesis in the second stage (a code sketch follows Eqn (5)).
\[\hat{\mathbf{x}}^{\#}=\Phi^{1}(\mathbf{x},\#,\mathbf{y},\mathbf{w}) \tag{5}\]
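As an illustration of Eqn (5), the following sketch performs the reference-frame edit with the Hugging Face `diffusers` ControlNet pipeline; the checkpoint identifiers and the `estimate_depth` helper are assumptions for illustration, not necessarily the exact models used here.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Illustrative checkpoints; any depth-conditioned ControlNet would do.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16
).to("cuda")

def edit_reference_frame(frame, prompt, estimate_depth):
    depth = estimate_depth(frame)             # assumed monocular-depth helper
    out = pipe(prompt, image=depth, num_inference_steps=30)
    return out.images[0]                      # layout follows the depth map
```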
Figure 3: Auto-regressive Motion-aware Appearance Propagation Diffusion Model
Figure 2: The pipeline of MagicProp.
### Motion-aware Appearance Propagation
Given a source video \(\mathbf{x}\) and the appearance reference \(\hat{\mathbf{x}}^{\#}\), the second stage \(\Phi^{2}(\cdot)\) renders a new video \(\hat{\mathbf{x}}\) that preserves the motion in the source and whose appearance matches the reference. The most crucial part is an appearance propagation diffusion probabilistic model (PropDPM). PropDPM, denoted by \(\phi_{\theta}(\cdot)\), synthesizes the whole video in an auto-regressive manner. Each frame \(\hat{\mathbf{x}}^{k}\) is generated with the conditioning of the reference appearance \(\hat{\mathbf{x}}^{\#}\), its corresponding depth map \(\mathbf{m}^{k}\), and the previously edited frame \(\hat{\mathbf{x}}^{k-1}\). We use the edited appearance reference as the starting frame, _i.e._, \(\hat{\mathbf{x}}^{0}=\hat{\mathbf{x}}^{\#}\) and \(\mathbf{m}^{0}=\mathbf{m}^{\#}\). The rest can be rendered frame-by-frame through Eqn (6) for \(k\) from \(1\) to \(K\). The layout in the generated frames aligns with the depth maps extracted from the corresponding frames in the source video. Hence, the motion (layout sequence) remains unchanged compared to the source video, and the temporal consistency of the rendered video is also guaranteed.
\[\hat{\mathbf{x}}^{k}=\phi_{\theta}(\mathbf{m}^{k},\hat{\mathbf{x}}^{k-1},\mathbf{m }^{k-1},\hat{\mathbf{x}}^{\#}) \tag{6}\] \[\hat{\mathbf{x}}=\Phi^{2}(\hat{x}^{\#},\mathbf{x}) \tag{7}\]
In specific, PropDPM is designed based on the latent diffusion model (Rombach et al., 2022). We use a VAE \(\{\mathcal{E}(\cdot),\mathcal{D}(\cdot)\}\) to map a video into a lower-dimensional latent space. PropDPM is trained to generate the edited latent \(\hat{\mathbf{z}}^{k}\) and we then use the VAE to reconstruct the edited video frame \(\hat{\mathbf{x}}^{k}\). For the conditioning signals, we split them into two groups, _viz._, the spatial conditions and the semantic conditions. The spatial conditions, including the target frame's depth map and the previous frame, provide the spatial layout information for the generated image and form a contrast between two consecutive frames. This contrast facilitates the synthesis of contents by querying spatially corresponding regions. The semantic conditions include the RGB and the latent of the reference frame. They provide information about the color, style, and object classes in the target edited video.
The spatial conditions are injected into PropDPM by concatenating them to the noisy latent. We use the TCMonoDepth (Li et al., 2021) model to estimate depth maps in the RGB space and rescale them to the size of the latent codes. When generating the \(k^{\text{th}}\) edited frame, we concatenate its depth map \(\mathbf{m}^{k}\), the latent of the previous edited frame \(\hat{\mathbf{z}}^{k-1}\), and the previous depth map \(\mathbf{m}^{k-1}\) to the noisy latent \(\hat{\mathbf{z}}^{k}_{t}\). The semantic conditions, in contrast, are used as the input of the cross-attention modules. We design a lightweight adaptor to combine the CLIP embedding and the VAE latent of the reference frame so that the injected semantics contain both class-wise and patch-wise information.
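A schematic of the propagation loop in Eqns (6)-(7) is given below; all module names (`prop_dpm_sample`, `vae`, `depth_model`, `clip_embed`, `adaptor`) are placeholders for the trained networks, and their interfaces are our assumptions.

```python
def propagate(frames, edited_ref, prop_dpm_sample, vae,
              depth_model, clip_embed, adaptor):
    """Auto-regressive appearance propagation, a sketch of Phi^2."""
    depths = [depth_model(f) for f in frames]          # motion as depth maps
    sem = adaptor(clip_embed(edited_ref), vae.encode(edited_ref))
    prev_lat, prev_depth = vae.encode(edited_ref), depth_model(edited_ref)
    edited = []
    for d in depths:
        # Spatial conditions (d, prev_lat, prev_depth) are concatenated to the
        # noisy latent inside the sampler; `sem` feeds the cross-attention.
        lat = prop_dpm_sample(spatial=(d, prev_lat, prev_depth), semantic=sem)
        edited.append(vae.decode(lat))
        prev_lat, prev_depth = lat, d                  # roll the window forward
    return edited
```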
### Model Design of PropDPM
The main challenges of video editing are ensuring temporal consistency across all frames and maintaining per-frame quality. PropDPM addresses the first challenge by editing a video in an auto-regressive manner, conditioning on the true depth sequence to ensure temporal coherence across frames. However, due to the intrinsic error accumulation issue of auto-regressive modeling, the image quality of the edited frames degrades as the frame index increases. While the early edited frames contain rich details, the later edited ones become smooth and suffer from color shifting.
To alleviate the error accumulation issue, we propose two complementary solutions. First, we design an appearance adaptor that merges the class-level and patch-wise information of the reference frame. The output of this adaptor is sent to cross-attention modules. During inference, we use a fixed reference frame for each video when auto-regressively synthesizing frames. A fixed reference frame serves as an anchor to ameliorate the degradation. Second, we apply the Zero-Terminal-SNR (Lin et al., 2023) technique to train the diffusion model, which bridges the gap between the starting noise's strength during inference and the largest noise level during training. This technique improves the image quality of the generated frame in each iteration.
#### 3.3.1 Appearance Adaptor
We design a lightweight adaptor to fuse the class-level and pixel-level features of the reference frame. The adaptor preserves the spatial correspondence between the fused tokens and the reference image. In detail, we first use the VAE to extract the latent of the reference image, \(\mathbf{z}^{\#}\in\mathbb{R}^{4\times h\times w}\). The latent codes of the VAE have good spatial correspondence to the original images. We use a nonlinear network to reduce the redundant spatial resolution of the latent \(\mathbf{z}^{\#}\) by a factor of \(2\) while increasing the channel dimension to preserve more information. The resultant feature has shape \(\mathbb{R}^{c\times h/2\times w/2}\) for an enlarged channel dimension \(c\).
2307.11371 | Random Separating Hyperplane Theorem and Learning Polytopes | The Separating Hyperplane theorem is a fundamental result in Convex Geometry
with myriad applications. Our first result, Random Separating Hyperplane
Theorem (RSH), is a strengthening of this for polytopes. RSH asserts that if
the distance between $a$ and a polytope $K$ with $k$ vertices and unit diameter
in $\Re^d$ is at least $\delta$, where $\delta$ is a fixed constant in $(0,1)$,
then a randomly chosen hyperplane separates $a$ and $K$ with probability at
least $1/poly(k)$ and margin at least $\Omega \left(\delta/\sqrt{d} \right)$.
An immediate consequence of our result is the first near optimal bound on the
error increase in the reduction from a Separation oracle to an Optimization
oracle over a polytope.
RSH has algorithmic applications in learning polytopes. We consider a
fundamental problem, denoted the ``Hausdorff problem'', of learning a unit
diameter polytope $K$ within Hausdorff distance $\delta$, given an optimization
oracle for $K$. Using RSH, we show that with polynomially many random queries
to the optimization oracle, $K$ can be approximated within error $O(\delta)$.
To our knowledge this is the first provable algorithm for the Hausdorff
Problem. Building on this result, we show that if the vertices of $K$ are
well-separated, then an optimization oracle can be used to generate a list of
points, each within Hausdorff distance $O(\delta)$ of $K$, with the property
that the list contains a point close to each vertex of $K$. Further, we show
how to prune this list to generate a (unique) approximation to each vertex of
the polytope. We prove that in many latent variable settings, e.g., topic
modeling, LDA, optimization oracles do exist provided we project to a suitable
SVD subspace. Thus, our work yields the first efficient algorithm for finding
approximations to the vertices of the latent polytope under the
well-separatedness assumption. | Chiranjib Bhattacharyya, Ravindran Kannan, Amit Kumar | 2023-07-21T06:03:43Z | http://arxiv.org/abs/2307.11371v1 | # Random Separating Hyperplane Theorem and Learning Polytopes
###### Abstract
The Separating Hyperplane theorem is a fundamental result in Convex Geometry with myriad applications. The theorem asserts that for a point \(a\) not in a closed convex set \(K\), there is a hyperplane with \(K\) on one side and \(a\) strictly on the other side. Our first result, Random Separating Hyperplane Theorem (RSH), is a strengthening of this for polytopes. RSH asserts that if the distance between \(a\) and a polytope \(K\) with \(k\) vertices and unit diameter in \(\Re^{d}\) is at least \(\delta\), where \(\delta\) is a fixed constant in \((0,1)\), then a randomly chosen hyperplane separates \(a\) and \(K\) with probability at least \(1/\operatorname{poly}(k)\) and margin at least \(\Omega\left(\delta/\sqrt{d}\right)\). There is a rich body of work on reductions between (approximate) optimization and (approximate) separation oracles for general convex sets, where the focus has been on the number of oracle calls. An immediate consequence of our result is the first near optimal bound on the error increase in the reduction from a Separation oracle to an Optimization oracle over a polytope.
RSH has algorithmic applications in learning polytopes. We consider a fundamental problem, denoted the "Hausdorff problem", of learning a unit diameter polytope \(K\) within Hausdorff distance \(\delta\), given an optimization oracle for \(K\). Using RSH, we show that with polynomially many random queries to the optimization oracle, \(K\) can be approximated within error \(O(\delta)\). To our knowledge this is the first provable algorithm for the Hausdorff Problem. Building on this result, we show that if the vertices of \(K\) are well-separated, then an optimization oracle can be used to generate a list of points, each within Hausdorff distance \(O(\delta)\) of \(K\), with the property that the list contains a point close to each vertex of \(K\). Further, we show how to prune this list to generate a (unique) approximation to each vertex of the polytope. We prove that in many latent variable settings, e.g., topic modeling, LDA, optimization oracles do exist provided we project to a suitable SVD subspace. Thus, our work yields the first efficient algorithm for finding approximations to the vertices of the latent polytope under the well-separatedness assumption. This assumption states that each vertex of \(K\) is far from the convex hull of the remaining vertices of \(K\), and is much weaker than other assumptions behind algorithms in the literature which find vertices of the latent polytope.
## 1 Introduction
The Separating Hyperplane theorem (SHT) is a fundamental result in Convex Geometry with myriad applications (see e.g. [1]). The theorem asserts that for a point \(a\) not in a closed convex set \(K\), there is a hyperplane with \(K\) on one side and \(a\) strictly on the other side.
This paper makes two main contributions. Our theoretical contribution, which is an extension of the classical SHT, is what we call the Random Separating Hyperplane Theorem (RSH). Our main algorithmic contribution is to use RSH to prove that a natural algorithm, which we call the \(k\)-OLP algorithm, can learn (vertices of) latent polytopes arising in a number of problems in Latent Variable Models including Clustering, Mixture Learning, LDA (Latent Dirichlet Allocation), and Topic Models. The algorithmic result is shown by reducing the problem of learning latent polytopes in a variety of settings to that of constructing approximate optimization oracles for the corresponding polytopes. The bulk of our algorithmic contribution lies in proving this reduction and the existence of such oracles.
### Random Separating Hyperplane Theorem (RSH)
RSH draws its motivation mainly from the Separating Hyperplane Theorem (SHT) of Convex Geometry. It also has connections to the Johnson-Lindenstrauss Random Projection theorem [13] and reductions among oracles in Convex Optimization. The SHT formally states that given a closed convex set \(K\) and a point \(a\notin K\), there exists a (non-zero) vector \(u\) such that
\[u\cdot a\,>\,\text{Max}_{y\in K}\,u\cdot y.\]
The following question arises: Does a randomly picked \(u\) separate \(a\) from \(K\)? Taking into account some necessary conditions for a positive answer, we can ask if the following inequality holds for a randomly chosen \(u\): (here \(\Delta\) is the diameter of \(K\), \(a\) is at distance at least \(\delta\Delta\) from \(K\), where \(\delta\in(0,1)\)):
\[\mathbf{Pr}\left(u\cdot a\geq\text{Max}_{y\in K}u\cdot y+|u|\alpha\delta\Delta\right)\;\geq\;1/\text{poly}_{\delta}(k), \tag{1}\]

Footnote 1: \(\text{poly}_{\delta}(z)\) denotes \(z^{\text{poly}(1/\delta)}\).
with \(\alpha\) being as high as possible.
The question (1) is also motivated from the Johnson-Lindenstrauss Random Projection theorem (JL theorem) [13] which states that if \(a,b\) are points in \(\mathbf{R}^{d}\) and \(U\) is a random subspace of dimension \(s\), then with probability bounded away from \(0\), the distance between the projection of \(a\) and \(b\) on \(U\) is at least \(\Omega(|a-b|\sqrt{s}/\sqrt{d})\). The following natural generalization of this is interesting already for \(s=1\):
Instead of \(b\) being a point, if it is now a polytope \(K\), does a similar lower bound on the distance of \(a\) to \(K\) in the projection onto a random line hold?
It is easy to see that in spirit, this is the same question as whether (1) holds. It is also easy (see below) to see that the projection shrinks the distance between \(a\) and \(K\) by a factor of \(\Omega^{*}(\sqrt{d})\). The RSH theorem proves that the shrinkage is \(O(\sqrt{d})\) thus making this parameter nearly (within log factors) tight. We now state the RSH theorem:
**Theorem 1.1** (Random Separating Hyperplane Theorem(Rsh): Informal version).: _Suppose \(K\) is a \(k\) vertex polytope with diameter \(\Delta\) and \(a\) is a point at distance at least \(\delta\Delta\), \(\delta\in(0,1)\), from \(K\). Let \(V\) be an \(m-\)dimensional subspace containing \(K\cup\{a\}\). For a random Gaussian vector \(u\in V\), the following event happens with probability at least 1/poly\({}_{\delta}(k)\):_
\[u\cdot a\geq\text{Max}_{y\in K}u\cdot y+\frac{\delta\Delta|u|}{10\sqrt{m}}. \tag{2}\]
We provide a simple example where \(K\) is a line segment (see Section 9) to show that the factor \(\sqrt{m}\) cannot be improved. It is also interesting to note that the success probability of the event in (2)
needs to depend on \(k\) (see Section 9). In particular, \(\mathsf{RSH}\) does not hold for general convex sets (where \(k\) is not necessarily finite).
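The statement of Theorem 1.1 is also easy to probe numerically. The following NumPy sketch estimates the probability of the event in (2) for a random polytope; the construction of the outside point \(a\) is a heuristic and the experiment is an illustration, not a proof.

```python
import numpy as np

rng = np.random.default_rng(0)
m, k, delta = 20, 8, 0.3
V = rng.standard_normal((k, m))            # rows of V are the vertices of K
Delta = max(np.linalg.norm(V[i] - V[j]) for i in range(k) for j in range(k))
# Heuristic construction of a point outside K: push past vertex 0, away from
# the centroid; one should verify dist(a, K) >= delta * Delta before trusting
# the estimate below.
c = V.mean(axis=0)
a = V[0] + delta * Delta * (V[0] - c) / np.linalg.norm(V[0] - c)

trials, hits = 20000, 0
for u in rng.standard_normal((trials, m)):
    margin = u @ a - (V @ u).max()
    if margin >= np.linalg.norm(u) * delta * Delta / (10 * np.sqrt(m)):
        hits += 1
print(f"fraction of well-separating u: {hits / trials:.4f}")
```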
### Oracles for Convex Sets
The seminal work of [10] showed that the Ellipsoid Algorithm can be viewed as a reduction of an Optimization Oracle to a Separation Oracle for convex sets. Since then, there has been an extensive study of reductions among approximate oracles (also referred to as "weak" oracles). There are two important parameters in a reduction from an oracle \(\mathcal{A}\) with error \(\delta\) to an oracle \(\mathcal{B}\) with error \(\varepsilon\) - the number of calls to \(\mathcal{B}\) used and the increase in error, namely, \(\delta/\varepsilon\). Of these, the number of oracle calls has received much attention since it is a measure of running time. But the error parameter has also been taken into account in most results starting with [10]. The best known bounds on the number of oracle calls in reductions are due to [11]; they achieve near-linear number of oracle calls. The known error increase factors are \(\Omega(d)\), where \(d\) is the dimension. Our proof of \(\mathsf{RSH}\) gives a simple polynomial time reduction (for fixed \(\delta\)) from Separation to Optimization with the error increase factor of \(O^{*}(\sqrt{d})\) which we also show is best possible within log factors.
We define our approximate oracles (which differ from the traditional definitions of approximate oracles - see Remark 1.4) and state our nearly-matching upper and lower bounds for error increase in the reduction. Here, \(B_{d}\) denotes the unit ball in \(\Re^{d}\).
**Definition 1.2**.: For a non-empty convex set \(K\subseteq\Re^{d}\) and \(\delta\in(0,1)\), a separation oracle for \(K\) with error \(\delta\), denoted \(\mathsf{SepOr}_{\delta}(K)\) oracle, takes as input any \(a\in\Re^{d}\) and returns a valid option among the two below (Note: Both may be valid):
* \(a\in K+\delta\Delta(K)B_{d}\).
* Returns \(u\) satisfying \(u\cdot a>\operatorname{Max}_{y\in K}u\cdot y\)
**Definition 1.3**.: For a non-empty convex set \(K\subseteq\Re^{d}\) and \(\varepsilon\in(0,1)\), an Optimization oracle for \(K\) with error \(\varepsilon\), denoted \(\mathsf{OptOr}_{\varepsilon}(K)\) oracle, takes as input any \(u\in\Re^{d},|u|=1\), and returns a point \(x(u)\) satisfying _both_ these conditions :
* \(x(u)\in K+\varepsilon\Delta(K)B_{d}\), and
* \(u\cdot x(u)\geq\operatorname{Max}_{y\in K}u\cdot y-\varepsilon\Delta(K)\)
_Remark 1.4_.: Starting with [10], the second option in the traditional definition of approximate oracles usually replaces the \(K\) we have in that option with a subset of \(K\), namely, the subset of all points with the property that a ball of specified size centered at that point is wholly inside \(K\). This is necessary since the proof of convergence of the Ellipsoid algorithm is by shrinking the volume of the ellipsoid containing \(K\). If \(K\) is not full dimensional, the subset of \(K\) is empty and a worst-case (adversarial) oracle can always return this option giving us no information on \(K\). Our definition makes the stronger assumption with \(K\) in the second option, thus, dealing with (among other examples) non-full dimensional \(K\).
**Theorem 1.5**.: _Let \(\delta\in(0,1)\) be any constant and \(K\) be a polytope in \(\Re^{d}\) with \(k\geq 1\) vertices given by an \(\mathsf{OptOr}_{\varepsilon}(K)\) oracle \(\mathcal{A}\), with \(\varepsilon\leq\delta/100\sqrt{d}\). Then there is an \(\mathsf{SepOr}_{\delta}(K)\) oracle which is obtained by making \(\text{poly}_{\delta}(dk)\) calls to \(\mathcal{A}\)._
It is worth noting that the calls to the optimization oracle in the above result use random vectors \(u\), and so the reduction algorithm is randomized. The following result shows that the condition \(\varepsilon\leq\delta/100\sqrt{d}\) above is almost tight, in the sense that no deterministic algorithm can beat it by more than log factors.
**Theorem 1.6**.: _There is no deterministic polynomial time reduction from an \(\mathsf{SepOr}_{O(1)}(K)\) oracle to an oracle in \(\mathsf{OptOr}_{\Omega(\log d/\sqrt{d})}(K)\)._
### Algorithmic application of \(\mathsf{RSH}\)
We now discuss the second main contribution of our work, i.e., applications of \(\mathsf{RSH}\) to learning vertices of a latent polytope.
**Problem Formulation.** Several latent variable problems including Clustering, LDA, MMBM can be reduced (See Section 7 for details) to a problem that we call \(k\)-\(\mathsf{OLP}\): given an \(\varepsilon\)-optimization oracle for a \(k\) vertex polytope \(K\subseteq\mathbf{R}^{d}\), learn the vertices of \(K\) (approximately). We define two simpler (than \(k\)-\(\mathsf{OLP}\)) problems - ListLearn and Hausdorff, that are related to \(k\)-\(\mathsf{OLP}\), and then we define \(k\)-\(\mathsf{OLP}\).
The first problem, Hausdorff, seeks to find approximation to a polytope \(K\) when we are given an approximate optimization oracle for the polytope.
**Definition 1.7** (\((\varepsilon,\delta)\)-Hausdorff-Problem).: Given an \(\mathsf{OptOr}_{\varepsilon}(K)\) oracle for a polytope \(K\) in \(\mathbf{R}^{d}\) with \(k\) vertices, find a set \(P\) of \(m=\operatorname{poly}_{\delta}(dk)\) points such that \(\mathsf{Haus}(CH(P),K)\leq\delta\Delta(K)\), where, \(\mathsf{Haus}\) denotes Hausdorff distance (see Definition 5.1 for a formal definition).
In the problem ListLearn, we also wish to find a small list of points, such that each vertex of \(K\) is close to at least one point in this list.
**Definition 1.8**.: \([(\varepsilon,\delta)\)-ListLearn Problem] Given an \(\mathsf{OptOr}_{\varepsilon}(K)\) oracle for a polytope \(K\) in \(\Re^{d}\) with \(k\) vertices, each separated from the convex hull of the other \(k-1\) vertices by at least \(\delta\Delta(K)\), find a list \(P\subseteq K+\delta\Delta(K)B_{d}\) of \(m=\operatorname{poly}_{\delta}(dk)\) points such that for every vertex \(v\) of \(K\), there is some \(v^{\prime}\in P\) with \(|v-v^{\prime}|\leq\delta\Delta(K)/10\).
When the parameters \(\varepsilon,\delta\) will be clear from the context, we shall abbreviate the above two problems as Hausdorff and ListLearn problems respectively. It is not difficult to see that any solution \(P\) to ListLearn is also a solution to Hausdorff, but, the converse need not hold: \(\operatorname{CH}(P)\) may nearly contain \(K\) without \(P\) having any point close to some vertex of \(K\). Our technical results (see below for the informal versions) show that if \(\varepsilon\in O_{\delta}(1/\sqrt{d})\)2, then we can solve the above-mentioned problems efficiently and indeed then, the following simple algorithm gives the desired answers (the proof crucially uses \(\mathsf{RSH}\)):
Footnote 2: \(O_{\delta}(x)\) stands for \(f(\delta)x\) for some function \(f\).
**Random Probes Algorithm**
Pick uniformly at random unit vectors \(u_{1},u_{2},\ldots,u_{m}\), where \(m=\operatorname{poly}_{\delta}(dk)\).
Return \(P\), which is the set of \(m\) answers of the \(\mathsf{OptOr}_{\varepsilon}(K)\) oracle to the queries \(u_{1},u_{2},\ldots u_{m}\).
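In code, the **Random Probes Algorithm** is a few lines. The following NumPy sketch assumes `opt_oracle` is any callable implementing an \(\mathsf{OptOr}_{\varepsilon}(K)\) oracle (an interface of our choosing, for illustration).

```python
import numpy as np

def random_probes(opt_oracle, d, m, rng=None):
    """Query an OptOr_eps(K) oracle on m uniformly random unit vectors."""
    rng = rng or np.random.default_rng()
    P = []
    for _ in range(m):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)                # uniform direction on the sphere
        P.append(np.asarray(opt_oracle(u)))   # approximate maximizer over K
    return np.array(P)
```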
The first result (see Theorem 5.2 for a formal statement) states that the convex hull of answers to polynomially many random queries to the approximate optimization oracle approximates \(K\) well.
**Theorem 1.9** (Hausdorff Approximation from oracle (Informal)).: _Consider an instance of the \((\varepsilon,\delta)\)-Hausdorff problem for a polytope \(K\subseteq\Re^{d}\). Assume that \(\varepsilon\in O_{\delta}(1/\sqrt{d})\), and let \(P\) be the set of points returned by the_ **Random Probes Algorithm** _above. Then with high probability,_
\[\mathsf{Haus}(CH(P),K)\leq\delta\Delta(K).\]
The second result (see Theorem 6.5 for a formal statement) shows that as long as each vertex of \(K\) is _well-separated_ from the convex hull of the other vertices of \(K\), the set \(P\) constructed by the **Random Probes Algorithm** contains an approximation to each of the vertices. Thus, the answers to polynomially many random queries list-learn the polytope.
**Theorem 1.10** (List-Learning from Oracle (Informal)).: _Consider an instance of the \(\mathsf{ListLearn}\) problem for a polytope \(K\subseteq\Re^{d}\), and assume that \(\varepsilon\in O_{\delta}(1/\sqrt{d})\). Then, with high probability, the set \(P\) output by the **Random Probes Algorithm** above has the following property: for every vertex \(a\) of \(K\), there is a point \(a^{\prime}\in P\) with_
\[|a^{\prime}-a|\leq\delta\Delta(K)/10.\]
The \(\sqrt{d}\) factor in both the above theorems is near-optimal (within a \(\log d\) factor). Indeed, we shall prove:
**Theorem 1.11** (Oracle Lower Bound).: _The problem where one is required to output a point which is within \(\Delta(K)/10\) of some vertex of \(K\), given only an \(\mathsf{OptOr}_{\varepsilon}(K)\) oracle, cannot be solved in deterministic polynomial time when \(\varepsilon\geq 8\ln d/\sqrt{d}\)._
We now define the \(k\)-\(\mathsf{OLP}\) problem (the parameters \(\varepsilon,\delta\) in the definition will often be clear from the context and may not be mentioned explicitly). The problem statement is similar to that of \(\mathsf{ListLearn}\), but we want to output a list of exactly \(k\) points.
**Definition 1.12**.: [\((\varepsilon,\delta)\)-\(k\)-\(\mathsf{OLP}\) Problem] Under the same hypothesis as for the \(\mathsf{ListLearn}\) problem, find a set of points \(P\), \(|P|=k\), satisfying the following condition: for each vertex \(v\) of \(K\), there is a (unique) point \(v^{\prime}\in P\) such that \(|v-v^{\prime}|\leq\delta\Delta(K)/10\).
Our next result gives a strengthening of Theorem 1.10.
**Theorem 1.13**.: _Consider an instance of the \(k\)-\(\mathsf{OLP}\) problem on a polytope \(K\subseteq\Re^{d}\), and assume \(\varepsilon\in O_{\delta}(1/\sqrt{d})\). Let \(P\) be the set of points returned by the **Random Probes Algorithm**. Then, in polynomial time, we can find a \(Q\subseteq P,|Q|=k\), satisfying the following condition: for each vertex \(v\) of \(K\), there is a (unique) \(v^{\prime}\in Q\) with \(|v-v^{\prime}|\leq\delta\Delta(K)/10\)._
The algorithm for finding \(Q\) from \(P\) is likely of independent interest. We call this problem the "Soft Convex Hull" problem and it is described in Section 7.1.
Do Approximate Optimization Oracles exist? The answer to this question is a qualified yes. They exist but, unfortunately, as we point out below, for many latent variable problems, including the simple mixture of two Gaussians with means separated by \(\Omega(1)\) standard deviations, we do not get \(\varepsilon\in O_{\delta}(1/\sqrt{d})\) when \(k<d\). Thus, we do not satisfy the hypotheses of Theorem 1.9, Theorem 1.10, and Theorem 1.13. But we are able to overcome this hurdle by projecting to the \(k\)-SVD subspace (of the input data points, which satisfy conditions discussed below), where we do get the necessary \(\varepsilon\in O_{\delta}(1/\sqrt{k})\).
First we observe that approximate optimization oracles arise in a natural setting - that of latent variable models. [1] show that these models can be reduced to a geometric problem called \(\mathsf{LkP}\) described below. [We will not reproduce the reduction here.] \(\mathsf{LkP}\) is the following problem: Let \(K\) be a \(k\) vertex polytope in \(\Re^{d}\). Let \(M_{\cdot,1},\ldots,M_{\cdot,k}\) denote the vertices of \(K\). Assume that there are latent (hidden) points \(P_{\cdot,j},j=1,2,\ldots,n\), in \(K\). The observed data points \(A_{\cdot,j},j=1,2,\ldots,n\) are generated (not necessarily under any stochastic assumptions) by adding _displacements_\(A_{\cdot,j}-P_{\cdot,j}\) respectively to \(P_{\cdot,j}\). Let3
Footnote 3: By the standard definition of spectral norm, it is easy to see that \(\sigma_{0}^{2}\) is the maximum mean squared displacement in any direction.
\[\sigma_{0}:=\frac{||\mathbf{P}-\mathbf{A}||}{\sqrt{n}}.\]
We assume that there is a certain \(w_{0}\) fraction of latent points close to every vertex of \(K\), i.e., for all \(\ell\in[k]\),
\[C_{\ell}:=\{j:|P_{\cdot,j}-M_{\cdot,\ell}|\leq\frac{\sigma_{0}}{\sqrt{w_{0}}}\} \text{ satisfies }|C_{\ell}|\geq w_{0}n.\]
**Theorem 1.14** (From Data to Oracles).: _Using the above notation, the following "Subset Smoothing Algorithm" gives us a polynomial time \(\mathsf{OptOr}_{\frac{4\sigma_{0}}{\Delta\sqrt{w_{0}}}}\left(K\right)\) oracle._
**Subset Smoothing Algorithm**
Given query \(u\), let \(S\) be the set of the \(w_{0}n\) indices \(j\) with the highest \(u\cdot A_{\cdot,j}\) values.
Return \(A_{\cdot,S}:=\frac{1}{w_{0}n}\sum_{j\in S}A_{\cdot,j}\).
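A direct NumPy transcription of Subset Smoothing is given below (a sketch; we assume the data matrix \(A\) is \(d\times n\) with the observations \(A_{\cdot,j}\) as columns).

```python
import numpy as np

def subset_smoothing_oracle(A, w0):
    """Return an oracle u -> mean of the w0*n columns of A maximizing u . A_j."""
    n = A.shape[1]
    s = max(1, int(w0 * n))
    def oracle(u):
        idx = np.argsort(u @ A)[-s:]          # indices of the top-s inner products
        return A[:, idx].mean(axis=1)
    return oracle
```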
The Subset Smoothing algorithm was used in [1]. It is also reminiscent of Superquantiles [14], though our use here is not directly related to them. While this theorem helps us get optimization oracles, the error guarantee of \(O(\sigma_{0}/\Delta\sqrt{w_{0}})\) is not good enough in many applications. An elementary example illustrates this issue:
Consider a mixture of two equal-weight standard Gaussians centered at \(-v\) and \(v\), where \(v\) is a vector of length \(10\). [This fits the paradigm "means separated by \(\Omega(1)\) standard deviations".] Then, data generated by the mixture model fits our data generation process with \(K=\{\lambda v,\lambda\in[-1,1]\}\), and each \(P_{\cdot,j}\) is either \(v\) or \(-v\) depending on the Gaussian from which the point has been sampled. Here \(A_{\cdot,j}\) denotes the actual sampled point from the mixture. Now, \(\Delta=20\) and it can be seen from Random Matrix Theorems (see e.g., [20]) that \(\sigma_{0}=O(1)\) with high probability. So, \(\sigma_{0}/\Delta\sqrt{w_{0}}\in O(1)\) with high probability, and hence, the theorem above yields an \(\mathsf{OptOr}_{\varepsilon}(K)\) oracle with \(\varepsilon\in\Omega(1)\). But \(d\) can be arbitrarily large and so we do not have the required \(\mathsf{OptOr}_{O(1/\sqrt{d})}(K)\) oracle.
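This behavior of \(\sigma_{0}\) is easy to verify numerically. The following sketch samples the toy mixture for growing \(d\) and prints \(\|\mathbf{P}-\mathbf{A}\|/\sqrt{n}\), which stays \(O(1)\); the choice \(n=4d\) is ours, for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
for d in (50, 200, 800):
    n = 4 * d
    v = np.zeros(d); v[0] = 10.0
    signs = rng.choice([-1.0, 1.0], size=n)
    P = np.outer(v, signs)                   # latent points: +/- v
    A = P + rng.standard_normal((d, n))      # observed points
    sigma0 = np.linalg.norm(P - A, 2) / np.sqrt(n)   # spectral norm / sqrt(n)
    print(d, round(sigma0, 3))               # roughly constant in d
```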
This elementary example can be tackled in several ways. Our algorithm, which we call the "\(k\)-\(\mathsf{OLP}\) algorithm", is simply stated and works in general settings (including on this toy example) for several Latent Variable problems (see Section 7 for details). The main idea is to first project the input points onto a suitable SVD subspace and then use the approximate optimization oracle in the projection.
SVD and the \(k\)-\(\mathsf{OLP}\) Algorithm. We now state the result (see Theorem 8.1 for a formal version) for \(k\)-\(\mathsf{OLP}\) in the setting of \(\mathsf{LkP}\). As mentioned above, this uses SVD followed by subset smoothing.
**Theorem 1.15**.: _Recall the notation and assumptions of Theorem1.14. In addition, we assume that each vertex of \(K\) is \(\delta\Delta(K)\) far from the convex hull of other vertices of \(K\), where, \(\delta\) satisfies:_
\[\sigma_{0}\leq c\delta^{2}\Delta\sqrt{w_{0}}/\sqrt{k}.\]
_Then, the set of points \(P\) returned by the following \(k\)-\(\mathsf{OLP}\) algorithm list-learns the vertices of \(K\). Further, we can find a subset \(Q\) of \(P\) with \(|Q|=k\) such that for each vertex \(v\) of \(K\), \(Q\) contains a \(v^{\prime}\) with \(|v-v^{\prime}|\leq\delta\Delta/10\):_
**Algorithm \(k\)-\(\mathsf{OLP}\)**
Project to the \(k\)-dim SVD subspace \(V\) corresponding to the points \(A_{\cdot,j}\).
Pick \(m=\operatorname{poly}_{\delta}(k)\) random vectors \(u_{1},u_{2},\ldots,u_{m}\) in \(V\).
For each \(u_{i}\), take the mean of the \(A_{\cdot,j}\) with the \(w_{0}n/2\) highest values of \(u_{i}\cdot A_{\cdot,j}\).
Let \(P\) be the set of \(m\) means computed in the step above.
Output a subset \(Q\) of \(P\), \(|Q|=k\), using Theorem 1.13.
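A NumPy sketch of the first four steps of Algorithm \(k\)-\(\mathsf{OLP}\) follows (the pruning of \(P\) to \(Q\) via Theorem 1.13 is omitted). Note that querying directions \(u\in V\) is equivalent to first projecting the data onto \(V\), since \(u\cdot A_{\cdot,j}\) only depends on the projection of \(A_{\cdot,j}\) onto \(V\).

```python
import numpy as np

def k_olp_candidates(A, k, w0, m, rng=None):
    """Steps 1-4 of Algorithm k-OLP: candidate approximations to vertices of K."""
    rng = rng or np.random.default_rng()
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    V = U[:, :k]                              # k-dim SVD subspace of the data
    s = max(1, int(w0 * A.shape[1] / 2))      # w0*n/2 highest projections
    P = []
    for _ in range(m):
        u = V @ rng.standard_normal(k)        # random direction inside V
        idx = np.argsort(u @ A)[-s:]
        P.append(A[:, idx].mean(axis=1))
    return np.array(P)
```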
We sketch here the steps in the proof (the details are contained in the proof of Theorem 8.1.)
_Sketch._ Let \(\widehat{K}\) denote the projection of \(K\) onto \(V\). By Theorem 1.14, Step 3 of Algorithm \(k\)-OLP is an \(\mathsf{OptOr}_{\varepsilon}(\widehat{K})\) oracle, where \(\varepsilon=O\left(\frac{\sigma_{0}}{\Delta\sqrt{w_{0}}}\right)\). Also, each \(\widehat{M}_{\cdot,\ell}\), which is the projection of \(M_{\cdot,\ell}\) on \(V\), is \(\Omega(\delta\Delta(K))\) far from the convex hull of the other vertices of \(\widehat{K}\) (see Lemma 8.3). Now, Theorem 1.13 applied to \(\widehat{K}\) in the subspace \(V\) implies the desired result.
It is worth noting that data obtained from several generative models are known to satisfy the \(\mathsf{LkP}\) condition stated in Theorem 1.15, e.g., stochastic mixture models with \(k\) components, topic models, and mixed membership community models.
From List Learning to \(k\)-\(\mathsf{OLP}\). As outlined above, the \(k\)-OLP algorithm works in two stages: (i) project the data points on the SVD subspace \(V\) of dimension \(k\), and (ii) make polynomially many calls to the \(\mathsf{OptOr}_{\varepsilon}(K)\) oracle, where each query is given by a randomly chosen unit vector in the subspace \(V\) (as in the statement of Theorem 1.9); let \(P\) be the set of points returned by the oracle. The first statement in Theorem 1.15 shows that the convex hull of \(P\) is close to \(K\).
Obtaining approximations to the vertices of \(K\) from \(P\) requires addressing a new problem: given a set of points \(W\), find a small subset \(T\) of \(W\) such that their convex hulls are close. We call this the _soft convex hull_ problem. A similar problem was addressed by [1]; however, they gave a bi-criteria approximation algorithm for this problem. Under the stronger assumption that, in the optimal solution \(T^{\star}\), each point of \(T^{\star}\) is well-separated from the convex hull of the remaining points of \(T^{\star}\), we show that one can recover approximations to each of the points in \(T^{\star}\). Applying this result to the set of points \(P\) returned by the optimization oracle, we get a set of \(k\) points \(Q\), each of which approximates a unique vertex of the polytope \(K\).
The algorithm for obtaining the soft convex hull proceeds as follows. We first prune points \(w\in W\) with the following property: consider the subset \(X\) of points in \(W\) which are sufficiently far from \(w\); then \(w\) is close to the convex hull of \(X\). After pruning such points from \(W\), we pick a subset of points which are sufficiently far apart from each other. The main technical result shows that this procedure outputs the desired set \(T\).
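A sketch of this pruning rule is given below. The distance to a convex hull is computed with a few Frank-Wolfe iterations, which is our choice for illustration (any QP solver would do); the thresholds `far` and `close` are unspecified parameters of the sketch.

```python
import numpy as np

def dist_to_hull(w, X, iters=200):
    """Distance from w to CH(rows of X), via Frank-Wolfe on 0.5*||x - w||^2."""
    if len(X) == 0:
        return np.inf
    x = X[0].astype(float).copy()
    for t in range(1, iters + 1):
        g = x - w                          # gradient at the current iterate
        v = X[np.argmin(X @ g)]            # linear minimization over vertices
        x += (2.0 / (t + 2.0)) * (v - x)   # standard step size 2/(t+2)
    return np.linalg.norm(x - w)

def prune(W, far, close):
    """Drop w if the points of W at distance >= far from w almost contain it."""
    keep = []
    for i, w in enumerate(W):
        X = np.array([p for j, p in enumerate(W)
                      if j != i and np.linalg.norm(p - w) >= far])
        if dist_to_hull(w, X) > close:     # w is not explained by far-away points
            keep.append(w)
    return keep
```

After pruning, a greedy pass that keeps only points which are pairwise at least `far` apart completes the selection of \(T\) in this sketch.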
### Related Work
The seminal work of [13] showed that optimization of a convex function over a convex set can be reduced to separation oracles using the classical ellipsoid algorithm. There has been active research on reducing the number of separation (or membership) oracle queries (see e.g. [1, 14, 15]). [17] showed that, for a "well-rounded" convex set \(K\) (i.e., \(K\) has unit diameter and contains a ball of constant radius), we can get a separation oracle by making \(\widetilde{O}(d)\) calls to an optimization oracle for \(K\). In this reduction, in order to get a separation oracle with error \(\delta\), they use an optimization oracle with error \(\operatorname{poly}(\delta/d)\). As mentioned in the previous section, \(\mathsf{RSH}\) Theorem implies that we can obtain a separation oracle with error \(\delta\) for a convex polytope \(K\) from an optimization oracle with error \(O(\delta/\sqrt{d})\).
The well-known result of [11] shows that given a set of \(n\) points in \(\Re^{d}\), projection to a random subspace of dimension \(O(\log n/\varepsilon^{2})\) preserves all pairwise distances up to a \((1+\varepsilon)\)-factor with high probability. Further, this bound on the dimension onto which the points are projected is known to be tight [1, 15]. Note that in our setting, there are \(k+1\) points "of interest", namely, the \(k\) vertices of \(K\) and a point \(a\notin K\), and by the above, a random projection to an \(O^{*}(\log k)\)-dimensional space preserves all pairwise distances among them. But this is not sufficient for our problems. We need separation of \(a\) from all of \(K\) in the projection. We achieve this by projecting to a set of random \(1\)-dimensional subspaces, and show that the distance between a point and a polytope does not scale
down by more than \(O(\sqrt{d})\) factor for at least one of them with high probability (this is an immediate corollary of the RSH Theorem).
The problem of learning vertices of a polytope arises in many settings where data is assumed to be generated by a stochastic process parameterized by a model. Examples include topic models [1], stochastic block models [1], latent Dirichlet allocation [11]. A variety of techniques have been developed for these specific problems (see e.g. [1, 2, 1]). [10] (see also [1]) proposed the latent \(k\)-polytope (LkP) model which seeks to unify all of these latent variable models. In this model, there is a _latent_ polytope with \(k\) vertices, and data is generated in a two-step process: first we pick _latent_ points from this polytope, and then the observed points are obtained by perturbing these latent points in an adversarial manner. They showed that under suitable assumptions on this deterministic setting, one can capture the above-mentioned latent variable problems. Assuming strong separability conditions on the vertices of the polytope (i.e., each vertex of \(K\) is far from the _affine_ hull of other vertices of \(K\)), they showed that one can efficiently recover good approximations to the vertices of the polytope from the input data points. In comparison, our assumption on \(K\) is that each vertex of \(K\) is far from the convex hull of the remaining vertices of \(K\). This is a much milder condition, e.g., it allows a polytope with more than 2 vertices in a plane. [1] showed how to infer the parameter \(k\) from data in the LkP setting (under the strong separation condition).
[1] addressed a problem similar to the Hausdorff problem: instead of an \(\varepsilon\)-optimization oracle for a polytope \(K\), we are given an explicit set \(P\) of points whose convex hull is within Hausdorff distance at most \(\delta\Delta(K)\) from \(K\). They are able to get better dependencies on the parameters \(k,\delta,\varepsilon\) than Theorem 5.2 under these stronger assumptions.
## 2 Preliminaries
For two points \(x,y\in\Re^{d}\), \(|x-y|\) denotes the Euclidean distance between the points. Given a point \(x\in\Re^{d}\) and a subset \(X\subseteq\Re^{d}\), define \(\mathsf{dist}(x,X)\) as the minimum distance between \(x\) and a point in \(X\), i.e., \(\inf_{y\in X}|x-y|\). For a set of points \(X\), \(\Delta(X)\) denotes the diameter of \(X\), i.e., \(\sup_{x,y\in X}|x-y|\). We denote the convex hull of \(X\) by \(\mathsf{CH}(X)\). For two subsets \(A,B\) of \(\Re^{d}\), define their Minkowski sum \(A+B\) as \(\{x+y:x\in A,y\in B\}\). Similarly, define \(\lambda A\), where \(\lambda\in\Re\), as \(\{\lambda x:x\in A\}\). For an \(m\times n\) matrix \(B\), we use \(B_{\cdot,j}\) to denote the \(j^{th}\) column of \(B\). For a subset \(S\subseteq[n]\) of columns of \(B\), \(B_{\cdot,S}\) denotes \(\frac{1}{|S|}\sum_{j\in S}B_{\cdot,j}\). Often, we represent the vertices of a polytope \(K\) in \(\Re^{d}\) by a \(d\times k\) matrix \(M\), and so the columns \(M_{\cdot,1},\ldots,M_{\cdot,k}\) would represent the vertices of \(K\). We shall use the notation \(\operatorname{poly}_{\delta}(z)\) to denote a quantity which is \(z^{\operatorname{poly}(1/\delta)}\). Further the notation \(O_{\delta}(z)\) shall denote a quantity which is \(f(\delta)z\), where \(f(\delta)\) is a function depending on \(\delta\) only (and hence, is constant if \(\delta\) is constant).
We now give an outline of the rest of the paper. In Section 4, we prove the Random Separating Hyperplane theorem. In Section 5, we prove Theorem 1.9 by showing that an \(\mathsf{OptOr}_{\varepsilon}(K)\) oracle leads to efficient constructions of approximations to \(K\). In Section 6 we give an algorithm for the ListLearn problem under the stronger assumption that the vertices of \(K\) are well separated. We also prove the lower bound result Theorem 1.11 in this section. In Section 7, we extend the algorithm for ListLearn to the \(k\)-OLP problem. This requires the concept of soft convex hulls. The algorithm for constructing soft convex hulls is given in Section 7.1. Finally, in Section 8, we apply the \(k\)-OLP algorithm to the latent polytope problem. As noted earlier, in the setting of latent polytopes \(K\), we can only guarantee \(\mathsf{OptOr}_{\varepsilon}(K)\) oracles with \(\varepsilon\) being \(O(1/\sqrt{k})\), whereas our algorithm for \(k\)-OLP requires \(\varepsilon\) to be \(O(1/\sqrt{d})\). We handle this issue by projecting to a suitable SVD subspace and executing the \(k\)-OLP algorithm in this subspace. We conclude with some open problems in Section 9.
## 3 Reduction from Separation to Optimization Oracles
In this section, we prove Theorem1.5, which is reproduced here.
**Theorem 1.5**.: _Let \(\delta\in(0,1)\) be any constant and \(K\) be a polytope in \(\Re^{d}\) with \(k\geq 1\) vertices given by an \(\mathsf{OptOr}_{\varepsilon}(K)\) oracle \(\mathcal{A}\), with \(\varepsilon\leq\delta/100\sqrt{d}\). Then there is an \(\mathsf{SepOr}_{\delta}(K)\) oracle which is obtained by making \(\text{poly}_{\delta}(dk)\) calls to \(\mathcal{A}\)._
Proof.: Consider a point \(a\in\Re^{d}\). Use the oracle \(\mathcal{A}\) on \(\text{poly}_{\delta}(kd)\) random vectors \(u,|u|=1\). Let \(U\) denote the set of these unit vectors. If \(a\notin K+\delta\Delta(K)B_{d}\), then, by \(\mathsf{RSH}\), with high probability, there is a \(u\in U\) such that \(u\cdot a>\text{Max}_{y\in K}u\cdot y+\delta\Delta(K)/(10\sqrt{d})\). For this \(u\), we have (using \(x(u)\in K+(\delta\Delta(K)/100\sqrt{d})B_{d}\)):
\[u\cdot a>u\cdot x(u)+(\delta\Delta(K)/11\sqrt{d}). \tag{3}\]
Conversely, for any \(u\in U\) satisfying (3), we have (using \(u\cdot x(u)\geq\text{Max}_{y\in K}u\cdot y-\delta\Delta(K)/100\sqrt{d}\)): \(u\cdot a>\text{Max}_{y\in K}u\cdot y+\delta\Delta(K)/20\sqrt{d}\). Thus, we obtain a \(\mathsf{SepOr}_{\delta}(K)\) oracle as follows: check if (3) holds for any \(u\in U\). We are guaranteed to find one with high probability if \(a\notin K+\delta\Delta B\) and that provides the required separating hyperplane.
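For concreteness, a NumPy sketch of this reduction is given below; `opt_oracle` is an assumed callable implementing an \(\mathsf{OptOr}_{\varepsilon}(K)\) oracle with \(\varepsilon\leq\delta/100\sqrt{d}\), and the margin threshold comes from inequality (3).

```python
import numpy as np

def separation_oracle(a, opt_oracle, d, Delta, delta, m, rng=None):
    """Return a separating direction for a, or None to report a in K + delta*Delta*B_d."""
    rng = rng or np.random.default_rng()
    thresh = delta * Delta / (11 * np.sqrt(d))   # margin in inequality (3)
    for _ in range(m):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        if u @ a > u @ np.asarray(opt_oracle(u)) + thresh:
            return u                             # certifies u.a > max_{y in K} u.y
    return None
```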
The proof of the lower bound result Theorem 1.6 is very similar to that of Theorem 1.11, which is given in Section 6.
## 4 Random Separating Hyperplane (RSH) Theorem
In this section, we prove \(\mathsf{RSH}\) for polytopes: if a point is at distance at least \(\delta\Delta(K)\) from a polytope \(K\), then the Gaussian measure of the set of _well-separating_ hyperplanes has a positive lower bound depending on the number of vertices of \(K\). More specifically, we show:
**Theorem 4.1**.: _Suppose \(K\) is a polytope in \(\Re^{d}\) with \(k\) vertices and diameter \(\Delta(K)\). Suppose \(a\) is a point in \(\mathbf{R}^{d}\) and \(\delta\in(0,1]\) with_
\[\min_{y\in K}|a-y|\geq\delta\Delta(K). \tag{4}\]
_Let \(V\) be an \(m\)-dimensional subspace containing \(\mathsf{Span}(K\cup\{a\})\) and let \(u\) be a random vector drawn from the normal distribution \(N(0,I_{m})\) in \(V\). Then,_
\[\mathbf{Pr}_{u}\left[(u\cdot a-\max_{y\in K}u\cdot y)\geq|u|\cdot\delta\Delta (K)\cdot\frac{\sqrt{\log k}}{\sqrt{\log k}+4\delta\sqrt{m}}\right]\geq\frac{1 }{40}k^{-10/\delta^{2}}.\]
Proof.: Let \(\zeta_{1},\zeta_{2},\ldots,\zeta_{k}\) be the vertices of \(K\). Let \(b\) be the closest point in \(K\) to \(a\), and define
\[w=\frac{a-b}{|a-b|}.\]
Then by standard Convex Geometry arguments, we have for all points \(y\in K\):
\[w\cdot y\leq w\cdot b. \tag{5}\]
Let \(u\) be as in the statement of the theorem. We can write \(u\) as
\[u=\lambda w+z,\text{ where }z\perp w.\]
In order to prove the theorem, we first express \((u\cdot a-\max_{y\in K}u\cdot y)\) as
\[u\cdot(a-b)-\max_{y\in K}u\cdot(y-b)=u\cdot(a-b)-\max_{\ell=1,\ldots,k}u\cdot( \zeta_{\ell}-b) \tag{6}\]
It suffices to show that with reasonably high probability, there is a lower bound on \(u\cdot(a-b)\) and an upper bound on \(u\cdot(\zeta_{\ell}-b)\) for all \(\ell\). Observe that
\[u\cdot(\zeta_{\ell}-b)=\lambda w\cdot(\zeta_{\ell}-b)+z\cdot(\zeta_{\ell}-b)\stackrel{{(5)}}{{\leq}}z\cdot(\zeta_{\ell}-b)\quad\text{(when }\lambda\geq 0\text{)}.\]
Now, event \(\mathcal{E}_{0}\) implies that
\[|u|\leq\lambda+|z|\leq\lambda+4\sqrt{m},\]
and so,
\[u\cdot a-\max_{y\in K}u\cdot y\geq\frac{\lambda\delta\Delta(K)|u|}{3(\lambda+4 \sqrt{m})},\]
The desired result now follows from Corollary 4.5 and the fact that \(\lambda\cdot\delta\geq 3\sqrt{\ln k}\) (Fact 4.4).
## 5 From \(\mathsf{OptOr}_{\varepsilon}(K)\) oracles to the Hausdorff Problem
We prove Theorem 1.9 in this section. We begin by defining Hausdorff distance formally.
**Definition 5.1**.: The _Hausdorff-distance_, \(\mathsf{Haus}(K,K^{\prime})\), between two polytopes \(K\) and \(K^{\prime}\) is the infimum over all values \(\alpha\) such that the following condition is satisfied: for every point \(x\in K\), there is a point \(y\in K^{\prime}\) such that \(|x-y|\leq\alpha\), and vice versa.
We now give the formal version of Theorem 1.9:
**Theorem 5.2**.: _Suppose \(\varepsilon,\delta\) are reals in \([0,1]\) with_
\[\delta>c\varepsilon\sqrt{d},\delta>c/\sqrt{d}, \tag{11}\]
_where \(c\) is a large enough constant. Let \(K\) be a \(k\)-vertex polytope with \(\Delta(K)\) denoting the diameter of \(K\). Suppose we are also given an \(\mathsf{OptOr}_{\varepsilon}(K)\) oracle \(\mathcal{O}\). Let \(P\) be the set of answers from the oracle on \(m:=k^{10+c\delta^{-2}}\) independent random queries. Then, with high probability, \(\mathsf{Haus}(K,\mathsf{CH}(P))\leq\delta\cdot\Delta(K)\)._
Proof.: We describe the algorithm for obtaining the desired set \(P\) (referred to as the **Random Probes Algorithm** in Section 1.3) in Algorithm 1. The set \(P\) is constructed as follows: we pick a set of \(m\) random unit vectors. For each such unit vector \(u\), we add the corresponding point \(x(u)\) returned by the oracle \(\mathcal{O}\) to the set \(P\).
```
Input: An \(\mathsf{OptOr}_{\varepsilon}(K)\) oracle \(\mathcal{O}\).
Initialize a set \(P\) to \(\varnothing\).
Repeat \(m\) times:
    Let \(u\) be a random unit vector in \(\Re^{d}\).
    Call \(\mathcal{O}\) on \(u\) to get a vector \(x(u)\).
    Add \(x(u)\) to \(P\).
Output \(P\).
```
**Algorithm 1:** Algorithm for finding the set \(P\) such that \(\mathsf{CH}(P)\) approximates \(K\).
One side of the desired result is easy to show:
**Claim 5.3**.: _For each \(x\in\mathsf{CH}(P)\), there is a \(y\in K\) such that \(|x-y|\leq\delta\Delta(K)\)._
Proof.: For a point \(x(u)\in P\), we know by the definition of \(\mathsf{OptOr}_{\varepsilon}(K)\) oracle that there is a point \(y\in K\) such that \(|x(u)-y|\leq\varepsilon\Delta(K)\leq\delta\Delta(K)\), where \(\varepsilon\leq\delta\) follows from (11). The desired result now follows from the convexity of \(K\).
It remains to show that for any vertex \(v\) of \(K\), there is a point \(y\in\mathsf{CH}(P)\) such that \(|v-y|\leq\delta\Delta(K)\). Fix such a vertex \(v\) of \(K\) for the rest of the discussion. Let the random unit vectors considered in Algorithm 1 (in the order they get generated) be \(u^{1},\ldots,u^{m}\). Let \(P^{j}\) denote the subset \(\{x(u^{1}),\ldots,x(u^{j})\}\) of \(P\). Define an event \(\mathcal{E}_{j}\) as follows:
\[\mathsf{dist}(v,\mathsf{CH}(P^{j}))\leq\delta\cdot\Delta(K)\quad\text{or} \quad\mathsf{dist}(v,\mathsf{CH}(P^{j+1}))\leq\left(1-\frac{\delta^{2}}{c^{ \prime}}\right)\mathsf{dist}(v,\mathsf{CH}(P^{j})),\]
where \(c^{\prime}\) is a large enough constant. Our main technical result is to show that conditioned on _any_ choice of \(u^{1},\ldots,u^{j}\) the event \(\mathcal{E}_{j}\) happens with reasonably high probability (where the probability is over the choice of \(u^{j+1}\)):
**Lemma 5.4**.: _For any index \(j\in[m-1]\),_
\[\Pr_{u^{j+1}}\left[\mathcal{E}_{j}|u^{1},\ldots,u^{j}\right]\geq\frac{1}{100}.\]
Proof.: Fix the vectors \(u^{1},\ldots,u^{j}\). If \(\mathsf{dist}(v,\mathsf{CH}(P^{j}))\leq\delta\cdot\Delta(K)\), then we are done. So assume this is not the case. Let \(b\) be the closest point in \(\mathsf{CH}(P^{j})\) to \(v\). Thus,
\[|v-b|\geq\delta\cdot\Delta(K). \tag{12}\]
Define \(w\) as
\[w:=\frac{v-b}{|v-b|}.\]
We can now express the vector \(u^{j+1}\) as \(\lambda w+z\), where \(\langle z,w\rangle=0\). We first show the following useful properties of these vectors.
**Claim 5.5**.: _With probability at least \(\frac{1}{100}\), the following three events happen:_
\[|z| \leq 4\sqrt{d} \tag{13}\] \[\max_{y\in K}|z\cdot(v-y)| \leq 2\sqrt{\ln k}\Delta(K)\] (14) \[\lambda \geq\frac{100}{\delta}\sqrt{\ln k} \tag{15}\]
Proof.: The proofs of these three inequalities are identical to those of Fact 4.2, Fact 4.3 and Fact 4.4 (in order to prove (14), it suffices to show it for points \(y\) which are vertices of \(K\)).
The following fact is also easy to show:
**Fact 5.6**.: \[|v-x(u^{j+1})|\leq 2\Delta(K).\]
Proof.: By the definition of an \(\mathsf{OptOr}_{\varepsilon}(K)\) oracle, there is a point \(p\in K\) such that \(|p-x(u^{j+1})|\leq\varepsilon\cdot\Delta(K)\leq\Delta(K)\). The desired result now follows by the triangle inequality.
Let \(\delta_{1}\) denote \(\frac{\delta^{2}}{100}\). Let \(b_{1}\) denote the vector
\[\delta_{1}x(u^{j+1})+(1-\delta_{1})b.\]
Since \(b_{1}\in\mathsf{CH}(P^{j+1})\), the desired result will follow if we prove the following:
\[|v-b_{1}|^{2}\leq\left(1-\frac{\delta^{2}}{100}\right)|v-b|^{2}. \tag{16}\]
Now,
\[|v-b_{1}|^{2}=\delta_{1}^{2}|v-x(u^{j+1})|^{2}+(1-\delta_{1})^{2}|v-b|^{2}+2\delta_{1}(1-\delta_{1})(v-x(u^{j+1}))\cdot(v-b)\]
\[\leq\left(1-\frac{3\delta_{1}}{2}\right)|v-b|^{2}+2\delta_{1}(1-\delta_{1})\left(\underbrace{\frac{|v-b|}{\lambda}\cdot(v-x(u^{j+1}))\cdot u^{j+1}}_{:=A}\underbrace{-\frac{|v-b|}{\lambda}\cdot(v-x(u^{j+1}))\cdot z}_{:=B}\right) \tag{17}\]
Here the inequality uses Fact 5.6 together with (12) to absorb the first term (\(\delta_{1}^{2}|v-x(u^{j+1})|^{2}\leq 4\delta_{1}^{2}\Delta(K)^{2}\leq\delta_{1}|v-b|^{2}/25\)), and the last expression rewrites \(v-b=\frac{|v-b|}{\lambda}(u^{j+1}-z)\) using \(u^{j+1}=\lambda w+z\).
We now bound each of the terms \(A\) and \(B\) above. Now,
\[A\leq\frac{|v-b|}{\lambda}\varepsilon\Delta(K)\stackrel{{\eqref {eq:B_1}}}{{\leq}}\frac{|v-b|\delta\Delta(K)}{c\lambda}\stackrel{{ \eqref{eq:B_1}}}{{\leq}}\frac{|v-b|^{2}}{c\lambda}\stackrel{{ \eqref{eq:B_1}}}{{\leq}}\frac{|v-b|^{2}\delta}{c},\]
where the first inequality follows from the definition of the \(\mathsf{OptOr}_{\varepsilon}(K)\) oracle. We now bound the quantity \(B\). Let \(y\) be the point in \(K\) closest to \(x(u^{j+1})\). We know that \(|y-x(u^{j+1})|\leq\varepsilon\Delta(K)\). Therefore,
\[B\leq\frac{|v-b|}{\lambda}\left(|(v-y)\cdot z|+|z|\varepsilon \Delta(K)\right)\] \[\stackrel{{\eqref{eq:B_1},\eqref{eq:B_1}}}{{\leq}} \frac{|v-b|}{\lambda}\left(2\sqrt{\ln k}\Delta(K)+4\varepsilon\sqrt{d} \Delta(K)\right)\] \[\stackrel{{\eqref{eq:B_1},\eqref{eq:B_1}}}{{\leq}} \frac{\delta|v-b|\Delta(K)}{10}\stackrel{{\eqref{eq:B_1}}}{{ \leq}}\frac{|v-b|^{2}}{10}.\]
Substituting the above bounds on \(A\) and \(B\) in (17) yields the desired result.
We are now almost done. As the following result shows, it suffices to argue that a large enough number of the events \(\mathcal{E}_{j}\) happen:
**Claim 5.7**.: _If at least \(\frac{c^{\prime}}{\delta^{2}}\ln(2/\varepsilon)\) of the events \(\mathcal{E}_{j},j\in[m-1]\) happen, then \(\mathsf{dist}(v,\mathsf{CH}(P))\leq\delta\Delta(K)\)._
Proof.: Assume, for the sake of contradiction, that \(\mathsf{dist}(v,\mathsf{CH}(P))>\delta\Delta(K)\). Assume that events \(\mathcal{E}_{j_{1}},\ldots,\mathcal{E}_{j_{h}}\) happen, where \(h:=\frac{c^{\prime}}{\delta^{2}}\ln(2/\varepsilon).\) Now, for any index \(i\in[h-1]\), the definition of \(\mathcal{E}_{j_{i+1}}\) implies that
\[\mathsf{dist}(v,\mathsf{CH}(P^{j_{i+1}}))\leq\left(1-\frac{\delta^{2}}{c^{\prime}}\right)\mathsf{dist}(v,\mathsf{CH}(P^{j_{i+1}-1}))\leq\left(1-\frac{\delta^{2}}{c^{\prime}}\right)\mathsf{dist}(v,\mathsf{CH}(P^{j_{i}})).\]
Therefore,
\[\mathsf{dist}(v,\mathsf{CH}(P^{j_{h}}))\leq\left(1-\frac{\delta^{2}}{c^{ \prime}}\right)^{h}\mathsf{dist}(v,P^{1})\stackrel{{\text{\rm Fact \ref{eq:B_1}}}}{{\leq}}\left(1-\frac{\delta^{2}}{c^{ \prime}}\right)^{h}2\Delta(K)\leq\delta\Delta(K).\]
It remains to show that with high probability at least \(h:=\frac{c^{\prime}}{\delta^{2}}\ln(2/\varepsilon)\) of the events happen. In order to prove this, we divide the sequence \([m]\) into \([h]\) subsequences, each of length \(m/h\). Call these subsequences \(C_{1},\ldots,C_{h}\). It follows from Lemma 5.4 that for any \(i\in[h]\),
\[\Pr\left[\wedge_{j\in C_{i}}\neg\mathcal{E}_{j}\right]\leq 0.99^{m/h}\leq\frac{1}{h^{2}}.\]
A simple union bound now shows that with probability at least \(1-1/h\), at least one event \(\mathcal{E}_{j}\) happens during each of the subsequences \(C_{1},\ldots,C_{h}.\) Claim 5.7 now proves the theorem.
## 6 From \(\mathsf{OptOr}_{\varepsilon}(K)\) oracles to ListLearn
We first define the notion of well-separatedness.
**Definition 6.1**.: We say that a polytope \(K\) with vertex set \(V\) is _\(\delta\)-well-separated_ if for every vertex \(v\in V\), we have
\[\mathsf{dist}(v,\mathsf{CH}(V\setminus\{v\}))\geq\delta\cdot\Delta(K),\]
where \(\Delta(K)\) denotes the diameter of \(K\).
We first show the lower bound result Theorem 1.11, which is restated here.
**Theorem 1.11** (Oracle Lower Bound).: _The problem in which one is required to output a point within \(\Delta(K)/10\) of some vertex of \(K\), given only an \(\mathsf{OptOr}_{\varepsilon}(K)\) oracle, cannot be solved in deterministic polynomial time when \(\varepsilon\geq 8\ln d/\sqrt{d}\)._
Proof.: The proof is by producing an "adversarial oracle". The parameter \(\varepsilon\) is set to \(8\ln d/\sqrt{d}\). Our \(K\) will be a "needle" of the form \(\{\lambda u:\lambda\in[-1,1]\}\), where \(u\) is a unit length vector. The oracle's answer for each query will always be the \(0\) vector. Let \(v_{1},v_{2},\ldots,v_{q}\), where \(q=d^{c}\) for a constant \(c\), be the vectors on which the oracle is queried. The answer \(0\) is clearly a valid answer for any \(\mathsf{OptOr}_{\varepsilon}(K)\) oracle, provided \(u\) is a unit vector with \(|u\cdot v_{i}|\leq 4\ln d/\sqrt{d}\). For a single \(v_{i}\), the probability that a random \(u\) satisfies \(|u\cdot v_{i}|>4\ln d/\sqrt{d}\) is at most \(e^{-d\varepsilon^{2}}\). So, by a union bound, we see that there is a vector \(u_{1}\) satisfying
\[|u_{1}\cdot v_{i}|\leq 4\ln d/\sqrt{d}\quad\forall i\in[q].\]
Further, by a similar argument, there is a vector \(u_{2}\) satisfying
\[|u_{2}\cdot v_{i}|\leq 4\ln d/\sqrt{d},\ i\in[q]\;;\;|u_{1}-u_{2}|,|u_{1}+u_{2}|\geq 0.1.\]
It follows that the two possible \(K\), where
\[\{\lambda u_{1}:\lambda\in[-1,1]\}\quad\text{and}\quad\{\lambda u_{2}:\lambda\in[-1,1]\}\]
are both consistent with the answer \(0\) for all \(v_{i},i\in[q]\) and in addition, we have that no point \(w\in\mathbf{R}^{d}\) is within distance \(\varepsilon\) of two vertices, one from each of these \(2\) needles. So no answer is valid for both needles and the adversarial oracle can choose one of the needles to render the algorithm's answer incorrect.
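To make the concentration step above concrete, here is a small numerical sanity check in Python (the dimension and trial count are arbitrary illustrative choices, not values from the proof): for a random unit vector \(u\), the projection \(|u\cdot v|\) onto a fixed unit query \(v\) stays far below \(4\ln d/\sqrt{d}\).

```python
import numpy as np

rng = np.random.default_rng(0)
d, trials = 2000, 5000
v = np.zeros(d)
v[0] = 1.0                                          # fixed unit-length query vector
g = rng.standard_normal((trials, d))
u = g / np.linalg.norm(g, axis=1, keepdims=True)    # random unit vectors
threshold = 4 * np.log(d) / np.sqrt(d)              # ~0.68 for d = 2000
print("threshold:       ", threshold)
print("max |u.v| found: ", np.abs(u @ v).max())     # far below threshold in practice
```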
We now formally prove Theorem 1.10, which gives an algorithm for the ListLearn problem.
**Theorem 6.2**.: _Suppose \(\varepsilon,\delta\) are reals in \([0,1]\) with \(\delta^{2}\geq c\varepsilon\sqrt{d},\delta^{3}\geq c\,\varepsilon\), where \(c\) is a large enough constant. Let \(K\) be a \(\delta\)-well-separated \(k\)-vertex polytope. Suppose we are also given \(\mathsf{OptOr}_{\varepsilon}(K)\) oracle \(\mathcal{O}\). Let \(W\) be the set of answers of the oracle to \(m=poly(d)\cdot k^{\Omega(1/\delta^{2})}\) independent random queries. Then for each vertex \(v\) of \(K\), there is a point \(v^{\prime}\in W\) such that \(|v-v^{\prime}|\leq O(\delta^{2}\Delta(K)/c)\)._
Proof.: The algorithm chooses a set \(U\) of \(k^{\Omega(1/\delta^{2})}\) unit length i.i.d. Gaussian vectors. For each \(u\in U\), it calls the oracle \(\mathcal{O}\) to find a vector \(x(u)\). Let \(W\) denote the set \(\{x(u):u\in U\}\). We invoke Corollary 7.6 with the parameters:
\[\delta^{\prime}:=\delta/4,\,\varepsilon^{\prime}:=\frac{32\delta^{2}}{c}\]
on the set \(W\). The algorithm in that result outputs a set \(Q\). We output that \(Q\) as the approximation to the set of vertices of \(K\). We now prove that the set \(Q\) has the desired properties. We first show that for every vertex of \(K\), there is a direction \(u\) in \(U\) along which the projection of this vertex is higher than the projection of the remaining vertices by a large enough margin. Let \(M_{.,1},\ldots,M_{.,k}\) be the vertices of \(K\).
**Claim 6.3**.: _With high probability, the following event happens: for each \(\ell\in[k]\), there is a vector \(u^{(\ell)}\in U\) such that for all \(\ell^{\prime}\in[k],\ell^{\prime}\neq\ell,\) we have_
\[u^{(\ell)}\cdot M_{\cdot,\ell}>u^{(\ell)}\cdot M_{\cdot,\ell^{\prime}}+\frac{c \,\varepsilon\cdot\Delta(K)}{8\delta^{2}} \tag{18}\]
Proof.: Fix a vertex \(M_{\cdot,\ell}\) of \(K\). Let \(K^{\prime}\) be the convex hull of \(\{M_{\cdot,1},\ldots,M_{\cdot,k}\}\setminus\{M_{\cdot,\ell}\}.\) We invoke Theorem 4.1 on the polytope \(K^{\prime}\) and the point \(a:=M_{\cdot,\ell}\). The definition of \(\delta\)-well-separated implies that (4) is satisfied with \(\rho=\delta\). Using \(V=\Re^{d}\) in the statement of Theorem 4.1, we see that
\[\Pr_{u\in U}\left[u\cdot M_{\cdot,\ell}-\max_{\ell^{\prime}\in[k],\ell^{ \prime}\neq\ell}u\cdot M_{\cdot,\ell^{\prime}}\geq\frac{\delta\sqrt{\log k} \Delta(K)}{\sqrt{\log k}+4\delta\sqrt{d}}\right]\geq\frac{1}{40}k^{-10/\delta ^{2}}.\]
Since
\[\frac{\delta\sqrt{\log k}}{\sqrt{\log k}+4\delta\sqrt{d}}\geq\min\left(\frac{ \delta}{2},\frac{1}{8\sqrt{d}}\right)\geq\frac{\varepsilon c}{8\delta^{2}},\]
the desired result follows from the fact that \(|U|\gg 40\,k^{10/\delta^{2}}\).
For the rest of the proof, assume that the statement in Claim 6.3 holds true, i.e., there are directions \(u^{(1)},\ldots,u^{(k)}\in U\) satisfying (18). We now show that for every vertex \(M_{\cdot,\ell}\) of \(K\), the corresponding point \(x(u^{(\ell)})\) is close to \(M_{\cdot,\ell}\).
**Claim 6.4**.: _For every \(\ell\in[k]\),_
\[|x(u^{(\ell)})-M_{\cdot,\ell}|\leq 17\delta^{2}\Delta(K)/c.\]
Proof.: By the definition of \(\mathcal{O}\), we know that \(x(u^{(\ell)})\) can be written as \(y(u^{(\ell)})+z(u^{(\ell)})\), where \(y(u^{(\ell)})\in K\) and \(|z(u^{(\ell)})|\leq\varepsilon\Delta(K)\). Thus, there is a convex combination \(\lambda_{\ell^{\prime}},\ell^{\prime}\in[k]\), of the vertices \(M_{\cdot,\ell^{\prime}}\) of \(K\) such that
\[x(u^{(\ell)})=\sum_{\ell^{\prime}\in[k]}\lambda_{\ell^{\prime}}M_{\cdot,\ell^ {\prime}}+z(u^{(\ell)}).\]
By the definition of \(\mathcal{O}\), \(x(u^{(\ell)})\cdot u^{(\ell)}\geq M_{\cdot,\ell}\cdot u^{(\ell)}-\varepsilon \Delta(K)\) and \(|z(u^{(\ell)})\cdot u^{(\ell)}|\leq\varepsilon\Delta(K)\). So, we get
\[M_{\cdot,\ell}\cdot u^{(\ell)}-\varepsilon\Delta(K)\leq\sum_{\ell^{\prime}\in[k]}\lambda_{\ell^{\prime}}M_{\cdot,\ell^{\prime}}\cdot u^{(\ell)}+\varepsilon\Delta(K),\]
which implies (after subtracting \(\lambda_{\ell}M_{\cdot,\ell}\cdot u^{(\ell)}\) from both sides):
\[(1-\lambda_{\ell})M_{\cdot,\ell}\cdot u^{(\ell)}\leq\sum_{\ell^{\prime}\neq\ell}\lambda_{\ell^{\prime}}M_{\cdot,\ell^{\prime}}\cdot u^{(\ell)}+2\varepsilon\Delta(K)\]
which, using Claim 6.3, yields:
\[(1-\lambda_{\ell})M_{\cdot,\ell}\cdot u^{(\ell)}\leq(1-\lambda_{\ell})M_{\cdot,\ell}\cdot u^{(\ell)}-(1-\lambda_{\ell})\frac{c\varepsilon\Delta(K)}{8\delta^{2}}+2\varepsilon\Delta(K).\]
It follows from the above inequality that
\[1-\lambda_{\ell}\leq\frac{16\delta^{2}}{c}. \tag{19}\]
Therefore,
\[|x(u^{(\ell)})-M_{\cdot,\ell}|\leq\left|\sum_{\ell^{\prime}\neq\ell}\lambda_{\ell^{\prime}}(M_{\cdot,\ell}-M_{\cdot,\ell^{\prime}})\right|+\varepsilon\Delta(K)\leq\sum_{\ell^{\prime}\neq\ell}\lambda_{\ell^{\prime}}\Delta(K)+\varepsilon\Delta(K)\stackrel{{\eqref{eq:19}}}{{\leq}}\frac{17\delta^{2}\Delta(K)}{c}.\]
This completes the proof of the theorem.
The statement of Theorem 6.2 requires two different bounds relating \(\delta\) to \(\varepsilon\). In some applications, it may be difficult to ensure both of these conditions. We consider the setting when \(d\gg k\), where the following variation of Theorem 6.2 is better suited.
**Theorem 6.5**.: _Suppose \(\varepsilon,\delta\) are reals in \([0,1]\) with \(\delta^{2}\geq c\varepsilon\sqrt{d},\delta\geq\frac{\sqrt{\log k}}{\sqrt{cd}},\) where \(c\) is a large enough constant. Let \(K\) be a \(\delta\)-well-separated \(k\)-vertex polytope. Suppose we are also given an \(\mathsf{OptOr}_{\varepsilon}(K)\) oracle \(\mathcal{O}\). Let \(W\) be the set of answers of the oracle to \(m=poly(d)\cdot k^{O(1/\delta^{2})}\) independent queries. Then for each vertex \(v\) of \(K\), there is a point \(v^{\prime}\in W\) such that \(|v-v^{\prime}|\leq O(\delta^{2}\Delta(K)/\sqrt{c})\)._
Proof.: The proof is identical to that of Theorem 6.2, except that in the proof of Claim 6.3, we now have the following modified argument. Since \(\delta\sqrt{cd}\geq\sqrt{\log k}\), we get
\[\frac{\delta\sqrt{\log k}}{\sqrt{\log k}+4\delta\sqrt{d}}\geq\frac{1}{8\sqrt{ cd}}\geq\frac{\varepsilon\sqrt{c}}{8\delta^{2}}.\]
The rest of the proof of Claim 6.3 follows without any further changes (where we replace \(c\) by \(\sqrt{c}\)).
## 7 From ListLearn to the \(k\)-\(\mathsf{OLP}\) Problem
In this section, we show that for well-separated polytopes, a solution for the ListLearn problem can be used to solve the \(k\)-\(\mathsf{OLP}\) problem as well. This algorithm uses the notion of soft convex hulls. We first describe the algorithm for constructing soft convex hulls, and then use it to solve the \(k\)-\(\mathsf{OLP}\) problem.
### Soft Convex Hulls
Let \(W\) be a finite set of points in \(\Re^{d}\), and \(T\) be the vertices of \(\mathsf{CH}(W)\). The subset \(T\) of \(W\) is the unique subset of \(W\) with the following properties:
1. \(W\subseteq\mathsf{CH}(T)\)
2. \(\forall w\in W\), if \(w\notin\mathsf{CH}(W\setminus\{w\})\), then \(w\in T\).
We now define a natural notion of _soft_ convex hull.
**Definition 7.1**.: For an \(\varepsilon\geq 0\), and \(S\subseteq W\), define the \(\varepsilon\)-convex hull of \(S\), \(\varepsilon\)-\(\mathsf{CH}(S)\), as \(\mathsf{CH}(S)+\varepsilon\Delta(W)B\), where \(B\) is the unit ball of the Euclidean norm.
The intuition behind the above definition is that \(\mathsf{CH}(W)\) can have a very large set of vertices, but there may be a small set of points whose soft convex hull contains \(W\). This is defined more formally as follows:
**Definition 7.2**.: We call a subset \(T\subseteq W\) an \(\varepsilon\)-envelope of \(W\), written \(\varepsilon\)-\(\mathsf{ENV}(W)\), if \(W\subseteq\varepsilon\)-\(\mathsf{CH}(T)\).
**Remarks:** The following observations about the set \(\varepsilon\)-\(\mathsf{ENV}(W)\) are easy to see:
1. There are several distinct sets \(T\) which could qualify as \(\varepsilon\)-\(\mathsf{ENV}(W)\). For example, let \(W\) consist of the following set of points in \(\Re^{2}\): a set of points \(W_{1}\) close to \((0,0)\) and a set of points \(W_{2}\) close to \((1,0)\). Let \(T\) be a pair of points \(\{x,y\}\) with \(x\in W_{1},y\in W_{2}\). Then it is easy to check that \(T\) is an \(\varepsilon\)-\(\mathsf{ENV}(W)\).
2. Let \(T\) be \(\varepsilon\)-\(\mathsf{ENV}(W)\). Unlike property (P2) above, it is not necessary that if \(w\in W\) is such that \(w\notin\varepsilon\)-\(\mathsf{CH}(W\setminus\{w\})\), then \(w\in T\).
Since the set \(\varepsilon\text{-}\mathsf{ENV}(W)\) is not uniquely determined, we will impose one more condition on it to make it unique (if it exists) and polynomial time computable. This condition requires the points of \(T\) to be "far apart" from each other. More precisely:
**Definition 7.3**.: For \(\varepsilon,\delta\in[0,1]\), a set \(T\) is called a \((\varepsilon,\delta)\text{-}\mathsf{ENV}(W)\) if it is an \(\varepsilon\text{-}\mathsf{ENV}(W)\) and
\[\forall w\in T,\mathsf{dist}(w,\mathsf{CH}(T\setminus\{w\}))>\delta\Delta(W) \tag{20}\]
**Fact 7.4**.: _Given a subset \(T\) of \(W\), we can check in polynomial time whether \(T\) is an \((\varepsilon,\delta)\text{-}\mathsf{ENV}(W)\)._
Proof.: Fix a subset \(T\). We can verify in polynomial time whether \(T\) is an \(\varepsilon\text{-}\mathsf{ENV}(W)\). Indeed, for each point \(w\in W\), we need to check if \(w\in\varepsilon\text{-}\mathsf{CH}(T)\). This can be expressed as a convex program feasibility problem, where the variables are \(\lambda_{t}\) for each \(t\in T\):
\[\left|\sum_{t\in T}\lambda_{t}\cdot t-w\right|\leq\varepsilon\Delta(W),\quad \sum_{t}\lambda_{t}=1,\quad\lambda_{t}\geq 0\ \ \forall t\in T.\]
Similarly, we can check (20) using convex programming. For each \(w\in T\), we can find \(\mathsf{dist}(w,\mathsf{CH}(T\setminus\{w\}))\) as follows, where the variables are \(\lambda_{x},x\in T\setminus\{w\}\):
\[\min.\,\left|\sum_{x\in T\setminus\{w\}}\lambda_{x}\cdot x-w\right|,\quad \sum_{x}\lambda_{x}=1,\quad\lambda_{x}\geq 0\ \ \forall x\in T\setminus\{w\}.\]
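As an illustration, both checks above reduce to computing the distance from a point to a convex hull, which is a small convex quadratic program. The following Python sketch (our own naming; it uses scipy's SLSQP solver and assumes \(|T|\geq 2\) in the separation check) is one non-authoritative way to realize the two conditions of Definition 7.3:

```python
import numpy as np
from scipy.optimize import minimize

def dist_to_hull(w, T):
    """Distance from point w to CH(rows of T): minimize |sum_t lam_t t - w|
    over convex combinations lam (lam >= 0, sum lam = 1)."""
    m = len(T)
    obj = lambda lam: np.sum((T.T @ lam - w) ** 2)
    cons = ({'type': 'eq', 'fun': lambda lam: np.sum(lam) - 1.0},)
    res = minimize(obj, np.full(m, 1.0 / m), method='SLSQP',
                   bounds=[(0.0, None)] * m, constraints=cons)
    return float(np.sqrt(res.fun))

def is_eps_env(T, W, eps, delta):
    """Check the two conditions of an (eps, delta)-ENV(W), Definition 7.3."""
    diam = max(np.linalg.norm(a - b) for a in W for b in W)   # Delta(W)
    soft_hull_ok = all(dist_to_hull(w, T) <= eps * diam for w in W)
    separated = all(dist_to_hull(T[i], np.delete(T, i, axis=0)) > delta * diam
                    for i in range(len(T)))
    return soft_hull_ok and separated
```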
For the rest of this section, we address the following question: given the set \(W\), and parameters \(\varepsilon,\delta\), is there an \((\varepsilon,\delta)\text{-}\mathsf{ENV}(W)\), and if so, can we find in polynomial time an approximation to this set? We informally argue that several "natural" greedy strategies do not work. Consider, for example, the following algorithms for identifying \((\varepsilon,\delta)\text{-}\mathsf{ENV}(W)\):
* Identify \(T\) as the set of points \(w\in W\) for which \(w\) is not close to \(\mathsf{CH}(W\setminus\{w\})\): We define a set of points in \(\Re^{2}\). We have _rings_ of points, one around \((0,0)\) and the other around \((0,1)\); each point is close to the convex hull of the others, and so this algorithm will dismiss all the points as not belonging to the desired set \(T\).
* Start by adding an extreme point (along an arbitrarily chosen direction) to \(T\). For the next \(k-1\) steps, iteratively find a point \(w\) for which \(\mathsf{dist}(w,\mathsf{CH}(T))\) is maximized and add it to \(T\): a simple example shows that this idea may not work either. Consider \(v_{1},v_{2},v_{3},v_{4}\) lying on the corners of a square, and let \(v_{5}\) be the mid-point of \(v_{3},v_{4}\). We start by adding \(v_{1}\) and then \(v_{2}\) to \(T\). But in the next step, we could add \(v_{5}\), and then both \(v_{3},v_{4}\). But the "correct" \(T\) would have been \(\{v_{1},v_{2},v_{3},v_{4}\}\).
If \(\varepsilon=0\), the above question is easy to answer in polynomial time. The answer is yes iff the set \(T\) of vertices of \(\mathsf{CH}(W)\) satisfies (20). Also, if \(\delta=1\), \(T\) has to be a singleton to satisfy (20).
In the rest of this section, we consider the following problem: For what pairs of values of \(\varepsilon,\delta\) can we prove that there is _essentially_ at most one \((\varepsilon,\delta)\text{-}\mathsf{ENV}(W)\), and if so, can we determine this set efficiently? We do not know the exact answer to this, but our main result here (which suffices for the applications) is (verbally stated) an affirmative answer to the question if the following condition is satisfied:
\[\delta\in\Omega(\sqrt{\varepsilon}).\]
This will follow as a corollary of our main result:
**Theorem 7.5**.: _Let \(\delta,\varepsilon,\varepsilon_{3}\) be reals in \((0,1/8)\) satisfying_
\[\delta>\max\left(\frac{2\varepsilon}{\varepsilon_{3}-\varepsilon},4\varepsilon_ {3}\right) \tag{21}\]
_Let \(W\) be a finite set of points in \(\Re^{d}\). We can determine in polynomial time whether there exists a set \(T\) that is an \((\varepsilon,\delta)\)-\(\mathsf{ENV}(W)\), and if so, we can efficiently find a subset \(Q\) of \(W\) such that_
\[|Q|=|T| \tag{22}\] \[\forall w\in T,\exists x\in Q:|w-x|\leq 2\varepsilon_{3}\Delta(W) \tag{23}\]
**Corollary 7.6**.: _Let \(\delta,\varepsilon\) be reals in \((0,1/8)\) satisfying \(\delta>16\sqrt{\varepsilon}\). Let \(W\) be a finite set of points in \(\Re^{d}\). We can determine in polynomial time whether there exists a set \(T\) forming an \((\varepsilon,\delta)\)-\(\mathsf{ENV}(W)\), and if so, we can efficiently find a subset \(Q\) of \(W\) such that_
\[|Q|=|T| \tag{24}\] \[\forall w\in T,\exists x\in Q:|w-x|\leq 8\sqrt{\varepsilon}\Delta(W) \tag{25}\]
Proof.: The corollary follows from Theorem 7.5 by taking \(\varepsilon_{3}=4\sqrt{\varepsilon}\).
Proof.: (of Theorem 7.5) The procedure is described in Algorithm 2. We first compute a subset \(Q^{\prime\prime}\) of \(W\) consisting of points \(w\in W\) which do not lie in the soft convex hull of the points in \(W\) which are "far" from \(w\) - this can be done in polynomial time by using arguments similar to those in the proof of Fact 7.4. Then \(Q\) is defined as a maximal subset of points in \(Q^{\prime\prime}\) with the property that the pair-wise distance between the points in it is large. Finally, we check whether \(Q\) is an \((\varepsilon,\delta)\)-\(\mathsf{ENV}(W)\), which can be done efficiently using Fact 7.4.
```
1Compute \(\Delta(W)\).
2Let \(Q^{\prime}:=\{w\in W:w\in\varepsilon\text{-}\mathsf{CH}\left(\{x\in W:|w-x|\geq\varepsilon_{3}\Delta(W)\}\right)\}.\)
3Let \(Q^{\prime\prime}:=W\setminus Q^{\prime}\).
4Define \(Q:=\) maximal subset of \(Q^{\prime\prime}\) such that for every distinct \(x,y\in Q\), \(|x-y|>2\varepsilon_{3}\Delta(W)\).
5if\(Q\) is \((\varepsilon,\delta)\)-\(\mathsf{ENV}(W)\)then
6Output the set \(Q\).
7else
8Output No.
```
**Algorithm 2**Procedure to identify \((\varepsilon,\delta)\)-\(\mathsf{ENV}(W)\)
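For concreteness, the following Python sketch mirrors Algorithm 2 line by line. It reuses the hypothetical helpers `dist_to_hull` and `is_eps_env` from the sketch after Fact 7.4 and is meant as an illustration rather than an optimized implementation:

```python
import numpy as np

def soft_envelope(W, eps, eps3, delta):
    """A sketch of Algorithm 2.  W is an (n, d) array of points."""
    diam = max(np.linalg.norm(a - b) for a in W for b in W)   # Delta(W)
    # Q'' := points of W NOT in the soft hull of the points far from them.
    Qpp = []
    for w in W:
        far = np.array([x for x in W if np.linalg.norm(w - x) >= eps3 * diam])
        if len(far) == 0 or dist_to_hull(w, far) > eps * diam:
            Qpp.append(w)
    # Q := a maximal 2*eps3*Delta(W)-separated subset of Q'' (greedy choice).
    Q = []
    for w in Qpp:
        if all(np.linalg.norm(w - q) > 2 * eps3 * diam for q in Q):
            Q.append(w)
    Q = np.array(Q)
    return Q if is_eps_env(Q, W, eps, delta) else None        # None = "No"
```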
Now we analyze this algorithm. If there is no \((\varepsilon,\delta)\)-\(\mathsf{ENV}(W)\), then the algorithm will clearly say "No" (because the set \(Q\) will not be \((\varepsilon,\delta)\)-\(\mathsf{ENV}(W)\)). So assume that there is a set \(T\) which is \((\varepsilon,\delta)\)-\(\mathsf{ENV}(W)\).
**Claim 7.7**.: \(T\subseteq Q^{\prime\prime}\)_._
Proof.: Suppose for the sake of contradiction that there is a point \(w\in T\setminus Q^{\prime\prime}\). For the sake of brevity, let \(W_{x}\) denote the subset of points \(x\in W\) satisfying \(|w-x|\geq\varepsilon_{3}\Delta(W)\). The fact that \(w\notin Q^{\prime\prime}\) implies that \(w\in\varepsilon\text{-}\mathsf{CH}(W_{x})\), i.e.,
\[w=\sum_{x\in W_{x}}\lambda_{x}\cdot x+e_{0},\]
where \(\lambda_{x}\) form a convex combination and \(|e_{0}|\leq\varepsilon\Delta(W).\) Since \(T\) is an \(\varepsilon\text{-}\mathsf{ENV}(W)\), we have \(W\subseteq\varepsilon\text{-}\mathsf{CH}(T)\), so for each \(x\in W_{x}\), there is a point \(x^{\prime}\in\mathsf{CH}(T)\) such that \(e_{x}:=x-x^{\prime}\) has length at most \(\varepsilon\Delta(W)\). Since \(w\in T\), we can write \(x^{\prime}\) as
\[x^{\prime}=\mu_{x}w+(1-\mu_{x})y_{x},\quad y_{x}\in\mathsf{CH}(T\setminus\{w\}),\]
where \(\mu_{x}\in[0,1]\). Therefore,
\[|w-x|\leq|w-x^{\prime}|+|e_{x}|=(1-\mu_{x})|w-y_{x}|+|e_{x}|\leq(1-\mu_{x})\Delta( W)+\varepsilon\Delta(W).\]
But we know that \(|w-x|\geq\varepsilon_{3}\Delta(W)\). Therefore, we get
\[(1-\mu_{x})\geq\varepsilon_{3}-\varepsilon. \tag{26}\]
Now,
\[w =\sum_{x\in W_{x}}\lambda_{x}\cdot x+e_{0}=\sum_{x\in W_{x}}\lambda _{x}\cdot x^{\prime}+\sum_{x}\lambda_{x}e_{x}+e_{0}\] \[=\sum_{x\in W_{x}}\lambda_{x}\mu_{x}\cdot w+\sum_{x\in W_{x}} \lambda_{x}(1-\mu_{x})y_{x}+e,\]
where \(e=\sum_{x}\lambda_{x}e_{x}+e_{0}\) has length at most \(2\varepsilon\Delta(W)\). Let \(\theta_{x}\) denote \(\lambda_{x}(1-\mu_{x})\). Observe that \(\theta:=\sum_{x}\theta_{x}=1-\sum_{x}\lambda_{x}\mu_{x}\). Therefore, we can rewrite the above as
\[w-\frac{1}{\theta}\sum_{x\in W_{x}}\theta_{x}y_{x}=\frac{e}{\theta}.\]
Since \(\frac{1}{\theta}\sum_{x\in W_{x}}\theta_{x}y_{x}\in\mathsf{CH}(T\setminus\{w\})\), we see that
\[\mathsf{dist}(w,\mathsf{CH}(T\setminus\{w\}))\leq\frac{|e|}{\theta}\leq\frac{2\varepsilon\Delta(W)}{\sum_{x\in W_{x}}\lambda_{x}(1-\mu_{x})}\stackrel{{\eqref{eq:w-x}}}{{\leq}}\frac{2\varepsilon}{\varepsilon_{3}-\varepsilon}\Delta(W).\]
Using (20) and (21), we get a contradiction. Therefore, \(w\in Q^{\prime\prime}\).
Since \(T\subseteq Q^{\prime\prime}\), for every \(w\in T\), there exists a point \(x_{w}\in Q\) such that \(|w-x_{w}|\leq 2\varepsilon_{3}\Delta(W).\) It is easy to see that if \(w,w^{\prime}\) are two distinct points in \(T\), then \(x_{w}\neq x_{w^{\prime}}\). Indeed,
\[|x_{w}-x_{w^{\prime}}|\geq|w-w^{\prime}|-|w-x_{w}|-|w^{\prime}-x_{w^{\prime}} |\geq\delta\Delta(W)-4\varepsilon_{3}\Delta(W)\stackrel{{\eqref {eq:w-x}}}{{>}}0.\]
Finally, we prove that every element in \(Q\) is of the form \(x_{w}\) for some \(w\in T\).
**Claim 7.8**.: _Every \(x\in Q\) is of the form \(x_{w}\) for some \(w\in T\)._
Proof.: Consider a point \(x\in Q\). This implies that \(x\in Q^{\prime\prime}\). We claim that there is a point \(w\in T\) such that \(|x-w|\leq\varepsilon_{3}\Delta(W)\). Suppose not. Then the set of points \(w\) such that \(|x-w|\geq\varepsilon_{3}\Delta(W)\) includes \(T\). But we know that \(x\in\varepsilon\text{-}\mathsf{CH}(T)\), and so \(x\in Q^{\prime}\), a contradiction.
Therefore, taking \(x_{w}\) to be the point of \(Q\) closest to \(w\) (so that \(|w-x_{w}|\leq|w-x|\leq\varepsilon_{3}\Delta(W)\)),
\[|x-x_{w}|\leq|x-w|+|w-x_{w}|\leq 2\varepsilon_{3}\Delta(W).\]
This implies that \(x=x_{w}\) (since any two distinct points in \(Q\) have distance greater than \(2\varepsilon_{3}\Delta(W)\)).
Thus we have shown that \(|Q|=|T|\) and for every \(w\in T\), there is a unique element \(x_{w}\in Q\) with \(|w-x_{w}|\leq 2\varepsilon_{3}\Delta(W)\).
### Algorithm for \(k\)-\(\mathsf{OLP}\)
We now show how soft convex hulls can be used to generate a solution for the \(k\)-\(\mathsf{OLP}\) problem. The following result, which formalizes Theorem 1.13, uses the same setting as that in Theorem 6.2:
**Theorem 7.9**.: _Suppose \(\varepsilon,\delta\) are reals in \([0,1]\) with \(\delta^{2}\geq c\varepsilon\sqrt{d},\delta^{3}\geq c\,\varepsilon\), where \(c\) is a large enough constant. Let \(K\) be a \(\delta\)-well-separated \(k\)-vertex polytope. Suppose we are also given an \(\mathsf{OptOr}_{\varepsilon}(K)\) oracle \(\mathcal{O}\). Let \(W\) be the set of answers of the oracle \(\mathcal{O}\) to \(m=poly(d)\cdot k^{\Omega(1/\delta^{2})}\) independent random queries. We can find \(Q\subseteq W,|Q|=k\) in randomized \(poly(d)\cdot k^{\Omega(1/\delta^{2})}\)-time which satisfies the following condition w.h.p.: for every vertex \(v\) of \(K\), there is a point \(v^{\prime}\) in \(Q\) with \(|v-v^{\prime}|\leq\delta\Delta(K)/10\)._
Proof.: The proof of Theorem 6.2 shows that for every vertex \(M_{\cdot,\ell}\) of \(K\), there is a point \(x(u^{(\ell)})\in W\) such that
\[|x(u^{(\ell)})-M_{\cdot,\ell}|\leq 17\delta^{2}\Delta(K)/c. \tag{27}\]
Let \(T\) denote the set of points \(\{x(u^{(\ell)}):\ell\in[k]\}\). Our first claim is that the points of \(T\) are also well-separated.
**Claim 7.10**.: _For any \(\ell\in[k]\),_
\[\mathsf{dist}(x(u^{(\ell)}),\mathsf{CH}(T\setminus\{x(u^{(\ell)})\}))\geq \delta\Delta(K)/2.\]
_Further, the diameter of \(\mathsf{CH}(T)\) is at most \(2\Delta(K)\)._
Proof.: Fix an index \(\ell\in[k]\) and a point \(y\in\mathsf{CH}(T\setminus\{x(u^{(\ell)})\}).\) We can express \(y\) as a convex combination of points in \(T\setminus\{x(u^{(\ell)})\}\), i.e.,
\[y=\sum_{\ell^{\prime}\neq\ell}\lambda_{\ell^{\prime}}\cdot x(u^{(\ell^{\prime })}),\quad\text{where}\quad\sum_{\ell^{\prime}\neq\ell}\lambda_{\ell^{\prime} }=1.\]
Now
\[|x(u^{(\ell)})-y| \geq\left|M_{\cdot,\ell}-\sum_{\ell^{\prime}\neq\ell}\lambda_{ \ell^{\prime}}\cdot M_{\cdot,\ell^{\prime}}\right|-|x(u^{(\ell)})-M_{\cdot, \ell}|-\sum_{\ell^{\prime}\neq\ell}\lambda_{\ell^{\prime}}\cdot|x(u^{(\ell^{ \prime})})-M_{\cdot,\ell^{\prime}}|\] \[\geq\left(\delta-\frac{34\delta^{2}}{c}\right)\Delta(K)\geq \delta\Delta(K)/2,\]
where the second last inequality follows from (27) and the fact that \(K\) is \(\delta\)-well-separated. Since \(\mathsf{dist}(x(u^{(\ell)}),K)\leq\varepsilon\Delta(K)\), it follows that \(\Delta(\mathsf{CH}(T))\leq 2\Delta(K)\).
Recall that \(W\) denotes the set \(\{x(u):u\in U\}\). We now show that \(\mathsf{CH}(T)\) closely approximates the set \(\mathsf{CH}(W)\).
**Claim 7.11**.: \(W\subseteq\varepsilon^{\prime}\)_-\(\mathsf{CH}(T),\) where \(\varepsilon^{\prime}=\frac{32\delta^{2}}{c}\)._
Proof.: Fix a point \(x(u)\in W\). We know that \(x(u)\) can be written as
\[x(u)=y(u)+z(u),\quad y(u)\in K,\ |z(u)|\leq\varepsilon\Delta(K).\]
Let \(y(u)=\sum_{\ell\in[k]}\lambda_{\ell}\cdot M_{\cdot,\ell}\), where the coefficients \(\lambda_{\ell}\) form a convex combination. Then
\[\left|x(u)-\sum_{\ell\in[k]}\lambda_{\ell}\cdot x(u^{(\ell)})\right|\leq\sum_ {\ell\in[k]}\lambda_{\ell}\cdot|x(u^{(\ell)})-M_{\cdot,\ell}|+|z(u)|\leq\frac{ 17\delta^{2}\Delta(K)}{c}+\varepsilon\Delta(K)\leq\frac{32\delta^{2}\Delta(K)} {c},\]
where the second last inequality follows from (27) and the last inequality by the assumption \(\delta^{3}\geq c\,\varepsilon\) in the statement of the theorem. This proves the desired result.
Claim 7.10 and Claim 7.11 imply that \(T\) is \((\varepsilon^{\prime},\delta^{\prime})\)-\(\mathsf{ENV}(W)\) with \(\delta^{\prime}=\delta/4,\varepsilon^{\prime}=\frac{32\delta^{2}}{c}\). We can now apply Corollary 7.6 to get approximations to \(x(u^{(\ell)})\) within distance \(17\sqrt{\varepsilon^{\prime}}\Delta(K)\). The bound (27) now implies that we can get approximations to \(M_{\cdot,\ell}\) within distance
\[17\sqrt{\varepsilon^{\prime}}\Delta(K)+\frac{17\delta^{2}\Delta(K)}{c}\leq \frac{\delta\Delta(K)}{10}.\]
This proves the desired result.
The following result is the analogue of Theorem 6.5, and gives an algorithm for the \(k\)-\(\mathsf{OLP}\) problem under slightly different conditions on the parameters \(\delta\) and \(\varepsilon\).
**Theorem 7.12**.: _Suppose \(\varepsilon,\delta\) are reals in \([0,1]\) with \(\delta^{2}\geq c\varepsilon\sqrt{d},\delta\geq\frac{\sqrt{\log k}}{\sqrt{cd}},\) where \(c\) is a large enough constant. Let \(K\) be a \(\delta\)-well-separated \(k\)-vertex polytope. Suppose we are also given an \(\mathsf{OptOr}_{\varepsilon}(K)\) oracle \(\mathcal{O}\). Let \(W\) be the set of answers of the oracle \(\mathcal{O}\) to \(m=poly(d)\cdot k^{O(1/\delta^{2})}\) independent queries. In randomized \(poly(d)\cdot k^{O(1/\delta^{2})}\)-time we can find \(Q\subset W\) of \(k\) points such that the following condition is satisfied w.h.p.: for every vertex \(v\) of \(K\), there is a point \(v^{\prime}\) in \(Q\) with \(|v-v^{\prime}|\leq\delta\Delta(K)/10\)._
## 8 \(k\)-\(\mathsf{OLP}\) algorithm for Latent Polytopes using Singular Value Decomposition
Theorem 1.11 showed that a solution to the \(k\)-\(\mathsf{OLP}\) problem requires the error parameter \(\varepsilon\) to be \(O^{*}(1/\sqrt{d})\). Theorem 6.2 and Theorem 6.5 give algorithms achieving this bound. However, for many polytopes with \(k\) vertices, the available oracle error \(\varepsilon\) is only \(O^{*}(1/\sqrt{k})\); if \(k<d\), this error is too high. To tackle this, we find a good approximation to the subspace spanned by the vertices of \(K\), then we project to this subspace and use the result in Theorem 6.2. One such example is the "Latent \(k\)-Polytope" (abbreviated \(\mathsf{LkP}\)) problem, which we now describe.
The \(\mathsf{LkP}\) problem has been studied in [1]. Certain assumptions were made on the model, namely, the hidden polytope \(K\) as well as on the (hidden) process for generating observed data from latent points in \(K\). These assumptions are (a) shown to hold in several important Latent Variable models and (b) are sufficient to enable one to get polynomial time learning algorithms.
Here, we formulate assumptions which are similar, but, weaker in one important aspect. Whereas [1] assumed that each vertex of \(K\) has a separation from the **affine hull** of the other vertices (thus, in particular, each vertex is affinely independent of other vertices), we assume here that each vertex is separated only from the convex hull of the others. Under this weaker assumption, the algorithm of [1] does not work. We give a different algorithm which we prove works. It is also simpler to state and carry out and its proof is based on a new general tool we introduce here - the Random Separating Hyperplane theorem (Theorem 4.1).
**Assumptions on data in the \(\mathsf{LkP}\) problem:** Let \(M_{\cdot,1},\ldots,M_{\cdot,k}\) denote the vertices of \(K\) and \(\mathbf{M}\) be the \(d\times k\) matrix with columns representing the vertices of \(K\). We assume there are latent (hidden) points \(P_{\cdot,j},j=1,2,\ldots,n\) in \(K\) and observed data points \(A_{\cdot,j},j=1,2,\ldots,n\) are generated (not necessarily under any stochastic assumptions) by adding _displacements_\(A_{\cdot,j}-P_{\cdot,j}\) respectively to \(P_{\cdot,j}\). Clearly if the displacements are arbitrary, it is not possible to learn \(K\) given only the observed data. So we need some bound on the displacements.
Secondly, if all (or almost all) latent points lie in (or close to) the convex hull of a subset of \(k-1\) or fewer vertices of \(K\), the missing vertex cannot be learnt. To avoid this, we will assume that there is a certain \(w_{0}\) fraction of latent points close to every vertex of \(K\).
Let 4
Footnote 4: By the standard definition of spectral norm, it is easy to see that \(\sigma_{0}^{2}\) is the maximum mean squared displacement in any direction.
\[\sigma_{0}:=\frac{||\mathbf{P}-\mathbf{A}||}{\sqrt{n}}. \tag{28}\]
We now show that the \(k\)-**OLP-Algorithm** mentioned in Section 1.3 has the desired properties:
**Theorem 8.1**.: _Suppose \(K\) is a latent polytope with \(k\) vertices \(M_{\cdot,1},M_{\cdot,2},\ldots,M_{\cdot,k}\) and \(\mathbf{P},\mathbf{A}\) are latent points (all in \(K\)) and observed data respectively. Assume_
\[\text{For all }\ell\in[k]\,\ C_{\ell}:=\{j:|P_{\cdot,j}-M_{\cdot,\ell}|\leq \frac{\sigma_{0}}{\sqrt{w_{0}}}\}\text{ satisfies }|C_{\ell}|\geq w_{0}n. \tag{29}\]
_Suppose \((\sqrt{\log k}/\sqrt{c_{0}k})\leq\delta\leq 1\) and \(c_{0}\) is a large constant satisfying_
\[\sigma_{0}\leq\frac{\delta^{2}\Delta(K)}{100c_{0}}\frac{\sqrt{w_{0}}}{\sqrt{k }}. \tag{30}\]
_Let \(V\) be the \(k\)-dimensional SVD subspace of \(\mathbf{A}\), and \(\widehat{K}\) denote the projection of \(K\) on \(V\)._
* _There is an_ \(\mathsf{OptOr}_{\frac{10\sigma_{0}}{\sqrt{w_{0}}\Delta}}(\widehat{K})\) _oracle_ \(\mathcal{O}\)_._
* _The algorithm_ \(k\)**-OLP Algorithm** _in Section_ 1.3 _outputs a set_ \(Q\) _of_ \(k\) _points such that the following condition is satisfied w.h.p.: for every vertex_ \(v\) _of_ \(K\)_, there is a point_ \(v^{\prime}\) _in_ \(Q\) _with_ \(|v-v^{\prime}|\leq\delta\Delta(K)/5\)_._
In the Theorem above, (30) implies an upper bound on \(\sigma_{0}\) of \(\Delta(K)\sqrt{w_{0}}/(c_{0}\sqrt{k})\) and so the oracle \(\mathcal{O}\) used by the Theorem is an \(\mathsf{OptOr}_{\varepsilon}(\widehat{K})\) oracle for \(\varepsilon\leq 1/(c_{0}\sqrt{k})\). Thus, we get around the lower bound result Theorem 1.11 by working in the \(k\)-dimensional SVD subspace of \(\mathbf{A}\). In the rest of this section, we prove Theorem 8.1. We begin by giving properties of the SVD subspace, and then show that an approximate optimization oracle exists in this subspace. Finally, we apply Theorem 7.12.
### Properties of the SVD subspace of \(\mathbf{A}\)
Let \(V\) denote the \(k\)-dimensional SVD subspace of \(\mathbf{A}\). We shall show that the projection of \(K\) on \(V\) has an \(\mathsf{OptOr}_{\varepsilon}\) oracle for a suitable value of \(\varepsilon\). Let \(\widehat{M}_{\cdot,\ell}\) denote the projection of \(M_{\cdot,\ell}\) on the SVD subspace \(V\). Define \(\widehat{A}_{\cdot,j}\) similarly, and let \(\widehat{\mathbf{A}}\) be the \(d\times n\) matrix whose columns are given by \(\widehat{A}_{\cdot,j}\). We now show that \(\widehat{M}_{\cdot,\ell}\) and \(M_{\cdot,\ell}\) are close to each other. The following notation will turn out to be very useful in the subsequent discussion: for a \(d\times n\) matrix \(\mathbf{B}\) and a subset \(S\) of \([n]\), define
\[B_{\cdot,S}:=\frac{1}{|S|}\sum_{j\in S}B_{\cdot,j}.\]
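Computationally, both the subspace \(V\) and the averaging operator \(B_{\cdot,S}\) are straightforward to realize; the following minimal NumPy sketch (the function names are our own) makes this concrete:

```python
import numpy as np

def svd_projection(A, k):
    """Project the columns of the d x n matrix A onto its k-dimensional
    SVD subspace V (the span of the top-k left singular vectors);
    the result A_hat is the best rank-k approximation of A."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    Uk = U[:, :k]                 # orthonormal basis of V
    return Uk @ (Uk.T @ A)        # projection of every column of A

def column_average(B, S):
    """B_{.,S}: the average of the columns of B indexed by S."""
    return B[:, S].mean(axis=1)
```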
**Claim 8.2**.: _Let \(S\subseteq[n]\) be a subset of indices. Then,_
\[|A_{\cdot,S}-P_{\cdot,S}|\leq\frac{\sigma_{0}\sqrt{n}}{\sqrt{|S|}}.\]
Proof.: Define a unit vector \(v\in\Re^{n}\) as follows: if \(j\in S\), we define \(v_{j}=\frac{1}{\sqrt{|S|}}\), and \(v_{j}=0\) otherwise. Now, observe that
\[|A_{\cdot,S}-P_{\cdot,S}|=\frac{1}{\sqrt{|S|}}\cdot|(\mathbf{A}-\mathbf{P}) \cdot v|.\]
The definition (28) of \(\sigma_{0}\) now implies that the RHS above is at most \(\frac{\sigma_{0}\sqrt{n}}{\sqrt{|S|}}\), since \(|(\mathbf{A}-\mathbf{P})v|\leq||\mathbf{A}-\mathbf{P}||\) for the unit vector \(v\).
**Lemma 8.3**.: _For all \(\ell\in[k]\), \(|M_{\cdot,\ell}-\widehat{M}_{\cdot,\ell}|\leq 5\sigma_{0}/\sqrt{w_{0}}\leq\frac{ \delta^{2}\Delta(K)}{c_{0}}\)._
Proof.: We have \(||\widehat{\mathbf{A}}-\mathbf{A}||\leq||\mathbf{A}-\mathbf{P}||\) since \(\widehat{\mathbf{A}}\) is the best rank \(k\) approximation to \(\mathbf{A}\) in terms of the spectral norm and since, each column of \(\mathbf{P}\) being a convex combination of the columns \(\{M_{\cdot,\ell},\ell\in[k]\}\), \(\mathbf{P}\) has rank at most \(k\). We also have \(||\widehat{\mathbf{A}}-\widehat{\mathbf{P}}||\leq||\mathbf{A}-\mathbf{P}||\) since projections cannot increase length. Using these inequalities and the triangle inequality, we get:
\[||\mathbf{P}-\widehat{\mathbf{P}}||\leq||\mathbf{P}-\mathbf{A}||+||\mathbf{A}-\widehat{\mathbf{A}}||+||\widehat{\mathbf{A}}-\widehat{\mathbf{P}}||\leq 3\cdot||\mathbf{A}-\mathbf{P}||\stackrel{{(28)}}{{\leq}}3\sigma_{0}\sqrt{n}.\]
Let \(C_{\ell}\) be as in (29). Now, define \(w\) to be the unit vector with \(w_{j}=1/\sqrt{|C_{\ell}|},\forall j\in C_{\ell}\), and \(w_{j}=0,\forall j\notin C_{\ell}\). We see that
\[|M_{\cdot,\ell}-\widehat{M}_{\cdot,\ell}|\leq\frac{|(\mathbf{P}-\widehat{\mathbf{P}})w|}{\sqrt{|C_{\ell}|}}+\frac{2\sigma_{0}}{\sqrt{w_{0}}}\leq\frac{||\mathbf{P}-\widehat{\mathbf{P}}||}{\sqrt{|C_{\ell}|}}+\frac{2\sigma_{0}}{\sqrt{w_{0}}},\]
which proves the Lemma (using \(|C_{\ell}|\geq w_{0}n\)). The second inequality in the Lemma follows from (30).
Let \(\widehat{K}\) be the projection of \(K\) on \(V\). We now show that \(\widehat{K}\) is also \(\delta^{\prime}\)-well-separated for a value \(\delta^{\prime}\) close to \(\delta\).
**Lemma 8.4**.: _The vertices of \(\widehat{K}\) are given by \(\{\widehat{M}_{\cdot,\ell}:\ell\in[k]\}\). Further, \(\widehat{K}\) is \(\delta^{\prime}\)-well-separated, where \(\delta^{\prime}=\delta\left(1-\frac{1}{100}\right)\)._
Proof.: It is clear that the vertex set of \(\widehat{K}\) is a subset of \(S:=\{\widehat{M}_{\cdot,\ell}:\ell\in[k]\}\). Thus, we need to show that none of the points in \(S\) can be written as a convex combination of the rest of the points in \(S\). This will follow from the fact that \(\widehat{K}\) is \(\delta^{\prime}\)-well-separated, and so it suffices to prove this statement. Consider a point \(\widehat{M}_{\cdot,\ell}\in S\) and a point \(p:=\sum_{\ell^{\prime}\in[k],\ell^{\prime}\neq\ell}\lambda_{\ell^{\prime}}\widehat{M}_{\cdot,\ell^{\prime}}\), which is in \(\mathsf{CH}(S\setminus\{\widehat{M}_{\cdot,\ell}\}).\) Then
\[|p-\widehat{M}_{\cdot,\ell}| =\left|\sum_{\ell^{\prime}\neq\ell}\lambda_{\ell^{\prime}}(\widehat{M}_{\cdot,\ell^{\prime}}-\widehat{M}_{\cdot,\ell})\right|\] \[\geq\left|\sum_{\ell^{\prime}\neq\ell}\lambda_{\ell^{\prime}}(M_{\cdot,\ell^{\prime}}-M_{\cdot,\ell})\right|-\sum_{\ell^{\prime}\neq\ell}\lambda_{\ell^{\prime}}|M_{\cdot,\ell^{\prime}}-\widehat{M}_{\cdot,\ell^{\prime}}|-\sum_{\ell^{\prime}\neq\ell}\lambda_{\ell^{\prime}}|M_{\cdot,\ell}-\widehat{M}_{\cdot,\ell}|\] \[\geq\left|\sum_{\ell^{\prime}\neq\ell}\lambda_{\ell^{\prime}}M_{\cdot,\ell^{\prime}}-M_{\cdot,\ell}\right|-\frac{2\delta^{2}\Delta(K)}{c_{0}}\] \[\geq(\delta-\frac{2\delta^{2}}{c_{0}})\Delta(K)\geq\delta^{\prime}\Delta(K)\]
where the second last line uses Lemma 8.3 and the last line follows from the fact that \(K\) is \(\delta\)-well-separated. Note that \(\Delta(\widehat{K})\leq\Delta(K).\) Thus it follows that \(\widehat{K}\) is \(\delta^{\prime}\)-well-separated.
We would now like to use Theorem 6.5 on \(\widehat{K}\) with \(\varepsilon=\frac{10\sigma_{0}}{\sqrt{w_{0}}\Delta(K)}\) and \(\delta^{\prime}=\delta(1-1/100)\). Indeed, the condition (30) along with Lemma 8.4 show that the conditions of Theorem 6.5 hold provided we are able to exhibit an efficient \(\mathsf{OptOr}_{\varepsilon}(\widehat{K})\) oracle.
### Construction of the \(\mathsf{OptOr}_{\varepsilon}(\widehat{K})\) oracle
We now describe the construction of the \(\mathsf{OptOr}_{\varepsilon}(\widehat{K})\) oracle in the subspace \(V\), where \(\varepsilon=\frac{10\sigma_{0}}{\Delta(K)\sqrt{w_{0}}}\). Let \(u\) be a unit vector in \(V\). The procedure (referred to as the **Subset Smoothing Algorithm** in Section 1.3) is given in Algorithm 3 - we project the columns of \(\widehat{\mathbf{A}}\) on \(u\) and consider the \(w_{0}n\) columns with the highest projection along \(u\). Finally, we output the average of these points.
```
1Input: unit vector \(u\in V\).
2 Let \(R(u)\) be the index set of the columns \(\widehat{A}_{\cdot,j}\) with the \(w_{0}n\) highest values of \(u\cdot\widehat{A}_{\cdot,j}\).
3Output:\(\widehat{A}_{\cdot,R(u)}\)
```
**Algorithm 3**Oracle in \(\mathsf{OptOr}_{\varepsilon}(\widehat{K})\)
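A direct NumPy rendering of Algorithm 3 might look as follows (the function name and the rounding of \(w_{0}n\) to an integer are our own assumptions):

```python
import numpy as np

def subset_smoothing_oracle(A_hat, u, w0):
    """A sketch of Algorithm 3: given the projected data A_hat (d x n) and a
    unit direction u in V, average the w0*n columns with the highest
    projection along u.  This realizes an OptOr_eps(K_hat) oracle."""
    n = A_hat.shape[1]
    m = max(1, int(w0 * n))                  # |R(u)| = w0 * n columns
    proj = u @ A_hat                         # projections u . A_hat_{.,j}
    R = np.argsort(proj)[-m:]                # indices of the m largest
    return A_hat[:, R].mean(axis=1)          # A_hat_{., R(u)}
```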
We now show that this algorithm has the desired properties. Let \(R(u)\) be the index set as in Algorithm 3. The following is an easy consequence of Claim 8.2.
**Claim 8.5**.: \(\mathsf{dist}(\widehat{A}_{\cdot,R(u)},\widehat{K})\leq\varepsilon\Delta(K)\)_._
Proof.: Claim 8.2 shows that \(|A_{\cdot,R(u)}-P_{\cdot,R(u)}|\leq\varepsilon\Delta(K).\) Therefore, \(|\widehat{A}_{\cdot,R(u)}-\widehat{P}_{\cdot,R(u)}|\leq\varepsilon\Delta(K)\). The desired result now follows because \(\widehat{P}_{\cdot,R(u)}\in\widehat{K}\).
Let \(\ell\in[k]\) be the index for which \(\widehat{M}_{\cdot,\ell}\cdot u\) is maximized.
**Claim 8.6**.: \(\widehat{A}_{\cdot,R(u)}\cdot u\geq\widehat{M}_{\cdot,\ell}\cdot u-\varepsilon \Delta(K)\)_._
Proof.: Let \(C_{\ell}\) be the index set specified by (29). It suffices to show that
\[\widehat{A}_{\cdot,C_{\ell}}\cdot u\geq\widehat{M}_{\cdot,\ell}\cdot u- \varepsilon\Delta(K).\]
Now,
\[\widehat{M}_{\cdot,\ell}\cdot u-\widehat{A}_{\cdot,C_{\ell}}\cdot u \leq|\widehat{M}_{\cdot,\ell}-\widehat{A}_{\cdot,C_{\ell}}|\] \[\leq|\widehat{M}_{\cdot,\ell}-\widehat{P}_{\cdot,C_{\ell}}|+| \widehat{P}_{\cdot,C_{\ell}}-\widehat{A}_{\cdot,C_{\ell}}|\] \[\leq|M_{\cdot,\ell}-P_{\cdot,C_{\ell}}|+|P_{\cdot,C_{\ell}}-A_{ \cdot,C_{\ell}}|\] \[\leq\varepsilon\Delta(K),\]
where the last inequality follows from (29) and Claim 8.2.
The above two results show that Algorithm 3 yields an \(\mathsf{OptOr}_{\varepsilon}(\widehat{K})\) oracle. We would now like to apply Theorem 7.12 to the polytope \(\widehat{K}\) with parameters \(\varepsilon\) and \(\delta^{\prime}=\delta(1-1/100)\). Lemma 8.4 shows that \(\widehat{K}\) is \(\delta^{\prime}\)-well-separated. We now need to check that the parameters \(\varepsilon\) and \(\delta^{\prime}\) satisfy the following conditions needed in the statement of Theorem 7.12 (recall that \(\widehat{K}\) is a polytope in \(\Re^{k}\)): (i) \(\delta^{2}\geq c\varepsilon\sqrt{k}\), and (ii) \(\delta\geq\frac{\sqrt{\log k}}{\sqrt{ck}}.\) The second condition is already an assumption in the statement of Theorem 8.1, and the first condition follows from (30) and the fact that \(\varepsilon=\frac{10\sigma_{0}}{\Delta(K)\sqrt{w_{0}}}\). Applying Theorem 7.12, we get a set of \(k\) points \(Q\), such that for each vertex \(v^{\prime\prime}\) of \(\widehat{K}\), there is a point \(v^{\prime}\in Q\) with \(|v^{\prime\prime}-v^{\prime}|\leq\delta\Delta(K)/10\). Applying Lemma 8.3 proves Theorem 8.1.
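Putting the pieces together, the whole \(k\)-\(\mathsf{OLP}\) pipeline of Theorem 8.1 can be sketched in a few lines, reusing the hypothetical helpers `svd_projection`, `subset_smoothing_oracle`, and `soft_envelope` from the earlier sketches (the number of queries \(m\) and the envelope parameters are left to the caller):

```python
import numpy as np

def k_olp(A, k, w0, m, eps, eps3, delta):
    """A sketch of the k-OLP Algorithm: project the data onto the SVD
    subspace V, query the subset-smoothing oracle along m random unit
    directions in V, then extract a soft envelope of the answers."""
    rng = np.random.default_rng()
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    Uk = U[:, :k]                              # basis of the subspace V
    A_hat = Uk @ (Uk.T @ A)                    # projected data
    W = []
    for _ in range(m):
        g = Uk @ rng.standard_normal(k)        # random Gaussian direction in V
        u = g / np.linalg.norm(g)
        W.append(subset_smoothing_oracle(A_hat, u, w0))
    return soft_envelope(np.array(W), eps, eps3, delta)
```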
## 9 Open Problems
We now mention some problems that remain open in our work:
1. In the statement of \(\mathsf{RSH}\), the success probability of the desired event is \(O\left(1/k^{O(1/\delta^{2})}\right)\). Can we improve the exponential dependence of the success probability on \(1/\delta\)?
2. Prove an analog (under suitable assumptions) of Theorem 1.5 for other reductions among oracles. Of particular interest is a reduction from an Optimization oracle to a Separation Oracle.
3. Theorem 5.2 on the \(\mathsf{Haus}\) problem returns exponentially many points whose convex hull approximates \(K\). Can this be improved, either via an improvement mentioned in the first open problem above, or, alternatively, by feeding the exponentially many points to the algorithm of [1]?
|
2303.03282 | Learning Object Manipulation With Under-Actuated Impulse Generator
Arrays | For more than half a century, vibratory bowl feeders have been the standard
in automated assembly for singulation, orientation, and manipulation of small
parts. Unfortunately, these feeders are expensive, noisy, and highly
specialized on a single part design bases. We consider an alternative device
and learning control method for singulation, orientation, and manipulation by
means of seven fixed-position variable-energy solenoid impulse actuators
located beneath a semi-rigid part supporting surface. Using computer vision to
provide part pose information, we tested various machine learning (ML)
algorithms to generate a control policy that selects the optimal actuator and
actuation energy. Our manipulation test object is a 6-sided craps-style die.
Using the most suitable ML algorithm, we were able to flip the die to any
desired face 30.4\% of the time with a single impulse, and 51.3\% with two
chosen impulses, versus a random policy succeeding 5.1\% of the time (that is,
a randomly chosen impulse delivered by a randomly chosen solenoid). | Chuizheng Kong, William Yerazunis, Daniel Nikovski | 2023-03-06T16:52:52Z | http://arxiv.org/abs/2303.03282v1 | # Learning Object Manipulation With Under-Actuated Impulse Generator Arrays
###### Abstract
For more than half a century, vibratory bowl feeders have been the standard in automated assembly for singulation, orientation, and manipulation of small parts. Unfortunately, these feeders are expensive, noisy, and highly specialized on a single part design bases. We consider an alternative device and learning control method for singulation, orientation, and manipulation by means of seven fixed-position variable-energy solenoid impulse actuators located beneath a semi-rigid part supporting surface. Using computer vision to provide part pose information, we tested various machine learning (ML) algorithms to generate a control policy that selects the optimal actuator and actuation energy. Our manipulation test object is a 6-sided craps-style die. Using the most suitable ML algorithm, we were able to flip the die to any desired face 30.4% of the time with a single impulse, and 51.3% with two chosen impulses, versus a random policy succeeding 5.1% of the time (that is, a randomly chosen impulse delivered by a randomly chosen solenoid).
Optimal control under uncertainty, stochastic modeling, learning control
## I Introduction
Automated assembly of products makes use of various factory automation devices whose purpose is to put the component parts together in the correct order and position. When typical first-generation industrial robots are used for the actual assembly, they execute the exact same sequence of operations without any variation. The only way this would be successful is if the component parts are presented in the exact same position and orientation, and it is the job of other types of factory automation equipment to make sure that this is the case. A very common and popular such device is the vibratory bowl feeder (VBF) [1] that uses a circular vibratory pattern and a specially designed ramp to bring parts up the ramp in the desired orientation.
VBFs are typically noisy, expensive, and difficult to design, due to their size and complexity. With costs reaching hundreds of thousands of dollars and lead times of three to six months, they make economic sense only for very large production runs, and are a poor match to the increasing trend towards high-mix, low-volume manufacturing. A new generation of industrial robots equipped with cameras has made it possible to grasp parts in a range of orientations, as long as they are sufficiently singulated from one another. This has led to the emergence of simplified part feeders where the parts are deposited not in a bowl, but on a flat surface which vibrates in a fixed pattern, eventually singulating at least some of the parts so that they can be grasped by a camera-equipped robot. This solution reduces drastically the noise, size, and cost of the feeder, as the vibration pattern is generic and no custom design is needed for each part. Still, this solution does not eliminate the problem of having the part often lie on the wrong facet. The robot has some flexibility about how to grasp the part, but at best the robot can approach it from a direction in no more than half of the unit sphere, that is, from above. To deal with this, when the part is facing the wrong way up, the robot would have to pick it up, place it down on a different facet, and regrasp it. There is no generic robot program to do that reliably for an arbitrary part geometry, so a customized program would need to be developed. Moreover, even if such a program were developed, the robot would have to spend time executing it, instead of doing actual assembly, thus increasing the takt time of the assembly operation, which is highly undesirable.
To solve this problem, we propose a novel design for a part feeder that uses a set of solenoids mounted under the surface that the parts have been placed on to impart impulse shocks onto the surface so as to flip the parts to a different facet, if the current one is not suitable for grasping. The device is equipped with a camera whose purpose is twofold: first, to recognize which facet the part is lying on, and second, to register the part's position and orientation in order to decide which solenoid to fire in order to maximize the chance of success in changing the facet. Note that this camera could be the same camera that the robot uses for grasping decisions, so it adds no additional cost to the system, while effectively making the part feeder adaptive.
This paper deals with the problem of deciding how to control the system in order to manipulate the parts in the feeder in an optimal way. The motion of the manipulated object involves complex contact dynamics that vary according to the geometry of the part, and traditional physical modeling would be prohibitively difficult and expensive. For this reason, we adopt the methodology of learning control by learning probabilistic models of the outcomes as a result of applied controls, and using them to choose the optimal
control [2]. Section II describes the design of the mechanism and its instrumentation with sensors, and Section III proposes a learning controller based on learned outcome models. Section IV describes experimental verification with different control objectives, and Section V concludes and proposes some directions for improving the success rate of the device and its controller.
## II Design and Operation of the Experimental System
Our experimental "smart bowl", called Thumper, is a seven-solenoid impulse-drive open-bowl manipulator. It is equipped with an HD webcam running at 30 fps, and has individual control of the solenoid impulse generators, including the time and duration of the applied impulse. The control software takes the video in, processes it with OpenCV to estimate the pose of the manipulated part, applies one of several ML methods to generate a policy for manipulating the object to a desired outcome, and from the generated policy and the observed current object pose, determines and issues impulse commands to impulsively maneuver the object into the desired state.
The overall system can determine the pose of a \(\sim\)25 mm test cube with an accuracy and repeatability of slightly less than one millimeter in X and Y, and about one degree in rotation, as projected onto the horizontal plane. The actual manipulation commands are fairly sparse; the system can only select which one of the seven solenoids to fire, and choose a firing duration. The firing duration is limited by empirical observation to be between 8 and 25 milliseconds, as durations below eight milliseconds are observed to be inadequate to actually move the test object, and impulses over 25 milliseconds have no greater authority in moving a test object. To confine the parts onto the bowl floor, a white 3-D printed hexagonal "corral" is mounted 5mm above the bowl floor. The mechatronic parts of the system can be seen in Figure 1 and the assembled system in Figure 2.
A 1080p HD webcam mounted above the bowl provides \(\sim\)30 frames per second to the computer vision (CV) system that locates the test objects and determines the test object pose. To calibrate the camera, we use a precise checkerboard mounted on a 3-D printed mount that positions directly against the vertical rods in a three-point kinematic arrangement, as shown in Fig. 3.
For naming convenience, in this paper we will use the term "thumper" to indicate the entire apparatus. We will also use the term "thumper" with a number to indicate one of the seven sets of PowerFET, solenoid, and striker heads. The actual layout and numbering of the solenoids are shown in Figure 4.
Fig. 4: The geometrical layout of the solenoid array; the inter-solenoid spacing is 60mm.
Figure 5 shows an FEM analysis of the bowl floor distorted by the static force of an impulse solenoid and Figure 6 shows the top view on Z displacement alone; of note is that the areas of greatest Z-motion and the areas of greatest tilt are not the same, nor are they exact inverses of each other. We surmise that this is actually a useful attribute, because to change the pose of the object, we must impart both a vertical impulse sufficient to get the object into the air, and also a rotational impulse sufficient to cause an adequate rotation of the object before ground contact resumes.
## III Learning Object Manipulation
We are interested in manipulating objects whose geometries allow them to stand stably on one of (relatively few of) their sides. Examples of such objects are hexagonal nuts and bolts, IC chips, etc. The state of such an object would be characterized by the side \(s\) it is on (an integer), as well as its position \((x,y)\) and orientation \(\theta\) (real numbers). The objective is to devise a control policy \(u=\pi(s,x,y,\theta)\) that selects which solenoid \(u_{s}\) to fire and with what duration \(u_{d}\), where \(u=(u_{s},u_{d})\), so as to maximize the probability of moving the object into a desired state. This desired state can be described in terms of one or more of the state components, for example changing the face the part is lying on, or also possibly bringing it to a desired position and orientation. Let the Boolean function \(g(s,x,y,\theta)\), provided by the user, indicate whether state \((s,x,y,\theta)\) is a desired goal state or not.
Modeling the effect of impulse shocks on the manipulated parts from the pertinent physical principles is usually extremely difficult. Modern physics engines implement the relevant physical laws of motion, as well as suitable contact models, and can generate and simulate the equations of motion automatically from geometrical scene descriptions, thus alleviating the need to create a dynamical model manually. However, this still involves a painstaking process of careful geometric calibration of the scene, possibly along with tuning a variety of contact model parameters, such as coefficients of friction, restitution, stiffness, surface roughness, edge and vertex radii, etc. A recent study on a task very similar to ours (rolling a cube on a flat surface) showed that even after very careful calibration, the behavior of the system was largely unpredictable and not very consistent across multiple physics engines [3]. This reflects the inherently chaotic dynamics of such systems, where rolling from facet to facet is associated with bifurcations in the system's dynamics. The bifurcation parameters include many of those of the contact model -- for example, whether a cube will roll to the next facet for a given angular and linear velocity or remain on the previous one would depend on how much its edge will slip on the surface, and that is determined by the friction coefficient; assuming an incorrect value for that coefficient could predict a very different outcome as to how the cube will land. Moreover, physics engines are inherently deterministic, as their purpose is to predict the one physical reality that will happen, whereas the chaotic dynamics of irregular rolling parts might be better modeled by stochastic models for control purposes.
For these reasons, we adopt a learning control approach. There is a great variety of learning control methods in the literature, whose success largely depends on the nature of the control problem being solved. Early work on learning non-prehensile manipulation of parts by means of tilting a tray made use of observed examples of the effect of actions (in that case, the direction of the tray's tilt) to learn a model of these actions, and used this model for planning [4]. This method was based on earlier work on stochastic learning automata (SLA), [5], and discretized the state space of the manipulated part (position on the tray) into rather coarse regions. This matched well the usual assumption of SLA for relatively few discrete states, and made the learning problem tractable, but led to the introduction of additional
Fig. 5: FEM of a single solenoid’s static effect on the support surface; displacements accentuated 10x
Fig. 6: FEM top view showing Z displacement; the blue overlay shows areas where the support surface is moving downward.
Fig. 7: CV recognition of a state \((s,x,y,\theta)\) of the die: face=5, x=-49.801mm, y=7.66mm, angle=36.027 degrees
uncertainty and stochasticity in the model due to partial state observability, on top of the already significant stochasticity of the system due to complex contact and impact dynamics. Although our control problem bears strong similarity to the one in [4], we believe that an approach that does not quantize the state into a few coarse discrete states would be more productive.
Another distinct approach to learning manipulation has been to learn a full state-space model of the system dynamics, using various system identification methods [6]. Whereas this approach has been very productive for linear systems, the complicated non-linear nature of contact dynamics has required the application of advanced methods for learning non-linear and possibly hybrid discrete/continuous dynamical models. Various universal function approximation methods have been used to learn system dynamics, and neural networks in particular have been investigated extensively for a long time [7, 8, 9]. Recent interest in model-based reinforcement learning has renewed research efforts to find good methods for learning world models [10]. The recently proposed Contact Nets have considerably improved the accuracy of predictive models with respect to earlier dynamical models based on standard neural networks [11, 12]. However, learning such models is quite complicated, and might also be overkill for our control problem, where prediction of the entire future trajectory of the manipulated part is not really necessary, and predicting the stable resting state would suffice.
For this reason, we focused on learning predictive models that predict only the resting state of the manipulated part as a result of a particular action (solenoid fired). Similar to SLA, these predictive models are probabilistic, to capture the inherent stochasticity of the complex contact dynamics involved. However, unlike SLA, our models use the full continuous state of the manipulated part, measured as precisely as technically and economically feasible, for the purposes of predicting the resting state. That is, our problem possesses significant aleatoric uncertainty (mostly due to chaotic bifurcation dynamics and contact phenomena), but not necessarily significant epistemic uncertainty, and there is no reason to artificially inject such epistemic uncertainty by quantizing the state; rather, a more productive approach might be to measure the continuous state as accurately as possible, and then employ machine learning methods that can work with the full continuous state.
In particular, we propose to first learn probabilistic models \(p=h(s,x,y,\theta,u)=Pr[g(s^{\prime},x^{\prime},y^{\prime},\theta^{\prime})=True|s,x,y,\theta,u]\) that predict the probability \(p\) of bringing the part into a desired configuration by applying action \(u\) (the solenoid to fire and, possibly, the firing duration) when the part is in configuration \((s,x,y,\theta)\). Here, \((s^{\prime},x^{\prime},y^{\prime},\theta^{\prime})\) is the successor state resulting from applying the impulse shock. For a multi-step decision policy, it might also be advantageous to explicitly learn a model to predict this state, of the form \((s^{\prime},x^{\prime},y^{\prime},\theta^{\prime})=f(s,x,y,\theta,u)\). Such a model is known as a forward model in the field of learning control, and if we can learn a sufficiently accurate model of this kind, we can devise a greedy control policy by choosing the action \(u^{*}\) that maximizes the probability of success: \(u^{*}=\operatorname*{argmax}_{u}h(s,x,y,\theta,u)\) [2, 8].
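As an illustrative sketch of this greedy selection (not the exact controller code), assume a learned model `h(state, u)` returning a success probability; the duration grid here is our assumption purely for illustration, with 25 ms matching the hardware maximum stated later:

```python
import itertools

SOLENOIDS = range(7)                 # seven thumper channels
DURATIONS_MS = (5, 10, 15, 20, 25)   # assumed grid; 25 ms is the stated maximum

def greedy_action(h, state):
    """Return the action u* = (solenoid, duration) maximizing h(state, u)."""
    candidates = itertools.product(SOLENOIDS, DURATIONS_MS)
    return max(candidates, key=lambda u: h(state, u))
```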
Learning of the predictive models proceeds in a self-supervised fashion. During training, the system conducts a relatively large number of experimental trials by firing the solenoids randomly and recording the sequence of states by utilizing the overhead computer vision system. In this sequence, the successor state of a trial becomes the starting state of the next trial. Each trial is represented as the tuple \((s,x,y,\theta,u,s^{\prime},x^{\prime},y^{\prime},\theta^{\prime})\). This training data is used together with the success criterion \(g(s^{\prime},x^{\prime},y^{\prime},\theta^{\prime})\) to learn the predictive model \(h(s,x,y,\theta,u)\) using a suitable supervised machine learning algorithm.
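A minimal sketch of this loop is given below; `fire_solenoid` and `read_state` are hypothetical stand-ins for the solenoid driver and the overhead vision system, not interfaces from our codebase:

```python
import random

def collect_trials(n_trials, fire_solenoid, read_state, max_duration_ms=25):
    """Self-supervised data collection: fire random impulses and log transitions."""
    data = []
    state = read_state()                          # (s, x, y, theta)
    for _ in range(n_trials):
        u = (random.randrange(7), random.uniform(0.0, max_duration_ms))
        fire_solenoid(*u)                         # apply the impulse shock
        next_state = read_state()                 # successor of this trial...
        data.append((*state, u, *next_state))     # ...becomes the next start
        state = next_state
    return data
```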
A small fraction of the poses are "leaners" (where the part is leaning on the corral and not flat on the bowl bottom); we simply declare such poses invalid and fire a random impulse to attempt to achieve an acceptable pose. Although these invalid examples are logged, they are not used as machine learning inputs. Finally, the datasets were desk-checked by a human looking at the saved final video frames for quality assurance purposes. We have found the OpenCV results to be at least 99.8% accurate.
## IV Manipulation of a Six-Sided Die
To understand the capability of the smart bowl, we attempted a control task where the goal was to rotate a standard six-sided die to a desired configuration with the fewest number of impulses from a random starting position. We developed a simple optical character recognizer to identify which of the die's sides was facing up, as shown in Fig. 7. This impulse-based manipulation of the die was first explored as a command-domain question: which solenoid/impulse pairs are actually useful in rolling the die? We designed a sub-task to answer this question.
### _Rolling the Die to any Other Face_
In this sub-task, the goal was to roll the die to any face other than the one it was currently on, easily recognizable by the vision system as a change of the number on the topmost face. Accomplishing this task in a minimal number of attempts is equivalent to maximizing the probability of success in one attempt. When starting in state \((s,x,y,\theta)\) and ending up in state \((s^{\prime},x^{\prime},y^{\prime},\theta^{\prime})\), the success criterion is \(g(s^{\prime},x^{\prime},y^{\prime},\theta^{\prime})=True\) iff \(s^{\prime}\neq s\).
We started by running an exploratory experiment with 60,000 total firings of the solenoids (requiring about 18 hours of unattended self-supervised operation). Both the solenoid number and the firing duration (impulse of the solenoid) were chosen randomly, with the firing duration limited to a maximum of 25 milliseconds. The results are shown as a histogram in Figure 8.
As the choice of solenoid was random, the total number of hits from each solenoid was roughly the same. However, thumper 2 (the center solenoid) had a much lower chance of succeeding compared with the other, peripheral solenoids. At first, we believed this was a mechanical or electrical defect on that thumper channel, so we physically swapped the center solenoid with a peripheral solenoid, but the low rotation rate remained in the center position.
Aside from the center solenoid, randomly firing any of the peripheral solenoids with a random impulse gives about a 30% chance of rolling the die to another face. This means that there are regions where certain solenoids have little to no authority over the rotation of the die, probably because the solenoid can provide some lift, but not enough rotational "kick" to rotate the die to the next face.
We then attempted actual control of the die: finding the optimal solenoid and an effective impulse to provide enough lift to roll the die to any other face. To do so, we needed to address the mixed nature of the solenoid impulse mechanism.
While the choice of which solenoid to fire is clearly categorical, the duration of the impulse on each solenoid is continuous (at least as viewed on a millisecond scale). This requires a control policy that can yield a simultaneous multi-class classification (the solenoid number) and a regression-style continuous-valued result (the firing duration). Additionally, our platform is under-actuated, and not all target faces or goal states are reachable from every possible initial location. Given the rate at which sample data is obtainable via the CV system (\(\sim\)1 Hz), we considered data-intensive solutions in which seven classifiers (one per solenoid) are trained.
Several informal tests were done with a k-nearest neighbors (kNN) classifier [13], with k varying from 1 to 24, but the results were not encouraging: the associated areas under the receiver-operating characteristic (AUROC) curves were on the order of 0.7 at best. On closer inspection, it was found that the sample dataset was strongly biased toward having the die near the edge of the corral, probably due to the die hitting the corral wall and losing energy in the partially inelastic collision. This effect (akin to thermally induced density gradients in a gas) caused significant depletion of the sample population in the bowl center. In these low-density regions, the diameter of the k-nearest neighborhood expanded to 10-20 mm. As we found in further testing using a kinematic jig to reproducibly place the die in a controlled location, the regions of the bowl where movements are correlated and consistent are often smaller than 10 mm. If those regions happen to be low density as well, then the effective area of the kNN becomes much larger than the correlation area and the kNN policy can behave no better than random chance.
We found significantly better results with a radius-neighborhood (rN) classifier. The rN classifier includes all points within a given radius \(r\) in the voting set, rather than just the \(k\) nearest points as in kNN; voting and final selection of which solenoid to fire proceed similarly to kNN, yielding the categorical output that chooses which solenoid to actuate. The duration of the actuation is then chosen to be the mean of the set of successful activation impulses on that solenoid.
Like kNN, rN relies on a distance metric to determine whether a sample \((x,y,\theta)\) is close enough to a prior observation \((x_{0},y_{0},\theta_{0})\). As \(\theta\) has different units than \(x\) and \(y\), we make use of a single distance metric that scalarizes the positional and angular distances as follows:
\[D[(x_{0},y_{0},\theta_{0}),(x,y,\theta)]=\left\|(x_{0}-x,\;y_{0}-y)\right\|_{2}+w\,|\theta_{0}-\theta|\]
where \(w\) is a tuned conversion factor.
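In code, the metric and the corresponding neighborhood query can be sketched as follows (plain NumPy; scikit-learn's RadiusNeighborsClassifier with a callable metric would be an equivalent route). The default values \(r=5\) and \(w=5\) are the ones selected by the cross-validation described next:

```python
import numpy as np

def scalarized_distance(p0, p, w=5.0):
    """D = Euclidean distance in (x, y) plus w * |dtheta|, with w in mm/deg."""
    (x0, y0, th0), (x, y, th) = p0, p
    return np.hypot(x0 - x, y0 - y) + w * abs(th0 - th)

def radius_neighbors(query, samples, r=5.0, w=5.0):
    """Indices of all rows of `samples` (columns x, y, theta) within r of query."""
    d = (np.hypot(samples[:, 0] - query[0], samples[:, 1] - query[1])
         + w * np.abs(samples[:, 2] - query[2]))
    return np.nonzero(d <= r)[0]
```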
Fig. 8: Die face changing (success) counts and probabilities for each solenoid with a random policy; the average probability of changing the die face with the random policy is 0.260
Fig. 9: ROC curves for determining \(r\) (radius of neighborhood) of the rN classifier; increasing the radius of the neighborhood improves performance, but a radius beyond 5 mm yields little if any improvement (using an angle conversion factor of \(w=5\) mm/deg)
To determine an effective radius \(r\) and a suitable conversion factor \(w\) for the rN classifier, we used 10-fold cross-validation on each of the seven rN classifiers. For each solenoid and its underlying classifier, we took test data from the train-test split and swept a threshold \(a\) from 1.0 to 0.0. For example, when \(a=0.7\), for any query in the test split, 70% of its neighbors (from the train split) within \(r\) have to meet the success criterion of a changed face, \(g(s^{\prime},x^{\prime},y^{\prime},\theta^{\prime})=True\) iff \(s^{\prime}\neq s\), for that query to be predicted as successful. Predictions of all queries were then compared with the ground-truth labels, and an entry of true/false positive rate (TPR/FPR) was plotted. The resulting ROC curves for one of the classifiers are shown in Figure 9. The values \(r=5\) and \(w=5\) were agreed upon by all seven classifiers from the corresponding AUROC values.
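The sweep over \(a\) is exactly the threshold sweep of an ROC analysis, with each query scored by the fraction of its in-radius neighbors that succeeded. A sketch, reusing the `radius_neighbors` helper above (array names are illustrative):

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def rn_scores(test_poses, train_poses, train_success, r=5.0, w=5.0):
    """Score each test query by the success fraction of its in-radius neighbors."""
    scores = np.zeros(len(test_poses))
    for i, q in enumerate(test_poses):
        idx = radius_neighbors(q, train_poses, r=r, w=w)
        scores[i] = train_success[idx].mean() if len(idx) else 0.0
    return scores

def roc_for_radius(test_poses, test_success, train_poses, train_success, r, w=5.0):
    scores = rn_scores(test_poses, train_poses, train_success, r=r, w=w)
    fpr, tpr, _ = roc_curve(test_success, scores)   # sweeps the threshold a
    return fpr, tpr, auc(fpr, tpr)
```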
We then performed rN multi-class classification using these parameter settings and 60,000 verified samples (about 8500 samples per classifier). Given an arbitrary state of the die \((s,x,y,\theta)\), the controller applies all 7 classifiers to this state and then fires the solenoid whose corresponding classifier gives the best probability of rolling the die to a different face. The result of this task is shown in Figure 10, with an average single-shot success probability of 0.753, versus 0.260 for the random policy: an improvement of almost three times.
### _Controlling the Die to a Specific Target Face_
The next, more complicated control problem was to learn how to roll the die to a chosen face different from the one it was currently on, so the direction of rolling became significant.
The task here was to achieve a series of 2,000 randomly chosen target values for the upper die face (with no repeated faces) allowing up to 10 impulses to achieve the desired die pose. This emulates the challenge of feeding properly oriented parts to a manufacturing robot.
The first policy tested was the purely random-choice policy, which served as the experiment's control group. This resulted in an overall 5.1% success rate for rotating the die to a chosen face. The per-thumper activations and success rates are shown in Figure 11.
As before, the density of the die's initial positions strongly favors the corral wall and avoids the center. Since this is the random policy (and the experiment's control group), we expect to see a uniform distribution of initial positions versus thumper, and we are correct in that (as seen in Figure 12).
We are now in a position to consider a data-driven approach to approximating \(h(s,x,y,\theta,u)\), the function that predicts the probability \(p\) of bringing the part into a desired configuration given the state \((s,x,y,\theta)\) and an impulse \(u\); we have 30,000 ground-truth data points for use as the base data for the rN policy.
Fig. 10: Die face changing (success) counts and rates on each solenoid using the rN classifier; average is 75.3%.
Fig. 11: Number of failures (red) and successes (blue) in achieving a targeted goal face for each solenoid as controlled by a random policy. The average success probability is 5.1% averaged over all thumper channels.
Fig. 12: Random policy: die positions and the solenoid fired.
For each of the seven solenoids, we formed a list of all \((s,x,y,\theta)\) examples within the \(r=5\) radius. Based on these training examples, we calculate the success probability for each of the seven solenoids and select the one with the highest success probability. To determine the impulse duration, we took the mean of the successful impulses for that solenoid. In the case of a tie between two solenoids, we chose one at random from the tied candidates. An example of the decision neighborhood for a die at [23, -49, 196] is shown in Figure 14.
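The following sketch captures this decision rule, reusing the `radius_neighbors` helper above; the per-solenoid `data` layout is an assumption for illustration:

```python
import random
import numpy as np

def rn_policy(query, data, r=5.0, w=5.0):
    """Pick the solenoid with the highest in-radius success probability.

    `data[k]` holds, for solenoid k: "poses" (N x 3), boolean "success" (N,)
    and "duration_ms" (N,). Returns (solenoid, duration_ms).
    """
    probs, durations = {}, {}
    for k in range(7):
        idx = radius_neighbors(query, data[k]["poses"], r=r, w=w)
        if len(idx) == 0:
            continue                               # no experience near this pose
        success = data[k]["success"][idx]
        probs[k] = success.mean()
        durations[k] = (data[k]["duration_ms"][idx[success]].mean()
                        if success.any() else 0.0)
    if not probs:
        return None                                # outside all neighborhoods
    best_p = max(probs.values())
    best = random.choice([k for k, p in probs.items() if p == best_p])  # tie-break
    return best, durations[best]
```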
As we are only choosing the best option without looking more than one step ahead, we characterize this as a greedy, 1-step-horizon approach to solving the under-actuated control problem. Using this policy, we ran the same 2,000-random-goals experiment. Figure 15 shows a scattergram of the XY positions of the die, with the color of each dot indicating the particular thumper chosen by the rN policy as having the best chance of rotating the die to another face.
The final results for this policy are shown in Figure 16, with the rN policy achieving the chosen goal state 30.6% of the time on the first impulse, beating the benchmark random policy by a factor of \(\sim\)6 for single impulses, and succeeding 97.5% of the time in 10 or fewer tries, versus 43.0% in 10 or fewer tries for the benchmark random policy.
### _Controlling the Die with a Two-Step-Horizon Model Predictive Controller (MPC)_
Fig. 13: Training data used in the rN model for classification; inset shows an example of a radius \(r\) = 5mm neighborhood of a die at [23, -49, 196] that will be evaluated.
Fig. 14: Sample decision list map of the die from Figure 13 at [23, -49, 196]; the neighbors within the radius \(r\) and their rolling directions are shown.
Fig. 15: Die positions and the solenoid fired by the learned rN policy seeking a particular target face. The Voronoi-like segments are impure because the target face varies.
Fig. 16: Number of impulses fired and successes, sorted by solenoid; the overall average success rate on the first impulse is 30.6%. Note the low density on the center solenoid (#2) is correctly accommodated by the rN policy.
We then considered longer-horizon strategies in which the die is subjected to a series of impulses, with re-evaluation after each impulse, repeating until the die is in the goal state or ten trials have occurred. This policy is equivalent to a discrete-time, two-step-horizon MPC controller in which the dynamics model itself is learned from examples. This was done by extracting the \(r=5\) mm neighborhood to obtain a set of likely poses after the first impulse, and performing the greedy one-step-horizon algorithm using each of the first-impulse poses as the initial position. The probabilities of these 2-impulse final states were calculated as in the one-step-horizon greedy policy, and the weighted sums of these probabilities were rolled back against each of the first-impulse probabilities to calculate success probabilities for the first impulse looking up to two impulses ahead. As in typical MPC methods, the action to be taken in the next time step repeats this calculation from the start, and actions postulated for the second and further steps are in no sense "locked in" by the control policy.
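A sketch of one plausible reading of this roll-back follows (illustrative, not our exact implementation): a candidate first impulse is scored by the empirical distribution of its in-radius outcomes, crediting failed outcomes with the best one-step success probability achievable from the pose they land in. The `next_poses` field is an assumed extension of the `data` layout sketched earlier.

```python
def two_step_value(query, u, data, one_step_value, r=5.0, w=5.0):
    """Expected success within two impulses if solenoid u is fired first."""
    idx = radius_neighbors(query, data[u]["poses"], r=r, w=w)
    if len(idx) == 0:
        return 0.0
    total = 0.0
    for j in idx:
        if data[u]["success"][j]:
            total += 1.0                                       # goal on impulse 1
        else:
            total += one_step_value(data[u]["next_poses"][j])  # best 2nd impulse
    return total / len(idx)

def mpc_action(query, data, one_step_value):
    """Greedy over the two-step values; recomputed from scratch at every step."""
    return max(range(7),
               key=lambda u: two_step_value(query, u, data, one_step_value))
```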
Figure 17 summarizes the results of all three policies (random, greedy, and MPC) for the targeted-face task, as well as a purely theoretical ideally thrown die policy.
The horizon-2 MPC controller achieved success in one try 29.5% of the time, versus 30.0% for the pure fixed-neighborhood greedy strategy; this small difference is insignificant compared to the random policy's 5.1% success rate. Similarly, the greedy horizon-1 policy achieved a second-shot hit rate of 52.0%, versus 50.2% for the horizon-2 MPC controller, also an insignificant difference when compared to the random policy's two-shot success rate of \(\sim\)10.8%.
Both the greedy horizon-1 and MPC horizon-2 policies were also clearly superior to an idealized random throw of the die (theoretical success rate of \(1/6=0.1667\)). To be clear, the Thumper mechanism cannot execute random throws the way a human could, and it would be fairly difficult and expensive to generate an ideal die throw with other mechatronic components. For this reason, we do not consider the ideally thrown method a viable real-world alternative, but even if it were feasible, both the greedy and MPC controllers are superior.
## V Discussion and Future Work
The experiments above show that impulse-based manipulation can be effective for object orientation when driven by an ML controller, even when treating the object and impulse manipulator as complete black boxes and with zero modeling of the actual contact physics. That is, it _can_ be effective, not necessarily _will_ be effective, given that the kNN classifier performed near the \(\sim\)30% random-chance level while the rN classifier did far better. Thus, the main contribution of this paper is the identification of a relatively less-known variant of the kNN classifier, the rN method, as a very effective component of a learning controller for part manipulation. The six-fold increase in the success rate of the controller compared with a random firing policy, and the associated six-fold decrease in the takt time of the system, combined with the minimal need for manual supervision of the method (all training is self-supervised), could result in a very fast and cost-effective method for part manipulation for robotic assembly.
Future extensions of this work that we are considering include multiple objects in the bowl simultaneously, multiple solenoids being fired simultaneously or with inter-firing delays on the order of the flexure propagation time of the bowl bottom surface, the testing of other shapes, and the integration of the Thumper system with an industrial robot. Although our current best results were from a greedy controller, improved predictive models of system dynamics might lead to superior success rates of MPC schemes with longer horizons, too.
|
2304.08370 | Mapping the distribution of OB stars and associations in Auriga | OB associations are important probes of recent star formation and Galactic
structure. In this study, we focus on the Auriga constellation, an important
region of star formation due to its numerous young stars, star-forming regions
and open clusters. We show using \textit{Gaia} data that its two previously
documented OB associations, Aur OB1 and OB2, are too extended in proper motion
and distance to be genuine associations, encouraging us to revisit the census
of OB associations in Auriga with modern techniques. We identify 5617 candidate
OB stars across the region using photometry, astrometry and our SED fitting
code, grouping these into 5 high-confidence OB associations using HDBSCAN.
Three of these are replacements to the historical pair of associations - Aur
OB2 is divided between a foreground and a background association - while the
other two associations are completely new. We connect these OB associations to
the surrounding open clusters and star-forming regions, analyse them physically
and kinematically, constraining their ages through a combination of 3D
kinematic traceback, the position of their members in the HR diagram and their
connection to clusters of known age. Four of these OB associations are
expanding, with kinematic ages up to a few tens of Myr. Finally, we identify an
age gradient in the region spanning several associations that coincides with
the motion of the Perseus spiral arm over the last $\sim$20 Myr across the
field of view. | Alexis L. Quintana, Nicholas J. Wright, Robin D. Jeffries | 2023-04-17T15:32:48Z | http://arxiv.org/abs/2304.08370v1 | # Mapping the distribution of OB stars and associations in Auriga
###### Abstract
OB associations are important probes of recent star formation and Galactic structure. In this study, we focus on the Auriga constellation, an important region of star formation due to its numerous young stars, star-forming regions and open clusters. We show using _Gaia_ data that its two previously documented OB associations, Aur OB1 and OB2, are too extended in proper motion and distance to be genuine associations, encouraging us to revisit the census of OB associations in Auriga with modern techniques. We identify 5617 candidate OB stars across the region using photometry, astrometry and our SED fitting code, grouping these into 5 high-confidence OB associations using HDBSCAN. Three of these are replacements to the historical pair of associations - Aur OB2 is divided between a foreground and a background association - while the other two associations are completely new. We connect these OB associations to the surrounding open clusters and star-forming regions, analyse them physically and kinematically, constraining their ages through a combination of 3D kinematic traceback, the position of their members in the HR diagram and their connection to clusters of known age. Four of these OB associations are expanding, with kinematic ages up to a few tens of Myr. Finally, we identify an age gradient in the region spanning several associations that coincides with the motion of the Perseus spiral arm over the last \(\sim\)20 Myr across the field of view.
keywords: stars: kinematics and dynamics - stars: early-type - stars: massive - stars: distances - Galaxy: structure - open clusters and associations: individual: Aur OB1, Aur OB2, Alicante 11, Alicante 12, COIN-Gaia_16, Gulliver 8, Kronberger 1, NGC 1778, NGC 1893, NGC 1912, NGC 1960, Stock 8.
## 1 Introduction
First defined by Ambartsumian (1947), OB associations are gravitationally unbound groups of young stars containing bright O- and B-type stars. They have sizes from a few tens of parsecs to a few hundred parsecs and total stellar mass of one thousand to several tens of thousands of solar masses (Wright, 2020). They are valuable tracers of the distribution of young stars, and have been used for such purposes for decades (see e.g. Morgan et al., 1953 and Humphreys, 1978). Most of the known OB associations are coincident with the Galactic spiral arms (Wright, 2020; Wright et al., 2022).
Bok (1934) pointed out that low-density systems were prone to disruption by tidal forces from the Galaxy; therefore, Ambartsumian (1947) and Blaauw (1964) assumed that OB associations should be expanding. In the _clustered_ model of star formation from Lada & Lada (2003), massive stars forming in embedded clusters disperse their parent molecular cloud by feedback, a process known as _residual gas expulsion_ (Hills, 1980; Kroupa et al., 2001). With the majority of the mass of the system in the form of gas, embedded clusters unable to survive as gravitationally bound open clusters will expand and disperse as unbound OB associations. The _hierarchical_ model of star formation, on the other hand, assumes that stars form over a range of densities, quickly decoupling from the gas in which they form. High-density clusters may survive as long-lived open clusters, while low-density groups will be gravitationally unbound from birth (Kruijssen, 2012). In such a model, OB associations may form gravitationally unbound and not require residual gas expulsion. Although the reality probably lies between these two cases (Wright, 2020), recent data and modern techniques can provide the key to unveiling the origins of OB associations.
Expansion signatures from OB associations could indeed help to support the _clustered_ model. Attempts to detect expansion in OB associations have had varied results, with early studies finding very little evidence for expansion (see e.g. Wright et al., 2016; Wright & Mamajek, 2018; Ward & Kruijssen, 2018), while later studies had more success (see e.g. Kounkel et al., 2018; Cantat-Gaudin et al., 2019; Armstrong et al., 2020; Quintana & Wright, 2021). Failures to detect clear expansion signatures in OB associations have occurred mostly in systems with historically-defined membership (based on the position on the sky), while more recent studies that defined OB associations and their membership using spatial and kinematic information have proven more successful.
The Auriga constellation contains two OB associations identified and catalogued by Roberts (1972) and Humphreys (1978), as well as numerous young stars (Gyulbudaghian, 2011; Pandey et al., 2020), star-forming regions (Paladini et al., 2003; Mellinger, 2008; Anderson et al., 2015) and open clusters (Cantat-Gaudin & Anders, 2020). The Auriga constellation should intercept both the local arm and the Perseus spiral arm, though few studies have focused on Galactic longitudes between 140\({}^{\circ}\) and 180\({}^{\circ}\) (Marco & Negueruela, 2016). Negueruela & Marco (2003) suggested the Auriga region is a less populated part of these spiral arms.
Aur OB1 is located at a distance of 1.06 kpc (Melnik and Dambis, 2020). It includes the open cluster NGC 1960 and the dark cloud LDN 1525 located at 1.2-1.3 kpc (Straizys et al., 2010), and is undergoing intense star formation (Panja et al., 2021).
Aur OB2 is located at a distance of 2.42 kpc (Melnik and Dambis, 2020). Its main features are the open clusters Stock 8, Alicante 11 and Alicante 12 (Marco and Negueruela, 2016). It was first thought that Aur OB2 extended between Stock 8 and NGC 1893, but recent studies have placed them at different distances, suggesting they may not all be part of the same system (Negueruela and Marco, 2003; Marco and Negueruela, 2016; Kuhn et al., 2019).
The paper is structured as follows. In Section 2 we revisit the historical Auriga OB associations with modern data and techniques. In Section 3 we outline our process for identifying OB stars, before detailing the clustering process used to identify new OB associations. In Section 4 we characterize these associations both physically and kinematically. In Section 5 we discuss the results in a broader context and we provide conclusions in Section 6.
## 2 The Auriga Region
In this section we explore the existing OB associations in Auriga with modern photometry and astrometry from _Gaia_ EDR3 (Gaia Collaboration et al., 2021), as well as any known open clusters and star-forming regions in their vicinity.
### Historical OB associations
We focus our study on a 150 deg\({}^{2}\) area in the Auriga constellation, with \(l=\) [165\({}^{\circ}\), 180\({}^{\circ}\)] and \(b=\) [-5\({}^{\circ}\), 5\({}^{\circ}\)], as shown in Fig. 1. This area encompasses two historical associations, Aur OB1 and OB2. Their members have been listed in several catalogues (e.g. Humphreys, 1978; Melnik and Dambis, 2020). From Melnik and Dambis (2020) there are 36 stars in Aur OB1, 20 in Aur OB2 and 10 in NGC 1893, although only 6 of the NGC 1893 stars have equatorial coordinates listed in SIMBAD that allow a match with _Gaia_ EDR3. NGC 1893 is usually considered part of Aur OB2 (see e.g. Marco and Negueruela, 2016; Lim et al., 2018), and we follow that convention here, increasing the number of Aur OB2 members to 26 stars.
We match these 62 sources with _Gaia_ EDR3 (Gaia Collaboration et al., 2021) using a radius of 1" and find a counterpart for all the stars. Following the criterion from Lindegren et al. (2021), we only use the astrometry for the 48 stars whose renormalised unit weight error (RUWE) is \(<1.4\). Distances were taken from Bailer-Jones et al. (2021). The distribution of these stars in position, proper motions and distance is shown in Figs. 1, 2 and 3.
Figures 1 and 2 show that the existing members of the two associations do not have a strong level of kinematic coherence: their proper motions are each spread over 2-3 mas yr\({}^{-1}\), or 10-15 km s\({}^{-1}\) at 1 kpc, much larger than one would expect for an OB association (Wright, 2020). Figure 3 shows that the Aur OB1 members are spread over distances from 0.6 to \(>\)2 kpc, much larger than the parallax uncertainties (typically 0.03 mas). A similar issue is apparent for Aur OB2: its members are spread from 1.7 to over 4 kpc, albeit with a core group of stars around 2 kpc, though this does not match the distance to NGC 1893 of 2.9 kpc (Mel'Nik and Dambis, 2009; Melnik and Dambis, 2020). The presence of stars at distances of 3-4 kpc within these associations was previously noted by Marco and Negueruela (2016). OB associations have historically been defined through their on-sky spatial distribution and apparent magnitudes, with their members assumed to be within a narrow range of distances (see e.g. Humphreys, 1978). It is clear that these two associations are not real OB associations; they neither exhibit the necessary kinematic coherence, nor are they located at a small enough range of distances to have been born together.
### Open clusters and star-forming regions
To revisit our census of the OB associations in Auriga, we start by collating information on the known open clusters (OCs) and star-forming regions in this area. Several tens of open clusters have been identified in the region (Cantat-Gaudin and Anders, 2020). In particular, five of them are likely related to the existing OB associations, following the discussions in Straizys et al. (2010), Marco and Negueruela (2016) and Pandey et al. (2020). The properties of these OCs are summarized in Table 1, where we have also included other OCs in the region whose relevance will be shown in Section 3.7. The clusters are also shown in Fig. 1 alongside the OB associations.
Footnote 1: Alicante 11 and 12 are not listed in Cantat-Gaudin and Anders (2020). However, Marco and Negueruela (2016) calculated a common distance of \(\sim\) 2.8 kpc for these two clusters along with Stock 8, albeit overestimated compared with other estimates (Jose et al., 2008; Mel’Nik and Dambis, 2009), so we assigned them the same distance as Stock 8 in Cantat-Gaudin and Anders (2020).
In this area are also found multiple Hii regions (Paladini et al., 2003; Anderson et al., 2015), and several star-forming regions including Sh 2-235 and AFGL 5144 (Mellinger, 2008). They are shown in Fig. 1.
The most prominent feature of Fig. 1 is the centre of the region at \(l\sim 173^{\circ}\) and \(b\sim 0^{\circ}\). This is where the bulk of Aur OB2 members are located (Melnik and Dambis, 2020), along with the three open clusters Stock 8, Alicante 11 and 12 (see Table 1), and the Hii regions Sh 2-234 and 174.0+00.3. The star-forming region AFGL 5144 lies close to this area, at \(l=173.7^{\circ}\) and \(b=0.3^{\circ}\)(Mellinger, 2008), consistent with the young age of the OCs (Marco and Negueruela, 2016).
\begin{table}
\begin{tabular}{l l l l l l}
\hline
OC & Assoc. & \(l\) (\({}^{\circ}\)) & \(b\) (\({}^{\circ}\)) & \(d\) (kpc) & Age (Myr) \\
\hline
NGC 1960 & Aur OB1 & 174.542 & 1.075 & \(1.16\pm 0.01\) & 18-26 \\
Stock 8 & Aur OB2 & 173.316 & -0.223 & \(2.11\pm 0.01\) & 4-6 \\
Alicante 11 & Aur OB2 & 173.046 & -0.119 & \(2.11\pm 0.01\) & 4-6 \\
Alicante 12 & Aur OB2 & 173.107 & 0.046 & \(2.11\pm 0.01\) & 4-6 \\
NGC 1893 & Aur OB2 & 173.577 & -1.634 & \(3.37\pm 0.05\) & 1-5 \\
Gulliver 8 & - & 173.213 & -1.549 & \(1.11\pm 0.01\) & 22-39 \\
NGC 1912 & - & 172.270 & 0.681 & \(1.10\pm 0.01\) & 250-375 \\
NGC 1778 & - & 168.914 & 2.007 & \(1.64\pm 0.01\) & 150-282 \\
CG16 & - & 170.038 & 0.270 & \(1.53^{+0.02}_{-0.01}\) & 26 \\
K1 & - & 173.106 & 0.049 & \(2.12\pm 0.06\) & 6-8 \\
\hline
\end{tabular}
\end{table}
Table 1: Properties of the open clusters in Auriga thought to be related to the OB associations. Galactic coordinates and distances taken from Cantat-Gaudin and Anders (2020). References for the ages are: Jeffries et al. (2013) and Joshi et al. (2020) for NGC 1960; Marco and Negueruela (2016) for Stock 8, Alicante 11 and 12; Tapia et al. (1991), Marco et al. (2001), Sharma et al. (2007) and Lim et al. (2014) for NGC 1893; Subramaniam and Sagar (1999) and Dias et al. (2021) for Gulliver 8; Jacobson et al. (2002), Pandey et al. (2007), Kharchenko et al. (2005) and Dib et al. (2018) for NGC 1912; Barton and Hassan (1973), Kharchenko et al. (2013), Dib et al. (2018) and Cantat-Gaudin et al. (2020) for NGC 1778; and Cantat-Gaudin et al. (2020) for COIN-Gaia_16 (here abbreviated CG16) and Kronberger 1 (here abbreviated K1).
The star-forming region Sh 2-235 is located at \(l=173.7^{\circ}\) and \(b=2.7^{\circ}\)(Mellinger, 2008), close to the Hii regions G173.710+02.699 and G173.63+02.664, and where the region of highest extinction can be found (see Fig. 1).
## 3 Identification of new OB associations
In this section we summarize the method used to identify OB stars and associations. The method for identifying OB stars is very similar to that of Quintana and Wright (2021), which we briefly summarise here, highlighting any changes.
### Data and selection process
We utilise astrometry and optical photometry from _Gaia_ EDR3 (Gaia Collaboration et al., 2021), optical photometry from IGAPS (Drew et al., 2005; Monguio et al., 2020), and near-IR photometry from 2MASS (Cutri et al., 2003) and UKIDSS (Lucas et al., 2008). We require _Gaia_ astrometry to have \(RUWE<1.4\) and \(|\varpi/\sigma_{\varpi}|>2\), where \(\varpi\) is the observed _Gaia_ parallax and \(\sigma_{\varpi}\) its random uncertainty. We limit our sample to stars with BP-RP < 2.5, a colour limit equivalent to a star with \(\log(T_{\rm eff})=4\) and \(A_{V}=6\), which is about the maximum extinction level in this region at a distance of 3 kpc (Green et al., 2019). The sources were filtered to have \(d<3.5\) kpc, using the distances from Bailer-Jones et al. (2021).
Figure 1: Spatial distribution in Galactic coordinates of the historical members of Aur OB1 and OB2. For the 48 stars with \(RUWE<1.4\), their Galactic proper motions are represented as vectors (scale length indicated in the top left), while the stars without reliable proper motions are shown as points. We also show open clusters as empty squares (Cantat-Gaudin and Anders, 2020), and Hii and star-forming regions as empty circles (Paladini et al., 2003; Mellinger, 2008; Anderson et al., 2015). The background extinction map shows the integrated visual extinction at 2 kpc from Green et al. (2019).
Figure 2: Proper motion distribution in Galactic coordinates for the historical members of Aur OB1 and OB2, with error bars, for stars with \(RUWE<1.4\).
_Gaia_ photometry was required to have \(|C^{*}|<3\,\sigma_{C^{*}}\), where \(C^{*}\) is the corrected \(G_{\rm BP}\) and \(G_{\rm RP}\) flux excess factor and \(\sigma_{C^{*}}\) its scatter, modelled as a power-law of the \(G\)-band magnitude, with the cut made at the chosen \(3\sigma\) level (Riello et al., 2021). 2MASS photometry was required to have a good quality flag (A, B, C or D, see Cutri et al., 2003) whilst UKIDSS photometry had to fulfill \(ErrBits<256\). For UKIDSS we also exclude photometry with \(J<13.25\), \(H<12.75\) or \(K<12\), below which limits the photometry risks saturation (Lucas et al., 2008). IGAPS photometry was filtered by excluding saturated photometric bands and those whose associated class did not indicate a star or probable star (Monguio et al., 2020). We then require at least one valid blue photometric band (either \(g\), \(G_{\rm BP}\) or \(G\)) and a valid near-infrared photometric band.
To remove faint (non-OB) stars, we then apply an absolute magnitude cut, requiring \(M_{K}<1.07\) (if K-band photometry is available), \(M_{H}<1.10\) (otherwise if H-band photometry is available), or \(M_{J}<1.07\) (if only J-band photometry is available). These are the absolute magnitudes of main-sequence A0 stars (Pecaut and Mamajek, 2013).
Finally, the near-IR colour-colour diagram was used to remove background giants, as described in Quintana and Wright (2021).
This led to a working sample of 29,124 sources on which we applied our SED fitting process.
### SED fitting
To calculate the physical properties of the sources, in order to identify OB stars, an SED fitting process was applied, based on the same method in Quintana and Wright (2021) with a few improvements, summarised here:
* We seek to estimate the model parameters log(Mass), Fr(Age), \(d\) and ln(\(f\)) using the _emcee_ package in Python (Foreman-Mackey et al., 2013). Fr(Age) is the fractional age (i.e. the age of the star divided by the maximum age at its initial stellar mass) and ln(\(f\)) is a scaling uncertainty to help the convergence of \(\chi^{2}\) (Foreman-Mackey et al., 2013; Casey, 2016). log(\(T_{\rm eff}\)) and log(\(L/L_{\odot}\)) are indirect products of this process, and the extinction \(A_{V}\) was derived using the 3D extinction map from Green et al. (2019) named _Bayestar_. The priors for these parameters are: \[\ln(P(\theta))=\begin{cases}\ln\left(\frac{1}{2L^{3}}\,d^{2}\,\exp\left(-\frac{d}{L}\right)\right)&\text{if }\begin{cases}-1.0\leq\log({\rm Mass})\leq 2.0\\ 0.0\leq{\rm Fr}({\rm Age})\leq 1.0\\ 0.0\leq d\leq 5000.0\,{\rm pc}\\ -10.0\leq\ln(f)\leq 1.0\end{cases}\\ -\infty&\text{otherwise}\end{cases}\tag{1}\] with the prior on distance from Bailer-Jones (2015) including a scale length \(L\) set to 1.35 kpc (a minimal code sketch of this prior is given after this list).
* Our model SEDs use stellar spectral models (Werner and Dreizler, 1999; Rauch and Deetjen, 2003; Werner et al., 2003; Coelho, 2014), with a fixed value of \(\log g=4\), and evolutionary models from Ekstrom et al. (2012). Model spectra were reddened using the Fitzpatrick et al. (2019) extinction laws and convolved with the relevant filter profiles to derive synthetic magnitudes.
* Systematic uncertainties were added to the measured photometric uncertainties. This is equal to 0.03 mag for \(g\), \(r\) and \(i\)(Barentsen et al., 2014; Drew et al., 2014), 0.01 mag for \(G\), \(G_{\rm RP}\), \(G_{\rm BP}\)(Riello et al., 2021), 0.03 mag for \(J_{\rm 2M}\), 0.02 for \(H_{\rm 2M}\) and \(K_{\rm 2M}\)(Skrutskie et al., 2006), and 0.03 mag for \(J_{\rm U}\), \(H_{\rm U}\), \(K_{\rm U}\)(Hodgkin et al., 2009).
* We choose the median value of the posterior distribution. The posterior distribution was explored using a Markov Chain Monte Carlo simulation, utilising 1000 walkers, 200 burn-in iterations and 200 iterations. If the ln(\(f\)) value was greater than 4, or the difference between the 95th and 5th percentiles of log(\(T_{\rm eff}\)) was greater than 0.5 (indicating a lack of convergence), we ran 1000 supplementary burn-in iterations and 200 supplementary iterations, repeating until convergence was achieved or until 6000 supplementary burn-in iterations had been run.
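For reference, a minimal sketch of the log-prior of Eq. (1), in the form it would be passed to _emcee_ (our production code differs in detail), is:

```python
import numpy as np

L = 1350.0  # scale length of the Bailer-Jones (2015) distance prior, in pc

def log_prior(theta):
    """ln P(theta) for theta = (log_mass, fr_age, d, ln_f), following Eq. (1)."""
    log_mass, fr_age, d, ln_f = theta
    if (-1.0 <= log_mass <= 2.0 and 0.0 <= fr_age <= 1.0
            and 0.0 < d <= 5000.0 and -10.0 <= ln_f <= 1.0):
        # Exponentially decreasing space density prior on distance
        return np.log(d**2 * np.exp(-d / L) / (2.0 * L**3))
    return -np.inf
```

In _emcee_, this function would be summed with the photometric log-likelihood to form the log-posterior sampled by the EnsembleSampler.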
In addition, the extinctions from _Gaia_ DR3 (Creevey et al., 2022; Delchambre et al., 2022) reveal that the _Bayestar_ extinctions tend to be underestimated by \(\sim\)22%. Instead of using the _Gaia_ DR3 extinctions (due to their incomplete coverage of the Galactic plane; Delchambre et al., 2022), we increase the _Bayestar_ extinctions by 22 per cent to compensate.
### General results
SED fits were performed for all 29,124 candidate OB stars. Histograms of fitted physical parameters are shown in Fig. 4. There are 5434 stars with log(\(T_{\rm eff}\)) \(>4\) (OB stars, 18.66 %) and 115 stars with log(\(T_{\rm eff}\)) \(>4.3\) (O stars, 0.39 %). The median value of log(\(M/M_{\odot}\)) is equal to 0.31 (with a standard deviation of 0.12 dex) while the median value of log(\(L/L_{\odot}\)) is 1.43 (with a standard deviation of 0.44 dex). Most of the stars are located within 4 kpc (consistent with our selection from Bailer-Jones et al., 2021) with an increasing number at larger distances (as we probe a larger volume), while the peak of reddening is located at 1.5 mag, with the bulk at \(A_{V}<3\) mag.
### Incompleteness
Incompleteness in the working sample stems from the selection process. To estimate it, we compute the fraction of stars as a function of magnitude which were trimmed during the successive steps of Section 3.1. These steps include the removal of bad astrometric solutions (2-parameter sources, large errors on parallaxes and large \(RUWE\)), the removal of bad photometry (blue or NIR) and high BP-RP values. A plot of the completeness level as a function of \(G\) is shown in Fig. 5 for the SED-fitted OB stars (stars with \(\log(T_{\rm eff})>4\) or \(\log(L/L_{\odot})>2.5\)).
Figure 3: Galactic longitude plotted as a function of distances from Bailer-Jones et al. (2021) for the historical members of Aur OB1 and OB2 with \(RUWE<1.4\).
To further verify the completeness of our sample, we crossmatch it with the OBA stars from Zari et al. (2021). Their list contains 14,973 stars in the Auriga region and, of the 29,124 stars in our sample, there are 4818 stars in common, including 4097 with a SED-fitted \(T_{\rm eff}\) greater than 8000 K (the minimum temperature for Zari et al. 2021). Unsuccessful matches from our list are due to a different \(M_{K}\) threshold (we chose \(M_{K}<1.07\) while they selected stars with \(M_{K}<0\)). Unsuccessful matches from their list are due to our selection process (e.g. we discarded distant stars that they kept). As we have estimated the incompleteness due to our selection process (Fig. 5), this comparison shows that we have reached good completeness in probing the population of OB stars in Auriga.
### Comparison with spectroscopic temperatures
To check the quality of the results, we build a sample of spectroscopic temperatures that we compare to our SED-fitted temperatures, by cross-matching our sample within 1 arcsec with two catalogues:
* Stars with spectral types from SIMBAD, filtered by removing sources with a quality measurement on the spectral type of 'D' or 'E', along with those without an indicated spectral type and subclass. We then convert the spectral types into effective temperatures using the tabulations from Martins et al. (2005) for the O-type stars (observed scale), from Trundle et al. (2007) for early B-type stars, from Humphreys & McElroy (1984) for late B-type stars of luminosity classes 'I' or 'III', and from Pecaut & Mamajek (2013) for the later spectral types. We set a luminosity class of 'V' when unspecified and chose error bars of one spectral subclass, whilst using the spectral types of the primary star for binaries and interpolating for luminosity classes of 'II' and 'IV'.
* Stars from APOGEE DR17 (Garcia Perez et al. 2016; Abdurro'uf et al. 2022), selecting the sources with a measured \(T_{\rm eff}\) from the pipeline and removing those with a warning on \(T_{\rm eff}\) that are considered unreliable due to their proximity to the upper limit of APOGEE measurements (20,000 K).
We combine 70 stars from SIMBAD with 331 stars from APOGEE, making a sample of 397 unique stars (we use the weighted mean to calculate the temperature for the 4 stars in common). Our SED-fitted temperatures are compared with the spectroscopic temperatures in Fig. 6.
Figure 4: Median fitted parameters for the 29,124 selected sources of the working sample.
Figure 5: Completeness as a function of \(G\) for the 5617 SED-fitted OB stars in the sample divided according to the different steps used to trim the sample. The black curve represents the product of all completeness curves. The blue and orange histograms show the number of sources before (blue) and after (orange) the completeness correction is applied.
Choosing thresholds of \(\log(T_{\rm eff})>4\) (4.1, 4.2 and 4.3), we define the recovery rate \({\rm RR}={\rm TP}/({\rm TP}+{\rm FN})\), where TP is the number of true positives (where both the SED-fitted and spectroscopic temperatures are above the threshold for selection of an OB star) and FN the number of false negatives (where the SED-fitted temperature is below the threshold and the spectroscopic temperature above). We also define the contamination rate, \({\rm CR}={\rm FP}/({\rm TP}+{\rm FP})\), where FP is the number of false positives (where the SED-fitted temperature is above the threshold and the spectroscopic temperature below). For these thresholds, RR is equal to 88% (80%, 56% and 49%, respectively) and CR is between 17 and 30%. These results suggest we are better at fitting late B-type stars, which could be due to the sparsity of very hot stars in this region, the high multiplicity of such stars (as our SED fitting code currently models all stars as single stars) or the high uncertainty on the spectroscopic temperatures of many of the O stars.
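For a given threshold, these rates can be computed as in the following sketch, where `fitted` and `spec` are arrays of SED-fitted and spectroscopic \(\log(T_{\rm eff})\) values:

```python
import numpy as np

def recovery_contamination(fitted, spec, threshold=4.0):
    """Recovery rate RR = TP/(TP+FN) and contamination rate CR = FP/(TP+FP)."""
    tp = np.sum((fitted >= threshold) & (spec >= threshold))
    fn = np.sum((fitted < threshold) & (spec >= threshold))
    fp = np.sum((fitted >= threshold) & (spec < threshold))
    rr = tp / (tp + fn) if (tp + fn) else np.nan
    cr = fp / (tp + fp) if (tp + fp) else np.nan
    return rr, cr
```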
Fig. 6 also shows that our SED-fitted temperatures are in better agreement with the APOGEE spectroscopic temperatures than they are with the SIMBAD spectroscopic temperatures (which constitute most of the O-type stars). APOGEE spectra are generally more consistent and of better quality than the spectroscopy from SIMBAD, which might explain the difference. The median error on \(T_{\rm eff}\) for the APOGEE spectroscopy is only \(\sim\) 200 K, to be contrasted with \(\sim\) 1100 K for the SIMBAD spectroscopic sample.
### Clustering analysis with HDBSCAN
In Quintana and Wright (2021), we identified kinematically-coherent OB associations in the Cygnus region by applying a flexible clustering method based on a Kolmogorov-Smirnov (KS) test on Galactic coordinates and proper motions. This choice was feasible because the OB associations were all at a similar distance. The distance spread of the OB stars in Auriga, on the other hand, is much larger.
For this work we therefore use the Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN, McInnes et al., 2017) tool. It constitutes an extension of DBSCAN and identifies clusters by defining their cores through the number of neighbours within a radius \(\epsilon\). In many clustering algorithms, including DBSCAN, the selection of clusters depends heavily upon the value of \(\epsilon\). HDBSCAN overcomes this issue by allowing the user to define clusters at several density thresholds, thereby finding the most reliable groups and clusters.
In our testing, out of all HDBSCAN parameters, only cluster_selection_method, min_cluster_size and min_samples were found to have an influence on the algorithm results. Excess of mass (EOM) and Leaf are the two selection methods. Whilst the former tends to identify larger structures and thereby decreases the noise (see e.g. Kerr et al., 2021), the latter outlines smaller and more homogeneous clusters, hence we favour this second choice as it is more suited to OB associations (see e.g. Santos-Silva et al., 2021). min_cluster_size sets the minimum number of stars for a cluster to be defined whereas min_samples stands for the number of samples within a neighbourhood such that a point is treated like a core point (McInnes et al., 2017). Varying min_cluster_size will only set which cluster is identified (i.e. a cluster is only identified if it has more members than min_cluster_size) while varying min_samples will change the membership itself, and is therefore the most crucial parameter.
The five parameters used for our clustering analysis are: \(X\), \(Y\), \(Z\), \(V_{l}\), \(V_{b}\), where \(XYZ\) are the Galactic Cartesian coordinates and \(V_{l}=4.74\,\mu_{l}\,\frac{d}{1000}\) is the transverse velocity in the \(l\) direction in units of km s\({}^{-1}\) (with \(d\) in pc and \(\mu_{l}\) in mas yr\({}^{-1}\)), with its equivalent in the \(b\) direction.
Our 5D parameter space thus contains three parameters in units of pc and two in km s\({}^{-1}\). Each parameter is normalised with respect to the parameter of the same units with the largest extent, i.e. \(X\), \(Y\) and \(Z\) were normalised with respect to \(X\) in order to overcome the stretching along the line of sight, while \(V_{b}\) was normalised with respect to \(V_{l}\). As such, all the normalised parameters have values between 0 and 1, but parameters with the same units remain directly comparable.
To identify new OB associations we set min_cluster_size to 15 and min_samples to 10, consistent with the typical minimum number of OB stars in OB associations (Humphreys, 1978). We apply HDBSCAN to the 5617 candidate OB stars with \(\log(T_{\rm eff})>4\) or \(\log(L/L_{\odot})>2.5\). This threshold was chosen to include evolved high-mass stars with cooler temperatures (Pecaut and Mamajek, 2013), as we did in Quintana and Wright (2021). This process gave 14 groups, listed in Table 2; a minimal sketch of the clustering step is given below.
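The sketch below assumes the hdbscan Python package (McInnes et al., 2017) and implements the normalisation described above; it is illustrative rather than our exact pipeline:

```python
import numpy as np
import hdbscan

def cluster_ob_stars(X, Y, Z, mu_l, mu_b, d):
    """Cluster stars in the normalised 5D (X, Y, Z, V_l, V_b) space."""
    V_l = 4.74 * mu_l * d / 1000.0          # km/s, with d in pc, mu in mas/yr
    V_b = 4.74 * mu_b * d / 1000.0
    s_pos, s_vel = np.ptp(X), np.ptp(V_l)   # largest extent per unit
    norm = lambda a, s: (a - a.min()) / s
    features = np.column_stack([norm(X, s_pos), norm(Y, s_pos), norm(Z, s_pos),
                                norm(V_l, s_vel), norm(V_b, s_vel)])
    clusterer = hdbscan.HDBSCAN(min_cluster_size=15, min_samples=10,
                                cluster_selection_method='leaf')
    return clusterer.fit_predict(features)  # label -1 marks noise
```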
Subsequently, based on the method of Santos-Silva et al. (2021), we perform a bootstrapping process on the newly identified OB associations. We randomly vary the proper motions and the distance of each star within their uncertainties and apply HDBSCAN to the new sample. Each iteration gives us a new set of associations that we compare to the original associations. If a 'bootstrapped' association has 5D parameters within 1\(\sigma\) of the median of an original association, then it corresponds to the same association. When this matching happens, we then compare the individual members of the bootstrapped association to those of the original association. We repeat this process 10,000 times, calculating the fraction of iterations in which a given association appears, and the fraction of those iterations in which a given star appears in that association. These fractions are taken as the probability that a given association is genuine and as a membership probability for each star in each association. We also add stars that do not belong to the original associations but appear in more than 50% of iterations in the bootstrapped associations.
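The bootstrapping can be sketched as follows; the matching criterion is simplified here to the median distance alone, whereas the full procedure compares all five parameters:

```python
import numpy as np

def bootstrap_probability(stars, orig_median_d, orig_sigma_d, cluster_fn,
                          n_iter=10000, rng=np.random.default_rng(0)):
    """Fraction of perturbed re-clusterings in which a matching group reappears.

    `stars` holds arrays mu_l, mu_b, d and their errors; `cluster_fn` wraps
    the HDBSCAN step above and returns per-star labels.
    """
    hits = 0
    for _ in range(n_iter):
        mu_l = rng.normal(stars["mu_l"], stars["mu_l_err"])
        mu_b = rng.normal(stars["mu_b"], stars["mu_b_err"])
        d = rng.normal(stars["d"], stars["d_err"])
        labels = cluster_fn(mu_l, mu_b, d)
        for lab in set(labels) - {-1}:
            if abs(np.median(d[labels == lab]) - orig_median_d) < orig_sigma_d:
                hits += 1
                break
    return hits / n_iter
```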
To estimate the reliability of our new associations we performed a Monte Carlo simulation to estimate how many OB associations, and with what properties, would be identified from a random distribution of stars. We randomly sampled the Galactic coordinates, PMs and SED-fitted distances of the 5617 candidate OB stars 100 times. For each iteration we ran HDBSCAN to identify new OB associations and performed the same bootstrapping process (with 1000 iterations) to estimate their probabilities. These simulations resulted in a total of 1154 'randomized' associations, i.e., an average of \(\sim\)12 per simulation. The probability for each of these associations is typically very low, with only 188 having probabilities greater than 50%, 77 greater than 80% and 46 greater than 90%, equivalent to \(\sim\)2, \(\sim\)1 and \(<\)1, on average, per simulation. In the real data, we identified 9 groups with probabilities \(>\)50%, 7 with probabilities \(>\)80% and 4 with probabilities \(>\)90%. Comparison with our simulation suggests that the 4 associations with probabilities \(>\)90% are likely to all be real (especially since their probabilities are all \(>\)99%), while the 5 associations with probabilities of 50-90% may include 2 contaminants.
Figure 6: Comparison between the spectroscopic and SED-fitted temperatures for the 397 stars in the Auriga sample. Stars coloured in green are from APOGEE, stars coloured in red are from SIMBAD and stars coloured in blue are in both samples.
Our simulations do show that false-positive, high-probability associations (\(>\)80%) are almost entirely found nearby (\(d\lesssim\) 1.5 kpc). We therefore discard all the nearby (\(<\)1.5 kpc) OB associations with a probability lower than 90 %, retaining only the very high probability groups (now named associations 1-4) and the very distant group with a moderately-high probability (association 5). The 4 discarded candidate associations would require further data (e.g. RVs), more precise astrometry or expanded membership amongst lower-mass stars to confirm them as being real.
The result of this process is that we are left with 5 new high-confidence, spatially- and kinematically-coherent OB associations in the Auriga region. We show them in Galactic coordinates in Fig. 7, in Galactic transverse velocity in Fig. 8 and in distance in Fig. 9. These new OB associations are distributed over a range of distances from 1 kpc to almost 3 kpc, with many superimposed on each other on the plane of the sky, explaining the difficulty of separating their members with pre-_Gaia_ data.
### Comparison with historical associations and open clusters
We crossmatch the members of our new OB associations with the historical members of Aur OB1 and OB2 from Melnik and Dambis (2020) and the open cluster members from Cantat-Gaudin and Anders (2020), with the results displayed in Table 3.
Association 1 includes stars in both Aur OB1 and NGC 1960 and is the largest foreground association in the area. The other historical members of Aur OB1 are spread over the other new OB associations. Associations 4 and 5 have significant overlaps with Stock 8 and NGC 1893. This comparison suggests that NGC 1893 is located closer than previous estimates (Lim et al., 2018; Cantat-Gaudin and Anders, 2020), at a distance of \(\sim\)2.8 kpc, consistent with the distance from Mel'Nik and Dambis (2009).
## 4 Analysis of the new OB associations
In this section we perform a kinematic and physical analysis of the new OB associations in Auriga, studying their expansion and star formation history.
### Physical properties of the individual associations
We have estimated the observed number of O- and B-type stars in each association. To do so we defined B-type stars as those with SED-fitted \(4<\log(T_{\rm eff})<4.3\) and O-type stars as those with SED-fitted \(\log(T_{\rm eff})>4.3\), using the same thresholds as in Section 3.3. Uncertainties were estimated through a Monte Carlo experiment in which the effective temperature of each star was randomly sampled within its uncertainties. This is shown in Table 4 and, in line with the HR diagrams in Fig. 10, shows a dominance of late B-type stars and a few O-type stars.
To determine the total mass of each association, we first identified the range of masses over which our sample completeness is expected to be unbiased by the age of our stars. We chose this mass range to be 2.5 \(M_{\odot}\) (the mass of an A0 star) to 7.1 \(M_{\odot}\) (the post main-sequence turn-off mass at an age of 50 Myr, Ekstrom et al., 2012). We then corrected the number of stars according to the incompleteness levels we have calculated and displayed in Fig. 5.
We performed a Monte Carlo simulation sampling stellar masses at random using the mass function from Maschberger (2013). We counted both the number of stars in our selected mass range and the total number and mass of stars. We stopped the simulation only when we reached the total number of observed stars in the selected mass range. This process was repeated 10,000 times, using the uncertainties on the individual SED-fitted stellar masses, to obtain an uncertainty for the total stellar mass of each association. These are provided in Table 4 and range from \(\sim\)900 to \(\sim\)6000 M\({}_{\odot}\). The most massive is association 1, with an estimated initial stellar mass of \(\sim\)6000 M\({}_{\odot}\) and currently containing about 200 B-type members.
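A sketch of one realization of this Monte Carlo is below. We assume the Maschberger (2013) L3 form of the mass function with its standard parameters (\(\alpha=2.3\), \(\beta=1.4\), \(\mu=0.2\,M_{\odot}\)) and illustrative sampling bounds; inverse-transform sampling uses the analytic auxiliary function of that IMF, and readers should check the exact parameter choices:

```python
import numpy as np

ALPHA, BETA, MU = 2.3, 1.4, 0.2     # assumed L3 parameters (Maschberger 2013)
M_LO, M_HI = 0.01, 150.0            # assumed sampling bounds, in Msun

def _G(m):
    return (1.0 + (m / MU) ** (1.0 - ALPHA)) ** (1.0 - BETA)

def sample_mass(rng):
    """Inverse-transform draw from the L3 IMF between M_LO and M_HI."""
    g = _G(M_LO) + rng.uniform() * (_G(M_HI) - _G(M_LO))
    return MU * (g ** (1.0 / (1.0 - BETA)) - 1.0) ** (1.0 / (1.0 - ALPHA))

def total_mass_once(n_observed, rng):
    """Draw stars until the 2.5-7.1 Msun window matches the observed count."""
    n_window, total = 0, 0.0
    while n_window < n_observed:
        m = sample_mass(rng)
        total += m
        n_window += (2.5 <= m <= 7.1)
    return total
```

Repeating `total_mass_once` many times, while perturbing the observed count within its uncertainties, yields total-mass estimates and uncertainties of the kind reported in Table 4.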
### Kinematic properties of the individual associations
We calculated the median coordinates (equatorial and galactic), distances and transverse velocities for each OB association. In addition,
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Association & \(N\) & \(N_{\rm g}\) & \(N_{\rm tot}\) & \(d_{m}\) (pc) & Probability(\%) \\ \hline & 26 & 20 & 21 & 738 & 86.58 \\ & 18 & 18 & 25 & 906 & 83.20 \\
1 & 198 & 186 & 215 & 1056 & 99.98 \\
2 & 41 & 41 & 43 & 1085 & 99.99 \\ & 17 & 16 & 16 & 1475 & 57.56 \\
3 & 99 & 89 & 119 & 1514 & 99.93 \\
4 & 130 & 127 & 138 & 1923 & 99.87 \\ & 15 & 13 & 13 & 1956 & 4.15 \\ & 37 & 19 & 19 & 2188 & 48.99 \\ & 23 & 11 & 11 & 2508 & 9.35 \\ & 21 & 9 & 9 & 2677 & 18.24 \\
5 & 90 & 39 & 39 & 2760 & 82.31 \\ & 83 & 13 & 13 & 2803 & 63.10 \\ & 15 & 7 & 7 & 2951 & 8.35 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Properties of the newly-identified OB associations in Auriga. \(N\) is the initial number of stars in the association (before bootstrapping). \(N_{\rm g}\) is the number of likely members (with a membership probability of at least 50 %) and \(N_{\rm tot}\) is the total number of stars in the associations, adding those appearing during the bootstrapping with a probability of at least 50 %. \(d_{m}\) stands for the median distance of the group. Probability gives the probability that the association is real.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Assoc. & \(N_{\rm hist}\) & Hist. assoc. & \(N_{\rm OC}\) & OC \\ \hline
1 & 7 & Aur OB1 & 25, 4 & NGC 1960, G8 \\
2 & & & 26 & NGC 1912 \\
3 & 1 & Aur OB1 & 9, 4 & NGC 1778, CG16 \\
4 & 1, 8 & Aur OB1, OB2 & 49, 5 & S8, K1 \\
5 & 2 & Aur OB2 & 15 & NGC 1893 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison between our new OB association members and OB stars in the historical associations and in the open clusters from Cantat-Gaudin and Anders (2020). \(N_{\rm hist}\) stands for the number of stars in a historical association whilst \(N_{\rm OC}\) denotes the number of stars in an open cluster. The notations CG16, G8, K1 and S8 stand respectively for COIN-Gaia_16, Gulliver 8, Kronberger 1 and Stock 8.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Parameters & Units & Assoc. 1 & Assoc. 2 & Assoc. 3 & Assoc. 4 & Assoc. 5 \\ \hline RA(ICRS)\({}_{m}\) & deg & 81.30 & 82.13 & 80.16 & 82.02 & 80.70 \\ DE(ICRS)\({}_{m}\) & deg & 34.97 & 35.84 & 36.57 & 34.77 & 33.94 \\ \(l_{m}\) & deg & 170.72 & 172.24 & 170.70 & 173.09 & 172.82 \\ \(b_{m}\) & deg & -0.16 & 0.70 & 0.11 & -0.03 & -1.48 \\ \(d_{m}\) & pc & 1056 & 1085 & 1514 & 1923 & 2760 \\ \(\sigma_{d}\) & pc & 102.2 & 25.2 & 76.0 & 103.8 & - \\ \(V_{lm}\) & km s\({}^{-1}\) & 12.98 & 22.85 & 23.69 & 15.37 & 14.18 \\ \(\sigma_{V_{l}}\) & km s\({}^{-1}\) & 2.52 & 1.09 & 2.96 & 2.20 & 2.10 \\ \(V_{b_{m}}\) & km s\({}^{-1}\) & -10.75 & -6.16 & -8.10 & -10.58 & -14.84 \\ \(\sigma_{V_{b}}\) & km s\({}^{-1}\) & 1.41 & 0.95 & 2.00 & 2.05 & 1.17 \\ Observed number of B stars & & \(194\pm 3\) & \(40^{+2}_{-1}\) & \(107^{+3}_{-2}\) & \(115^{+3}_{-4}\) & \(32\pm 2\) \\ Observed number of O stars & & \(12^{+2}_{-2}\) & \(0\pm 0\) & \(4\pm 1\) & \(13\pm 2\) & \(3^{+2}_{-1}\) \\ Total stellar initial mass & \(M_{\odot}\) & \(6051^{+206}_{-387}\) & \(1219^{+182}_{-167}\) & \(3075^{+298}_{-276}\) & \(3500^{+315}_{-306}\) & \(879^{+163}_{-136}\) \\ HR diagrams age & Myr & 0-30 & - & 0-20 & 0-5 & 0-10 \\ Related OCs age & Myr & 18-26 & 250-375 & 26 & 4-8 & 1-5 \\ Traceback age & Myr & \(20.9^{+1.1}_{-1.2}\) & \(369.9^{+8.3}_{-22.2}\) & \(11.7^{+2.2}_{-3.0}\) & \(1.6^{+1.3}_{-0.9}\) & - \\ Age & Myr & \(\sim 20\) & - & 10-20 & 0-5 & 0-10 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Properties of the new OB associations. The first column indicates the parameter, where the subscript ‘m’ indicates the median value and ‘\(\sigma\)’ the dispersion. The total initial stellar mass is corrected for observational incompleteness, as described in the text.
Figure 7: Spatial distribution in Galactic coordinates of the 5 new OB associations in Auriga. The background extinction map and the features highlighted in this map are the same as in Fig. 1.
we have computed the intrinsic dispersion in distance and transverse velocities based on the method from Ivezic et al. (2014) using the observational uncertainties. The distance dispersions typically range up to a few tens of pc, while the velocity dispersions range up to 3 km s\({}^{-1}\), consistent with that of other OB associations (Wright, 2020).
### HR diagrams of association members
Fig. 10 shows the HR diagrams for each association, produced with the SED-fitted effective temperatures and luminosities. It is clear that the identified members are dominated by late B-type stars, preventing a straightforward age assessment. Association 1 contains a few stars close to the 50, 100 and 200 Myr isochrones, which would make it much older than other known OB associations, though these may be contaminants. Associations 3 to 5 contain hotter stars that suggest a younger age of \(<\) 20 Myr.
There are a number of factors that can affect the positions of stars in the HR diagram. First among these is extinction, which is derived for each star individually from the extinction map as part of our SED fitting process. The uncertainty in the extinction to a given star, which is derived from the distance uncertainty, propagates through to the uncertainty on the effective temperature and luminosity shown in the HR diagram. An alternative approach might be to use a single extinction value for all members of an association, but the effect of this would be small, as the variation in extinction across members of an association is small, with a typical standard deviation in \(A_{V}\) of 0.2 - 0.6 mag. Such a difference in reddening does have a small effect on the derived physical parameters. However, if we reproduce these HR diagrams using the median extinction for all association members, while there are small changes in the position of each star, the extreme outliers do not change significantly.
More significant factors affecting the position of stars in the HR diagram include binarity, and the presence of possible contaminants. Association 1 constitutes a good example of this, for which most of its stars are close to the ZAMS and therefore consistent with being under 20 Myr old, suggesting that its stars sitting on the 50, 100 and 200 Myr isochrones may be contaminants.
### Expansion and traceback age
Fig. 10 shows that many of the OB associations in Auriga have ages of several tens of Myr. Therefore, instead of using a linear expansion model to determine their expansion age, we trace back the associations using the epicycle approximation from Fuchs et al. (2006), and correct for the local standard of rest (LSR) using the values of Schönrich et al. (2010). We gather RVs from the APOGEE survey and from SIMBAD. RVs from the literature are discarded if they lack an uncertainty or if their measurement is considered unreliable. If a star appears in both APOGEE and SIMBAD, we take the weighted mean of the two values. In doing so we obtain a sample of 95 stars with RVs.
We calculated the median RV for each association and trace back whole associations rather than individual stars, owing to the effects of unresolved close binaries. Again we apply a Monte Carlo simulation to compute the uncertainties on the median velocities, following the method in Quintana and Wright (2022). The results are shown in Table 5.
Combining RVs with Gaia PMs allows us to perform a 3D traceback on these associations. We use the HR diagrams (Fig. 10) together with the ages of the related open clusters to add constraints on the age estimations, which are displayed in Table 1. With this information, we set the upper limit on traceback to 50 Myr in the past for associations 1 and 5, 400 Myr for association 2, 30 Myr for association 3 and 20 Myr for association 4. We trace back each association in time steps of 0.1 Myr, and at each time step we calculate the MAD (median absolute deviation) in Galactic coordinates of the on-sky spatial distribution of association members7, and we estimate the time of minimum MAD when the association is at its most compact. We repeat this process 1000 times to derive uncertainties. An example is shown in Fig. 11 for association 1, while the other associations are shown in Figure A1.
Footnote 7: We do not calculate the MAD in 3D due to the large error bars on RVs that causes uncertain line-of-sight distances as we go back further in the past.
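The MAD-minimisation step of this traceback can be sketched as follows; a simplified linear on-sky traceback stands in here for the epicycle approximation of Fuchs et al. (2006) and the LSR correction, and all argument names are placeholders.

```python
import numpy as np

def traceback_age(l, b, pm_l, pm_b, t_max=50.0, dt=0.1):
    # l, b in degrees; pm_l, pm_b in mas/yr; returns the epoch (Myr in
    # the past) at which the on-sky spatial MAD is smallest.
    mas_to_deg = 1e-3 / 3600.0
    times = np.arange(0.0, t_max + dt, dt)
    mads = np.empty_like(times)
    for i, t in enumerate(times):
        # Linear traceback over t Myr (1 Myr = 1e6 yr).
        l_t = l - pm_l * mas_to_deg * 1e6 * t
        b_t = b - pm_b * mas_to_deg * 1e6 * t
        mads[i] = (np.median(np.abs(l_t - np.median(l_t)))
                   + np.median(np.abs(b_t - np.median(b_t))))
    return times[np.argmin(mads)]
```

Repeating this over Monte Carlo realisations of the member kinematics, as described above, then provides the uncertainty on the traceback age.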
We estimated ages for each association based on the combination of (i) the time for the system to trace back to its most compact state, (ii) the age of any open cluster or star-forming region linked to the association (Section 3.7), and (iii) any age constraints arising from the HR diagram (Fig. 10). These ages are listed in Table 4. For some systems the traceback was able to place reasonable constraints on the age of the system, (e.g., for association 1), while for other associations the best constraint came from the open clusters and star-forming regions the association was linked to (e.g., for associations 4 and 5). For the remaining associations either the HR diagram
Figure 8: Transverse velocity distribution of the 5 new OB associations in Auriga.
provided the best constraint on the age (e.g., for association 3) or very little constraint was possible by any means (e.g., association 2).
## 5 Discussion
In this section we discuss our findings and how the new Auriga OB associations can help understand the star formation history of the region. Our main results are outlined as follows:
* We have shown that both Aur OB1 and Aur OB2 are too extended in PM and distance to be genuine OB associations, encouraging us to revisit the census of OB stars and associations in the region.
* We identified more than 5000 candidate OB stars across the region using our SED fitter, with an estimated reliability of 90 %.
* We identified 5 new high-confidence OB associations in the area that we analysed physically and kinematically, and estimated their age through a combination of 3D kinematic traceback, their link to open clusters and star-forming regions with known ages, and the distribution of members in the HR diagram. Only a small fraction (\(\sim\)10 %) of the identified OB stars have been assigned to these associations.
Figure 10: HR diagrams for the members of the new OB associations in Auriga. Isochrones have been shown from the rotating evolutionary models from Ekström et al. (2012). Positions of some spectral types have been indicated on the top horizontal axis for clarity.
Figure 9: Galactic longitude as a function of SED-fitted distance for the 5 new OB associations in Auriga. The median error bars on distances are respectively \(\sim 30\) pc for associations 1 and 2, \(\sim 50\) pc for association 3, \(\sim 70\) pc for association 4 and \(\sim 160\) pc for association 5.
### The new Auriga OB associations
We have identified 5 new OB associations in Auriga, with total stellar masses from a few hundreds to a few thousand solar masses, and with kinematic properties consistent with other OB associations (Wright 2020). They are likely related to open clusters in the area (see Table 3).
Association 1 shares several members with Aur OB1 and is related to NGC 1960, so it should be seen as the replacement for the historical Aur OB1 association. Similarly, the historical members of Aur OB2 are now divided between associations 4 and 5, respectively related to Stock 8 and NGC 1893. This confirms the suggestion of Marco & Negueruela (2016) to divide Aur OB2 into two different associations, one in the foreground and one in the background.
The Hii region Sh 2-235 instead appears to be related to association 3, since it is located at a similar distance of \(1.36\pm 0.27\) kpc (Foster & Brunt 2015). Association 3 also includes HD 36483, an O9.5IV star (Sota et al. 2011), which may be responsible for ionizing the Hii region.
Association 4 occupies the centre of our region of study, where three OCs are found, along with the Hii region Sh 2-234 (Fig. 7). Sh 2-234 is located at a distance of \(2.19\pm 0.10\) kpc (Foster & Brunt 2015). Its relation to Aur OB2 and the surrounding OCs has been suggested by Marco & Negueruela (2016) and we confirm it to be related to association 4. We cannot comment on whether the star LS V +34 23 is part of association 4 (previously in Aur OB2 and thought to be responsible for ionizing Sh 2-234, Marco & Negueruela 2016) as its _Gaia_ photometry failed our quality checks, preventing us from performing an SED fit. However, association 4 does contain LS V +34 15, LS V +34 21 and LS V +35 25, of respective spectral types O5.5V (Negueruela et al. 2007), O9IV (Roman-Lopes & Roman-Lopes 2019) and O9.5V (Georgelin et al. 1973), each a probable source of ionization for the Hii region Sh 2-234. However, the RV of Sh 2-234 has been measured as \(-21.4\pm 0.2\) km s\({}^{-1}\) (Anderson et al. 2015), which is significantly different from the RV we estimated for association 4 (Table 5); thus, even if those stars are responsible for ionizing the Hii region, the association and the Hii region may not otherwise be related.
### Expansion and age of the OB associations
Our analysis revealed that our OB associations have various ages, from the youngest associations 4 and 5 with ages \(<10\) Myr to associations of several tens of Myr old (association 3). For the OB associations where multiple age indicators were available, the ages derived by different methods were consistent. The exception to this is association 2, with the majority of its OB stars consistent with being on the ZAMS (Fig. 10) while its related OC is several hundreds of Myr old (Table 1), far older than most OB associations.
In Section 4.4 we showed that nearly all our OB associations traced back into a more compact configuration in the past, which is a signature of expansion (see e.g. Wright & Mamajek 2018 and Miret-Roig et al. 2022). We however point out that associations 4 and 5 reached their most compact state very recently (Fig. A1).
### OB stars unassigned to groups
The 5 OB associations contain 554 OB stars in total from our sample of 5617 SED-fitted OB stars in the area. This means that \(\sim\)90 % of the OB stars have not been assigned to any stellar group, which could be explained by several factors.
In Section 3.6, we imposed a minimum size of 15 OB stars per association to be consistent with their definition (Humphreys 1978; Wright 2020). There could be other stellar groups in the area that are dominated by low-mass stars and only contain a handful of OB stars. Similarly, some OB stars initially belonging to a group were rejected during the bootstrapping process (see Table 2). This was particularly the case for the most distant stars (\(>2\) kpc), for which the _Gaia_ parallaxes become less precise.
Our sample includes many late B-type stars. A B9 star with an initial stellar mass of \(2.75\ M_{\sun}\) (Pecaut & Mamajek 2013) has a lifetime of \(\sim 700\) Myr as predicted by stellar evolutionary models (Ekstrom et al. 2012). This value is far beyond the typical lifetime of an OB association (Wright 2020) and implies that even if those stars were born clustered, they would probably have since dispersed into the Galactic field population.
It is also possible that some of these OB stars formed within associations but were ejected and became runaways. Notably, massive stars are more likely to belong to multiple systems (Lada 2006), and when the primary star undergoes a supernova explosion, it can eject the secondary star beyond the group it was born into.
### Distribution of OB associations and Galactic structure
The Perseus spiral arm intercepts our sightline at a distance of approximately 2 kpc (Reid et al. 2019), at approximately the position of association 4, the youngest of our new OB associations. Fig. 12 shows the positions of our new OB associations, with their ages, relative to the position of the Perseus spiral arm. While association 4 is coincident with the current position of the Perseus spiral arm, the associations closer to us are older, indicating a potential age gradient.
To determine whether this age gradient is related to the motion of the Perseus spiral arm, we model the positions of the spiral arm and our new OB associations over the last 20 Myr. We use the spiral arm model from Reid et al. (2019) and the spiral arm pattern speed of \(\Omega_{P}=-28.2\pm 2.1\) km s\({}^{-1}\) kpc\({}^{-1}\) from Dias et al. (2019). We trace back the position of the Perseus spiral arm 20 Myr into the past in the frame of the LSR.
Fig. 12 shows the position of the OB associations with the Perseus spiral arms, at intervals of 10 Myr, back to 20 Myr in the past. At 10 Myr in the past it is clear that association 3 (with an estimated age
Figure 11: MAD of the on-sky spatial distribution for members of association 1 as a function of traceback time. The age of the related open cluster (NGC 1960) is shown, taken from Table 1.
of 10-20 Myr) is coincident with the spiral arm, while at 20 Myr in the past, association 1 (estimated age of \(\sim\)20 Myr) is coincident with the spiral arm. Association 2 crossed the spiral arm \(\sim\)20 Myr ago as well, despite its related OC and traceback suggesting an older age (Fig. A1), which may suggest that the association is younger and not related to NGC 1912, or that the association did not form within the spiral arm. As for association 5, it remains too distant to be related to the Perseus spiral arm and may have formed outside it (or in the Outer spiral arm).
This result shows that OB associations can be used not only as tracers for the current positions of spiral arms but also as a probe for the star formation history of a region and potentially the progress of a spiral arm through the region.
## 6 Conclusions
We have shown that Aur OB1 and OB2 are not genuine OB associations, because their members are characterized by too large a spread in proper motion and distance. Applying an improved SED fitting tool, we have identified 5617 OB stars with a reliability of \(\sim\) 90 % for the lowest temperature threshold.
Using a clustering algorithm (HDBSCAN), we have identified 5 high-confidence OB associations that we connect to the open clusters and star-forming regions in the area. Association 1 is the main foreground association at a distance of \(\sim\) 1 kpc and with a mass of \(\sim\) 6000 \(M_{\odot}\) and should replace Aur OB1 due to its common members and relation with NGC 1960. Similarly, we argue that Aur OB2 should be replaced by association 4 (at \(\sim\) 1.9 kpc and with a total mass of \(\sim\) 3500 \(M_{\odot}\)), and 5 (at \(\sim\) 2.8 kpc and with a total mass of \(\sim\) 900 \(M_{\odot}\)).
We have analysed these OB associations, combining HR diagrams and kinematic traceback, to constrain their ages. We have also studied their expansion, their total stellar masses, their number of OB stars and their 3D position.
We have also identified an age progression between several of these associations, that suggests their origins may have been within the Perseus spiral arm. This shows that OB associations constitute useful tools to study recent star formation history and the position and motion of the Galactic spiral arms.
## Acknowledgements
We thank the anonymous referee for their thorough reading of the manuscript and their insightful comments. ALQ acknowledges receipt of an STFC postgraduate studentship. NJW acknowledges receipt of a Leverhulme Trust Research Project Grant (RPG-2019-379). The authors would like to thank John Southworth for discussions, and Eleonora Zari for her help in calculating the position of the spiral arms. This paper makes use of data processed by the Gaia Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)) and obtained by the Gaia mission from the European Space Agency (ESA) ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), as well as the INT Galactic Plane Survey (IGAPS) from the Isaac Newton Telescope (INT) operated in the Spanish Observatorio del Roque de los Muchachos. Data were also based on the Two Micron All Sky Survey, a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, along with the UKIDSS Galactic Plane Survey (GPS), a survey carried out by the UKIDSS consortium with the Wide Field Camera operating on the United Kingdom Infrared Telescope. This work also benefited from the use of _TOPCAT_ (Taylor, 2005), Astropy (Astropy Collaboration et al., 2013) and the Vizier and SIMBAD databases, both operated at CDS, Strasbourg, France.
## Data Availability
The data underlying this article will be uploaded to Vizier.
|
2306.11058 | Reciprocal hydrodynamic response estimation in a random spreading sea | Direct estimation of the hydrodynamic response of an offshore structure in a
random spreading sea can lead to large computational costs. In this paper the
actual spreading sea is replaced by an idealised diffuse wave field and the
diffuse field reciprocity (DFR) relationship is derived analytically and
verified against diffraction analysis for offshore application. The DFR
approach provides an analytical expression for the estimation of the wave
loading spectrum in a spreading sea. It is very efficient because only the
added damping coefficients are required. Furthermore, if normalised to the peak
amplitude of a spreading sea, an upper bound response can be obtained using the
reciprocal approach, as demonstrated using a spar type floating wind turbine.
Given that the hydrodynamic coefficients are routine outputs for
offshore structural design, engineers would obtain the upper bound response
without additional computational cost using this new approach. | Jiannan Yang, Robin Langley, Richard Lines | 2023-06-19T16:47:51Z | http://arxiv.org/abs/2306.11058v1 | # Reciprocal hydrodynamic response estimation in a random spreading sea
###### Abstract
Direct estimation of the hydrodynamic response of an offshore structure in a random spreading sea can lead to large computational costs. In this paper the actual spreading sea is replaced by an idealised diffuse wave field and the diffuse field reciprocity (DFR) relationship is derived analytically and verified against diffraction analysis for offshore application. The DFR approach provides an analytical expression for the estimation of the wave loading spectrum in a spreading sea. It is very efficient because only the added damping coefficients are required. Furthermore, if normalised to the peak amplitude of a spreading sea, an upper bound response can be obtained using the reciprocal approach. And this is demonstrated using a spar type floating wind turbine. Given that the hydrodynamic coefficients are routine outputs for offshore structural design, engineers would obtain the upper bound response without additional computational cost using this new approach.
Offshore design; Diffuse field; Haskind relation; Potential damping; Blocked force; Floating wind turbine
## 1 Introduction
Realistic sea states contain waves that are dependent on frequency as well as direction. The response of an offshore structure can be sensitive to both the frequency and the heading of an incident ocean wave, and the most realistic way of ensuring that this effect is captured in a numerical simulation is to model the wave environment as a random spreading sea.
For example, large floating offshore structures subject to multi-directional random waves have been studied in [1], where the importance of accounting for directional spreading in wave force and motion response calculation is highlighted. In [2], studies of a four leg jacket supported offshore wind turbine at three different sites show that the structural capacity is sensitive to the load directions, and a unidirectional sea cannot account for the directional dependence of capacity and loading that will influence the reliability of the offshore system.
Despite its importance, offshore designers and engineers often adopt a unidirectional sea for their hydrodynamic analysis. This is partly because measuring directionality is more complicated than measuring the free surface elevations [3, 4], and partly because accounting for a spreading sea can lead to large computational costs. For example, if a frequency domain analysis is performed then the dynamic response must be computed at a large number of combinations of frequency and wave headings, and this can lead to large computational costs, particularly so at the early design stage when many possible design configurations need to be analysed. The motivation of this paper is thus to look for an efficient approach to estimate the hydrodynamic response in a spreading sea.
In this paper, to reduce the computational cost, the actual spreading sea is replaced by an idealized diffuse wave field, and a technique from the vibro-acoustic literature known as "diffuse field reciprocity (DFR)" [5] is used to express the cross spectrum of the wave loading in terms of the potential damping matrix of the structure.
In room acoustics, a diffuse sound filed is an ideal probabilistic model describing a sound field consisting of a very large set of statistically uncorrelated elemental plane waves. The propagation direction of the waves are random with a uniform probability distribution [6]. This enables the wave field, instead of the individual waves, to be treated in a stochastic manner. In analogy to the fluid media, the diffuse field concept has also been applied to waves in structures [7, 8]. Furthermore, the wave approach to formulate the widely used Statistical Energy Analysis (SEA) is mainly based on the diffuse field assumption [9].
An efficient approach to tackle this vibroacoustic problem has been achieved by using the diffuse field reciprocity (DFR) principle [5]. This principle states that the loading applied by a diffuse wave field on a deterministic structure can be expressed in terms of the energy in the diffuse field and the radiation properties of the structure into an infinite space. This methodology has been applied to numerous systems in vibro-acoustics. In addition, it is shown in [10] that the diffuse field reciprocity principle can also be applied to electromagnetic systems, enabling the currents induced in a wiring system by diffuse electromagnetic waves to be computed in an efficient manner.
It is shown in this paper that the hydrodynamic response of an offshore structure can be successfully recovered through the reciprocal DFR principle in an idealised diffuse sea environment. In this way, instead of solving a full diffraction problem, only the radiation potential in still water needs to be computed. In addition, it is possible to estimate an upper bound response in a spreading sea assuming the worst sea state is coming from all directions. Compared to the commonly used unidirectional sea assumption, the DFR approach provides a new reciprocal approach that is as efficient, considers all degree of freedoms and provides a higher safety factor. This approach is weakly analogous to the Haskind relation [11] that is employed in diffraction theory, although in the present case the cross spectra of the wave loading are considered rather than the diffraction forces.
In what follows, the formulation of the diffuse field reciprocity (DFR) principle is introduced in Section 2. The analytical verification of the proposed reciprocal approach is given in Section 3, in terms of the hydrodynamic response of a simple articulated buoy, by demonstrating the equivalence between the reciprocal DFR approach and the direct approach via a full diffraction analysis. In Section 4, an upper bound is introduced together with discussions of how the upper bound can be estimated efficiently using the DFR approach. Following the discussions, a demonstration case study using a spar type floating wind turbine is presented in Section 5. Concluding remarks are given in Section 6.
## 2 Hydrodynamic response in a spreading sea
For a linear system subject to a random wave excitation, the covariance matrix of the structure response at degrees of freedom \(\zeta\) can be found as [12]:
\[\boldsymbol{\sigma}_{\zeta}^{2}=\iint\mathbf{G}(\omega,\theta)\mathbf{G}^{*\mathsf{T}}(\omega,\theta)S_{\eta\eta}(\omega,\theta)d\omega d\theta \tag{1}\]
where \(\mathbf{G}(\omega,\theta)\in\mathbb{C}^{\mu}\) is the transfer function, with \(G_{j}\) relating the structural response at the degree-of-freedom \(\zeta_{j}\) to a surface wave of unit elevation. \(S_{\eta\eta}(\omega,\theta)\) is the directional spectrum of the surface wave elevation at frequency \(\omega\) with wave direction \(\theta\). The notation \([\cdot]^{*\mathsf{T}}\) indicates the complex conjugate transpose.
A blocked force is the force experienced by the structure when it is subject to incoming waves but held stationary [13, 14]. Using this concept, the transfer function \(G(\omega,\theta)\) can be further defined as:
\[\mathbf{G}(\omega,\theta)=\mathbf{H}(\omega)\mathbf{f}_{\mathbf{b}}(\omega,\theta) \tag{2}\]
where \(\mathbf{f}_{\mathbf{b}}\) is the blocked force due to a wave of unit elevation. Note that the blocked force is dependent both on frequency and the incoming wave direction.
\(\mathbf{H}\in\mathbb{C}^{\mu\times\mu}\) is the structural transfer function matrix, which for a linear system can be found as the inverse of the dynamic stiffness matrix:
\[\mathbf{H}(\omega)=\left[-\omega^{2}\mathbf{M}+i\omega\mathbf{C}+\mathbf{K} \right]^{-1} \tag{3}\]
where \(\mathbf{M}\), \(\mathbf{C}\) and \(\mathbf{K}\) are the inertia, damping and stiffness matrices.
For offshore engineering, it is a common practice to consider a frequency independent spreading function \(D(\theta)\). The directional spectrum of the wave elevation can then be written as [15]:
\[S_{\eta\eta}(\omega,\theta)=S_{\eta\eta}(\omega)D(\theta) \tag{4}\]
where \(S_{\eta\eta}(\omega)\) is the unidirectional wave spectrum, and the following condition must be satisfied for the spreading function:
\[\int_{0}^{2\pi}D(\theta)d\theta=1 \tag{5}\]
Therefore, Eq 1 can be expanded as:
\[\boldsymbol{\sigma}_{\zeta}^{2}=\int\mathbf{H}(\omega)\left[\int_{0}^{2\pi}\mathbf{f}_{b}(\omega,\theta)\mathbf{f}_{b}^{*\mathsf{T}}(\omega,\theta)D(\theta)d\theta\right]\mathbf{H}^{*\mathsf{T}}(\omega)S_{\eta\eta}(\omega)d\omega \tag{6}\]
Eq 6 indicates that a two dimensional integration over wave heading as well as frequency is required for the estimation of the hydrodynamic response in a spreading sea. As a large number of wave headings is normally required (e.g., as many as 12 headings are recommended in [16]), the computational cost for analysis in a spreading sea can be prohibitive, especially at the early design stage. Motivated to overcome this issue, we focus on simplifying the part of the integration that is taken over the wave headings \(\theta\) (the expression inside the square brackets of Eq 6).
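To illustrate where this cost arises, a minimal sketch of the direct evaluation of Eq 6 is given below (Python with NumPy assumed; the blocked force function, transfer matrix and spectra are user-supplied placeholders, and uniform frequency and heading grids are assumed). Note the double loop over frequency and heading.

```python
import numpy as np

def response_covariance_direct(H, f_b, S, D, omegas, thetas):
    # Direct evaluation of Eq 6: the double loop over frequency and
    # wave heading is what drives the computational cost.
    n = H(omegas[0]).shape[0]
    cov = np.zeros((n, n), dtype=complex)
    dw = omegas[1] - omegas[0]      # uniform grids assumed
    dth = thetas[1] - thetas[0]
    for w in omegas:
        Hw = H(w)
        for th in thetas:
            fb = f_b(w, th)         # blocked force vector at (w, th)
            cov += (Hw @ np.outer(fb, fb.conj()) @ Hw.conj().T
                    * S(w) * D(th) * dw * dth)
    return cov.real
```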
The key insight here is that the spreading function \(D(\theta)\) in Eq 5 can be seen as a measure of the fraction of sea states, out of a large ensemble, that come from direction \(\theta\). In other words, the wave direction \(\theta\) is regarded as a random variable, with its probability distribution described by the density function \(D(\theta)\).
With this view, the expression inside the square brackets in Eq 6 is the expected cross spectrum of the blocked force by ensemble average. The covariance matrix of the hydrodynamic response can then be more compactly expressed as:
\[\boldsymbol{\sigma}_{\zeta}^{2}=\int\mathbf{H}(\omega)\mathbb{E}_{\theta}\left[\mathbf{f}_{b}\mathbf{f}_{b}^{*\mathsf{T}}\right]\mathbf{H}^{*\mathsf{T}}(\omega)S_{\eta\eta}(\omega)d\omega \tag{7}\]
where the expectation is taken over the incident wave headings \(\theta\) over 0 to \(2\pi\).
At first sight, it might appear that we haven't gained any computational saving by converting the expression from Eq 6 to Eq 7 by taking the ensemble average point of view, because large computational efforts are still required to compute the expected cross spectrum for the wave forces.
However, if the spreading sea is replaced by an idealised diffuse wave field, the technique from the vibro-acoustics literature known as "diffuse field reciprocity (DFR)" [5] can be used to express the cross spectrum of the wave loading in terms of the resistive impedance to radiation in a reciprocal manner. In this way, instead of solving a full diffraction problem, only the radiation potential in still water needs to be computed.
From [5], the diffuse field reciprocity (DFR) relationship is given as:
\[\mathbb{E}\left[\mathbf{f}_{b}\mathbf{f}_{b}^{*\mathsf{T}}\right]=\frac{4E( \omega)d\omega}{\pi\omega n(\omega)}\text{Im}(\mathbf{D}_{\text{dir}}(\omega)) \tag{8}\]
where \(\mathbb{E}[\cdot]\) is the mathematical expectation or ensemble average and \(\mathbf{f}_{b}\) represents the vector of blocked forces. On the right hand side of Eq 8, \(E(\omega)\) and \(n(\omega)\) are the spatially averaged system energy and modal density respectively [17], and are both frequency dependent. The matrix \(\mathbf{D}_{\text{dir}}\) is the dynamic stiffness matrix describing the direct field radiation.
The diffuse field reciprocity (DFR) relation from Eq 8 relates the cross spectrum of the blocked force to the resistive part of the direct field dynamic stiffness \(\mathbf{D}_{\text{dir}}\). This corresponds to the wave energy radiated into the far field. For a body moving on or near a free water surface, energy is carried away from the moving structure by travelling waves. This results in an energy loss, which is called added damping or potential damping \(\mathbf{C}_{\text{pot}}\). In addition to the travelling waves, there are also evanescent waves near the structure. They do not dissipate energy and are related to the reactive part of the direct field dynamic stiffness matrix \(\mathbf{D}_{\text{dir}}\). Therefore, the radiated force due to the structure oscillating in still water can be obtained as:
\[\mathbf{f}_{\text{rad}}=\mathbf{D}_{\text{dir}}\boldsymbol{\zeta}=(-\omega^{2 }\mathbf{M}_{a}+i\omega\mathbf{C}_{\text{pot}})\boldsymbol{\zeta} \tag{9}\]
where \(\mathbf{f}_{\text{rad}}\) is the force vector due to structure movement and \(\mathbf{M}_{a}\) is the added mass matrix. Therefore, the imaginary part of \(\mathbf{D}_{\text{dir}}\), which is required for the reciprocal relationship in Eq 8, is directly related to the potential damping matrix, i.e., \(\text{Im}(\mathbf{D}_{\text{dir}})=\omega\mathbf{C}_{\text{pot}}\).
It can be seen from Eq 8 that the direct integration over the wave headings is reduced to a simple algebraic expression, by taking the ensemble average in a diffuse sea. This allows an efficient estimate for the hydrodynamic response in a spreading sea environment.
The diffuse field reciprocity (DFR) relation was originally developed in the vibro-acoustics field, and as a result the parameters in Eq 8 have yet to be defined in an offshore context. In Appendix A, the modal density and spatially averaged energy are formulated in an offshore context. Putting all the components together, the DFR relationship in offshore application can be found as:
\[\begin{split}\mathbb{E}\left[\mathbf{f}_{b}\mathbf{f}_{b}^{*\mathsf{T}}\right]&=\frac{4E(\omega)d\omega}{\pi\omega n(\omega)}\text{Im}(\mathbf{D}_{\text{dir}}(\omega))\\ &=\frac{2\rho g^{2}S_{\eta\eta}(\omega)d\omega}{\omega k}\left[\tanh(kd)+kd\operatorname{sech}^{2}(kd)\right]\mathbf{C}_{\text{pot}}\end{split} \tag{10}\]
where \(k\) is the wavenumber and \(d\) is the water depth. Note that the above expression is applicable for water of finite depth. For deep water, \(kd\) is large, so \(\tanh(kd)\approx 1\) and \(\operatorname{sech}(kd)\approx 0\), and the DFR relationship in Eq 10 becomes much simpler.
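In code, the reciprocal evaluation then reduces to a single algebraic expression per frequency. A minimal sketch is given below (placeholder names; the returned quantity is the cross spectral density of Eq 10, with the \(d\omega\) factor left to the subsequent frequency integration):

```python
import numpy as np

def blocked_force_csd(C_pot, S_w, omega, k, d, rho=1025.0, g=9.81):
    # Eq 10 without the d-omega factor: expected blocked-force cross
    # spectral density obtained from the potential damping matrix alone.
    depth_term = np.tanh(k * d) + k * d / np.cosh(k * d) ** 2
    return 2.0 * rho * g**2 * S_w / (omega * k) * depth_term * np.asarray(C_pot)
```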
The reciprocal DFR relationship in Eq 10 might look unfamiliar to researchers and practitioners in offshore or marine engineering. Nevertheless, relating the exciting forces and moments to the far-field behaviour of the radiation potential is not new, and is known as the Haskind relation in diffraction theory [11, 18], except that in the present case the cross spectrum of the wave loading is considered rather than only the diffraction forces.
The cross spectrum of the blocked force of an offshore structure in a diffuse sea can then be obtained analytically once the potential damping matrix is known. In Section 3, for the purpose of analytical verification, the potential damping coefficients are obtained by computing the radiation force analytically, and the resulting reciprocal estimate is checked against a direct diffraction analysis. However, as discussed in Section 4.1, the hydrodynamic coefficients, such as added mass and potential damping, are well documented for typical offshore structures. For new designs, a hydrodynamic model for the hydrodynamic coefficients is usually developed as part of the design procedure [16]. Therefore, hydrodynamic analysis using the proposed diffuse field reciprocity approach can take full advantage of the readily available damping coefficients. This is demonstrated in Section 5 with an example using a spar type floating wind turbine.
## 3 Analytical verification using a simple articulated buoy
To verify the applicability of the diffuse field reciprocity (DFR) on offshore structures in a diffuse sea environment, we compare the blocked force spectrum calculated from two different approaches: a direct approach, where a diffraction analysis is conducted, and the reciprocal approach using DFR from Eq 10.
As stated in the introduction, the hydrodynamic coefficients required for the DFR are mostly well documented or ready to be extracted from standard numerical methods [19]. However, for completeness, here we demonstrate briefly the steps to obtain the added mass and damping coefficients by solving the radiation problem analytically for a simple articulated buoy.
The buoy of length \(d\), as shown in Figure 1, is a long and hollow column anchored to the seabed via a ball joint. It has two degrees of freedom: rotation around the \(x\) axis (out of plane), and rotation around the \(y\) axis (in plane). In this example, the buoy is considered to be a rigid structure with relatively large dimensions compared to the wavelength of the surface waves, so that the drag force can be ignored.
Figure 1: A rigid articulated buoy with 2 degrees of freedom subject to a wave incident at angle \(\theta\) (SWL: still water line).
### _Reciprocal approach_ with hydrodynamic coefficients
Using Bernoulli's equation [15], the dynamic pressure acting on a structure can be found via the radiated velocity potential:
\[p_{r}=-\rho\left(\frac{\partial\phi}{\partial t}\right)_{a} \tag{11}\]
where \(a\) is the radius of the buoy and \(\phi\) is the velocity potential. Integrating the pressure along the buoy surface, the overall moment acting on the structure can be obtained as:
\[f_{\text{rad}}=-\int_{-d}^{0}\int_{0}^{2\pi}(z+d)p_{r}\mathbf{n}d\theta dz \tag{12}\]
where \(\theta\) is the circumferential coordinate and \(\mathbf{n}\) is the directional normal vector, which, together with the arc length element, contributes a factor of \(a\cos\theta\).
We call \(f_{\text{rad}}\) the radiation moment; it is calculated with respect to the bottom joint of the buoy. Assuming time harmonic dependence, and using the velocity potential \(\phi\) for the simple articulated buoy as given in Appendix B, the radiation moment can be found as:
\[f_{\text{rad}}=-\zeta\omega^{2}e^{-i\omega t}\sum_{p=0}^{\infty}\frac{\pi\rho a}{k_{p}}\frac{H_{1}^{(1)}(k_{p}a)}{\left[H_{1}^{(1)}(k_{p}a)\right]^{\prime}}\frac{\left[\frac{d}{k_{p}}\sinh(k_{p}d)-\frac{1}{k_{p}^{2}}(\cosh(k_{p}d)-1)\right]^{2}}{\frac{1}{4k_{p}}\sinh(2k_{p}d)+\frac{d}{2}} \tag{13}\]
where \(H_{q}^{(1)}\) is the Hankel function of the first kind and \(q\)th order.
\(k_{p}\) is the wave number and can be found from the dispersion relation, \(k\tanh(kd)=\omega^{2}/g\), via iterative numerical routines such as the Newton-Raphson method. There are two real roots and an infinite number of imaginary roots for the wave number \(k\) at each frequency \(\omega\). The two real roots represent waves propagating in opposite directions, while the imaginary values denote the evanescent waves. For ease of notation, from this point on, we will use \(k\) to represent the wavenumber and drop the summation sign.
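For reference, a minimal Newton-Raphson sketch for the real propagating root of the dispersion relation is:

```python
import numpy as np

def wavenumber(omega, d, g=9.81, tol=1e-12, itmax=50):
    # Solve k*tanh(k*d) = omega**2/g for the positive real root.
    k = omega**2 / g                 # deep-water initial guess
    for _ in range(itmax):
        t = np.tanh(k * d)
        f = k * t - omega**2 / g
        df = t + k * d / np.cosh(k * d) ** 2   # d/dk of k*tanh(kd)
        k_next = k - f / df
        if abs(k_next - k) < tol:
            return k_next
        k = k_next
    return k
```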
Given a unit displacement input (\(\zeta=1\)), the radiation moment can be found and the potential damping matrix \(\mathbf{C}_{\text{pot}}\) can then be obtained from the imaginary part of \(f_{\text{rad}}\) in Eq 9:

\[\mathbf{C}_{\text{pot}}=\frac{\text{Im}\left[f_{\text{rad}}(\zeta=1)\right]}{\omega}=\frac{4\omega\rho}{k^{5}\left(J_{1}^{\prime 2}(ka)+Y_{1}^{\prime 2}(ka)\right)}\frac{(kd\sinh(kd)+1-\cosh(kd))^{2}}{(\tanh(kd)+kd\operatorname{sech}^{2}(kd))\cosh^{2}(kd)}\begin{bmatrix}1&0\\ 0&1\end{bmatrix} \tag{14}\]
where \(J\) and \(Y\) are the Bessel functions of the first and second kind respectively. The \(2\times 2\) matrix corresponds to the two degrees of freedom of the buoy. As the two rotational motions, \(\zeta_{1}\) and \(\zeta_{2}\) as seen in Figure 1, are independent due to symmetry, the matrix is diagonal.
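A minimal numerical evaluation of Eq 14 can be sketched as below, assuming SciPy for the Bessel function derivatives; \(a\) and \(d\) are the buoy radius and water depth.

```python
import numpy as np
from scipy.special import jvp, yvp

def c_pot(omega, k, a, d, rho=1025.0):
    # Diagonal potential damping coefficient of Eq 14.
    bes = jvp(1, k * a) ** 2 + yvp(1, k * a) ** 2   # J1'(ka)^2 + Y1'(ka)^2
    num = (k * d * np.sinh(k * d) + 1.0 - np.cosh(k * d)) ** 2
    den = (np.tanh(k * d) + k * d / np.cosh(k * d) ** 2) * np.cosh(k * d) ** 2
    return 4.0 * omega * rho / (k**5 * bes) * num / den
```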
Using the DFR relation in Eq (10), the ensemble average of the cross spectra of the blocked moment can be found as:
\[\mathbb{E}\left[\mathbf{f}_{b}\mathbf{f}_{b}^{*\mathsf{T}}\right]_{\text{reciprocal}}=\frac{2\rho g^{2}S_{\eta\eta}(\omega)d\omega}{\omega k}\left[\tanh(kd)+kd\operatorname{sech}^{2}(kd)\right]\mathbf{C}_{\text{pot}}=S_{\eta\eta}(\omega)d\omega\frac{16\rho^{2}g^{2}}{k^{6}}\left(\frac{kd\sinh(kd)+1-\cosh(kd)}{\cosh(kd)}\right)^{2}\frac{1}{\left(J_{1}^{\prime 2}(ka)+Y_{1}^{\prime 2}(ka)\right)}\begin{bmatrix}1/2&0\\ 0&1/2\end{bmatrix} \tag{15}\]

### _Direct approach_ via diffraction analysis

For comparison, the blocked moment can also be obtained directly from a diffraction analysis. For a surface-piercing circular cylinder, the classical MacCamy-Fuchs diffraction solution gives the blocked moment about the base joint, for a regular wave incident along the \(x\) axis, as:

\[f_{b}=\frac{\pi\rho ga^{2}C_{m}H}{2k}\frac{kd\sinh(kd)+1-\cosh(kd)}{\cosh(kd)}\cos(\omega t+\delta) \tag{16}\]
where \(H\) is the wave height, \(C_{m}\) is the effective inertia coefficient and \(\delta\) is the phase angle:

\[C_{m}=\frac{4\left[J_{1}^{\prime 2}(ka)+Y_{1}^{\prime 2}(ka)\right]^{-1/2}}{\pi(ka)^{2}}\qquad\text{and}\qquad\delta=-\tan^{-1}\left[\frac{Y_{1}^{\prime}(ka)}{J_{1}^{\prime}(ka)}\right]\]
In the present case for the simple buoy with two degrees of freedom, the blocked moment vector can be expressed simply as:
\[\mathbf{f}_{b}=f_{b}\begin{bmatrix}\cos\theta\\ \sin\theta\end{bmatrix} \tag{17}\]
For excitation due to a random wave, the wave height is then related to the random wave spectrum, i.e., \(H=2\sqrt{2S_{\eta\eta}(\omega)d\omega}\).
The cross spectrum of the blocked moment can then be obtained as:
\[\mathbf{f}_{b}\mathbf{f}_{b}^{*\mathsf{T}}=S_{\eta\eta}(\omega)d\omega\frac{16\rho^{2}g^{2}}{k^{6}}\left(\frac{kd\sinh(kd)+1-\cosh(kd)}{\cosh(kd)}\right)^{2}\frac{1}{\left(J_{1}^{\prime 2}(ka)+Y_{1}^{\prime 2}(ka)\right)}\begin{bmatrix}\cos^{2}\theta&\cos\theta\sin\theta\\ \cos\theta\sin\theta&\sin^{2}\theta\end{bmatrix} \tag{18}\]
Eq 18 gives the blocked moment cross spectrum matrix for a single incident wave at angle \(\theta\). It follows that the expected cross spectrum of the blocked moment in a directional sea is:
\[\mathbb{E}\left[\mathbf{f}_{b}\mathbf{f}_{b}^{*\mathsf{T}}\right]=\int_{0}^{2\pi}\mathbf{f}_{b}\mathbf{f}_{b}^{*\mathsf{T}}D(\theta)d\theta \tag{19}\]
where we recall from Section 2 that the directional spreading function \(D(\theta)\), subject to Eq 5, is viewed as equivalent to a probability density function. In a diffuse sea, waves from different directions all have the same amplitudes (in an ensemble sense), and as a result \(D(\theta)=1/2\pi\). Therefore, in a diffuse sea, the cross spectrum of the blocked moment from Eq (16) can be found as:
\[\begin{split}\mathbb{E}\left[\mathbf{f}_{b}\mathbf{f}_{b}^{*\mathsf{T}}\right]_{\text{direct}}&=\int_{0}^{2\pi}\mathbf{f}_{b}\mathbf{f}_{b}^{*\mathsf{T}}D(\theta)d\theta\\ &=S_{\eta\eta}(\omega)d\omega\frac{16\rho^{2}g^{2}}{k^{6}}\left(\frac{kd\sinh(kd)+1-\cosh(kd)}{\cosh(kd)}\right)^{2}\frac{1}{\left(J_{1}^{\prime 2}(ka)+Y_{1}^{\prime 2}(ka)\right)}\begin{bmatrix}1/2&0\\ 0&1/2\end{bmatrix}\end{split} \tag{20}\]
It can be seen that the results from the direct approach, where Eq 20 gives the cross spectrum of the blocked moments by solving the diffraction problem directly, and the reciprocal approach, where Eq 15 gives the cross spectrum of the blocked moment via the DFR relation, are exactly the same, thus verifying the validity of the DFR relationship for offshore applications.
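The equivalence can also be confirmed numerically. The short check below reuses the `wavenumber` and `c_pot` helpers sketched above, with arbitrary illustrative values for the frequency, radius and depth:

```python
import numpy as np
from scipy.special import jvp, yvp

omega, a, d, rho, g, S = 0.8, 5.0, 50.0, 1025.0, 9.81, 1.0
k = wavenumber(omega, d, g)

# Reciprocal route: DFR prefactor of Eq 10 applied to C_pot of Eq 14.
depth_term = np.tanh(k * d) + k * d / np.cosh(k * d) ** 2
reciprocal = (2.0 * rho * g**2 * S / (omega * k) * depth_term
              * c_pot(omega, k, a, d, rho))

# Direct route: diagonal entry of Eq 20.
bes = jvp(1, k * a) ** 2 + yvp(1, k * a) ** 2
num = (k * d * np.sinh(k * d) + 1.0 - np.cosh(k * d)) ** 2
direct = 0.5 * S * 16.0 * rho**2 * g**2 / k**6 * num / (np.cosh(k * d) ** 2 * bes)

assert np.isclose(reciprocal, direct)
```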
A full diffraction analysis normally requires computations for wave incidence from all directions. In this section, with an application to a simple articulated buoy, we have demonstrated that, by taking advantage of the analytical DFR relation, this \(\theta\)-wise integration can be avoided for a diffuse sea excitation by using the reciprocal approach instead. This greatly improves the computational efficiency and allows a very quick estimation of the cross spectrum of the blocked force and the resulting structural responses.
## 4 Discussions
### An efficient reciprocal approach in spreading seas
The response of an offshore structure can be sensitive to both the frequency and the heading of an incident ocean wave, and the most realistic way of ensuring that this effect is captured in a numerical simulation is to model the wave environment as a random spreading sea.
If a frequency domain analysis is performed, then the dynamic response must be computed at a large number of combinations of frequency and wave heading. This can lead to large computational costs, particularly so at the early design stage when many possible design configurations need to be analysed. According to the design recommendation for the structural design of offshore ships in DNV-RP-C102 [16], when a spreading function is applied in the analysis, the wave heading angle spacing should be equal to or less than 30 degrees. That is a minimum of 7 wave headings to be analysed (in the case of "all headings included", this results in at least 12 heading angles). Considering the wide design space and large number of design variations, an analysis using a spreading sea is often infeasible during the early design stage, particularly for analyses like fatigue.
The proposed diffuse field reciprocity (DFR) method provides a new, efficient approach to estimating the hydrodynamic response. The improvement in efficiency results from replacing the integration over wave headings with an ensemble average, as seen in Eq 7.
Moreover, the analytical reciprocal relationship that relates the wave loading directly to the structure's radiation behaviour makes the DFR principle well positioned for offshore engineering applications. This is because hydrodynamic radiation problems have been extensively studied and the resulting hydrodynamic coefficients are well documented.
For simple geometries, analytical expressions are readily available: for example, for a floating circular cylinder in finite-depth water in [20], and for a vertical surface-piercing circular cylinder extending to the seabed and undergoing horizontal oscillations in [21]. For large structures with more complicated geometries, or flexible structures with multiple modes, numerical approaches such as the boundary element or finite element method are used. For example, a coupled finite element and boundary element method is used in [19] to study a plate-water model, in which the radiated potential is decomposed using the modal expansion method in correspondence with the structural deflections. Wang and Chen [22] used a higher order boundary element method to solve for the added mass and damping coefficients of an FPSO system (floating production, storage and offloading system). Jonkman [23] calculated the hydrodynamic added mass and damping matrices for all six rigid body DoFs of the OC3-Hywind spar-buoy using WAMIT. Therefore, the proposed reciprocal DFR method takes advantage of the readily available hydrodynamic coefficients and offers designers and engineers a fast option to estimate the structural response in a random spreading sea.
The DFR approach is based on the potential flow assumption and, as a result, the drag force contribution has been neglected. In cases where drag mainly provides a damping effect, such as for offshore floating structures of large dimensions, it is expected that DFR can still provide reasonable estimates of the hydrodynamic response, because the drag force is much smaller than the inertia force. This is demonstrated in Section 5 using a spar type floating wind turbine.
### A fast upper bound estimation
Wave spreading functions are generally dependent on the location of the site and the wave height [24]. When the spreading function is not fixed at early design stages, offshore designers and engineers often adopt a unidirectional sea for their hydrodynamic analysis.
This was generally thought to be the most conservative design practice, as all the wave energy is focused in one direction. However, this is not true in all situations, because the transfer function \(\mathbf{G}(\omega,\theta)\) in Eq (1) is also dependent on the direction of excitation. In cases where the structure has motions decoupled from the assumed excitation direction, or the most sensitive wave incident direction cannot be identified easily, the overall responses can be underestimated.
On the contrary, all degrees of freedom for the structure are considered at the same time using the proposed diffuse field reciprocity (DFR) approach. If the wave spectrum in the diffuse field is chosen to be the peak amplitude of a spreading sea, an upper bound for the dynamic response can be obtained efficiently.
To show that, we can take the peak amplitude of a spreading sea:
\[S_{\eta\eta}(\omega,\theta)=S_{\eta\eta}(\omega)D(\theta)\leq\max_{\theta\in[0,2\pi]}\{D(\theta)\}S_{\eta\eta}(\omega)=D_{0}S_{\eta\eta}(\omega) \tag{21}\]
where \(D_{0}\) is generally smaller than one and commonly used spreading functions can be found in [25]. Substitute this peak amplitude for the diffuse field in Eq 6, the upper bound hydrodynamic response can be obtained as:
\[\begin{split}\boldsymbol{\sigma}_{\zeta}^{2}&\leq D _{0}\int\mathbf{H}(\omega)\left[\int_{0}^{2\pi}\mathbf{f}_{b}(\omega,\theta) \mathbf{f}_{b}^{*\top}(\omega,\theta)d\theta\right]\mathbf{H}^{*\top}(\omega )S_{\eta\eta}(\omega)d\omega\\ &=2\pi D_{0}\int\mathbf{H}(\omega)\mathbb{E}_{\theta}\left[ \mathbf{f}_{b}\mathbf{f}_{b}^{*\top}\right]\mathbf{H}^{*\top}(\omega)S_{\eta \eta}(\omega)d\omega\end{split} \tag{22}\]
and this can be estimated efficiently using the diffuse field reciprocity (DFR) relationship derived in this paper.
For example, one commonly used model for the spreading function is the cosine-squared model:
\[D(\theta)=\left\{\begin{matrix}\frac{2}{\pi}\cos^{2}(\theta-\theta_{p})& \text{for}|\theta-\theta_{p}|\leq\pi/2\\ 0&\text{otherwise}\end{matrix}\right. \tag{23}\]
where \(\theta_{p}\) is the dominant wave direction and is assumed to be zero in the following analysis. So the peak amplitude in this case is then given as:
\[D_{0}=\max\{D(\theta)\}=\frac{2}{\pi} \tag{24}\]
In a strict sense, scaling the diffuse field by \(2\pi D_{0}\) in Eq 22 violates the constraint given in Eq 5, where the total integral of the spreading function should be one. On the other hand, Eq 22 is equivalent to assuming the worst case for wave excitation from independent wave headings ranging from 0 to \(2\pi\). With the DFR approach, however, this worst-case response can be obtained with a single calculation. When the wave spreading function is unknown, \(D_{0}\) can be taken as one in Eq 24. This provides a new way to estimate the worst possible responses for the design. In addition, as an upper bound, Eq 22 can be incorporated very efficiently in design optimization iterations to minimize the worst responses.
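A minimal sketch of this single-pass upper bound computation is given below, reusing the `blocked_force_csd` helper sketched in Section 2; `H_of`, `C_pot_of`, `S_of` and `k_of` are user-supplied placeholder functions of frequency.

```python
import numpy as np

def upper_bound_covariance(H_of, C_pot_of, S_of, k_of, omegas, d, D0=2/np.pi):
    # Eq 22: a single frequency loop; the heading integration has been
    # replaced by the DFR expression scaled by 2*pi*D0.
    n = H_of(omegas[0]).shape[0]
    cov = np.zeros((n, n), dtype=complex)
    dw = omegas[1] - omegas[0]          # uniform frequency grid assumed
    for w in omegas:
        Efbfb = blocked_force_csd(C_pot_of(w), S_of(w), w, k_of(w), d)
        Hw = H_of(w)
        cov += 2.0 * np.pi * D0 * (Hw @ Efbfb @ Hw.conj().T) * dw
    return cov.real
```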
## 5 Demonstration study with a spar type floating wind turbine
Floating offshore wind turbines (FOWTs) enable economic offshore wind electricity generation in deep waters where fixed foundation turbines are not feasible. Hywind Scotland is the world's first commercial wind farm using floating wind turbines, and its turbines are mounted on spar type platforms [26]. A sketch of the Hywind FOWT is shown in Figure 2(a). In this paper, the Hywind spar type floating platform is used to demonstrate the application of the diffuse field reciprocity (DFR) approach to offshore floating systems.
To apply the DFR method, we need two elements: 1) the hydrodynamic coefficients for the direct field dynamic stiffness matrix. As stated earlier, these coefficients are mostly well documented or ready to be extracted from standard numerical methods [19]. This is exactly the case here for the Hywind turbine, where we directly extract these coefficients from the literature. 2) the transfer function matrix that relates the structural response to the blocked forces. To construct the transfer function, a simplified rigid body model for the FOWT is introduced. These two elements are discussed in detail in the next two subsections, and the numerical results are given in Section 5.3.
### Hydrodynamic added mass and damping matrix
The hydrodynamic added mass and damping for OC3-Hywind spar wind turbine have been computed by Jonkman [23], where the linear potential flow problem was solved using the WAMIT computer program. WAMIT uses a three-dimensional numerical-panel method in the frequency domain to solve the linearized potential-flow hydrodynamic radiation and diffraction problems for the interaction of surface waves with offshore platforms of arbitrary geometry.
These coefficients are reproduced in Figure 3 for ease of reference. A11, A55 and A15 are the added mass coefficients for surge, pitch and coupled surge-pitch motion respectively, and B11, B55 and B15 are the corresponding potential damping coefficients. Although the added mass and damping matrices were computed for all six degrees of freedom in [23], for simplicity, and making use of the axisymmetry of the structure, only the data for the surge and pitch motions are used in this paper.
Given the data in Figure 3, the \(2\times 2\) added mass matrix \(\mathbf{M}_{a}\) and potential damping matrix \(\mathbf{C}_{p}\) can then be formed corresponding to the surge and pitch motions. These can then be used in the DFR equation for hydrodynamic response estimation in a random spreading sea.
Figure 2: A simple rigid body model for Hywind spar wind turbine.
### Transfer function matrix
In the frequency domain, the transfer function matrix can be formulated as:

\[\mathbf{H}(\omega)=\left[-\omega^{2}(\mathbf{M}_{s}+\mathbf{M}_{a})+i\omega(\mathbf{C}_{p}+\mathbf{C}_{D})+\mathbf{K}\right]^{-1} \tag{25}\]
where \(\mathbf{M}\), \(\mathbf{C}\) and \(\mathbf{K}\) are the inertia, damping and stiffness matrices. The inertia matrix includes the inertia of the structure \(\mathbf{M}_{s}\) and the added mass \(\mathbf{M}_{a}\) due to fluid acceleration. The damping matrix includes both potential damping \(\mathbf{C}_{p}\) and viscous damping \(\mathbf{C}_{D}\). The potential damping is the result of waves travelling away due to radiation in a potential flow, while the viscous damping accounts for effects due to flow separation. As a simplification, the viscous damping can be approximated using the linearized drag force from Morison's equation [27]. In addition, as a comparison to the DFR approach, we can also compute the structural response using Morison's equation with a unidirectional sea state. The details of the formulation of the matrices in Eq 25 are given in Appendix C.
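A minimal sketch of Eq 25 for the two retained degrees of freedom (surge and pitch) is given below; the \(2\times 2\) input matrices are assumed to be assembled elsewhere, e.g. with \(\mathbf{M}_{a}\) and \(\mathbf{C}_{p}\) interpolated from the coefficients of Figure 3.

```python
import numpy as np

def transfer_matrix(omega, Ms, Ma, Cp, Cd, K):
    # Inverse of the 2x2 dynamic stiffness matrix of Eq 25.
    D = -omega**2 * (Ms + Ma) + 1j * omega * (Cp + Cd) + K
    return np.linalg.inv(D)
```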
### Results
#### a) List of parameters
The structure is approximated as two uniform sections, as shown in Figure 2(b), with constant diameters \(D_{1}\) and \(D_{2}\) respectively.
The values used for the simplified model in this study are listed in Table 1, and they are essentially the same as those given in [23], except that in our case the \(D_{1}\) section has its top level at 6 m below SWL, which gives approximately the same volume as the original structure with its tapered section between \(D_{1}\) and \(D_{2}\). The simplified model has its centre of mass at 89.7 m, very close to the value of 89.9 m given in [23].
#### b) Wave spectrum
A JONSWAP wave spectrum, with a significant wave height of 2.5 m and a peak frequency of 0.8 rad/s, is used here. When using the reciprocal DFR approach, the wave spectrum \(S_{\eta\eta}(\omega)\) can be used directly in Eq 10 to obtain the blocked forces.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Fluid & \multicolumn{5}{c}{Structure} & Mooring Line & \multicolumn{2}{c}{Morison’s Coefficients} \\ \hline Density & Density & Length above SWL & Length below SWL & Diameter & Centre of mass & Stiffness & Inertia & Drag \\ \hline \(\rho_{f}\) & \(\rho_{s}\) & \(L_{1}\) & \(L_{2}\) & \(D_{1}/D_{2}\) & \(x_{c}\) & \(s\) & \(C_{a}\) & \(C_{d}\) \\ \hline
1.03E+03 & 8.50E+03 & 87 & 120 & 6.5/9.4 & 89.7 & 3.80E+09 & 1 & 1 \\ kg/m\({}^{3}\) & kg/m\({}^{3}\) & m & m & m & m & N/m & - & - \\ \hline \end{tabular}
\end{table}
Table 1: List of parameters
Figure 3: Hydrodynamic added mass (A11, A55 and A15) and damping (B11, B55 and B15) for OC3-Hywind spar [23]
When Morison's method is used, assuming the area under each spectrum segment is equal to the variance of the corresponding wave component, the wave component amplitude is related to the random wave spectrum by \(a_{w}(\omega)=\sqrt{2S_{\eta\eta}(\omega)d\omega}\). This can then be used in Morison's equation to obtain the wave forces.
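A minimal sketch of this discretisation is given below (Python/NumPy). The particular JONSWAP parameterisation (peak factor \(\gamma=3.3\) and the \(1-0.287\ln\gamma\) normalisation) and the frequency grid are assumptions made for illustration, not values taken from the analysis above.

```python
import numpy as np

def jonswap(omega, Hs=2.5, wp=0.8, gamma=3.3):
    """One common JONSWAP parameterisation (an assumption for illustration)."""
    sigma = np.where(omega <= wp, 0.07, 0.09)
    r = np.exp(-0.5 * ((omega - wp) / (sigma * wp)) ** 2)
    pm = (5.0 / 16.0) * Hs**2 * wp**4 * omega**-5 * np.exp(-1.25 * (wp / omega) ** 4)
    return (1.0 - 0.287 * np.log(gamma)) * pm * gamma**r

omega = np.linspace(0.3, 2.0, 200)             # rad/s (grid is an assumption)
d_omega = omega[1] - omega[0]
a_w = np.sqrt(2.0 * jonswap(omega) * d_omega)  # regular-wave component amplitudes
print(a_w.max())                               # largest component, for Morison input
```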
#### c) Blocked force in a diffuse field
Morison's equation estimates the wave forces empirically and takes account of the diffraction effect using the coefficient \(C_{a}\) (\(C_{a}=1\) is used here as Hywind is a circular cylinder). When the drag force contribution is small, as is the case for a spar-type FOWT with a large diameter, the estimation of blocked forces from the DFR approach is expected to agree with the result from Morison's equation.
If the blocked force due to an incident wave at angle \(\theta\) is denoted as \(\mathbf{f}_{M}\) (see for example Figure 1b), its component with respect to the degrees of freedom \(\zeta\) can then be resolved as \(\mathbf{f}_{b}=\mathbf{f}_{M}\cos\theta\). As we are interested in the response in a diffuse field, we can take the ensemble average of the above blocked force from Morison's equation over wave headings from 0 to \(2\pi\):
\[\begin{split}\mathbb{E}\left[\mathbf{f}_{b}\mathbf{f}_{b}^{*\top }\right]_{\text{Morison}}&=\int_{0}^{2\pi}\mathbf{f}_{b}\mathbf{ f}_{b}^{*\top}D_{\text{diff}}(\theta)d\theta\\ &=\int_{0}^{2\pi}(\mathbf{f}_{M}\cos\theta)(\mathbf{f}_{M}^{*\top }\cos\theta)\frac{1}{2\pi}d\theta\\ &=\frac{1}{2}\mathbf{f}_{M}\mathbf{f}_{M}^{*\top}\end{split} \tag{26}\]
where \(D_{\text{diff}}(\theta)=1/2\pi\), as there is equal probability of waves arriving from any direction, as discussed in the introduction. The spectra of the wave forces on the FOWT, from both the DFR approach and Morison's equation, are compared in Figure 4.
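The 1/2 factor in Eq 26 can be checked numerically with a short sketch (Python/NumPy):

```python
import numpy as np

# Average cos^2(theta) against the uniform heading density D_diff = 1/(2*pi)
theta = np.linspace(0.0, 2.0 * np.pi, 100001)
factor = np.trapz(np.cos(theta) ** 2 / (2.0 * np.pi), theta)
print(factor)  # -> 0.5, i.e. E[f_b f_b*] = 0.5 f_M f_M* in a diffuse field
```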
As the rigid body model shown in Figure 2b is an approximation to the Hywind structure, the model-based results from Morison's equation are not expected to match exactly the DFR results based on the hydrodynamic coefficients from Hywind. Nevertheless, it is clear from Figure 4 that the two approaches agree very well on the estimation of the wave forces.
Although the DFR approach is based on the potential flow assumption, and the drag force contribution has therefore been neglected, for a typical spar-type floating wind turbine the linearized drag mainly provides a damping effect, and the drag force is much smaller than the inertia force because the structure diameter is large. This is confirmed in Figure 4, where the Morison force is dominated by the inertia contribution.
#### d) Upper bound of response variance
As discussed in Section 4.2, the proposed DFR approach provides a very efficient way to estimate the worst possible responses for the design of offshore structures. The upper bound of the structural response spectrum, with the normalization based on the cosine-squared spreading function in Eq 23, is shown in Figure 5 for both the rigid body displacement and the rotation of the FOWT. The results are computed using the upper bound expression in Eq 22 and the DFR relation derived in Eq 10. In comparison, the response due to a unidirectional wave using Morison's equation is also shown in Figure 5.
Figure 4: Ensemble averaged power spectrum of blocked force (left) and moment (right) from the diffuse-field reciprocity (DFR) approach and Morison’s equation. Morison’s equation includes both inertial and drag force contributions. The magnitude of the drag contribution is not zero but many orders of magnitude smaller.
Offshore designers and engineers often adopt a unidirectional sea for their hydrodynamic analysis, because it is simple and the response tends to be on the conservative side. However, in cases where the structure has motions decoupled from the assumed excitation direction or the sensitive wave incident direction cannot be identified easily, the overall responses can be underestimated.
As can be seen in Figure 5, using the DFR approach, we can directly estimate the worst response performance due to a random spreading sea. Although not explicitly demonstrated in this case study with an axisymmetric FOWT, it can easily be seen, e.g., from Eq 7, that all degrees of freedom of the structure under design can be considered at the same time. Therefore, the proposed reciprocal DFR approach is as efficient as using Morison's equation for a unidirectional sea excitation, and it can be especially useful for general asymmetric floating systems.
## 6 Conclusions
A new approach based on the diffuse field reciprocity (DFR) principle has been introduced in this paper to study the hydrodynamic response of offshore systems. This reciprocal approach, first developed for vibroacoustic analysis, is shown to be applicable in the offshore context with verification against an analytical hydrodynamic diffraction analysis.
The DFR approach is efficient because the wave heading integration is avoided in an idealised spreading sea. Moreover, the analytical DFR relationship that relates the wave loading directly to the structure's radiation behaviour makes the DFR principle well positioned for offshore engineering applications. This is because the hydrodynamic radiation problems have been extensively studied and the resulting hydrodynamic coefficients are well documented (e.g., the Hywind example considered in Section 5). Once the hydrodynamic coefficients are known for the structure under design, the wave loading spectrum is readily available using the reciprocal DFR relationship.
Using the DFR approach, it is possible to estimate an upper bound response in a spreading sea assuming the worst sea state is coming from all directions. Compared to the commonly used unidirectional sea assumption, the DFR approach provides a new reciprocal method that is as efficient, considers all degrees of freedom and provides a higher safety factor. This is demonstrated using a spar-type floating wind turbine, where the normalised response from the reciprocal approach is higher than the response due to a unidirectional sea.
Although not explicitly demonstrated in this case study, it is expected that the DFR approach is most useful for general asymmetric offshore systems, as all degrees of freedom of the structure under design can be considered at the same time. This would provide offshore designers and engineers with a fast tool to estimate the upper bound response of the structure under study and minimize the risk of failure from the early design stage.
## Acknowledgment
For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.
Figure 5: Spectrum of structural response, rigid body displacement (left) and rotation (right), with the upper bound (UBound) estimated from diffuse-field reciprocity (DFR) approach, in comparison with the results from Morison’s equation.
## Data availability statement
The datasets generated during and/or analysed during the current study are available in the GitHub repository: [https://github.com/longitude-jyang/Diffuse-field-reciprocity-for-hydrodynamics](https://github.com/longitude-jyang/Diffuse-field-reciprocity-for-hydrodynamics)
|
2305.08059 | Semantic-aware Dynamic Retrospective-Prospective Reasoning for
Event-level Video Question Answering | Event-Level Video Question Answering (EVQA) requires complex reasoning across
video events to obtain the visual information needed to provide optimal
answers. However, despite significant progress in model performance, few
studies have focused on using the explicit semantic connections between the
question and visual information especially at the event level. There is need
for using such semantic connections to facilitate complex reasoning across
video frames. Therefore, we propose a semantic-aware dynamic
retrospective-prospective reasoning approach for video-based question
answering. Specifically, we explicitly use the Semantic Role Labeling (SRL)
structure of the question in the dynamic reasoning process where we decide to
move to the next frame based on which part of the SRL structure (agent, verb,
patient, etc.) of the question is being focused on. We conduct experiments on a
benchmark EVQA dataset - TrafficQA. Results show that our proposed approach
achieves superior performance compared to previous state-of-the-art models. Our
code will be made publicly available for research use. | Chenyang Lyu, Tianbo Ji, Yvette Graham, Jennifer Foster | 2023-05-14T03:57:11Z | http://arxiv.org/abs/2305.08059v1 | # Semantic-aware Dynamic Retrospective-Prospective Reasoning for Event-level Video Question Answering
###### Abstract
Event-Level Video Question Answering (EVQA) requires complex reasoning across video events to obtain the visual information needed to provide optimal answers. However, despite significant progress in model performance, few studies have focused on using the explicit semantic connections between the question and visual information, especially at the event level. There is a need to use such semantic connections to facilitate complex reasoning across video frames. Therefore, we propose a semantic-aware dynamic retrospective-prospective reasoning approach for video-based question answering. Specifically, we explicitly use the Semantic Role Labeling (SRL) structure of the question in the dynamic reasoning process where we decide to move to the next frame based on which part of the SRL structure (agent, verb, patient, etc.) of the question is being focused on. We conduct experiments on a benchmark EVQA dataset - TrafficQA. Results show that our proposed approach achieves superior performance compared to previous state-of-the-art models. Our code will be made publicly available for research use.
## 1 Introduction
This paper focuses on one specific variant of Video Question Answering (VQA) Xu et al. (2016); Yu et al. (2018); Zhong et al. (2022), namely Event-level VQA (EVQA) Xu et al. (2021). In general, the objective of the VQA task is to provide an answer to a visual-related question according to the content of an accompanying video. Despite significant recent progress in VQA, EVQA still remains one of the most challenging VQA-based tasks since it requires complex reasoning over the _events_ across video frames Sadhu et al. (2021); Zhong et al. (2022); Liu et al. (2022). To tackle the challenges in EVQA, a number of approaches have been proposed Xu et al. (2021). Luo et al. (2022) propose a temporal-aware bidirectional attention mechanism for improving event reasoning in videos, while Zhang et al. (2022) propose a novel model named Energy-based Refined-attention Mechanism (ERM), which obtains better performance compared to previous approaches with a smaller model size. Liu et al. (2022), on the other hand, incorporate visual-linguistic causal dependencies based on Graph Convolutional Networks Kipf and Welling (2017) for enhancing cross-modal event reasoning for EVQA.
Despite recent advances, conventional EVQA approaches generally fail to take into account the explicit semantic connection between questions and the corresponding visual information at the event level. Therefore, we propose a new approach that takes advantage of such semantic connections, using the Semantic Role Labeling (SRL) Marquez et al. (2008); Palmer et al. (2010); He et al. (2017) structure of questions. The model uses SRL information to learn an explicit semantic connection between the text-based questions and visual information in videos. Additionally, we carry out a multi-step reasoning mechanism over video frames to avoid adapting to spurious correlation and shortcuts by explicitly learning the reasoning process itself Yi et al. (2018); Zhang et al. (2021); Picco et al. (2021); Hamilton et al. (2022); Zhu (2022).
Specifically, in each reasoning step, the model should explicitly decide which frame should be focused on by predicting the reasoning direction (_retrospective_ or _prospective_). In terms of the question, in each reasoning step, we focus on one or more specific SRL arguments with high attention weights, and model its connection with the visual information (i.e., video frames) contained within the corresponding video. For example, for a question such as _[ARG1: How many cars] were [Verb: involved] [ARG2: in the accident?]_, the model concentrates on the _ARG2_ when locating the accident, before determining how many cars were
involved in the accident (_ARG1_). In a specific reasoning step, \(t\), we inject the relevant visual information based on the semantic connection between the question and video frames by updating a hidden vector. This vector is ultimately expected to contain the necessary information for predicting the correct answer. In the reasoning process, we employ a _coverage mechanism_Tu et al. (2016) to improve the coverage of the SRL arguments of the question. Namely, instead of simply focusing on a small number of specific arguments, the model is capable of including a large range of arguments.
To investigate the effectiveness of the proposed approach, we conduct experiments on a benchmark EVQA dataset: TrafficQA. Results reveal the model to achieve performance superior to that of existing baselines for a range of reasoning types (e.g., counterfactual, prospective).
## 2 Methodology
An overview of our approach is shown in Figure 1. Suppose the input of our model consists of a video \(V\) composed of \(n\) image frames sampled from it: \(V=\{f_{0},f_{1},......,f_{n-1}\}\), and a corresponding question \(Q=\{w_{0},w_{1},......,w_{m-1}\}\) with associated SRL arguments \(S=\{S_{0},S_{1},......,S_{N-1}\}\) where \(S_{i}=\{w_{i},w_{i+1},......,w_{k}\}\). All frames \(V=\{f_{0},f_{1},......,f_{n-1}\}\) are fed into an Image Encoder followed by temporal attention modeling to produce temporal-aware frame representations \(V^{{}^{\prime}}=\{f^{{}^{\prime}}_{0},f^{{}^{\prime}}_{1},......,f^{{}^{ \prime}}_{n-1}\}\in\mathbf{R}^{n\times d}\). Meanwhile, we use a Text Encoder to obtain the representations of the question with its corresponding SRL arguments: \(Q^{{}^{\prime}}\in\mathbf{R}^{1\times d}\) and \(S^{{}^{\prime}}\in\mathbf{R}^{N\times d}\). We then perform multi-step reasoning in which we iteratively update the hidden state vector \(h\) with the visual information from frame representations based on the attention weights between them and the SRL arguments of the question. \(h\) is updated from the initial step \(h_{0}\) to the final step \(h_{T-1}\) where \(T\) is the total number of reasoning steps. Finally, we predict the most probable answer \(a\) based on \(h_{T-1}\).
### Multi-step Reasoning
Before the first reasoning step, we initialize:
\[h_{0}=Attn(Q^{{}^{\prime}},V^{{}^{\prime}},V^{{}^{\prime}}) \tag{1}\]
\[j=argmax(AttnWeights(Q^{{}^{\prime}},V^{{}^{\prime}},V^{{}^{\prime}})) \tag{2}\]
where \(Attn\) serves as the \(q,k,v\)_attention1_ modeling Vaswani et al. (2017) and \(j\) represents the
Figure 1: Overview of our approach for multi-step visual reasoning. In each reasoning step, the model predicts the reasoning direction (either _retrospective_ or _prospective_) and focuses on a specific SRL argument with high attention weights. A _coverage mechanism_ is employed to improve the coverage of SRL arguments in the question.
index of the frame with the highest attention weight. In each specific reasoning step \(t\), we first use \(h_{t-1}\) as the _attention query_ to obtain the relevant SRL argument: \(S_{t}^{{}^{\prime}}=Attn(h_{t-1},S^{{}^{\prime}},S^{{}^{\prime}})\). Subsequently, we infer the next focused frame by:
\[V^{focus}=Attn(r_{t},V^{{}^{\prime}},V^{{}^{\prime}}) \tag{3}\]
where \(r_{t}=g(h_{t-1},S_{t}^{{}^{\prime}})\). Finally, we update the hidden state vector \(h_{t-1}\) based on the currently focused frame (the frame with the largest attention weight):
\[h_{t}=\delta(h_{t-1},V^{focus}) \tag{4}\]
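A minimal sketch of one reasoning step is given below (PyTorch). Since the fusion function \(g\) and the update function \(\delta\) are not fully specified here, a linear fusion layer and a GRU cell are assumed purely for illustration.

```python
import torch
import torch.nn.functional as F

d = 512
g = torch.nn.Linear(2 * d, d)      # assumed fusion r_t = g(h_{t-1}, S'_t)
delta = torch.nn.GRUCell(d, d)     # assumed state update function delta

def attn(q, k, v, tau=0.2):
    # q,k,v attention with the temperature tau of Section 3.2
    w = F.softmax(q @ k.T / tau, dim=-1)
    return w @ v

def reasoning_step(h, S, V):
    S_t = attn(h, S, S)                        # focused SRL argument(s)
    r_t = g(torch.cat([h, S_t], dim=-1))       # fuse state and argument
    V_focus = attn(r_t, V, V)                  # Eq. 3
    return delta(V_focus, h)                   # Eq. 4: new hidden state h_t

h = torch.randn(1, d)    # hidden state h_{t-1}
S = torch.randn(6, d)    # N = 6 SRL argument representations
V = torch.randn(10, d)   # n = 10 temporal-aware frame representations
h = reasoning_step(h, S, V)
```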
### Retrospective-Prospective Reasoning
We propose a _Retrospective-Prospective Reasoning_ mechanism for Eq.3 in order to explicitly decide whether the model should move to future frames (_prospective reasoning_) or move back to previous frames (_retrospective reasoning_). We obtain the _retrospective frame_\(V^{retro}\) and _prospective frame_\(V^{prosp}\) by:
\[V^{retro}=\psi(g(h_{t-1},S_{t}^{{}^{\prime}}),V^{{}^{\prime}},RetroMask(j)) \tag{5}\]
\[V^{prosp}=\phi(g(h_{t-1},S_{t}^{{}^{\prime}}),V^{{}^{\prime}},ProspMask(j)) \tag{6}\]
where \(\psi\) and \(\phi\) are masked attention functions used to obtain the _retrospective_ and _prospective_ frames, and \(g(h_{t-1},S_{t}^{{}^{\prime}})\) and \(V^{{}^{\prime}}\) serve as the _query_ and the _key, value_ respectively. \(RetroMask(j)\) means all frames after \(j\) (\(f_{i>j}\)) will be masked, whereas \(ProspMask(j)\) means that all frames before \(j\) (\(f_{i<j}\)) will be masked. After obtaining \(V^{retro}\) and \(V^{prosp}\) we generate a probability:
\[p=\sigma(\lambda(V^{retro},V^{prosp})) \tag{7}\]
If \(p\) is larger than a pre-defined threshold \(\alpha\), we update \(h_{t}=\delta(h_{t-1},V^{retro})\); otherwise, we update \(h_{t}=\delta(h_{t-1},V^{prosp})\) as in Eq. 4. The index for the next-focused frame \(j\) is also updated accordingly. We present further details of our algorithm in the Appendix.
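The masked attention of Eqs. 5-7 can be sketched as follows (PyTorch, continuing the assumptions above). The form of \(\lambda\) as a linear layer over the concatenated frames and the threshold value \(\alpha=0.5\) are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

n, d = 10, 512
V = torch.randn(n, d)       # frame representations
r_t = torch.randn(1, d)     # query from g(h_{t-1}, S'_t)
j = 4                       # index of the currently focused frame

def masked_attn(q, V, mask, tau=0.2):
    scores = (q @ V.T) / tau
    scores = scores.masked_fill(mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ V

idx = torch.arange(n)
V_retro = masked_attn(r_t, V, idx > j)   # RetroMask(j): mask frames after j
V_prosp = masked_attn(r_t, V, idx < j)   # ProspMask(j): mask frames before j

lam = torch.nn.Linear(2 * d, 1)          # assumed form of lambda in Eq. 7
p = torch.sigmoid(lam(torch.cat([V_retro, V_prosp], dim=-1)))
alpha = 0.5                              # assumed threshold
direction = "retrospective" if p.item() > alpha else "prospective"
```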
### Coverage Mechanism
We additionally propose to employ a _coverage mechanism_Tu et al. (2016) to encourage the model to include as many SRL arguments as possible in the reasoning process. Specifically, we track the attention distribution \(C_{t}\in\mathbf{R}^{1\times N}\) of \(h_{t-1}\) on all SRL arguments \(S\)
\[C_{t}=C_{t-1}+\frac{AttnWeights([h_{t-1};C_{t-1}],S^{{}^{\prime}},S^{{}^{ \prime}})}{\chi} \tag{8}\]
where \(\chi\) represents the normalization factor.2 We obtain the weighted \(S_{t}^{{}^{\prime}}\) by \(S_{t}^{{}^{\prime}}=Attn([h_{t-1};C_{t-1}],S^{{}^{\prime}},S^{{}^{\prime}})\) where we concatenate \(C_{t-1}\) to \(h_{t-1}\) as an additional input to the _Attn_ function for the purpose of informing the model to assign more attention weights to previously less-focused SRL arguments, in order to improve the coverage for all SRL arguments.
Footnote 2: In this work, we use the number of SRL arguments of the corresponding question as the normalization factor.
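A sketch of the coverage update of Eq. 8 is shown below (PyTorch); the exact parameterisation of the attention over the concatenated \([h_{t-1};C_{t-1}]\) is an assumption here.

```python
import torch
import torch.nn.functional as F

d, N = 512, 6
score = torch.nn.Linear(d + N, N)   # assumed attention scorer over [h; C]

def coverage_step(h, C, S, chi=N):
    w = F.softmax(score(torch.cat([h, C], dim=-1)), dim=-1)
    C_next = C + w / chi            # Eq. 8, chi = number of SRL arguments
    S_t = w @ S                     # coverage-aware weighted SRL argument
    return S_t, C_next

h = torch.randn(1, d)               # hidden state h_{t-1}
C = torch.zeros(1, N)               # coverage starts at zero
S = torch.randn(N, d)               # SRL argument representations
S_t, C = coverage_step(h, C, S)
```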
### Training Objective
For the answer prediction, we encode all answer options \(A=\{a_{0},......,a_{M-1}\}\) separately and then select the one with the highest similarity with \(h_{T-1}\). We optimize our model parameters \(\theta\) using _Cross Entropy_ loss:
\[J(\theta)=-\sum_{i}\sum_{k}y_{i,k}\log\frac{e^{F(a_{k},h_{T-1})}}{\sum_{j=0}^{M-1}e^{F(a_{j},h_{T-1})}} \tag{9}\]
where \(F\) is the function measuring the similarity between an answer candidate and \(h_{T-1}\), and \(y_{i,k}\) represents the answer label for the \(i\)-th example: if the correct answer for the \(i\)-th example is the \(k\)-th answer, then \(y_{i,k}\) is 1; otherwise it is 0.
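For illustration, a minimal sketch of Eq. 9 for a single example (PyTorch), assuming cosine similarity for \(F\):

```python
import torch
import torch.nn.functional as F

h_T = torch.randn(1, 512)                 # final hidden state h_{T-1}
answers = torch.randn(4, 512)             # M = 4 encoded answer options
logits = F.cosine_similarity(h_T, answers, dim=-1).unsqueeze(0)
label = torch.tensor([2])                 # index k of the correct option
loss = F.cross_entropy(logits, label)     # Eq. 9 for one example
```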
## 3 Experiments
### Dataset
We employ a benchmark dataset for EVQA - TrafficQA Xu et al. (2021) which contains 62,535 QA pairs and 10,080 videos. We follow the standard split of TrafficQA - 56,460 pairs for training and 6,075 pairs for evaluation. We further sample 5,000 examples from training data as the dev set.
\begin{table}
\begin{tabular}{l c c} \hline \hline Models & Setting-1/4 & Setting-1/2 \\ \hline Q-type (random) Xu et al. (2021) & 25.00 & 50.00 \\ QE-LSTM Xu et al. (2021) & 25.21 & 50.45 \\ QA-LSTM Xu et al. (2021) & 26.65 & 51.02 \\ Avgpooling Xu et al. (2021) & 30.45 & 57.50 \\ CNN+LSTM Xu et al. (2021) & 30.78 & 57.64 \\ I3D+LSTM Xu et al. (2021) & 33.21 & 54.67 \\ VIS+LSTM Ren et al. (2015) & 29.91 & 54.25 \\ BERT-VQA Yang et al. (2020) & 33.68 & 63.50 \\ TVQA Lei et al. (2018) & 35.16 & 63.15 \\ HCRN Le et al. (2020) & 36.49 & 63.79 \\ Eclipse Xu et al. (2021) & 37.05 & 64.77 \\ ERM Zhang et al. (2022) & 37.11 & 65.14 \\ TMBC Luo et al. (2022) & 37.17 & 65.14 \\ CMCIR Liu et al. (2022) & 38.58 & N/A \\ Ours & **43.19** & **71.63** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Evaluation results on TrafficQA dataset.
### Experimental Setup
We use CLIP ViT-B/16 Radford et al. (2021) 3 to initialize our image encoder and text encoder. We evenly sample 10 frames from each video in the TrafficQA dataset. The SRL parser employed in the experiments is from AllenNLP Gardner et al. (2018); Shi and Lin (2019). We train our model over 10 epochs with a learning rate of \(1\times 10^{-6}\) and a batch size of 8. The optimizer is AdamW Loshchilov and Hutter (2019). We set the maximum reasoning step \(T\) to 3 and we use a temperature \(\tau\) of 0.2 in _Attention_ modeling. The hyper-parameters are empirically selected based on the performance on the dev set. There are two experimental settings for TrafficQA Xu et al. (2021): 1) Setting-1/2: this task is to predict whether an answer is correct for a given question based on videos; 2) Setting-1/4: this task follows the standard setup of a multiple-choice task in which the model is expected to predict the correct answer from the four candidate options.
Footnote 3: [https://openai.com/blog/clip/](https://openai.com/blog/clip/)
### Results
The experimental results on the test set of TrafficQA are shown in Table 1, where we also include the previous baseline models for EVQA.4 The results show that our proposed approach obtains accuracy of 43.19 under the multiple-choice setting, which surpasses previous state-of-the-art approaches including Eclipse Xu et al. (2021), ERM Zhang et al. (2022), TMBC Luo et al. (2022) and CMCIR Liu et al. (2022) by at least 4.5 points. Furthermore, our approach achieves an accuracy of 71.63 under Setting 1/2, outperforming previous strong baselines by at least 6 points. The results show the effectiveness of our proposed multi-step reasoning approach for event-level VideoQA.
Footnote 4: Some of the baseline results are taken from Xu et al. (2021).
**Ablation Study.** We conduct experiments on the dev set of TrafficQA, investigating the contribution of both the _retrospective-prospective reasoning_ and the _coverage mechanism_ to the performance of our proposed EVQA approach. The results are shown in Table 3, which reveals that multi-step reasoning is critical to model performance, while the _coverage mechanism_ provides additional, albeit less substantial, improvements.
**Results by Question Type.** We take a closer look at model performance on different question types, e.g. reverse reasoning, counterfactual reasoning, etc. The results are shown in Table 2. They reveal that our proposed approach outperforms previous state-of-the-art models on all individual question types by a large margin, with large improvements seen for _introspection_, _reverse_ and _counterfactual_ questions.
**Effect of Reasoning Steps.** We study the effect of varying the number of reasoning steps. The results are shown in Table 4. Increasing the number of reasoning steps improves performance, especially from 1 step to 3 steps. Additionally, the performance (under both Setting 1/4 and Setting 1/2) remains stable once the number of reasoning steps exceeds three.
## 4 Conclusion and Future Work
In this paper, we propose a multi-step dynamic retrospective-prospective approach for EVQA. Our approach employs a multi-step reasoning model
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{6}{c}{Question Type} \\ \cline{2-9} & Basic & Attribution & Introspection & Counterfactual & Forecasting & Reverse & All \\ \hline HCRN Le et al. (2020) & 34.17 & 50.29 & 33.40 & 40.73 & 44.58 & 50.09 & 36.26 \\ VQAC Kim et al. (2021) & 34.02 & 49.43 & 34.44 & 39.74 & 38.55 & 49.73 & 36.00 \\ MASNSeo et al. (2021) & 33.83 & 50.86 & 34.23 & 41.06 & 41.57 & 50.80 & 36.03 \\ DualVGR Wang et al. (2021) & 33.91 & 50.57 & 33.40 & 41.39 & 41.57 & 50.62 & 36.07 \\ CMCIR Liu et al. (2022) & 36.10 & 52.59 & 38.38 & 46.03 & 48.80 & 52.21 & 38.58 \\ Ours & **37.05** & **52.68** & **43.91** & **50.81** & **54.26** & **55.52** & **43.19** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results by _question type_ on the dev set of TrafficQA. The highest performances are in bold.
\begin{table}
\begin{tabular}{l c c} \hline \hline Models & Setting-1/4 & Setting-1/2 \\ \hline Model w/o MR and CM & 42.53 & 69.61 \\ Model w/o CM & 46.15 & 74.97 \\ Model & 47.38 & 75.83 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation study results on TrafficQA dev set, where _MR_ represents _Multi-step Reasoning_ and _CM_ represents _Coverage Mechanism_. MR and CM are coupled in our approach.
that explicitly learns reasoning based on the semantic connection between the SRL structure of a question and the corresponding video frames. We additionally propose a _coverage mechanism_ to improve the coverage of SRL arguments in the reasoning process. Experimental results show that the proposed approach obtains superior performance compared to that of state-of-the-art EVQA models.
## Limitations
This paper focuses on one variety of VideoQA, event-level VideoQA. We only incorporate _event_ information from the question (textual) side, as we believe that parsing video frames is inaccurate and could introduce unexpected errors; in future work, we should explore how to inject _event-level_ information from the visual side with more competitive visual parsing models. In addition, our experiments are conducted on only one dataset due to resource constraints; experiments on more datasets are needed to verify the effectiveness of our approach.
|
2310.05913 | Protected Fermionic Zero Modes in Periodic Gauge Fields | It is well-known that macroscopically-normalizable zero-energy wavefunctions
of spin-$\frac{1}{2}$ particles in a two-dimensional inhomogeneous magnetic
field are spin-polarized and exactly calculable with degeneracy equaling the
number of flux quanta linking the whole system. Extending this argument to
massless Dirac fermions subjected to magnetic fields that have \textit{zero}
net flux but are doubly periodic in real space, we show that there exist
\textit{only two} Bloch-normalizable zero-energy eigenstates, one for each spin
flavor. This result is immediately relevant to graphene multilayer systems
subjected to doubly-periodic strain fields, which at low energies, enter the
Hamiltonian as periodic pseudo-gauge vector potentials. Furthermore, we explore
various related settings including nonlinearly-dispersing band structure models
and systems with singly-periodic magnetic fields. | Vo Tien Phong, Eugene J. Mele | 2023-10-09T17:58:31Z | http://arxiv.org/abs/2310.05913v2 | # Protected Fermionic Zero Modes in Periodic Gauge Fields
###### Abstract
It is well-known that macroscopically-normalizable zero-energy wavefunctions of spin-\(\frac{1}{2}\) particles in a two-dimensional inhomogeneous magnetic field are spin-polarized and exactly calculable with degeneracy equaling the number of flux quanta linking the whole system. Extending this argument to massless Dirac fermions subjected to magnetic fields that have _zero_ net flux but are doubly periodic in real space, we show that there exist _only two_ Bloch-normalizable zero-energy eigenstates, one for each spin flavor. This result is immediately relevant to graphene multilayer systems subjected to doubly-periodic strain fields, which at low energies, enter the Hamiltonian as periodic pseudo-gauge vector potentials. Furthermore, we explore various related settings including nonlinearly-dispersing band structure models and systems with singly-periodic magnetic fields.
## I Introduction
The motion of a charged particle in a uniform magnetic field is one of the simplest well-studied elementary problems at both the classical and quantum levels [1; 2]. For a spatially-varying field, the problem is more challenging even in its classical treatment and arises in a variety of physical contexts, ranging from strategies for trapping of ultracold atoms [3; 4; 5] to the shapes of orbits for charged particles circulating around magnetic field lines in plasmas [6; 7]. Surprisingly, the two-dimensional motion of a charge in an inhomogeneous magnetic field remains an analytically-accessible problem in the extreme quantum limit owing to a separation of two frequency (energy) scales. In a strong magnetic field, the kinetic energy is effectively quenched by the cyclotron motion and the guiding center of the cyclotron orbit can drift slowly in the presence of additional potentials. Aharonov and Casher famously found that an exact cancellation of the zero-point energy in the lowest cyclotron orbit and the Zeeman splitting with gyromagnetic factor \(g=2\) produces a spin-polarized zero-energy state that is macroscopically degenerate [8; 9]. This is the exact analog of the lowest Landau level if the field were made spatially uniform. Even with spatial variation, the degeneracy is determined only by the total magnetic flux linking the macroscopic system and not on the spatial distribution of the field [8; 9; 10].
A variant of this problem arises in two-dimensional materials that are periodically patterned laterally [11; 12; 13]. For example, in the linearly-dispersing bands for electrons in a single layer of graphene, a periodic lattice strain is a momentum boost encoded as an effective pseudo-vector potential [14]. This is completely analogous to the electromagnetic vector potential except for its sign change in two time-reversed valleys. If the strain pattern is made periodic, the total pseudo-flux that links the unit cell is separately zero for each valley [15; 16; 17; 18; 19; 20; 21; 22]. A naive application of the Aharonov-Casher theorem would therefore exclude the possibility of zero modes since the total flux is zero and there is no analog to the Zeeman spin polarization energy with \(g=2\).
In this article, we exploit the fact that neither of these conditions is necessary when the pseudo-gauge field varies in space but is made periodic on a superlattice. The wavefunctions in the periodic problem need only be Bloch normalizable, with support on a finite real-space supercell instead of a macroscopic two-dimensional domain. This weakens the normalizability condition and enables the existence of exactly two zero-energy modes per valley. The constraint of restricting these modes to a single pseudospin polarization is thereby also eliminated: the analytic structure of these zero modes protects one member of each pseudospin (sublattice polarization) in each valley. These zero-energy states are part of dispersive low-energy bands whose bandwidths can be estimated from the velocity at these zero-energy crossings. The velocity depends on the strength of the periodic pseudo-field and can be significantly smaller than the backfolding energy scale produced by the periodic superlattice. For a general period and a general field strength, one finds a manifold of spectrally-isolated low-energy bands that possess a nontrivial quantum geometry [18; 19; 20; 23; 24]. These zero-energy eigenfunctions have recently been studied in Dirac systems subjected to real periodic magnetic fields [25; 26] and strain-induced pseudo-magnetic fields [20]. In the following, we develop these ideas further for the linearly dispersing Dirac model. We then show that these results generalize to other nonlinearly-dispersing long-wavelength band structure models.
## II Aharonov-Casher Argument
For completeness, we begin with a brief summary of Aharonov and Casher's construction [8; 9; 27; 28]. We consider a two-dimensional Dirac Hamiltonian in the presence of a spatially-dependent (not yet assumed to be periodic) magnetic field \(\mathbf{B}(\mathbf{r})=B(\mathbf{r})\hat{e}_{z}=\left[\partial_{x}A_{y}( \mathbf{r})-\partial_{y}A_{x}(\mathbf{r})\right]\hat{e}_{z}\) of the
form
\[\mathcal{H}_{1}=\hbar v_{F}\left(-i\nabla_{\mathbf{r}}+\frac{e}{\hbar}\mathbf{A}( \mathbf{r})\right)\cdot\boldsymbol{\sigma}, \tag{1}\]
where \(v_{F}\) is the Dirac velocity and the Pauli matrices \(\boldsymbol{\sigma}\) act on generalized spin space [29]. Though not strictly necessary, for simple analytic control, we assume that \(B(\mathbf{r})\) has compact support so that the total flux \(\Phi=\int_{\mathbb{R}^{2}}B(\mathbf{r})d^{2}\mathbf{r}\) is finite: \(\left|\Phi\right|/\Phi_{0}=N\), where \(\Phi_{0}=h/e\) is the flux quantum and \(N\in\mathbb{Z}_{>0}\) is a positive integer. Assuming Lorenz gauge \(\nabla\cdot\mathbf{A}=0\), we can choose a scalar potential \(\phi(\mathbf{r})\) such that \(\partial_{x}\phi(\mathbf{r})=A_{y}(\mathbf{r})\) and \(\partial_{y}\phi(\mathbf{r})=-A_{x}(\mathbf{r})\). This scalar potential satisfies Poisson's equation sourced by the magnetic field, \(\Delta\phi(\mathbf{r})=B(\mathbf{r})\), with formal solution
\[\phi(\mathbf{r})=\frac{1}{2\pi}\int_{\mathbb{R}^{2}}B(\mathbf{r}^{\prime}) \ln|\mathbf{r}-\mathbf{r}^{\prime}|d^{2}\mathbf{r}^{\prime}. \tag{2}\]
By writing the zero-energy eigenstates for Hamiltonian (1) as
\[\psi(\mathbf{r})=\begin{pmatrix}e^{+e\phi(\mathbf{r})/\hbar}f_{+}(\mathbf{r} )\\ e^{-e\phi(\mathbf{r})/\hbar}f_{-}(\mathbf{r})\end{pmatrix}, \tag{3}\]
we find that \(\left(\partial_{x}\pm i\partial_{y}\right)f_{\pm}\left(\mathbf{r}\right)=0\). This implies that \(f_{\pm}\) are both entire functions [30]. The forms of these functions are constrained by the normalizability of the wavefunction. To ascertain these constraints, we observe that as \(\left|\mathbf{r}\right|\rightarrow\infty\), the scalar potential tends to \(\phi(\mathbf{r})\rightarrow\Phi\ln|\mathbf{r}|\left/2\pi=\ln|\mathbf{r}|^{ \Phi/2\pi}\right.\). The exponentials in Eq. (3) have asymptotic behaviors \(e^{\pm e\phi(\mathbf{r})/\hbar}\rightarrow|\mathbf{r}|^{\pm\Phi/\Phi_{0}}\). Since entire functions do not decay globally, we only admit \(e^{-\eta e\phi(\mathbf{r})/\hbar}\) to ensure normalizability, where \(\eta=\text{sign}\left(\Phi\right)\). Finally, we require \(\lim_{|\mathbf{r}|\rightarrow\infty}|\mathbf{r}||f_{-\eta}(\mathbf{r})|e^{- \eta e\phi(\mathbf{r})/\hbar}=0\), which implies that \(f_{+}(z=x+iy)\) or \(f_{-}(\bar{z}=x-iy)\) is a polynomial of degree at most \(N-1\). Thus, the \(N\) independent solutions are
\[\psi_{n}(\mathbf{r})=\begin{pmatrix}\Theta\left[-\eta\right]\\ \Theta\left[+\eta\right]\end{pmatrix}e^{-\eta e\phi(\mathbf{r})/\hbar}(x-i \eta y)^{n}, \tag{4}\]
for \(n=0,1,2,...,N-1\). Here, \(\Theta\left[\eta\right]\) is the Heaviside theta function. In brief, Aharonov and Casher showed that electrons in a magnetic field with total flux \(\left|\Phi\right|=N\Phi_{0}\) have \(N\) zero-energy eigenstates that are spin polarized. In particular, the wavefunctions can be written as products of analytic functions of \(z\) or \(\bar{z}\) and exponentials of the scalar function \(\phi(\mathbf{r})\). This analysis is quite general since it does not assume any particular form of the magnetic field except that it is compactly supported. So the degeneracy of the zero modes is _not_ mandated by a spatial symmetry. However, chiral symmetry \(\sigma_{z}\mathcal{H}\sigma_{z}=-\mathcal{H}\) is crucial to the existence of these modes as mass terms \(m\sigma_{z}\) would lift them away from zero energy.
## III Zero modes in a doubly-periodic magnetic field with no net magnetic flux
The above analysis suggests that a magnetic field with zero flux cannot induce any zero mode. This is only true in the space of normalizable wavefunctions in the entire plane for which the preceding analysis applies [9]. However, in certain situations, the relevant domain for the wavefunctions is not the entire plane. The spectrum of Dirac and Pauli operators in the presence of a magnetic field have been studied in various different domains and with fields of different regularities [31]. Of particular interest to us are systems with a magnetic field that is periodic in two independent directions with primitive lattice vectors \(\mathbf{L}_{1}\) and \(\mathbf{L}_{2}:B\left(\mathbf{r}+n_{1}\mathbf{L}_{1}+n_{2}\mathbf{L}_{2} \right)=B\left(\mathbf{r}\right),\) where \(n_{1},n_{2}\) are integers. We can write the magnetic field as a Fourier series
\[B(\mathbf{r})=\sum_{\mathbf{G}}\tilde{B}_{\mathbf{G}}e^{i\mathbf{G}\cdot \mathbf{r}}. \tag{5}\]
For physically-relevant fields, it is often enough to approximate the magnetic field with a finite number of Fourier harmonics. Therefore, we can assume that the magnetic field is _defined_ by its _finite_ Fourier series [32]. This restriction can be relaxed considerably, but such a generalization is of secondary importance for us here. We focus on the case of zero magnetic flux, \(\tilde{B}_{\mathbf{0}}=0\). Because of that, the vector potential can be written explicitly as
\[\begin{split} A_{x}(\mathbf{r})&=\sum_{\mathbf{G} \neq\mathbf{0}}\frac{iG_{y}}{|\mathbf{G}|^{2}}\tilde{B}_{\mathbf{G}}e^{i \mathbf{G}\cdot\mathbf{r}},\\ A_{y}(\mathbf{r})&=\sum_{\mathbf{G}\neq\mathbf{0}}- \frac{iG_{x}}{|\mathbf{G}|^{2}}\tilde{B}_{\mathbf{G}}e^{i\mathbf{G}\cdot \mathbf{r}}.\end{split} \tag{6}\]
Thus, we conclude that \(\mathbf{A}(\mathbf{r})\) has the same periodicity as \(B(\mathbf{r})\). It is worth emphasizing that this fact follows from the vanishing magnetic flux. If \(\tilde{B}_{\mathbf{0}}\) were not zero, there would have been a non-periodic component to the vector potential, such as \(A_{y}(\mathbf{r})=\tilde{B}_{\mathbf{0}}x+...\) This is exactly like a translationally-invariant constant magnetic field having magnetic vector potentials that are _not_ translationally-invariant. With \(\mathbf{A}(\mathbf{r})\) proven to be periodic when \(\mathbf{B}(\mathbf{r})\) carries no flux, it is clear that the Hamiltonian (1) is spatially periodic. It thus follows from Bloch's theorem that the eigenfunctions must have periodic norm. The appropriate solution space now becomes that of normalizable wavefunctions on a compact torus, not the entire plane.
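As a concrete check of Eq. (6), the short sketch below (Python/NumPy) builds \(\mathbf{A}(\mathbf{r})\) from a single pair of harmonics \(\pm\mathbf{G}\) (the period and Fourier amplitude are assumptions for illustration) and verifies that \(\partial_{x}A_{y}-\partial_{y}A_{x}\) reproduces the periodic, zero-flux \(B(\mathbf{r})\):

```python
import numpy as np

L = 1.0                                    # lattice period (assumption)
G = (2 * np.pi / L) * np.array([1.0, 0.0])
B_G = 0.5                                  # Fourier coefficient (assumption)

x = y = np.linspace(0, L, 128, endpoint=False)
X, Y = np.meshgrid(x, y, indexing="ij")
phase = np.exp(1j * (G[0] * X + G[1] * Y))
G2 = G @ G

# Eq. (6); the -G harmonic (conjugate coefficient) keeps the field real
Ax = 2 * np.real(+1j * G[1] / G2 * B_G * phase)
Ay = 2 * np.real(-1j * G[0] / G2 * B_G * phase)

# curl of A should match B(r) = 2 Re[B_G exp(i G.r)], which has zero mean
B_rec = np.gradient(Ay, x, axis=0) - np.gradient(Ax, y, axis=1)
print(np.abs(B_rec - 2 * np.real(B_G * phase)).max())  # small finite-difference error
```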
We now adapt the argument of Ref. [8] to the restricted setting of a torus. Still writing the zero-energy eigenstates as in Eq. (3), where \(\phi(\mathbf{r})=-\sum_{\mathbf{G}\neq\mathbf{0}}\tilde{B}_{\mathbf{G}}e^{i \mathbf{G}\cdot\mathbf{r}}/|\mathbf{G}|^{2}\) is also a continuous periodic function, we still find that \(f_{+}(z)\) and \(f_{-}(\bar{z})\) are analytic functions of \(z\) and \(\bar{z}\) respectively. Now, by requiring that the wavefunction be continuous and have periodic norm, it follows that the wavefunction components \(e^{\pm e\phi(\mathbf{r})/\hbar}f_{\pm}(\mathbf{r})\) must be globally bounded as well. Since the exponential factors are globally bounded as \(\phi(\mathbf{r})\) is periodic, it must be the case that \(|f_{\pm}(z)|\) is globally bounded. By Liouville's theorem that globally-bounded, entire functions are constants, we find that \(f_{\pm}\) must be spatially uniform. Thus, there are only two independent zero-mode solutions with periodic norm, henceforth called Bloch zero modes:
\[\psi_{+}(\mathbf{r})=\frac{1}{A_{+}}\begin{pmatrix}e^{+e\phi(\mathbf{r})/\hbar}\\ 0\end{pmatrix}\text{ and }\psi_{-}(\mathbf{r})=\frac{1}{A_{-}}\begin{pmatrix}0\\ e^{-e\phi(\mathbf{r})/\hbar}\end{pmatrix}, \tag{7}\]
where \(A_{\pm}\) are normalization constants given by
\[A_{\pm}=\left[\int_{\Omega}e^{\pm 2e\phi(\mathbf{r})/\hbar}d^{2}\mathbf{r}\right]^{ \frac{1}{2}}, \tag{8}\]
and \(\Omega\) is the unit cell. These same solutions were also studied in Ref. [20]. Eq. (7) shows that even when the net magnetic flux is zero, there are still two zero modes, but these modes are only normalizable within a unit cell. Furthermore, on a torus, we have zero modes of both spin flavors, contrary to the original formulation where the zero modes are spin polarized. However, these two Bloch zero modes feature _spatial spin isolation_ because \(\psi_{+}\) is enhanced precisely where \(\psi_{-}\) is suppressed due to the different signs in the exponential. Examples are shown in Fig. 1.
These Bloch zero modes can be interpreted in the context of a band structure. In the absence of a magnetic field, the energies of Hamiltonian (1) form two linear branches: \(\mathcal{E}_{\pm}=\pm\hbar v_{F}|\mathbf{k}|\), where \(\mathbf{k}\) is the wavevector. These branches cross exactly at \(\mathbf{k}=\mathbf{0}\). In the presence of a periodic magnetic field, the spectrum consists of bands defined within a Brillouin zone. Generically, one would expect the degeneracy point at \(\mathcal{E}=0\) to be gapped out by a general periodic field without any symmetry. Our analysis proves the contrary, that the degeneracy point remains intact no matter the form of the magnetic field. Therefore, the bands near \(\mathcal{E}=0\) must at minimum form a doublet set. The Dirac velocity is, however, renormalized by the magnetic field. Using first-order perturbation theory, the renormalized velocity \(v_{\text{renorm}}\) is given by
\[\frac{v_{\text{renorm}}}{v_{F}}=\frac{|\Omega|}{A_{+}A_{-}}. \tag{9}\]
By the Cauchy-Schwarz inequality, \(|\Omega|\leq A_{+}A_{-}\). So the velocity is always renormalized downward as expected. In order for the velocity to vanish, \(A_{\pm}\rightarrow\infty\). However, as long as \(e^{\pm e\phi(\mathbf{r})/\hbar}\) is integrable, which we assume, this condition is never exactly satisfied. So, for physical magnetic fields, the bands can be made very narrow, but never exactly flat, at least to first order in perturbation theory.
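Eqs. (8) and (9) are straightforward to evaluate numerically. The sketch below (Python/NumPy; units with \(e/\hbar=1\), and a single-harmonic \(\phi\) with assumed amplitude) computes \(A_{\pm}\) by quadrature over one unit cell and confirms \(v_{\text{renorm}}/v_{F}\leq 1\):

```python
import numpy as np

L, n = 1.0, 256
x = y = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, y, indexing="ij")
dA = (L / n) ** 2

# single-harmonic scalar potential; the amplitude 0.3 is an assumption
phi = 0.3 * np.cos(2 * np.pi * X / L) * np.cos(2 * np.pi * Y / L)

A_plus = np.sqrt(np.sum(np.exp(+2 * phi)) * dA)    # Eq. (8) with +
A_minus = np.sqrt(np.sum(np.exp(-2 * phi)) * dA)   # Eq. (8) with -
print(L**2 / (A_plus * A_minus))                   # Eq. (9): always <= 1
```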
It is straightforward to show the existence of Bloch zero modes in a variety of other settings. To start, let us consider a \(4\times 4\) Hamiltonian inspired by Bernal bilayer graphene of the form
\[\mathcal{H}_{2}=\hbar v_{F}\left(\begin{matrix}0&\Pi_{-}&0&0\\ \Pi_{+}&0&\gamma_{1}/\hbar v_{F}&0\\ 0&\gamma_{1}/\hbar v_{F}&0&\Pi_{-}\\ 0&0&\Pi_{+}&0\end{matrix}\right), \tag{10}\]
where \(\Pi_{\pm}=\left(-i\partial_{x}+\frac{e}{\hbar}A_{x}\right)\pm i\left(-i \partial_{y}+\frac{e}{\hbar}A_{y}\right)\) and \(\gamma_{1}\) is a constant. Again, we assume that \(B(\mathbf{r})\) is periodic and \(\phi(\mathbf{r})\) is a scalar potential defined as before. Then, we write the zero-energy eigenstates for Hamiltonian (10) as
\[\psi(\mathbf{r})=\begin{pmatrix}e^{+e\phi(\mathbf{r})/\hbar}f_{1,+}(\mathbf{r} )\\ e^{-e\phi(\mathbf{r})/\hbar}f_{1,-}(\mathbf{r})\\ e^{+e\phi(\mathbf{r})/\hbar}f_{2,+}(\mathbf{r})\\ e^{-e\phi(\mathbf{r})/\hbar}f_{2,-}(\mathbf{r})\end{pmatrix}. \tag{11}\]
We find the following conditions: \(\partial_{z}f_{1,-}=0\) and \(\partial_{\bar{z}}f_{2,+}=0\), which imply that \(f_{1,-}\) and \(f_{2,+}\) must be constants. If \(f_{1,-}\neq 0\) and \(f_{2,+}\neq 0\), then the remaining two functions satisfy, for complex constants \(c_{1}=\frac{\gamma_{1}}{2\hbar v_{F}}f_{2,+}\) and \(c_{2}=\frac{\gamma_{1}}{2\hbar v_{F}}f_{1,-}\), \(\partial_{\bar{z}}f_{1,+}=c_{1}\) and \(\partial_{z}f_{2,-}=c_{2}\). This implies that there are functions \(\mathcal{F}_{1,+}=f_{1,+}-c_{1}\bar{z}\) and \(\mathcal{F}_{2,-}=f_{2,-}-c_{2}z\) satisfying \(\partial_{\bar{z}}\mathcal{F}_{1,+}=0\) and \(\partial_{z}\mathcal{F}_{2,-}=0\). So, \(\mathcal{F}_{1,+}\) and \(\mathcal{F}_{2,-}\) are holomorphic with respect to \(z\) and \(\bar{z}\) respectively. Consequently, the original functions can be written as
\[f_{1,+}=c_{1}\bar{z}+\mathcal{F}_{1,+},\quad f_{2,-}=c_{2}z+\mathcal{F}_{2,-}. \tag{12}\]
Now imposing a global bound \(B_{1}\), we observe via the reverse triangle inequality that for \(f_{1,+}\)
\[|\mathcal{F}_{1,+}|-|c_{1}\bar{z}|\leq|c_{1}\bar{z}+\mathcal{F}_{1,+}|<B_{1}, \tag{13}\]
which implies that \(\mathcal{F}_{1,+}\) has at most linear growth, \(|\mathcal{F}_{1,+}|<B_{1}+|c_{1}||z|\). Now, because \(\mathcal{F}_{1,+}\) is entire, the generalized Liouville's theorem states that we can write \(\mathcal{F}_{1,+}=a_{0}+a_{1}z\). A similar reasoning applies to \(\mathcal{F}_{2,-}\). In the end, we can in general write
\[f_{1,+}=a_{0}+a_{1}z+c_{1}\bar{z},\quad f_{2,-}=b_{0}+b_{1}\bar{z}+c_{2}z. \tag{14}\]
Finally, imposing periodicity on \(|f_{1,+}|\) and \(|f_{2,-}|\) eliminates \(a_{1},c_{1},b_{1},c_{2}\). Thus, we arrive at the conclusion that there are
only two Bloch zero modes, which can be written explicitly as
\[\psi_{+}(\mathbf{r})=\frac{1}{A_{+}}\begin{pmatrix}e^{+e\phi(\mathbf{r})/\hbar} \\ 0\\ 0\\ 0\end{pmatrix}\text{ and }\psi_{-}(\mathbf{r})=\frac{1}{A_{-}}\begin{pmatrix}0\\ 0\\ 0\\ e^{-e\phi(\mathbf{r})/\hbar}\end{pmatrix}. \tag{15}\]
It is worth pointing out the formal similarity between Eq. (15) and Eq. (7). The only difference between the two is the number of internal degrees of freedom. The band structure of Bernal bilayer graphene under a periodic pseudo-magnetic field was studied in Ref. [23], wherein these same zero-modes were found.
The argument above can be extended to show that for chirally-stacked multilayers with any number of layers, there are also two zero modes given explicitly by formulas similar to Eq. (15). The argument is simple but tedious; essentially, it is just a recursion of the steps done for the bilayer. So we leave it for Appendix A. Here, we present an alternative, much quicker, method to obtain the same result. In the absence of a magnetic field, it is well-known that the low-energy spectrum of chirally-stacked multilayer graphene is a polynomial two-band crossing of the form \(\mathcal{E}_{\pm}\propto|\mathbf{k}|^{\ell},\) where \(\ell\) is the number of layers [33]. The appropriate Hamiltonian describing only these two bands is also of chiral form
\[\mathcal{H}_{\text{eff},\ell}\propto\begin{pmatrix}0&\Pi_{-}^{\ell}\\ \Pi_{+}^{\ell}&0\end{pmatrix}. \tag{16}\]
There are two Bloch zero modes of this Hamiltonian (16), given by Eq. (7). This follows immediately by noting the properties \(\Pi_{-}e^{-e\phi(\mathbf{r})/\hbar}f_{-}=-2ie^{-e\phi(\mathbf{r})/\hbar}\partial_{z}f_{-}\) and \(\Pi_{+}e^{+e\phi(\mathbf{r})/\hbar}f_{+}=-2ie^{+e\phi(\mathbf{r})/\hbar}\partial_{\bar{z}}f_{+}.\) So the exponentials can be pulled past the derivatives, which then annihilate the remaining constant, nulling the whole function as desired. Therefore, the two Bloch zero modes in Eq. (7) satisfy band degeneracy of any order, not just linear band crossings. As presented, this method does _not_ exclude the possibility that there may be more than two zero modes, but with some more work, one can probably eliminate that possibility as well.
It is worth mentioning briefly that the aforementioned considerations immediately imply that odd-layer \(ABA\) multilayer graphene in a periodic gauge field with zero flux also possesses zero modes. This is because odd-layer \(ABA\) multilayer graphene, due to a layer-exchange symmetry, can be decomposed into direct sum of chiral sectors which host zero modes [34]. As an example, we show this for \(ABA\) trilayer graphene. Then, the generalization to any number of layers should be straightforward. The Hamiltonian for an \(ABA\) trilayer is
\[\mathcal{H}_{3}=\hbar v_{F}\begin{pmatrix}0&\Pi_{-}&0&0&0&0\\ \Pi_{+}&0&\gamma_{1}/\hbar v_{F}&0&0&0\\ 0&\gamma_{1}/\hbar v_{F}&0&\Pi_{-}&0&\gamma_{1}/\hbar v_{F}\\ 0&0&\Pi_{+}&0&0&0\\ 0&0&0&0&0&\Pi_{-}\\ 0&0&\gamma_{1}/\hbar v_{F}&0&\Pi_{+}&0\end{pmatrix}. \tag{17}\]
Upon a unitary transformation, this Hamiltonian can be brought into a direct sum of a monolayer and a bilayer
\[\tilde{\mathcal{H}}_{3}=\hbar v_{F}\begin{pmatrix}0&\Pi_{-}&0&0&0&0\\ \Pi_{+}&0&0&0&0&0\\ 0&0&0&\Pi_{-}&0&0\\ 0&0&\Pi_{+}&0&\sqrt{2}\gamma_{1}/\hbar v_{F}&0\\ 0&0&0&\sqrt{2}\gamma_{1}/\hbar v_{F}&0&\Pi_{-}\\ 0&0&0&0&\Pi_{+}&0\end{pmatrix}. \tag{18}\]
So the zero modes analyzed before remain valid in this situation. In this particular example, the count of zero modes is four: two for the monolayer sector and two for the bilayer sector.
## IV Zero modes in a singly-periodic magnetic field
For a final generalization, we consider a magnetic field that is periodic along one direction only. A different, but closely related, problem was studied in Ref. [9] using doubly-periodic Weierstrass sigma functions. Here, we do _not_ assume that the magnetic field is also periodic in the second direction. Without loss of generality, let \(B(x,y)=B(x+n,y),\) where \(n\) is an integer, repeat in the \(x\)-direction but be a general function in the \(y\)-direction. Other cases can be similarly obtained via a rotation and scaling of coordinates. The dimensions of \(x,y\) are suppressed. We do not assume that this magnetic field has vanishing flux. We write the scalar potential as [35]
\[\begin{split}\phi(x,y)&=\int_{\mathbb{R}}dy^{\prime}\int_{- \frac{1}{2}}^{\frac{1}{2}}dx^{\prime}B(x^{\prime},y^{\prime})G(x-x^{\prime},y- y^{\prime}),\\ G(x,y)&=\frac{1}{4\pi}\ln\left[\cosh\left(2\pi y\right)-\cos \left(2\pi x\right)\right].\end{split} \tag{19}\]
A brief derivation of the Green's function is presented in Appendix B. It is straightforward to check that \(\phi(x,y)=\phi(x+n,y)\) as desired. It immediately follows that the corresponding \(\mathbf{A}(\mathbf{r})\) is also periodic in the \(x\)-direction. As a consequence, the Hamiltonian is defined on a cylinder. In the limit \(|y|\to\infty\), \(\phi(\mathbf{r})\to\Phi\ln\left[\cosh\left(2\pi|y|\right)\right]/4\pi\), where \(\Phi=\int_{\mathbb{R}}dy\int_{-\frac{1}{2}}^{\frac{1}{2}}dxB(x,y)\). So the exponential factors tend to the following limits: \(e^{\pm e\phi/\hbar}\to\left[\cosh\left(2\pi|y|\right)\right]^{\pm\Phi/2\Phi_{0}}\). For large \(|y|\), \(\cosh(2\pi|y|)\to e^{2\pi|y|}/2\). So, we have \(e^{\pm e\phi/\hbar}\to e^{\pm\pi|y|\Phi/\Phi_{0}}\). Because we insist that the \(f_{\pm}\) functions have periodic norm in the \(x\)-direction, they must grow (unless they are constants) in the \(y\)-direction. Therefore, we must again choose \(e^{-\eta e\phi(\mathbf{r})/\hbar}\). We initially consider \(f_{\pm}\) functions with period \(1\) in the \(x\)-direction: \(f_{+}(z)=e^{2\pi miz}\) and \(f_{-}(\bar{z})=e^{-2\pi mi\bar{z}}\), where \(m\) are integers. We need to determine restrictions on \(m\) to ensure normalizability. If \(\Phi>0\), then we have \(e^{-e\phi(\mathbf{r})/\hbar}f_{-}(\bar{z})\to e^{-2\pi mix-(2\pi my+\pi|y|\Phi/\Phi_{0})}\to 0\) for both positive and negative large \(y\) if \(|m|<\Phi/2\Phi_{0}\). If \(\Phi<0\), then we have \(e^{+e\phi(\mathbf{r})/\hbar}f_{+}(z)\to e^{+2\pi mix-(2\pi my-\pi|y|\Phi/\Phi_{0})}\to 0\) for both positive and negative large \(y\) if \(|m|<-\Phi/2\Phi_{0}\). If we lift the requirement that \(f_{\pm}\) has period \(1\) in \(x\) but instead has integer period \(M>1\), then everything stays the same in the
above analysis except for the replacement \(m\to m/M.\) This extension in the period is allowed by Bloch's theorem because Bloch eigenstates do _not_ need to be periodic, only their norms need to be. We can write this new \(m\) as \(m=qM+p,\) where \(q\in\mathbb{Z}\) and \(p\in\left[0,M-1\right].\) Using this, we can write eigenstates explicitly in Bloch form \(\psi_{k_{x},q}(\mathbf{r})=e^{-i\eta k_{x}x}u_{k_{x},q}(\mathbf{r}),\) where
\[u_{k_{x},q}(\mathbf{r})=\begin{pmatrix}\Theta\left[-\eta\right]\\ \Theta\left[+\eta\right]\end{pmatrix}e^{-\eta e\phi(\mathbf{r})/\hbar}e^{-2 \pi q\eta ix-(2\pi q+k_{x})y}, \tag{20}\]
where \(k_{x}=2\pi p/M\in\left[0,2\pi\right)\) and \(u_{k_{x},q}(x,y)=u_{k_{x},q}(x+n,y)\). In the limit \(M\rightarrow\infty\), \(k_{x}\) becomes a continuous variable. The indices are still subject to the constraint \(|q+k_{x}/2\pi|<|\Phi|/2\Phi_{0}\).
As an example, we take \(B(x,y)=B_{0}+B_{1}\cos\left(2\pi x/L\right),\) where \(L\) is the period [36]. Strictly speaking, the preceding analysis does not apply to this magnetic profile because it is not compactly supported and the magnetic flux can be infinite. However, as we will show, Eq. (20) still produces the correct zero-energy solutions. The corresponding scalar potential is
\[\phi(x,y)=\frac{B_{0}y^{2}}{2}-\frac{B_{1}L^{2}}{4\pi^{2}}\cos\left(\frac{2\pi }{L}x\right). \tag{21}\]
Assuming \(B_{0}>0,\) the eigenstates are
\[\psi_{k_{x}}(\mathbf{r})=\frac{e^{-\frac{eB_{0}y^{2}}{2\hbar}+\frac{eB_{1}L^{ 2}}{4\pi^{2}\hbar}\cos\left(\frac{2\pi}{L}x\right)-ik_{x}x-k_{x}y}}{A_{k_{x}}} \begin{pmatrix}0\\ 1\end{pmatrix}, \tag{22}\]
where the normalization is given explicitly as
\[A_{k_{x}}=\left(\frac{\pi L^{2}\hbar}{B_{0}e}\right)^{\frac{1}{4}}\exp\left( \frac{k_{x}^{2}\hbar}{2eB_{0}}\right)\left(I_{0}\left[\frac{eB_{1}L^{2}}{2 \pi^{2}\hbar}\right]\right)^{\frac{1}{2}}, \tag{23}\]
where \(I_{0}(x)\) is the modified Bessel function. From the normalization factor, we see that the eigenstates exist for all real \(k_{x};\) this is because the Gaussian factor \(e^{-y^{2}}\) decays much faster than the remaining factors. If \(B_{1}=0,\) we recover exactly the lowest Landau level. This example illustrates a very general procedure. If the magnetic field has a constant component, then one can choose an axis along which the magnetic vector potential is non-periodic. Then, the remaining orthogonal direction can be periodic or not. If it is, then the final generalization just discussed can be applied, as was done in Ref. [25]. If it is not, then we are back to the original Aharonov-Casher setup.
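The closed form in Eq. (23) can be verified by direct quadrature of \(|\psi_{k_{x}}|^{2}\) over one period in \(x\) and all \(y\); a minimal sketch (Python with NumPy/SciPy; units with \(e=\hbar=1\), and the parameter values are assumptions):

```python
import numpy as np
from scipy.special import i0
from scipy.integrate import dblquad

B0, B1, L, kx = 1.0, 0.7, 2.0, 0.3    # parameter values are assumptions

def density(y, x):
    # |psi_kx|^2 up to normalization, from Eq. (22) with e = hbar = 1
    phi = B0 * y**2 / 2 - B1 * L**2 / (4 * np.pi**2) * np.cos(2 * np.pi * x / L)
    return np.exp(-2 * phi - 2 * kx * y)

# x over one period, y over a range wide enough for the Gaussian decay
num, _ = dblquad(density, 0, L, lambda x: -10, lambda x: 10)
closed = np.sqrt(np.pi * L**2 / B0) * np.exp(kx**2 / B0) \
         * i0(B1 * L**2 / (2 * np.pi**2))  # A_{k_x}^2 from Eq. (23)
print(np.sqrt(num), np.sqrt(closed))       # both give A_{k_x}
```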
## V Discussion and conclusion
In closing, it is worth emphasizing again that our arguments apply only to fermions described by the Dirac equation (or in the multilayer graphene case, described by the \(2\ell\times 2\ell\) Hamiltonians, where \(\ell\) is the number of layers). The complementary problem concerning the spectrum of Schrodinger fermions in a periodic magnetic field governed by the Hamiltonian \(\mathcal{H}\propto(\mathbf{p}+e\mathbf{A})^{2}\) has a very different structure and is considerably more complicated [37; 38; 39; 40; 41; 42; 43; 44]. Whereas Schrodinger fermions are historically relevant to physics in two-dimensional electron gases, Dirac fermions are prominent in modern two-dimensional materials [45; 46; 47]. Graphene is probably the most well-known member of this family. In addition to a real magnetic field, the same physics can be obtained in graphene by subjecting it to a strain field since such a field behaves effectively as a pseudo-magnetic vector potential necessarily with zero flux. Therefore, our analysis is especially relevant to graphene and its multilayer cousins. Beyond graphene, Dirac fermions can also be found at boundaries of topological insulators [48; 25]. Though these boundary spectra generically disperse linearly, they can have nonlinear dispersions as well, as in topological crystalline insulators [49]. In these cases, subjecting the surfaces with topologically-nontrivial boundary states to a patterned periodic magnetic field should induce the indicated manifolds of zero modes that are localized on the boundaries.
We acknowledge funding from the U.S. Department of Energy under grant DE-FG02-84ER45118.
## Appendix A Zero Modes in \(\ell\)-Layer Chirally-Stacked Multilayer Graphene
In this section, we prove that there are two zero-modes for \(\ell\)-layer chirally-stacked multilayer graphene in doubly-periodic magnetic fields. This is analogous to the method presented for \(\ell=2\) in the main text. The Hamiltonian is
\[\mathcal{H}_{\ell}=\hbar v_{F}\begin{pmatrix}0&\Pi_{-}&0&0&\ldots&0&0&0&0\\ \Pi_{+}&0&\gamma_{1}/\hbar v_{F}&0&\ldots&0&0&0&0\\ 0&\gamma_{1}/\hbar v_{F}&0&\Pi_{-}&\ldots&0&0&0&0\\ 0&0&\Pi_{+}&0&\ldots&0&0&0&0\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\vdots\\ 0&0&0&0&\ldots&0&\Pi_{-}&0&0\\ 0&0&0&0&\ldots&\Pi_{+}&0&\gamma_{1}/\hbar v_{F}&0\\ 0&0&0&0&\ldots&0&\gamma_{1}/\hbar v_{F}&0&\Pi_{-}\\ 0&0&0&0&\ldots&0&0&\Pi_{+}&0\end{pmatrix}_{2\ell\times 2\ell}, \tag{24}\]
where \(\Pi_{\pm}=\left(-i\partial_{x}+\frac{e}{\hbar}A_{x}\right)\pm i\left(-i\partial_{y}+\frac{e}{\hbar}A_{y}\right)=-i\left(\partial_{x}\mp\frac{e}{\hbar}A_{y}\right)+\left(\pm\partial_{y}+\frac{e}{\hbar}A_{x}\right)=-i\left(\partial_{x}\mp \frac{e}{\hbar}\partial_{x}\phi\right)+\left(\pm\partial_{y}-\frac{e}{\hbar} \partial_{y}\phi\right).\) It is clear that for any function of the form \(e^{+e\phi(\mathbf{r})/\hbar}f_{+}(\mathbf{r})\) or \(e^{-e\phi(\mathbf{r})/\hbar}f_{-}(\mathbf{r})\), we have
\[\begin{split}\Pi_{+}\left[e^{+e\phi(\mathbf{r})/\hbar}f_{+}( \mathbf{r})\right]&=e^{+e\phi(\mathbf{r})/\hbar}\left(-i \partial_{x}+\partial_{y}\right)f_{+}(\mathbf{r})=-ie^{+e\phi(\mathbf{r})/ \hbar}\left(\partial_{x}+i\partial_{y}\right)f_{+}(\mathbf{r})=-2ie^{+e\phi( \mathbf{r})/\hbar}\partial_{\bar{z}}f_{+}(\mathbf{r}),\\ \Pi_{-}\left[e^{-e\phi(\mathbf{r})/\hbar}f_{-}(\mathbf{r})\right] &=e^{-e\phi(\mathbf{r})/\hbar}\left(-i\partial_{x}- \partial_{y}\right)f_{-}(\mathbf{r})=-ie^{-e\phi(\mathbf{r})/\hbar}\left( \partial_{x}-i\partial_{y}\right)f_{-}(\mathbf{r})=-2ie^{-e\phi(\mathbf{r})/ \hbar}\partial_{z}f_{-}(\mathbf{r}).\end{split} \tag{10}\]
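These relations can be checked symbolically; a minimal sympy sketch, taking \(e/\hbar=1\) and arbitrary smooth test choices of \(\phi\) and \(f_{+}\) (both picked purely for illustration):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
phi = sp.sin(2 * sp.pi * x) * sp.exp(-y**2)   # any smooth scalar potential
f = sp.cos(x) + sp.I * x * y                  # any smooth test function

# Pi_+ in the gauge of the main text, with e/hbar = 1
Pi_plus = lambda h: (-sp.I * (sp.diff(h, x) - sp.diff(phi, x) * h)
                     + (sp.diff(h, y) - sp.diff(phi, y) * h))

lhs = Pi_plus(sp.exp(phi) * f)
rhs = sp.exp(phi) * (-sp.I * sp.diff(f, x) + sp.diff(f, y))  # = -2i e^phi d f / d zbar
print(sp.simplify(sp.expand(lhs - rhs)))   # 0
```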
Then, writing the zero-mode eigenfunctions as
\[\psi(\mathbf{r})=\left(e^{+e\phi(\mathbf{r})/\hbar}f_{1,+}\ e^{-e\phi(\mathbf{ r})/\hbar}f_{1,-}\ e^{+e\phi(\mathbf{r})/\hbar}f_{2,+}\ e^{-e\phi(\mathbf{r})/\hbar}f_{2,-}\ \ldots\ e^{+e\phi(\mathbf{r})/\hbar}f_{\ell,+}\ e^{-e\phi(\mathbf{r})/\hbar}f_ {\ell,-}\right)^{T}, \tag{11}\]
we are led to the following conditions:
\[\begin{split}\partial_{z}f_{1,-}&=0,\\ \gamma_{1}f_{1,-}&-2i\hbar v_{F}\partial_{z}f_{2,-}=0, \\ \gamma_{1}f_{2,-}&-2i\hbar v_{F}\partial_{z}f_{3,-}=0, \\ &\vdots\\ \gamma_{1}f_{\ell-1,-}&-2i\hbar v_{F}\partial_{z}f_{\ell,- }=0,\end{split} \tag{12}\]
and
\[\begin{split}-2i\hbar v_{F}\partial_{\bar{z}}f_{1,+}+\gamma_{1}f_{2,+}&=0,\\ -2i\hbar v_{F}\partial_{\bar{z}}f_{2,+}+\gamma_{1}f_{3,+}&=0,\\ &\vdots\\ -2i\hbar v_{F}\partial_{\bar{z}}f_{\ell-1,+}+\gamma_{1}f_{\ell,+}&=0,\\ \partial_{\bar{z}}f_{\ell,+}&=0.\end{split} \tag{13}\]
A symmetry is clear: the \(-\) series does not couple to the \(+\) series. This is essential to the argument. Now, let us focus on the \(+\) series. The last condition requires that \(f_{\ell,+}\) be an entire function. Since it is bounded, it must be a constant. Then, we have \(\partial_{\bar{z}}f_{\ell-1,+}=\frac{\gamma_{1}}{2i\hbar v_{F}}f_{\ell,+}=c_{ \ell,+}\). This means that we can write \(f_{\ell-1,+}=\mathcal{F}_{\ell-1,+}+c_{\ell,+}\bar{z}\), where \(\mathcal{F}_{\ell-1,+}\) is holomorphic. Now, since \(f_{\ell-1,+}\) has periodic norm, we can write its bound as \(B_{\ell-1,+}\). Then, by using the reverse triangle inequality, we have
\[|\mathcal{F}_{\ell-1,+}|-|c_{\ell,+}\bar{z}|\leq|\mathcal{F}_{\ell-1,+}+c_{ \ell,+}\bar{z}|<B_{\ell-1,+}\rightarrow|\mathcal{F}_{\ell-1,+}|<B_{\ell-1,+}+| c_{\ell,+}z|. \tag{14}\]
So, by the generalized Liouville theorem, \(\mathcal{F}_{\ell-1,+}\) must be at most a linear function of \(z\). But since \(f_{\ell-1,+}\) has periodic norm, this implies that \(c_{\ell,+}=0\) and \(\mathcal{F}_{\ell-1,+}\) is actually a constant. Thus, we conclude that \(f_{\ell,+}=0\) and \(f_{\ell-1,+}\) is a constant. Now, using that, we obtain that \(\partial_{\bar{z}}f_{\ell-2,+}=\frac{\gamma_{1}}{2i\hbar v_{F}}f_{\ell-1,+}=c_{\ell-1,+}\). Then, repeating the line of reasoning above, we obtain that \(c_{\ell-1,+}=0\to f_{\ell-1,+}=0\) and \(f_{\ell-2,+}\) is a constant. This recursive procedure continues until we get to \(f_{1,+}\), where we can only conclude that it is a constant, not necessarily zero. Next, we study the \(-\) series. This is essentially the same process in reverse with \(z\mapsto\bar{z}\). From the first condition, \(\partial_{z}f_{1,-}=0\), we get that \(f_{1,-}\) is a constant. Then, \(\partial_{z}f_{2,-}=c_{1,-}\), which, by the same argument as above, implies that \(f_{1,-}=0\) and \(f_{2,-}\) is a constant. The same recursive argument then applies to all other terms, showing that \(f_{1,-}=f_{2,-}=f_{3,-}=...=f_{\ell-1,-}=0\) and \(f_{\ell,-}\) is a non-zero constant. Therefore, we conclude that there are only two zero modes with periodic norm, which can be written explicitly as
\[\psi_{+}(\mathbf{r})=\frac{1}{A_{+}}\begin{pmatrix}e^{+e\phi(\mathbf{r})/\hbar}\\ 0\\ \vdots\\ 0\\ 0\end{pmatrix}\text{ and }\psi_{-}(\mathbf{r})=\frac{1}{A_{-}}\begin{pmatrix}0\\ 0\\ \vdots\\ 0\\ e^{-e\phi(\mathbf{r})/\hbar}\end{pmatrix}. \tag{15}\]
## Appendix B Green's Function of Two-Dimensional Periodic Laplacian
In this section, we provide a brief derivation of the Green's function of the two-dimensional periodic Laplacian. This is a textbook result [35]; we only provide it here to make the manuscript self-contained, and do not claim any originality in this
derivation. We seek the solution for \(G(x,x^{\prime},y,y^{\prime})\)
\[\frac{\partial^{2}G(x,x^{\prime},y,y^{\prime})}{\partial x^{2}}+\frac{\partial^{ 2}G(x,x^{\prime},y,y^{\prime})}{\partial y^{2}}=\sum_{p=-\infty}^{\infty} \delta^{2}\left(\mathbf{r}-p\hat{x}-\mathbf{r}^{\prime}\right)=\delta(y-y^{ \prime})\sum_{p=-\infty}^{\infty}\delta(x-p-x^{\prime}). \tag{10}\]
We perform Fourier transformation using the following convention:
\[G(x,x^{\prime},y,y^{\prime})=\sum_{n=-\infty}^{\infty}\tilde{G}_{n}(y,y^{ \prime})e^{2\pi ni(x-x^{\prime})}\text{ and }\tilde{G}_{n}(y,y^{\prime})=\int_{-\frac{1}{2}}^{\frac{1}{2}}dxG(x,x^{\prime},y,y^{\prime})e^{-2\pi ni(x-x^{\prime})}, \tag{11}\]
which leads to the following ordinary differential equation in reciprocal space:
\[\frac{\partial^{2}\tilde{G}_{n}(y,y^{\prime})}{\partial y^{2}}-4\pi^{2}n^{2} \tilde{G}_{n}(y,y^{\prime})=\delta(y-y^{\prime}). \tag{12}\]
This can be solved for \(y-y^{\prime}<0\) and \(y-y^{\prime}>0\) separately, and then matched at \(y-y^{\prime}=0\) by requiring continuity of the function and a unit jump in its derivative:
\[\begin{split}\tilde{G}_{0}(y,y^{\prime})&=\frac{1 }{2}|y-y^{\prime}|+c,\\ \tilde{G}_{n\neq 0}(y,y^{\prime})&=-\frac{1}{4\pi|n|}e^ {-2\pi|n||y-y^{\prime}|},\end{split} \tag{13}\]
where we have exploited the symmetry \((y-y^{\prime})\rightarrow-(y-y^{\prime})\). Now, inverting the Fourier transform, we obtain
\[G(x,x^{\prime},y,y^{\prime})=\frac{1}{2}|y-y^{\prime}|+c-\sum_{n\neq 0}\frac{1 }{4\pi|n|}e^{-2\pi|n||y-y^{\prime}|}e^{2\pi ni(x-x^{\prime})}. \tag{14}\]
Using the following summation identity \(-\sum_{n\neq 0}\frac{1}{4\pi|n|}e^{-2\pi|n||y|}e^{2\pi nix}=\frac{\ln 2}{4\pi} -\frac{|y|}{2}+\frac{1}{4\pi}\ln\left[\cosh\left(2\pi|y|\right)-\cos\left(2\pi x \right)\right],\) we obtain \(G(x,x^{\prime},y,y^{\prime})=\frac{1}{4\pi}\ln\left[\cosh\left(2\pi|y-y^{ \prime}|\right)-\cos\left(2\pi|x-x^{\prime}|\right)\right]+c^{\prime}.\) For simplicity, we set \(c^{\prime}=0\). Because the hyperbolic cosine is an even function, we can drop the absolute value on \(y\) to write
\[G(x,x^{\prime},y,y^{\prime})=\frac{1}{4\pi}\ln\left[\cosh\left(2\pi\left(y-y^{ \prime}\right)\right)-\cos\left(2\pi\left(x-x^{\prime}\right)\right)\right]. \tag{15}\]
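The summation identity quoted above, and hence this closed form, is easy to confirm numerically; a minimal sketch with arbitrary test separations (the values below are illustrative only):

```python
import numpy as np

dx, dy, N = 0.3, 0.2, 200    # arbitrary separations x - x', y - y'; N truncation order
n = np.arange(1, N + 1)

# Truncated series: -sum_{n != 0} e^{-2pi|n||dy|} e^{2pi i n dx} / (4 pi |n|)
series = -(1.0 / (2.0 * np.pi)) * np.sum(np.exp(-2.0 * np.pi * n * abs(dy))
                                         * np.cos(2.0 * np.pi * n * dx) / n)
closed = (np.log(2.0) / (4.0 * np.pi) - abs(dy) / 2.0
          + np.log(np.cosh(2.0 * np.pi * abs(dy)) - np.cos(2.0 * np.pi * dx)) / (4.0 * np.pi))
print(series, closed)   # agree to machine precision for N large enough
```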
Obviously, this Green's function is singular at \((x+n,y)=(x^{\prime},y^{\prime})\). Direct calculation confirms that \(\Delta_{\mathbf{r}}G(x,x^{\prime},y,y^{\prime})=0\) everywhere else. For a function \(\phi(x,y)\) satisfying the Poisson's equation
\[\frac{\partial^{2}\phi(x,y)}{\partial x^{2}}+\frac{\partial^{2}\phi(x,y)}{ \partial y^{2}}=B(x,y), \tag{16}\]
we can write its formal solution as a convolution with the Green's function
\[\phi(x,y)=\int_{-\infty}^{\infty}dy^{\prime}\int_{-\frac{1}{2}}^{\frac{1}{2}} dx^{\prime}B(x^{\prime},y^{\prime})G(x,x^{\prime},y,y^{\prime}). \tag{17}\]
We do not worry much about the boundary condition of \(\phi(x,y)\). We only require that it be periodic in \(x\), which is clear since \(G(x+n,x^{\prime},y,y^{\prime})=G(x,x^{\prime},y,y^{\prime}):\)
\[\phi(x+n,y)=\int_{-\infty}^{\infty}dy^{\prime}\int_{-\frac{1}{2}}^{\frac{1}{2} }dx^{\prime}B(x^{\prime},y^{\prime})G(x+n,x^{\prime},y,y^{\prime})=\int_{- \infty}^{\infty}dy^{\prime}\int_{-\frac{1}{2}}^{\frac{1}{2}}dx^{\prime}B(x^{ \prime},y^{\prime})G(x,x^{\prime},y,y^{\prime})=\phi(x,y). \tag{18}\]
|
2305.06074 | iLab at SemEval-2023 Task 11 Le-Wi-Di: Modelling Disagreement or
Modelling Perspectives? | There are two competing approaches for modelling annotator disagreement:
distributional soft-labelling approaches (which aim to capture the level of
disagreement) or modelling perspectives of individual annotators or groups
thereof. We adapt a multi-task architecture -- which has previously shown
success in modelling perspectives -- to evaluate its performance on the SEMEVAL
Task 11. We do so by combining both approaches, i.e. predicting individual
annotator perspectives as an interim step towards predicting annotator
disagreement. Despite its previous success, we found that a multi-task approach
performed poorly on datasets which contained distinct annotator opinions,
suggesting that this approach may not always be suitable when modelling
perspectives. Furthermore, our results show that while strongly
perspectivist approaches might not achieve state-of-the-art performance
according to evaluation metrics used by distributional approaches, our approach
allows for a more nuanced understanding of individual perspectives present in
the data. We argue that perspectivist approaches are preferable because they
enable decision makers to amplify minority views, and that it is important to
re-evaluate metrics to reflect this goal. | Nikolas Vitsakis, Amit Parekh, Tanvi Dinkar, Gavin Abercrombie, Ioannis Konstas, Verena Rieser | 2023-05-10T11:55:17Z | http://arxiv.org/abs/2305.06074v1 | # iLab at SemEval-2023 Task 11 Le-Wi-Di: Modelling Disagreement or Modelling Perspectives?
###### Abstract
There are two competing approaches for modelling annotator disagreement: distributional soft-labelling approaches (which aim to capture the level of disagreement) or modelling perspectives of individual annotators or groups thereof. We adapt a multi-task architecture -- which has previously shown success in modelling perspectives -- to evaluate its performance on the SEMEVAL Task 11. We do so by combining both approaches, i.e. predicting individual annotator perspectives as an _interim step_ towards predicting annotator disagreement. Despite its previous success, we found that a multi-task approach performed poorly on datasets which contained distinct annotator opinions, suggesting that this approach may not always be suitable when modelling perspectives. Furthermore, our results show that while strongly perspectivist approaches might not achieve state-of-the-art performance according to evaluation metrics used by distributional approaches, our approach allows for a more nuanced understanding of individual perspectives present in the data. We argue that perspectivist approaches are preferable because they enable decision makers to amplify minority views, and that it is important to re-evaluate metrics to reflect this goal.
## 1 Introduction
Many Natural Language Processing (NLP) tasks follow a supervised learning paradigm, i.e. classification of labelled data where multiple annotations are aggregated into a _hard label_ using averaging or majority voting. Hard labels are based on the assumption that each instance in a dataset has _one singularly correct response_--often referred to as 'ground truth'.
However, this assumption is _highly unrealistic_ for social computing tasks, such as toxicity and hate speech detection (e.g. Vidgen and Derczynski, 2021), where lived experiences, biases, opinions and annotator experience all play a role in the _subjective_ response an annotator might give. Hard labels especially disadvantage minority groups (Blodgett, 2021). For example, in abusive language classification, where a minority is disproportionately affected (such as minoritised people who have faced online harassment), an aggregated majority label can obscure the perspective of the most vulnerable groups.
Thus, there is growing awareness that modelling multiple perspectives is necessary, particularly for inherently subjective tasks and those concerned with social issues (Abercrombie et al., 2022; Cabitza et al., 2023; Plank, 2022).
**Le-Wi-Di** The SemEval 2023 shared task 'Learning With Disagreements' (Leonardelli et al., 2023) aims to capture and model annotator disagreement - going beyond the assumption of one aggregated 'ground truth' label. Participating teams are required to propose methods that consider _disaggregated annotations_, used to create a _soft label_, which represents the probabilistic distribution of the annotations. Soft labels can then be used to predict the _level of disagreement_ for each instance in a dataset.
The task presents a benchmark of four datasets, including a hard label and soft labels for each instance. The datasets were chosen specifically to represent tasks that are highly subjective (e.g. hate speech detection) and show high annotator disagreement. Participating teams are evaluated on how well their proposed model predicts both hard and soft labels, via F1 accuracy score and cross-entropy loss, respectively. The shared task prioritised the soft evaluation, i.e. how well the model's probabilities reflect annotator agreement, rather than simply proposing models that would outperform the current state-of-the-art for the hard labels. For further details of the datasets, see section 3.
**Our approach** proposes a modified version of the multi-task model introduced by Davani et al. (2022), which aims to predict individual annotator judgments. By training our model to predict each annotator's judgement for each instance in a dataset, we can use the resulting predictions to infer the level of disagreement for that instance, without directly training for such a purpose.
The main benefit of our approach is that opinions present in the dataset are preserved beyond the simplistic form of a polarised agreement/disagreement distribution. Instead, we focus on representing individual opinions, also known as 'perspectives' (Cabitza et al., 2023). This allows modelling of specific perspectives present in a dataset, potentially enabling the amplification of minority opinions.
## 2 Related work
### Modelling disagreement
Uma et al. (2021) provide an extensive survey, outlining four main approaches. The first aggregates annotations into hard labels (as in Task metric 1). The second removes items that display high disagreement and is thus unsuitable for this task. The third models the distribution of annotations for each item, i.e. 'soft labels' (as in metric 2). The final approach enables the model to optimise across different tasks through either multi-task learning or a procedure called Plank-style weighting (for more information on this method, refer to Plank et al. (2014)).
For the purposes of this paper, the former is relevant, as multi-task learning enables the resulting model to optimise across different tasks through the use of both hard and soft labels, providing predictions for the hard label, as well as degrees of confidence for each (Fornaciari et al., 2022). While these approaches make use of disagreement to enhance optimisation, they have not been used to preserve the different perspectives represented in the data.
### Modelling perspectives
We focus on a 'strong perspectivist approach' which aims to _preserve diversity of perspectives_ throughout the whole classification pipeline (Cabitza et al., 2023). To contrast, weak perspectivist approaches (Cabitza et al., 2023) may consider several annotator viewpoints, but still reduce these viewpoints towards a single label. An example of a strong perspectivist approach is Davani et al. (2022), who predict individual annotator judgments using a multi-task approach, and treat each annotator as a sub-task, thus retaining individual perspectives. While research has shown some success when utilising single-task models to accurately capture distinct perspectives (Rodrigues and Pereira, 2018), our choice of model was informed by recent evidence that multi-task models (Fornaciari et al., 2021) can outperform single-task models.
However, one limitation (with respect to strong perspectivism) is that, for evaluation, they aggregate predicted annotations into one label, essentially falling back into the issues of hard labels. Our proposed solution aims to address this limitation through the use of both hard and soft labels, i.e. evaluating model performance on the disaggregated perspectives present in the dataset.
## 3 Data
The Le-Wi-Di1 shared task consists of the following four datasets, that have all been synthesised into a common json format.
Footnote 1: [https://le-wi-di.github.io/](https://le-wi-di.github.io/)
**HS-Brexit**(Akhtar et al., 2021): a dataset of English tweets on the topic of Brexit, labelled for hate speech by six annotators belonging to two different groups: three Muslim immigrants and three other individuals.
**ArMIS**(Almanea and Poesio, 2022): Arabic tweets annotated for misogyny and sexism. The three annotators for this dataset have different self-identified demographics of 'Moderate Female', 'Liberal Female' and 'Conservative Male'.
**ConvAbuse**(Cercas Curry et al., 2021): a dataset of English dialogues between users and two conversational agents. The dialogues have been annotated for abusiveness by eight gender studies experts.
**MultiDomain Agreement (MD)**(Leonardelli et al., 2021): English tweets on three topics: BLM, Election and Covid-19, labelled for offensiveness by crowdsourced annotators and specifically selected to elicit disagreement.
See Table 1 for descriptive statistics of the datasets. Disagreement is moderate for the ArMIS and ConvAbuse datasets2. While the MD dataset has (unsurprisingly) higher disagreement given the large pool of annotators (819 annotators total), the percentage of unseen annotators (i.e. annotators
that don't appear in all splits) is extremely high (91%) compared to the other datasets (0%). These factors could lead to sparsity of strongly distinguished perspectives present in the MD dataset. In ConvAbuse, the standard deviation of utterance lengths3 is higher due to the presence of many single-token responses that can be present in a dialogue -- such as '_yes_'.
Footnote 3: Calculated using the Spacy tokeniser package for English, and Arabert package for Arabic.
## 4 Methods
### System overview
We implement a multi-task model, which makes, for each instance, separate predictions for each annotator present in the dataset. Given the varied characteristics of our datasets, such as missing labels or a large number of annotators, this approach allows for the evaluation of multi-task learning across a variety of contexts. An overview of the model and the predicted output is shown in Figure 1.
For a given text sample in a dataset, \(\mathbf{x}\in\mathbf{X}\), our model \(p_{\theta}(\mathbf{y}|\mathbf{x})\) predicts the individual annotation of each annotator \(\mathbf{y}=(y_{1},\dots,y_{K})\), where \(K\) is the total number of unique annotators within the dataset. The predicted hard label of an instance is defined as the aggregation of predictions into one label \(mode(\mathbf{y})=z\), where \(z\in\{0,1\}\), while the soft labels are \(v_{0}\in[0,1]\) and \(v_{1}\in[0,1]\), which denote the probabilistic distribution of the annotations.
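As a concrete illustration of this notation, the sketch below derives the hard label \(z\) and the soft label \((v_{0},v_{1})\) from one instance's vector of binary annotations; the tie-breaking rule for even splits is our own assumption, as the aggregation scheme does not fix it:

```python
import numpy as np

def hard_and_soft(annotations):
    """annotations: binary labels for one instance, np.nan marking missing annotators."""
    y = annotations[~np.isnan(annotations)]
    v1 = float(y.mean())            # share of positive annotations
    v0 = 1.0 - v1
    z = int(v1 >= 0.5)              # majority vote; ties broken towards 1 (an assumption)
    return z, (v0, v1)

# e.g. four of six annotators flag an instance:
print(hard_and_soft(np.array([1, 1, 1, 1, 0, 0], dtype=float)))  # (1, (0.333..., 0.666...))
```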
For the purposes of the shared task, we evaluate our models through F1 and cross entropy scores for the hard and soft labels respectively. We refer to the cross entropy loss of the shared task as _soft-label cross entropy_. We needed a different way to model our strong perspectivist approach, as the shared task metrics prioritise predicting the _level of disagreement_. As seen in Figure 1, our model treated each annotator per instance as a subtask (shown as a classification layer). We evaluate the predicted versus the true labels for each annotator using cross entropy loss, which we refer to as _individual cross entropy_; this is also the metric used to train our model. Optimising individual cross entropy should lead to more accurate individual predictions, resulting in a representative annotation matrix, which can, in turn, be used to calculate the soft-label cross entropy. Since the aim of the shared task was to capture disagreement, we prioritised minimising soft-label cross entropy (CE) loss scores over high-performing F1 scores.
This was done by manually stopping the model's training procedure (for individual cross entropy loss) when the minimum soft-label cross entropy scores were achieved for each dataset. Thus, we do not optimise our model directly using the shared task evaluation metrics. However, these evaluation metrics can still be used as an indirect measure of our model's performance. Hence, we report the shared task metrics on our model, which was trained on minimising the individual cross entropy loss in order to capture perspectives. While this method is not optimised to predict disagreement when compared to models trained by minimising soft-label cross entropy scores, it allows for the prediction of individual annotator perspectives. Thus, we model individual perspectives as an interim step towards predicting disagreement.
We compare the performance and analyse the benefits of using a multi-task model against the organisers' baseline (aggregated labels), and two other models: a baseline neural model, and an SVM model (further described in subsection 4.3).
\begin{table}
\begin{tabular}{c c|c c c c} & & **HS-Brexit** & **ArMIS** & **ConvAbuse** & **MD** \\ \hline \hline Task & & Hate speech & Misogyny & Abusiveness & Offensiveness \\ & train & 784 & 657 & 2398 & 6592 \\ No. of instances & dev & 168 & 141 & 812 & 1104 \\ & test & 168 & 145 & 840 & 3057 \\ Utterance length & & \(18.623\pm 4.578\) & \(19.510\pm 12.042\) & \(27.322\pm 18.830\) & \(22.614\pm 14.777\) \\ \hline \hline \multirow{4}{*}{Annotator details} & Krippendorff’s \(\alpha\) (\(\downarrow\)) & 0.347 & 0.524 & 0.650 & **0.359** \\ & Total annotators & 6 & 3 & 8 & 819 \\ \cline{1-1} & Annotators / instance & 6 & 3 & 4 & 5 \\ \cline{1-1} & Unseen annotators & 0 & 0 & 0 & **91** \\ \end{tabular}
\end{table}
Table 1: Descriptive data and annotator statistics: utterance length in tokens with standard deviations, inter-annotator agreement measured with Krippendorff’s \(\alpha\) ((\(\downarrow\)) lower=higher disagreement), and ‘unseen annotators’, the percentage of annotators that are not represented by at least one instance in all of the train, dev and test sets.
The SVM model was used as a linear model baseline, due to its prior success compared to neural approaches for cases of abuse detection (Niemann et al., 2020).
### Text encoding
We applied the following pre-processing steps. For the ConvAbuse dataset, we only processed the last sentence uttered by a user, as Cercas Curry et al. (2021) reported no significant performance improvements from adding dialogue context. We preprocessed the Arabic dataset following Antoun et al. (2020). For the SVM model, both English and Arabic datasets were tokenised using term frequency-inverse document frequency (TF-IDF).
### Model architectures
**Baseline Linear model** We trained an SVM model to perform binary classification with a linear kernel using a bag-of-words and TF-IDF approach.
The model outputs a distribution over the possible hard and soft labels, over which F1 and cross entropy scores were calculated.
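A minimal sketch of this baseline with scikit-learn is shown below; the toy inputs and default hyperparameters are stand-ins, not the exact settings used in our experiments:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Toy stand-in data; in practice the inputs are a dataset's texts and hard labels.
texts = ["have a lovely day", "you are awful", "thanks for the help", "what a terrible take"]
labels = [0, 1, 0, 1]

clf = make_pipeline(TfidfVectorizer(), SVC(kernel="linear", probability=True))
clf.fit(texts, labels)

hard = clf.predict(["that was awful"])          # hard-label prediction
soft = clf.predict_proba(["that was awful"])    # distribution usable as a soft label
```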
**Single-task (baseline) BERT Model** For our transformer-based models, we used the pre-trained BERT-base-uncased model (Devlin et al., 2019) from the HuggingFace Transformers library (Wolf et al., 2020) for initialising English models. For the Arabic dataset, Transformer-based models were initialised with AraBERT, a variant of BERT pre-trained specifically for Arabic text, which has shown comparable results to multilingual BERT in NLP-related tasks such as sentiment analysis and Named Entity Recognition (Antoun et al., 2020). Outputs of both models were fed through a linear layer with a softmax activation function, resulting in a probability distribution for a binary label. The model used ADAM optimisation (Kingma and Ba, 2014). For specific parameters see Appendix A. Like the SVM model, the model outputs a distribution over the possible hard and soft labels, over which F1 and cross entropy scores were calculated.
Figure 1: Representation of our multi-task architecture. As shown, we predict individual annotator perspectives (individual cross entropy, shown on the left of the figure) as an interim step to predicting the level of disagreement (the task metric of soft-label cross entropy, as shown on the right). For a full system description, refer to subsection 4.1
**Multi-task BERT Model** Our model is a modified version of that proposed by Davani et al. (2022), as shown in Figure 1 and subsection 4.1. A pre-trained BERT model is used to encode the text, upon which we train separate classification layers for each annotator. Training parameters are identical to our baseline transformer model (Appendix A). The embedded [CLS] token is fed into each annotator's classification layer to predict that annotator's label. As stated, we evaluate the predicted versus the true labels using cross entropy loss. Instances for which a particular annotator did not provide an annotation for the text were ignored when calculating the loss for that instance.
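A minimal sketch of this architecture in PyTorch is given below. It is our own reconstruction from the description above (not the released code of Davani et al. (2022), nor our exact training setup): a shared encoder feeds the [CLS] embedding into one classification head per annotator, and missing annotations are masked out of the summed cross entropy loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel

class MultiTaskAnnotatorModel(nn.Module):
    """Shared BERT encoder with one binary classification head per annotator."""
    def __init__(self, encoder_name="bert-base-uncased", n_annotators=6):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.heads = nn.ModuleList([nn.Linear(hidden, 2) for _ in range(n_annotators)])

    def forward(self, input_ids, attention_mask):
        # [CLS] embedding of the shared encoder
        cls = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state[:, 0]
        return torch.stack([head(cls) for head in self.heads], dim=1)  # (batch, K, 2)

def multitask_loss(logits, labels):
    """Sum of per-annotator cross entropies; labels set to -100 mark missing annotations."""
    b, k, c = logits.shape
    return F.cross_entropy(logits.reshape(b * k, c), labels.reshape(b * k),
                           ignore_index=-100, reduction="sum")
```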
## 5 Results
The task is evaluated using F1 scores (average = 'micro') for the hard labels and cross entropy loss (\(\epsilon=10^{-12}\)) for the soft labels.
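For reference, a minimal sketch of both metrics as we understand them from the task description (the organisers' exact implementation may differ in detail):

```python
import numpy as np
from sklearn.metrics import f1_score

def soft_label_cross_entropy(soft_pred, soft_true, eps=1e-12):
    """Mean cross entropy between predicted and observed soft labels, arrays of shape (N, 2)."""
    soft_pred = np.clip(soft_pred, eps, 1.0)
    return float(-np.mean(np.sum(soft_true * np.log(soft_pred), axis=1)))

def hard_label_f1(hard_pred, hard_true):
    return f1_score(hard_true, hard_pred, average="micro")
```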
Our results are shown in Table 2 and detailed results for all teams can be found in Leonardelli et al. (2023). Regarding F1 scores, the SVM model outperformed both the single-task BERT model and the multi-task BERT model on the HS-Brexit, ConvAbuse, and MD datasets, findings which align with Niemann et al. (2020). The single-task BERT model outperformed both other architectures in the ArMIS dataset. For cross entropy scores, the single-task BERT model outperformed both other architectures across all datasets. Our model performs best on the ConvAbuse dataset, followed by the HS-Brexit dataset across both metrics.
## 6 Further Analyses
For deeper analysis, we used different methods depending on the specifics of the datasets. For ConvAbuse and MD, we followed Davani et al. (2022) to deal with _missing annotator labels_ during the evaluation stage. Although not all annotators annotated every instance, our model still predicts labels for all annotators in the dataset (e.g. predicting eight annotator labels in ConvAbuse for all instances that only have four true annotator labels as shown in Table 1). Essentially, predictions of missing annotator labels might have negatively impacted the soft-label cross entropy comparisons by skewing distributions. This is especially the case for the MD dataset, where each instance was annotated by only five of the 819 annotators.
As such, we constrained the model to only predict labels for existing annotators of each instance, and reevaluated the soft-label cross entropy and F1 scores. However, unlike Davani et al. (2022), our results degraded when constraining our model for both the ConvAbuse and MD datasets. Low performance in the MD dataset is further discussed in subsection 7.1.
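Expressed compactly, the constrained evaluation recomputes each instance's soft label over only the annotators who actually labelled it; a minimal sketch (the fallback for a fully missing row is our own choice):

```python
import numpy as np

def constrained_soft_label(pred_labels, annotator_mask):
    """Soft label from per-annotator binary predictions, restricted to annotators
    who actually labelled the instance (annotator_mask == True)."""
    kept = pred_labels[annotator_mask]
    v1 = float(kept.mean()) if kept.size else 0.5   # fallback value is an assumption
    return np.array([1.0 - v1, v1])
```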
We also investigated reasons for our poorest cross entropy scores, on the ArMIS dataset. We found that our model did not perform as well as expected in this scenario in which it should theoretically have performed well, i.e. with the annotators of the dataset self-identifying with a distinct ideological background (_conservative_, _moderate_, and _liberal_) and no missing annotator labels. We found that this may have been the result of the model's architecture. Through testing with different batch sizes, we found that our model performed better when we added an extra hidden layer of 384 units to the existing 768-unit one. This would indicate that while the model was indeed learning, the original architecture with a single hidden layer was not sufficient to adequately disentangle enough information to make accurate predictions.
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c|c} & \multicolumn{2}{c|}{**HS-Brexit**} & \multicolumn{2}{c|}{**ArMIS**} & \multicolumn{2}{c|}{**ConvAbuse**} & \multicolumn{2}{c}{**MD**} \\
**Models** & **F1 (\(\uparrow\))** & **CE (\(\downarrow\))** & **F1 (\(\uparrow\))** & **CE (\(\downarrow\))** & **F1 (\(\uparrow\))** & **CE (\(\downarrow\))** & **F1 (\(\uparrow\))** & **CE (\(\downarrow\))** \\ \hline Baseline (agg.) & **0.89** & 2.71 & 0.59 & 8.23 & **0.95** & 3.38 & 0.78 & 7.74 \\ TFIDF - SVM & 0.86 & 0.62 & 0.68 & 2.57 & 0.9 & 0.49 & **0.88** & 0.62 \\ Single-task BERT & 0.47 & **0.43** & **0.78** & **1.77** & 0.88 & **0.37** & 0.74 & **0.61** \\ Multi-Task BERT & 0.46 & 0.76 & 0.58 & 1.89 & 0.77 & 0.50 & 0.41 & 1.28 \\ \end{tabular}
\end{table}
Table 2: Model performance using F1 and cross-entropy (CE) scores for the four datasets of SemEval-2023 Task 11 Le-Wi-Di. \(\uparrow\), and \(\downarrow\) indicate that higher and lower scores represent better performance, respectively. Baseline provided by the organisers (aggregation). We highlight in bold the highest scores for each dataset respectively.
## 7 Discussion of Results & Limitations
### Performance evaluation
We observe that our multi-task BERT model performed relatively well on the HS-Brexit and ConvAbuse datasets, but scored the lowest on the ArMIS and MD datasets. We expected to have limited success with the MD dataset, as multi-task models have an inherent issue with sparse data when combined with a high number of subtasks (Ruder, 2017; Zhang and Yang, 2022). Not only does the MD dataset contain more than 800 individual annotators, but the number of instances each annotator appeared in varies drastically (range \(1-1988\), \(\text{mean}=65.63\pm 143.73\)). Furthermore, not all annotators appear in all splits, with 91% not represented in at least one of the training, development, and test sets. Individual annotator perspective modelling is therefore unfeasible for this dataset.
Such issues might be addressed through a clustering approach of labels. For example, Akhtar et al. (2019, 2020, 2021) and Dethlefs et al. (2014) have proposed clustering methods based on a variety of features, including demographic similarities, as well as using inter- and intra-annotator disagreement and similarities in labelling behaviour. However, as stated by Akhtar et al. (2021), a limitation of their approach is that it is necessary to know the demographic and cultural background of annotators, which is information that is not available for the MD dataset. We plan to investigate clustering methods in our future work.
As previously stated, our model performed the worst on the ArMIS dataset. While some strategies to improve these scores were discussed in section 6, we believe a possible explanation of these scores could be the size discrepancy between the datasets used in the original Davani et al. (2022) paper and the ArMIS dataset. The original multi-task model, for example, used datasets with \(\approx 30,000-60,000\) instances each. This shows that while a dataset may contain distinct annotator perspectives (e.g. evident in the ArMIS dataset both by self-declared ideological categories and a moderate inter-annotator disagreement of \(0.524\)), the multi-task approach may not perform well on smaller datasets.
Another possible reason might have been our loss-weighting strategy, which sums the individual cross entropy across subtasks. Gong et al. (2019) explain that such multi-task approaches, where the model loss is constituted by the sum loss of subtasks, can lead to degradation of performance. This is due to a possible conflict arising through contrasting losses between subtasks, or conflicting gradient signals (Chen et al., 2018; Sener and Koltun, 2018). This aligns with our experience during training, where the summed loss remained relatively stable, while individual losses across subtasks fluctuated widely. Furthermore, our soft-label cross entropy and F1 scores slowly improved over time in spite of the stable summed loss, indicating that some learning across subtasks was indeed taking place.
### Capturing disagreement versus capturing perspectives
The single-task BERT baseline model outperformed the multi-task model across all evaluation metrics. Cabitza et al. (2023) explain that models trained through strong perspectivist approaches may be negatively impacted in terms of performance and evaluation metrics: the more nuance present in the data (such as disaggregated annotations), the more difficult the data is to model. While our model exhibits these weaknesses, there are _clear reasons_ to use this approach, i.e. to model perspectives as an interim step in predicting disagreement. Our approach is successful in ways that would not be accounted for by simply predicting disagreement without this interim step.
Disagreement only shows that different perspectives are present in the dataset, but not the underlying reasons as to _why_ disagreement may occur, _nor_ the clashing perspectives present in the dataset. Particularly for highly subjective tasks, modelling only the level of disagreement does not consider intersecting perspectives. In contrast, strong perspectivist approaches offer insights into the different opinions present amongst individuals or groups of annotators. Modelling perspectives does not erase these individual viewpoints. For example, research has shown that attributes such as gender (Waseem and Hovy, 2016) or political activism status (Luo et al., 2020) of an annotator can elicit meaningful differences of opinions in a dataset.
Furthermore, it is well documented that aggregation can harm minorities present in a dataset by limiting their opinion's influence (Prabhakaran et al., 2021). Gordon et al. (2022) explain that merely capturing disagreement can have a similar effect by presenting a simplified view of opposing perspectives in the data. This can be problematic,
as without a nuanced understanding of which perspectives exist within a dataset, model predictions might not generalise well to end users' perspectives (Gordon et al., 2021).
Accurately predicting each annotator's perspective also captures their biases. However, bias is not an inherently negative trait. Though seldom explicitly stated, bias is an intrinsic attribute of annotators, datasets, and trained models (Gordon et al., 2022). While (de)biasing models can lead to positive outcomes when attempting to make a model 'unlearn' harmful social biases (Liang et al., 2020; Orgad and Belinkov, 2022; Subramanian et al., 2021), Devinney et al. (2022) assert that incorporating bias stemming from marginalised groups while training can lead to models that amplify the voice of those minorities. For example, in cases with datasets dealing with gender-based violence (Cercas Curry et al., 2021), it might be preferable to capture and amplify the bias of the affected people.
Combining approaches such as ours with the clustering approaches mentioned in subsection 7.1 merits future research, especially since fully debiasing models seems improbable (Gordon et al., 2022). As such, future research should attempt to utilise such multi-task models and strong perspectivist approaches when dealing with subjective tasks, in order to get a deeper understanding about why disagreement occurs.
## 8 Conclusion
We evaluated the performance of a multi-task model on predicting disagreement in four datasets, evaluated with both hard and soft labels, through F1 and cross-entropy loss respectively. Our model learned and predicted individual annotator perspectives for each instance.
Our model's findings did not outperform our single-task BERT baseline in terms of the shared task's evaluation metrics. This was due to the model employing a strong perspectivist approach, which prioritised capturing individual perspectives present in the dataset over high performance. We argue that a strong perspectivist approach is preferable to merely modelling disagreement, as it allows us to capture different opinions present in a dataset, and can be used to further amplify minority views.
Evaluation metrics for this edition of Le-Wi-Di are geared towards measuring the overall levels of disagreement present in the datasets. However, if we wish to model stronger versions of perspectivism, we will need to develop new, more suitable metrics that can capture varying judgements in the kinds of different scenarios we have seen here.
## 9 Ethical considerations
**Reproducibility** We aim to maximise reproducibility by making all data manipulation and modelling architecture aspects as explicit as possible, in line with reproducibility principles (Belz et al., 2021). The code used to produce this study's results can be found online in our team's GitHub repository.
**Data manipulation and misrepresentation** Regarding possible concerns about mishandling of data in this study, an important point has to be made about the ConvAbuse dataset (Cercas Curry et al., 2021). As explained in section 3, annotations ranged over \([-3,1]\), with \(1\) denoting no abuse, \(0\) ambivalence, and the negative labels indicating severity of abuse. For this challenge, labels were aggregated into a binary label depending on whether abuse was detected (labels \(-3\) to \(-1\) in ConvAbuse), with the rest mapped to no abuse detected. This transformation was necessary for the purposes of the shared task, as it has been shown that comparing scores between datasets with binary annotations and datasets with multiple labels can lead to incomparable results (Poletto et al., 2019).
**Abuse detection, and handling of sensitive opinions** There is also a larger conversation to be had about the use of a hard label in general, regarding issues such as bias, abuse, and other sensitive topics. Through a strong perspectivist approach, an annotator's viewpoint might also reflect the perspective of a minority population (Cabitza et al., 2023). It is important to be sensitive when dealing with such opinions, without invalidating them or minimising them through a distribution. In essence, if there is even a small amount of disagreement about whether an item is problematic or not, the label should reflect that beyond a binary (Blodgett, 2021). We leave it to future research to establish how exactly these multiple labels could be appropriately incorporated into a model architecture.
**Dual use of model** Finally, it is also important to acknowledge that the proposed model architecture can unfortunately also be used for purposes beyond our original intention (Hovy and Prabhumoye, 2021). While our model can be used towards furthering
social justice aims through the amplification of minorities' perspectives, the model could also be used to manipulate perspectives from the dataset in order to present stable results. For example, some approaches have attempted to create perspective clusters that are more inclusive towards the data [14]. Unfortunately, such approaches seem to result in bias mitigation [22, 14] rather than bias erasure, both in downstream tasks [13] and in the resulting word embeddings [1]. As biases are unavoidable, we advocate boosting under-represented perspectives that might otherwise be lost. Such an approach would also make attempts to use this model to reproduce and platform problematic opinions transparent.
## 10 Acknowledgements
Gavin Abercrombie and Verena Rieser were supported by the EPSRC projects 'Gender Bias in Conversational AI' (EP/T023767/1) and 'Equally Safe Online' (EP/W025493/1). Tanvi Dinkar and Verena Rieser were supported by 'AISEC: AI Secure and Explainable by Construction' (EP/T026952/1).
|
2306.08509 | Faraday and Kerr rotation due to photoinduced orbital magnetization in
two-dimensional electron gas | We study theoretically the Faraday and Kerr rotation of a probe field due to
the orbital magnetization of a two-dimensional electron gas induced by a
circularly polarized pump. We develop a microscopic theory of these effects in
the intraband spectral range based on the analytical solution of the kinetic
equation for linear and parabolic energy dispersion of electrons and arbitrary
scattering potential. We show that the spectral dependence of rotation angles
and accompanying ellipticities experiences a sharp resonance when the probe and
pump frequencies are close to each other. At the resonance, the Faraday and
Kerr rotation angles are of the order of $0.1^\circ$ per 1~kW/cm$^2$ of the
pump intensity in graphene samples, corresponding to a pump-induced synthetic
magnetic field of about 0.1~T. We also analyze the influence of the dielectric
contrast between dielectric media surrounding the two-dimensional electron gas
on the rotation angles. | M. V. Durnev | 2023-06-14T13:46:37Z | http://arxiv.org/abs/2306.08509v1 | # Faraday and Kerr rotation due to photoinduced orbital magnetization in two-dimensional electron gas
###### Abstract
We study theoretically the Faraday and Kerr rotation of a probe field due to the orbital magnetization of a two-dimensional electron gas induced by a circularly polarized pump. We develop a microscopic theory of these effects in the intraband spectral range based on the analytical solution of the kinetic equation for linear and parabolic energy dispersion of electrons and arbitrary scattering potential. We show that the spectral dependence of rotation angles and accompanying ellipticities experiences a sharp resonance when the probe and pump frequencies are close to each other. At the resonance, the Faraday and Kerr rotation angles are of the order of \(0.1^{\circ}\) per 1 kW/cm\({}^{2}\) of the pump intensity in graphene samples, corresponding to a pump-induced synthetic magnetic field of about 0.1 T. We also analyze the influence of the dielectric contrast between dielectric media surrounding the two-dimensional electron gas on the rotation angles.
## I Introduction
Optically induced magnetization and its manipulation in solids have recently attracted significant attention in solid-state physics [1; 2; 3; 4]. Absorption of circularly polarized photons results in efficient magnetization of electron and hole systems in the process of optical spin orientation through both the interband and intraband optical transitions [5; 6; 7; 8; 9; 10]. Besides the spin orientation, the circularly polarized light induces orbital currents of charge carriers, and hence, the orbital magnetic moment, known as the inverse Faraday effect (IFE) [11; 12]. The orbital magnetization due to the IFE is being actively studied in different systems, including metals and semiconductors [13; 14; 15; 16], ferromagnets [17], superconductors [18], metallic nanoparticles [19] and graphene [20].
To probe the light-induced orbital magnetic moment, one can use the pump-probe Faraday and Kerr spectroscopy - a method widely employed to study the magnitude and dynamics of magnetization related to both spin and orbital magnetic moment [4; 7; 21; 22; 23; 24; 25; 26; 27]. In this method, one measures the rotation of the polarization plane of a linearly polarized probe beam, which is reflected from or transmitted through the medium with pump-induced magnetization. While the theory of the pump-probe Faraday and Kerr effects due to spin magnetization has been developed for bulk and low-dimensional semiconductor systems [7; 28; 29; 30], a consistent microscopic theory of these effects due to orbital magnetization is still missing. The naive mechanism of such a Faraday rotation could involve the magnetic field induced by the orbital currents; however, this magnetic field is extremely small and, hence, cannot be the major source of rotation. The third-order contribution to the ac current induced by an elliptically polarized electric field in graphene, which is responsible for the Faraday rotation, has been calculated in Ref. [31]. However, the calculations were based on a simplified relaxation model, which does not fully capture the specifics of electron scattering in two-dimensional systems.
Here, we study the Faraday and Kerr rotation due to the orbital magnetization induced by circularly polarized pump in a two-dimensional electron gas (2DEG). We show that the circularly polarized electric field of the pump modifies the high-frequency conductivity of 2DEG, resulting in the circular birefringence and dichroism. This, in turn, leads to rotation of the transmitted and reflected probe field. Moreover, the initially linearly polarized probe becomes elliptically polarized (acquires ellipticity), Fig. 1. We develop a microscopic theory of the pump-induced high-frequency conductivity of 2DEG due to intraband optical transitions and calculate the Faraday and Kerr angles as well as the corresponding ellipticities. The theory accounts for electron scattering by impurities and describes both non-absorbing and absorbing regimes of the pump and probe fields. We derive analytical expressions for the Faraday and Kerr angles and ellipticities valid for parabolic and linear energy dispersion of 2D electrons and arbitrary scattering potential. We also analyze the influence of the dielectric contrast between dielectric media surrounding 2DEG on the rotation angles.
We show that the spectral dependence of rotation angles and ellipticities experiences a sharp resonance, when probe and pump frequencies are close to each other. The width and the magnitude of resonance are determined by
Figure 1: Schematic picture of the pump-induced Faraday and Kerr rotation in the two-dimensional electron gas. Electric field of the circularly polarized pump acts as a synthetic magnetic field resulting in the rotation of the linearly polarized probe field. \(\theta_{F}\) and \(\theta_{K}\) are the Faraday and Kerr rotation angles, respectively.
a long energy relaxation time, rather than a short momentum relaxation time. At the resonance, and at \(\Omega\tau_{1}\sim 1\), where \(\Omega\) is the pump frequency, and \(\tau_{1}\) is the momentum relaxation time, the Faraday and Kerr rotation angles are of the order of \(0.1^{\circ}\) per 1 kW/cm\({}^{2}\) of the pump intensity in graphene samples. We also calculate a synthetic magnetic field, an effective magnetic field, which leads to the same rotation angles as the circularly polarized pump. In graphene samples, this synthetic magnetic field amounts to \(\sim 0.1\) T per 1 kW/cm\({}^{2}\) of the pump intensity at \(\Omega\tau_{1}\sim 1\).
## II Faraday and Kerr rotation by a 2D conducting medium
We consider a 2DEG occupying the plane \(z=0\) and surrounded by dielectrics with refractive indices \(n_{1}\) at \(z<0\) and \(n_{2}\) at \(z>0\). The 2DEG is irradiated by normally incident pump and probe beams with electric fields \(\mathbf{E}_{\Omega}(t)=\mathbf{E}_{\Omega}\mathrm{e}^{-\mathrm{i}\Omega t}+\mathrm{c.c}\) and \(\mathbf{E}_{\omega}(t)=\mathbf{E}_{\omega}\mathrm{e}^{-\mathrm{i}\omega t}+\mathrm{c.c}\), respectively, see Fig. 1. In the absence of the pump field, \(E_{\Omega}=0\), the probe field induces an electric current in the 2DEG, \(\mathbf{j}(t)=\mathbf{j}_{\omega}\mathrm{e}^{-\mathrm{i}\omega t}+\mathrm{c.c}\), which oscillates at the probe frequency and is parallel to the probe electric field \(\mathbf{E}_{\omega}\). The current is related to the probe field as \(\mathbf{j}_{\omega}=\sigma\mathbf{E}_{\omega}\), where \(\sigma=e^{2}n_{e}\tau_{1}/[m(1-\mathrm{i}\omega\tau_{1})]\) is the high-frequency 2DEG conductivity, \(e\) and \(m\) are the electron charge and effective mass, respectively, \(n_{e}\) is the 2D electron concentration and \(\tau_{1}\) is the momentum relaxation time.
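For later numerical estimates, this Drude formula can be encoded directly; a minimal helper in Gaussian units, with all parameter values supplied by the caller:

```python
def drude_sigma(omega, n_e, tau1, m, e=4.8032e-10):
    """2D Drude conductivity sigma = e^2 n_e tau1 / [m (1 - i omega tau1)], Gaussian units."""
    return e**2 * n_e * tau1 / (m * (1.0 - 1j * omega * tau1))
```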
In the presence of the pump field, the third-order contributions to the current \(\mathbf{j}_{\omega}\) appear. These contributions in the isotropic 2DEG are described by the following equation with three complex parameters \(\gamma_{j}\)[32]:
\[\mathbf{j}_{\omega}=\gamma_{1}|\mathbf{E}_{\Omega}|^{2}\mathbf{E}_{\omega}+ \gamma_{2}\left[\mathbf{E}_{\Omega}^{*}(\mathbf{E}_{\Omega}\cdot\mathbf{E}_{\omega})+\mathbf{ E}_{\Omega}(\mathbf{E}_{\Omega}^{*}\cdot\mathbf{E}_{\omega})\right]\\ +\mathrm{i}\gamma_{3}\left[\mathbf{E}_{\omega}\times\left[\mathbf{E}_{ \Omega}\times\mathbf{E}_{\Omega}^{*}\right]\right]\,. \tag{1}\]
Here, \(\gamma_{1}\) describes the change of isotropic conductivity due to the pump radiation, whereas \(\gamma_{2}\) and \(\gamma_{3}\) give rise to the transverse current in the direction perpendicular to \(\mathbf{E}_{\omega}\) induced by linearly and circularly polarized pump, respectively. In this paper, we consider circularly polarized pump, and therefore, the \(\gamma_{3}\) contribution [33]. For circularly polarized pump, Eq. (1) yields the transverse current described by the off-diagonal conductivity \(\sigma_{xy}=-\sigma_{yx}=\gamma_{3}|\mathbf{E}_{\Omega}|^{2}P_{\mathrm{circ}}\), where \(P_{\mathrm{circ}}=\pm 1\) for right-hand and left-hand circular polarization, respectively. Note that, when the probe field is static, i.e. at \(\omega=0\), the \(\gamma_{3}\) contribution describes the appearance of a transverse direct current in the presence of a circularly polarized pump - the so-called photovoltaic or circular Hall effect [32; 34; 35].
Pump-induced transverse conductivity \(\sigma_{xy}=-\sigma_{yx}\) leads to circular birefringence and circular dichroism, i.e. different transmission and absorption of the right-hand and left-hand circularly polarized components of the probe field. The incident linearly polarized probe field is a superposition of circularly polarized fields \(\mathbf{E}_{\omega,\pm}^{(i)}=E_{\omega}^{(i)}\mathbf{o}_{\pm}\), where \(\mathbf{o}_{\pm}\) are circularly polarized unit vectors related to the unit vectors \(\mathbf{e}_{x}\parallel x\) and \(\mathbf{e}_{y}\parallel y\) as \(\mathbf{o}_{\pm}=(\mathbf{e}_{x}\pm\mathrm{i}\mathbf{e}_{y})/\sqrt{2}\). The amplitude transmission and reflection coefficients of \(\mathbf{E}_{\omega,\pm}^{(i)}\) are given by [36]
\[t_{\pm}=\frac{t_{12}}{1+\alpha_{\pm}}\;,\quad r_{\pm}=\frac{r_{12}-\alpha_{\pm }}{1+\alpha_{\pm}}\;, \tag{2}\]
where \(r_{12}=(n_{1}-n_{2})/(n_{1}+n_{2})\) and \(t_{12}=r_{12}+1\) are the amplitude reflection and transmission coefficients for the light incident on the boundary between two dielectrics in the absence of the 2DEG layer, \(\alpha_{\pm}=2\pi\sigma_{\pm}/(c\bar{n})\), \(\sigma_{\pm}=\sigma_{xx}\pm\mathrm{i}\sigma_{xy}\), \(\bar{n}=(n_{1}+n_{2})/2\), and \(c\) is the speed of light in vacuum.
Pump-induced anisotropy of the transmission and reflection coefficients leads to the rotation of the linear polarization of the transmitted and reflected probe fields. We will further consider the low-intensity regime, when the pump-induced off-diagonal conductivity is much smaller than the diagonal one, i.e. \(|\sigma_{xy}|\ll|\sigma_{xx}|\), and \(\sigma_{xx}\approx\sigma\). In that case the differences \(t_{+}-t_{-}\) and \(r_{+}-r_{-}\) are much smaller than the corresponding sums, and the Faraday rotation angle \(\theta_{F}\) and the ellipticity \(\epsilon_{F}\) of the transmitted probe field are [37; 38; 7]
\[\epsilon_{F}-\mathrm{i}\theta_{F}\approx\frac{t_{+}-t_{-}}{t_{+}+t_{-}}\;. \tag{3}\]
Analogously, the Kerr rotation angle \(\theta_{K}\) and the accompanying ellipticity \(\epsilon_{K}\) of the reflected probe field are given by
\[\epsilon_{K}-\mathrm{i}\theta_{K}\approx\frac{r_{+}-r_{-}}{r_{+}+r_{-}}\;. \tag{4}\]
By substituting Eq. (2) into Eqs. (3) and (4), we obtain
\[\theta_{F}+\mathrm{i}\epsilon_{F}\approx\frac{2\pi\sigma_{xy}}{c\bar{n}(1+ \alpha)}\,, \tag{5}\]
and
\[\theta_{K}+\mathrm{i}\epsilon_{K}\approx\frac{2\pi t_{12}\sigma_{xy}}{c\bar{n}(1 +\alpha)(r_{12}-\alpha)}\;, \tag{6}\]
where \(\alpha=2\pi\sigma/(c\bar{n})\). Note that Eq. (6) is not valid when the difference \(r_{12}-\alpha\) is close to zero, since in this case the condition \(|r_{+}-r_{-}|\ll|r_{+}+r_{-}|\) does not hold. When, in addition to a small ratio \(|\sigma_{xy}/\sigma|\), the parameter \(\alpha\) is also small, i.e. \(|\alpha|\ll 1\) and \(|\alpha|\ll|r_{12}|\), it follows from Eqs. (5) and (6) that the ratio of the Faraday and Kerr angles is constant, \(\theta_{K}/\theta_{F}=t_{12}/r_{12}\). On the other hand, in the absence of dielectric contrast, when \(n_{1}=n_{2}=\bar{n}\), and \(r_{12}=0\), \(t_{12}=1\), the frequency dependences of the Faraday and Kerr angles differ, i.e. \(\theta_{F}\approx 2\pi\mathrm{Re}\{\sigma_{xy}\}/(c\bar{n})\), while \(\theta_{K}\approx-2\pi\mathrm{Re}\{\sigma_{xy}/\alpha\}/(c\bar{n})\).
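The chain from the conductivity tensor to the observable angles can also be evaluated without the small-\(\sigma_{xy}\) expansion, directly from Eqs. (2)-(4); a minimal sketch in Gaussian units, where the refractive indices are illustrative placeholders:

```python
import numpy as np

C = 2.998e10  # speed of light, cm/s

def rotation_angles(sigma_xx, sigma_xy, n1=1.0, n2=1.5):
    """theta_F, eps_F, theta_K, eps_K from the 2DEG conductivity tensor, via Eqs. (2)-(4)."""
    nbar = (n1 + n2) / 2.0
    r12 = (n1 - n2) / (n1 + n2)
    t12 = 1.0 + r12
    alpha_p = 2.0 * np.pi * (sigma_xx + 1j * sigma_xy) / (C * nbar)   # alpha_+
    alpha_m = 2.0 * np.pi * (sigma_xx - 1j * sigma_xy) / (C * nbar)   # alpha_-
    t_p, t_m = t12 / (1.0 + alpha_p), t12 / (1.0 + alpha_m)           # Eq. (2)
    r_p, r_m = (r12 - alpha_p) / (1.0 + alpha_p), (r12 - alpha_m) / (1.0 + alpha_m)
    eF = (t_p - t_m) / (t_p + t_m)   # = eps_F - i theta_F, Eq. (3)
    eK = (r_p - r_m) / (r_p + r_m)   # = eps_K - i theta_K, Eq. (4)
    return -eF.imag, eF.real, -eK.imag, eK.real
```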
In a typical pump-probe experiment, see e.g. Ref. [39], one measures the Faraday and Kerr rotation signals equal to the difference between the intensities of orthogonal polarization components of the transmitted and reflected beams, such as \(I_{\omega,x^{\prime}}^{(t)}-I_{\omega,y^{\prime}}^{(t)}\) and \(I_{\omega,\sigma+}^{(t)}-I_{\omega,\sigma-}^{(t)}\). Here, \((x^{\prime},y^{\prime})\) are the axes rotated by \(\pi/4\) with respect to the initial \((x,y)\) frame, and \(\sigma_{\pm}\) denotes right- and left-hand circular polarization. These signals are related to the rotation angles and ellipticities as
\[I_{\omega,x^{\prime}}^{(t)}-I_{\omega,y^{\prime}}^{(t)}=2\theta_{F}TI_{\omega}\;, \quad I_{\omega,\sigma+}^{(t)}-I_{\omega,\sigma-}^{(t)}=2\epsilon_{F}TI_{ \omega}\;, \tag{7}\]
and
\[I^{(r)}_{\omega,x^{\prime}}-I^{(r)}_{\omega,y^{\prime}}=2\theta_{K}RI_{\omega}\:, \quad I^{(r)}_{\omega,\sigma+}-I^{(r)}_{\omega,\sigma-}=2\epsilon_{K}RI_{\omega}\:, \tag{8}\]
where
\[T=\frac{n_{2}|\bar{t}|^{2}}{n_{1}}\:,\quad R=|\bar{r}|^{2}\:, \tag{9}\]
\(\bar{t}=(t_{+}+t_{-})/2\), \(\bar{r}=(r_{+}+r_{-})/2\), and \(I_{\omega}\) is the intensity of the incident probe field. Note that the dielectric contrast \(n_{1}\neq n_{2}\) is crucial for the experimental observation of the Kerr rotation signal, since the reflection coefficient \(R\) for the free-standing 2D layer is proportional to the parameter \(|\alpha|^{2}\), see Eq. (2), which might be small [39].
## III Pump-induced transverse conductivity
Now, we develop a microscopic theory of the transverse conductivity \(\sigma_{xy}(\omega,\Omega)\) induced by the circularly polarized pump field. The kinetics of 2D electrons driven by the pump and probe electric fields is described by the Boltzmann equation for the electron distribution function \(f(\mathbf{p},t)\)
\[\frac{\partial f}{\partial t}+e\left[\mathbf{E}_{\Omega}(t)+\mathbf{E}_{\omega}(t) \right]\cdot\frac{\partial f}{\partial\mathbf{p}}=\mathrm{St}f\:. \tag{10}\]
Here, \(\mathbf{p}\) is the electron momentum, \(e\) is the electron charge and \(\mathrm{St}f\) is the collision integral. The fields \(\mathbf{E}_{\Omega}(t)\) and \(\mathbf{E}_{\omega}(t)\) in Eq. (10) are electric fields experienced by the 2DEG, i.e. the sum of the incident and reflected fields at \(z=0\). Equation (10) is valid in the classical regime, when \(\hbar\omega\) and \(\hbar\Omega\) are much less than the mean electron energy. We solve Eq. (10) by expanding the distribution function \(f(\mathbf{p},t)\) in the series in the electric field amplitude as follows:
\[f(\mathbf{p},t)=f_{0}+\left[f_{1\omega}(\mathbf{p})\mathrm{e}^{-\mathrm{ i}\omega t}+f_{1\Omega}(\mathbf{p})\mathrm{e}^{-\mathrm{i}\Omega t}+\mathrm{c.c.}\right] \\ +f_{2}(\mathbf{p})+\left[f_{2,\omega+\Omega}(\mathbf{p})\mathrm{e}^{- \mathrm{i}(\omega+\Omega)t}+f_{2,\omega-\Omega}(\mathbf{p})\mathrm{e}^{-\mathrm{i }(\omega-\Omega)t}+\mathrm{c.c.}\right]\\ +\left[f_{3,\omega}(\mathbf{p})\mathrm{e}^{-\mathrm{i}\omega t}+ \mathrm{c.c.}\right]\:. \tag{11}\]
Here, \(f_{0}\) is the equilibrium distribution function, whereas the first-order corrections \(f_{1\omega}\propto E_{\omega}\) and \(f_{1\Omega}\propto E_{\Omega}\) determine Drude conductivity, responsible for ac electric currents oscillating at frequencies \(\omega\) and \(\Omega\), respectively. The second-order corrections are \(f_{2}\propto E_{\Omega}E_{\Omega}^{*}\), \(f_{2,\omega+\Omega}\propto E_{\omega}E_{\Omega}\) and \(f_{2,\omega-\Omega}\propto E_{\omega}E_{\Omega}^{*}\). The desired transverse current oscillating at \(\omega\) is determined by the third-order correction \(f_{3,\omega}\propto E_{\omega}E_{\Omega}E_{\Omega}^{*}\).
Considering the term \(e\left[\mathbf{E}_{\Omega}(t)+\mathbf{E}_{\omega}(t)\right]\cdot\partial f/\partial\bm {p}\) in Eq. (10) as a perturbation we obtain the following equations for the corrections to the distribution function:
\[-\mathrm{i}\omega f_{1\omega}+e\mathbf{E}_{\omega}\cdot\frac{\partial f_{0}}{ \partial\mathbf{p}}=\mathrm{St}\ f_{1\omega}\:, \tag{12a}\] \[e\left(\mathbf{E}_{\Omega}\cdot\frac{\partial f_{1\Omega}^{*}}{ \partial\mathbf{p}}+\mathbf{E}_{\Omega}^{*}\cdot\frac{\partial f_{1\Omega}}{\partial \mathbf{p}}\right)=\mathrm{St}\ f_{2}\:,\] (12b) \[-\mathrm{i}(\omega+\Omega)f_{2,\omega+\Omega}+e\left(\mathbf{E}_{ \omega}\cdot\frac{\partial f_{1\Omega}}{\partial\mathbf{p}}+\mathbf{E}_{\Omega}\cdot \frac{\partial f_{1\omega}}{\partial\mathbf{p}}\right)\\ =\mathrm{St}\ f_{2,\omega+\Omega}\:,\] (12c) \[-\mathrm{i}\omega f_{3,\omega}+e\mathbf{E}_{\omega}\cdot\frac{\partial f _{2}}{\partial\mathbf{p}}+e\mathbf{E}_{\Omega}\cdot\frac{\partial f_{2,\omega-\Omega}} {\partial\mathbf{p}}\\ +e\mathbf{E}_{\Omega}^{*}\cdot\frac{\partial f_{2,\omega+\Omega}}{ \partial\mathbf{p}}=\mathrm{St}\ f_{3,\omega}\:. \tag{12d}\]
Equation for \(f_{1\Omega}\) is obtained from Eq. (12a) by replacing \(\omega\) with \(\Omega\), and equation for \(f_{2,\omega-\Omega}\) is obtained from Eq. (12c) by replacing \(\Omega\) with \(-\Omega\) and making use of the relations \(\mathbf{E}_{-\Omega}=\mathbf{E}_{\Omega}^{*}\), \(f_{1,-\Omega}=f_{1\Omega}^{*}\).
In order to derive the \(\sigma_{yx}\) component of the conductivity tensor, we calculate the transverse electric current \(j_{\omega,y}=\sigma_{yx}E_{\omega,x}\) driven by the \(x\)-component of the probe field. The current reads
\[j_{\omega,y}=e\nu\sum_{\mathbf{p}}v_{y}f_{3,\omega}\:, \tag{13}\]
Within the relaxation-time approximation for the first angular harmonic, Eq. (12d) yields

\[f_{3,\omega}=-e\tau_{1\omega}\left(\mathbf{E}_{\omega}\cdot\frac{\partial f_{2}}{\partial\mathbf{p}}+\mathbf{E}_{\Omega}\cdot\frac{\partial f_{2,\omega-\Omega}}{\partial\mathbf{p}}+\mathbf{E}_{\Omega}^{*}\cdot\frac{\partial f_{2,\omega+\Omega}}{\partial\mathbf{p}}\right)\:, \tag{14}\]

where \(\langle\ldots\rangle\) denotes averaging over the directions of \(\mathbf{p}\), \(\tau_{1\omega}=\tau_{1}/(1-\mathrm{i}\omega\tau_{1})\), and \(\tau_{1}^{-1}=-\left\langle\mathbf{v}\,\mathrm{St}f\right\rangle/\left\langle\mathbf{v}f\right\rangle\) is the energy-dependent momentum relaxation rate. Summation of Eq. (14) over \(\mathbf{p}\) and integration by parts yield
\[j_{\omega,y}=e^{2}\nu\sum_{\mathbf{p}}\left(f_{2}\mathbf{E}_{\omega}+f_{2,\omega-\Omega} \mathbf{E}_{\Omega}+f_{2,\omega+\Omega}\mathbf{E}_{\Omega}^{*}\right)\cdot\frac{ \partial(v_{y}\tau_{1\omega})}{\partial\mathbf{p}}. \tag{15}\]
We start with calculating \(j_{\omega,y}\) for the parabolic energy dispersion of electrons \(\varepsilon(\mathbf{p})=|\mathbf{p}|^{2}/2m\). This dispersion is typical for low-energy electrons in III-V quantum wells, bilayer graphene, monolayers of transition metal dichalcogenides, etc. Calculating the derivative on the right-hand side of Eq. (15), one obtains
\[j_{\omega,y} =e^{2}\nu E_{\omega,x}\sum_{\mathbf{p}}v_{x}v_{y}\tau^{\prime}_{1 \omega}f_{2}\] \[\quad+\frac{e^{2}\nu}{m}\sum_{\mathbf{p}}(\varepsilon\tau_{1\omega})^{ \prime}\left(f_{2,\omega-\Omega}E_{\Omega,y}+f_{2,\omega+\Omega}E_{\Omega,y}^{*}\right)\] \[+\frac{e^{2}\nu}{2}\sum_{\mathbf{p}}\tau^{\prime}_{1\omega}\left[f_{2, \omega-\Omega}\left(2v_{x}v_{y}E_{\Omega,x}-(v_{x}^{2}-v_{y}^{2})E_{\Omega,y}\right)\right.\] \[\quad\quad\quad\left.+f_{2,\omega+\Omega}\left(2v_{x}v_{y}E_{\Omega,x}^{*}-(v_{x}^{2}-v_{y}^{2})E_{\Omega,y}^{*}\right)\right]\:. \tag{16}\]
Here, \((\ldots)^{\prime}\) denotes derivative over energy, and we took into account that \(\mathbf{E}_{\omega}\parallel x\). The nature of the contributions to the ac current Eq. (16) is similar to the one discussed in Ref. [32] for a static current. The first and the third contributions, proportional to \(v_{x}v_{y}f_{2}\), \(v_{x}v_{y}f_{2,\omega\pm\Omega}\)
and \((v_{x}^{2}-v_{y}^{2})f_{2,\omega\pm\Omega}\), are related to the optical alignment of electron momenta by the oscillating electric field. The second term, proportional to \((\varepsilon\tau_{1\omega})^{\prime}\), is related to the dynamic heating and cooling of the 2DEG by the oscillating fields.
The first-order corrections to the distribution function are found from Eq. (12a) and read
\[f_{1\omega}=-e\tau_{1\omega}(\mathbf{E}_{\omega}\cdot\mathbf{v})f_{0}^{\prime}\:,\quad f _{1\Omega}=-e\tau_{1\Omega}(\mathbf{E}_{\Omega}\cdot\mathbf{v})f_{0}^{\prime}\:, \tag{17}\]
where \(\tau_{1\Omega}=\tau_{1}/(1-\mathrm{i}\Omega\tau_{1})\). Calculation shows that the first term in Eq. (16), proportional to the time-independent correction \(f_{2}\), vanishes for a circularly polarized pump. Therefore, we do not consider this term in the following. The other second-order corrections are found by solving Eq. (12c) with \(f_{1\omega}\) and \(f_{1\Omega}\) given by Eq. (17), which yields
\[f_{2,\omega+\Omega}=\left\langle f_{2,\omega+\Omega}\right\rangle+\frac{1}{2}e^{2}E_{\omega,x}\tau_{2,\omega+\Omega}\left[(\tau_{1\Omega}+\tau_{1\omega})f_{0}^{\prime}\right]^{\prime}\\ \times\left[(v_{x}^{2}-v_{y}^{2})E_{\Omega,x}+2v_{x}v_{y}E_{\Omega,y}\right]\:. \tag{18}\]
Here, \(\left\langle f_{2,\omega+\Omega}\right\rangle\) is the zeroth angular harmonic of \(f_{2,\omega+\Omega}\), \(\tau_{2}^{-1}=-\left\langle v_{x}v_{y}\mathrm{St}f\right\rangle/\left\langle v _{x}v_{y}f\right\rangle\) is the energy-dependent relaxation rate of the second angular harmonic of the distribution function, and \(\tau_{2,\omega+\Omega}=\tau_{2}/[1-\mathrm{i}(\omega+\Omega)\tau_{2}]\).
We describe the relaxation of the zeroth angular harmonic of the distribution function \(\left\langle f(\mathbf{p},t)\right\rangle\) by the collision integral \(\mathrm{St}\left\langle f\right\rangle=-(\left\langle f\right\rangle-f_{0})/ \tau_{0}\), where \(\tau_{0}\) is the energy-independent relaxation time determined by the electron-electron scattering and energy-relaxation processes (e.g., caused by phonon scattering). Equation (12c) yields
\[\left\langle f_{2,\omega+\Omega}\right\rangle=\frac{e^{2}\tau_{0,\omega+\Omega }}{m}\left[\varepsilon(\tau_{1\Omega}+\tau_{1\omega})f_{0}^{\prime}\right]^{ \prime}E_{\omega,x}E_{\Omega,x}\:, \tag{19}\]
where \(\tau_{0,\omega+\Omega}=\tau_{0}/[1-\mathrm{i}(\omega+\Omega)\tau_{0}]\). The \(f_{2,\omega-\Omega}\) function is found from \(f_{2,\omega+\Omega}\) by replacing \(\Omega\) with \(-\Omega\) and using the relations \(\tau_{1,-\Omega}=\tau_{1\Omega}^{*}\) and \(\mathbf{E}_{-\Omega}=\mathbf{E}_{\Omega}^{*}\).
Finally, substituting Eqs. (18) and (19) into Eq. (16) for the current and calculating the sums, we obtain the transverse conductivity of the degenerate electron gas induced by the circularly polarized pump
\[\sigma_{xy}(\omega,\Omega)=F(\omega,\Omega)-F(\omega,-\Omega)\:, \tag{20}\]
where for parabolic spectrum
\[F^{(\mathrm{par})}(\omega,\Omega)=-\frac{\mathrm{i}\sigma e^{2}| \mathbf{E}_{\Omega}|^{2}P_{\mathrm{circ}}[2-\mathrm{i}(\omega+\Omega)\tau_{1}]}{2 m(1-\mathrm{i}\Omega\tau_{1})} \tag{21}\] \[\times\left[(\varepsilon_{F}\tau_{1\omega}^{\prime\prime}+2\tau_ {1\omega}^{\prime})\tau_{0,\omega+\Omega}-\varepsilon_{F}(\tau_{1\omega}^{ \prime}\tau_{2,\omega+\Omega})^{\prime}-2\tau_{1\omega}^{\prime}\tau_{2, \omega+\Omega}\right]\:.\]
Here, the relaxation times and their energy derivatives are taken at the Fermi energy \(\varepsilon_{F}\), \(\sigma=e^{2}n_{e}\tau_{1\omega}/m\) is the high-frequency conductivity, and \(n_{e}=\nu m\varepsilon_{F}/(2\pi\hbar^{2})\) is the electron density.
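For readers who wish to evaluate Eqs. (20) and (21) numerically, the following Python sketch computes the resonant transverse conductivity for parabolic dispersion and Coulomb scatterers, up to the dimensional prefactor \(\sigma e^{2}|\mathbf{E}_{\Omega}|^{2}P_{\mathrm{circ}}/m\), which is dropped. The parameter values mirror those used for Fig. 2, and the energy derivatives are taken by finite differences; this is an illustrative sketch, not part of the derivation.

```python
import numpy as np

# Illustrative parameters (cf. Fig. 2): energies in meV, times in ps,
# frequencies in 1/ps, so all omega*tau products are dimensionless.
eps_F, tau1_F, tau0 = 39.0, 0.1, 5.0
h = 1e-3 * eps_F                          # finite-difference energy step

tau1 = lambda e: tau1_F * e / eps_F       # Coulomb scatterers: tau_1 ~ energy
tau2 = lambda e: 0.5 * tau1(e)            # tau_1 = 2 * tau_2
t1w = lambda e, w: tau1(e) / (1 - 1j * w * tau1(e))
t2w = lambda e, w: tau2(e) / (1 - 1j * w * tau2(e))
t0w = lambda w: tau0 / (1 - 1j * w * tau0)

def F(w, W):
    """Eq. (21) divided by the prefactor sigma*e^2*|E|^2*P_circ/m."""
    t1p = (t1w(eps_F + h, w) - t1w(eps_F - h, w)) / (2 * h)
    t1pp = (t1w(eps_F + h, w) - 2 * t1w(eps_F, w) + t1w(eps_F - h, w)) / h**2
    g = lambda e: (t1w(e + h, w) - t1w(e - h, w)) / (2 * h) * t2w(e, w + W)
    gp = (g(eps_F + h) - g(eps_F - h)) / (2 * h)      # (tau_1w' * tau_2,w+W)'
    bracket = ((eps_F * t1pp + 2 * t1p) * t0w(w + W)
               - eps_F * gp - 2 * t1p * t2w(eps_F, w + W))
    return -1j * (2 - 1j * (w + W) * tau1_F) / (2 * (1 - 1j * W * tau1_F)) * bracket

Omega = 0.5 / tau1_F                      # pump frequency, Omega*tau_1 = 0.5
for wt1 in (0.45, 0.50, 0.55):            # probe frequencies near the resonance
    s = F(wt1 / tau1_F, Omega) - F(wt1 / tau1_F, -Omega)   # Eq. (20)
    print(f"omega*tau1 = {wt1:.2f}: {s.real:+.4f} {s.imag:+.4f}i  ps^2/meV")
```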
Similar calculations can be applied to 2DEG with linear energy dispersion, e.g. in graphene or HgTe/CdHgTe quantum wells of the critical thickness. Using \(\varepsilon(\mathbf{p})=v_{0}|\mathbf{p}|\) and performing calculations shown in App. A, one obtains \(\sigma_{xy}\) given by Eq. (20) with
\[F^{(\mathrm{lin})}(\omega,\Omega)=-\frac{\mathrm{i}\sigma e^{2}v _{0}^{2}|\mathbf{E}_{\Omega}|^{2}P_{\mathrm{circ}}[2-\mathrm{i}(\omega+\Omega)\tau_ {1}]}{4\varepsilon_{F}(1-\mathrm{i}\Omega\tau_{1})}\] \[\times\left[\left(\varepsilon_{F}\tau_{1\omega}^{\prime\prime}+ \tau_{1\omega}^{\prime}-\frac{\tau_{1\omega}}{\varepsilon_{F}}\right)\tau_{0, \omega+\Omega}-\varepsilon_{F}(\tau_{1\omega}^{\prime}\tau_{2,\omega+\Omega})^ {\prime}\right.\] \[\left.\quad-\tau_{1\omega}^{\prime}\tau_{2,\omega+\Omega}+\tau_ {1\omega}\left(\tau_{2,\omega+\Omega}^{\prime}+\frac{\tau_{2,\omega+\Omega}}{ \varepsilon_{F}}\right)\right]\:. \tag{22}\]
Here, the high-frequency conductivity and the electron density are given by \(\sigma=e^{2}v_{0}^{2}n_{e}\tau_{1\omega}/\varepsilon_{F}\) and \(n_{e}=\nu\varepsilon_{F}^{2}/(4\pi\hbar^{2}v_{0}^{2})\).
Note that at \(\omega=0\), Eqs. (20 - 22) describe the static transverse photoconductivity of 2DEG and agree with the second line of Eq. (16) in Ref. [32]. Conductivity given by Eqs. (21) and (22) is proportional to \(|\mathbf{E}_{\Omega}|^{2}\), which is the square of the pump field at \(z=0\). \(|\mathbf{E}_{\Omega}|^{2}\) is related to the intensity of the incident pump \(I_{\Omega}=cn_{1}[E_{\Omega}^{(i)}]^{2}/2\pi\) as \(|\mathbf{E}_{\Omega}|^{2}=2\pi T(\Omega)I_{\Omega}/(cn_{2})\), where \(T\) is given by Eq. (9).
## IV Discussion
Equations (5), (6) and (20 - 22) can be applied to calculate the photoinduced Faraday and Kerr rotation and ellipticity in different 2D systems, such as quantum wells, monolayer and bilayer graphene, transition metal dichalcogenide monolayers and other doped 2D materials. In this section we present results for two illustrative examples with linear and parabolic energy dispersion, monolayer and bilayer graphene, respectively. We also analyze the role of the dielectric contrast \((n_{2}-n_{1})/\bar{n}\) between the two dielectric media surrounding 2DEG on the rotation angles and ellipticities.
### 2D layer on a substrate
First, we consider the case of the 2D layer lying on a substrate by setting the refractive indices \(n_{1}=1\) and \(n_{2}=3\). In the discussion below Eq. (6), we showed that in the case of a large dielectric contrast, the Kerr angle and ellipticity are related to the corresponding Faraday quantities as \(\theta_{K}/\theta_{F}\approx t_{12}/r_{12}\) and \(\epsilon_{K}/\epsilon_{F}\approx t_{12}/r_{12}\). Hence, for the chosen \(n_{1}\) and \(n_{2}\) we have \(\theta_{K}\approx-\theta_{F}\) and \(\epsilon_{K}\approx-\epsilon_{F}\), and in this subsection we discuss the Faraday angle and ellipticity only [40].
#### iv.1.1 Parabolic spectrum. Bilayer graphene.
Figure 2 shows the dependence of the calculated Faraday angle and the accompanying ellipticity for parabolic energy dispersion and a set of parameters relevant to bilayer graphene [41]. It follows from Eq. (21) that in the case of energy-independent relaxation times \(\tau_{1}\) and \(\tau_{2}\), relevant for short-range scatterers, the transverse conductivity \(\sigma_{xy}\) vanishes. Hence, the curves in Fig. 2 are plotted for unscreened Coulomb scatterers corresponding to \(\tau_{1}=2\tau_{2}\propto\varepsilon\). We use the electron density \(n_{e}=10^{12}\) cm\({}^{-2}\)
and momentum relaxation time \(\tau_{1}(\varepsilon_{F})=0.1\) ps, which results in \(\varepsilon_{F}\approx 39\) meV and \(2\pi\sigma_{0}/(c\bar{n})\approx 0.088\), where \(\sigma_{0}=e^{2}n_{e}\tau_{1}/m\) is the static 2DEG conductivity. In the studied frequency range the transmission and reflection coefficients (9) lie in the range \(T=0.63-0.7\) and \(R=0.27-0.29\), respectively.
The dependence of the rotation angles and ellipticities on the probe frequency exhibits sharp resonances in the region where the probe frequency \(\omega\) is close to the pump frequency \(\Omega\). At \(\Omega\tau_{1}\lesssim 1\) and pump intensity \(I_{\Omega}=1\) kW/cm\({}^{2}\), the Faraday angle at the resonance is \(\theta_{F}\sim 0.1^{\circ}\), and the corresponding ellipticity is \(\epsilon_{F}\sim 0.1~{}\%\), see Fig. 2. Note that for such an intensity the inequality \(|\sigma_{xy}|\ll|\sigma_{xx}|\) still holds, so we remain in the perturbative regime. To study the shape of the resonances in more detail, we analyze the pump-induced conductivity, Eqs. (20 - 21), at \(\tau_{0}\gg\tau_{1}\), relevant for 2DEG at low temperature, and \(\Omega\tau_{0}\gg 1\). In this case we have a sharp resonance in the conductivity, whose shape for Coulomb scatterers is given by
\[\sigma_{xy}(\omega)\approx\frac{2\mathrm{i}\sigma_{0}e^{2}\tau_{1}\tau_{0}| \mathbf{E}_{\Omega}|^{2}P_{\mathrm{circ}}}{m\varepsilon_{F}[1-\mathrm{i}(\omega- \Omega)\tau_{0}](1+\Omega^{2}\tau_{1}^{2})(1-\mathrm{i}\Omega\tau_{1})^{3}}~{}. \tag{23}\]
Equation (23) allows one to calculate the frequency dependence of the Faraday angle near the resonance. Substituting Eq. (23) to Eq. (5), one obtains
\[\theta_{F}(\omega) \approx\frac{4\pi\sigma_{0}}{c\bar{n}}\frac{e^{2}\tau_{1}\tau_{0}| \mathbf{E}_{\Omega}|^{2}P_{\mathrm{circ}}}{m\varepsilon_{F}}\] \[\times\frac{\Omega\tau_{1}(\Omega^{2}\tau_{1}^{2}-3)+(\omega- \Omega)\tau_{0}(3\Omega^{2}\tau_{1}^{2}-1)}{(1+\Omega^{2}\tau_{1}^{2})^{4}[1+( \omega-\Omega)^{2}\tau_{0}^{2}]}~{}. \tag{24}\]
It follows from Eq. (24) that, depending on \(\Omega\tau_{1}\), the resonance shape varies between a Lorentzian and a Lorentzian multiplied by \((\omega-\Omega)\), see Fig. 2a. Interestingly, the resonance width is given by the relaxation rate of the zeroth angular harmonic \(\tau_{0}^{-1}\) rather than the momentum relaxation rate. The magnitude of the resonance is determined by the product of \(4\pi\sigma_{0}/(c\bar{n})\) and the dimensionless parameter \(e^{2}|\mathbf{E}_{\Omega}|^{2}\tau_{1}\tau_{0}/(m\varepsilon_{F})\) proportional to the intensity of the pump radiation.
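As a quick illustration of this crossover, the dimensionless lineshape of Eq. (24) can be tabulated directly; in this minimal sketch the positive prefactor is dropped and we write \(x=\Omega\tau_{1}\), \(d=(\omega-\Omega)\tau_{0}\).

```python
import numpy as np

def faraday_shape(d, x):
    """Eq. (24) up to the positive prefactor; x = Omega*tau_1, d = (omega-Omega)*tau_0."""
    return (x * (x**2 - 3) + d * (3 * x**2 - 1)) / ((1 + x**2)**4 * (1 + d**2))

d = np.linspace(-5, 5, 9)
for x in (1 / np.sqrt(3), 1.0, np.sqrt(3)):
    print(f"Omega*tau1 = {x:.3f}:", np.round(faraday_shape(d, x), 4))
```

The two limiting shapes are explicit here: at \(\Omega\tau_{1}=1/\sqrt{3}\) the term odd in \(\omega-\Omega\) vanishes and the resonance is a pure (inverted) Lorentzian, while at \(\Omega\tau_{1}=\sqrt{3}\) only the odd, dispersion-like contribution survives.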
We note that strictly at resonance, when \(\omega=\Omega\), the developed theory is not applicable. In this case, one should consider the third-order response to a monochromatic electric field, since the pump and probe fields can no longer be distinguished as in Eq. (1). This situation corresponds to the self-induced rotation of the electric field, when the field modifies the dielectric properties of the 2D layer and, at the same time, experiences rotation due to this modification. Such a self-induced rotation has been considered for graphene within a simplified relaxation model in Ref. [31]. In App. B, we calculate the third-order photocurrent induced by a monochromatic electric field that is a sum of large circularly polarized and small linearly polarized contributions, see Eq. (B8).
#### iv.1.2 Linear spectrum. Single-layer graphene.
Figure 3 shows the dependence of the calculated Faraday angle and the accompanying ellipticity for linear energy dispersion and a set of parameters relevant to monolayer graphene [42]. For linear energy dispersion, the relaxation times are \(\tau_{1}=2\tau_{2}\propto\varepsilon^{-1}\) for short-range scatterers and \(\tau_{1}=3\tau_{2}\propto\varepsilon\) for Coulomb scatterers [31]. It follows from Eq. (22) that both types of scatterers contribute to the transverse conductivity. For the calculations we use \(n_{e}=3\times 10^{11}\) cm\({}^{-2}\) and \(\tau_{1}(\varepsilon_{F})=0.1\) ps, which results in \(\varepsilon_{F}\approx 64\) meV and \(2\pi\sigma_{0}/(c\bar{n})\approx 0.071\). In that case, the transmission and reflection coefficients of the probe beam lie in the range \(T=0.65-0.71\) and \(R=0.26-0.28\), respectively.
As in the case of bilayer graphene, the rotation
Figure 3: (a) Photoinduced Faraday rotation angle \(\theta_{F}\) and (b) accompanying ellipticity \(\epsilon_{F}\) of the two-dimensional electron gas with _linear_ spectrum for a large dielectric contrast \((n_{2}-n_{1})/\bar{n}\) between the surrounding media. Three curves correspond to three values of the pump frequency: \(\Omega\tau_{1}=0.1,~{}0.5,~{}1\). Sharp resonances at \(\omega\approx\Omega\) occur. The curves are calculated after Eqs. (5), (20) and (22) for the following parameters: \(\tau_{1}(\varepsilon_{F})=0.1\) ps, \(n_{e}=3\times 10^{11}\) cm\({}^{-2}\), \(\tau_{0}=5\) ps, \(v_{0}=10^{8}\) cm/s, \(\tau_{1}=2\tau_{2}\propto\varepsilon^{-1}\) (short-range scatterers), \(I_{\Omega}=1\) kW/cm\({}^{2}\), \(n_{1}=1\), \(n_{2}=3\) and \(P_{\mathrm{circ}}=1\).
Figure 2: (a) Photoinduced Faraday rotation angle \(\theta_{F}\) and (b) the accompanying ellipticity \(\epsilon_{F}\) of the two-dimensional electron gas with _parabolic_ spectrum for a large dielectric contrast between the surrounding media. Three curves correspond to three values of the pump frequency: \(\Omega\tau_{1}=0.1,~{}0.5,~{}1\). Sharp resonances at \(\omega\approx\Omega\) occur. The curves are calculated after Eqs. (5), (20) and (21) for the following parameters: \(\tau_{1}(\varepsilon_{F})=0.1\) ps, \(n_{e}=10^{12}\) cm\({}^{-2}\), \(\tau_{0}=5\) ps, \(m=0.03m_{0}\), \(\tau_{1}=2\tau_{2}\propto\varepsilon\) (Coulomb scatterers), \(I_{\Omega}=1\) kW/cm\({}^{2}\), \(n_{1}=1\), \(n_{2}=3\) and \(P_{\mathrm{circ}}=1\).
angles and ellipticities in single-layer graphene experience sharp resonances at \(\omega\approx\Omega\). The photoconductivity \(\sigma_{xy}\) in the vicinity of the resonance has the form
\[\sigma_{xy}(\omega)\approx-\frac{\sigma_{0}e^{2}v_{0}^{2}(3-\mathrm{i}\Omega \tau_{1})\Omega\tau_{1}^{2}\tau_{0}|\mathbf{E}_{\Omega}|^{2}P_{\text{circ}}}{2 \varepsilon_{F}^{2}[1-\mathrm{i}(\omega-\Omega)\tau_{0}](1+\Omega^{2}\tau_{1} ^{2})(1-\mathrm{i}\Omega\tau_{1})^{3}}\;. \tag{25}\]
Interestingly, Eq. (25) holds both for short-range and Coulomb scatterers. Substituting Eq. (25) to Eq. (5), we obtain for the Faraday angle near the resonance:
\[\theta_{F}(\omega)\approx\frac{\pi\sigma_{0}}{c\bar{n}}\frac{e^{ 2}v_{0}^{2}\tau_{1}\tau_{0}|\mathbf{E}_{\Omega}|^{2}P_{\text{circ}}}{\varepsilon_{ F}^{2}}\\ \times\frac{\Omega\tau_{1}[\Omega^{4}\tau_{1}^{4}+6\Omega^{2}\tau _{1}^{2}-3+8\Omega\tau_{1}(\omega-\Omega)\tau_{0}]}{(1+\Omega^{2}\tau_{1}^{2}) ^{4}[1+(\omega-\Omega)^{2}\tau_{0}^{2}]}\;. \tag{26}\]
The magnitude of the resonance is determined by the product of \(\pi\sigma_{0}/(c\bar{n})\) and the dimensionless parameter \(e^{2}|\mathbf{E}_{\Omega}|^{2}\tau_{1}\tau_{0}/(m^{*}\varepsilon_{F})\) with the effective electron mass \(m^{*}=\varepsilon_{F}/v_{0}^{2}\) (\(m^{*}\approx 0.01\)\(m_{0}\) in our calculations).
### Free-standing monolayer graphene
In this section we consider a free-standing 2D layer by setting the refractive indices \(n_{1}=n_{2}=1\). In this case \(r_{12}=0\), \(t_{12}=1\), and as shown below Eq. (6), the Faraday and Kerr angles have different spectral dependences. Figure 4 shows the results of calculations for a free-standing monolayer graphene. The values of the rotation angles and ellipticities are larger for the free-standing layer than for the layer on a substrate, Figs. 2 and 3, for two reasons. First, the rotation angles and ellipticities are proportional to \(1/\bar{n}\), see Eqs. (5) and (6). Second, the pump field at \(z=0\), \(|\mathbf{E}_{\Omega}|^{2}=2\pi T(\Omega)I_{\Omega}/(cn_{2})\), is larger at a given pump intensity. Moreover, the values of the Kerr angle and ellipticity are significantly larger than the corresponding Faraday values, since \(\theta_{F}\propto\text{Re}\{\sigma_{xy}\}\), while \(\theta_{K}\propto\text{Re}\{\sigma_{xy}/\alpha\}\) at \(|\alpha|\ll 1\). Note that, however, the experimentally measured Kerr rotation signals, see Eq. (8), are still small due to the small reflection from the free-standing layer.
The calculated Faraday rotation angles for graphene samples are \(\sim 0.1^{\circ}-1^{\circ}\) per 1 kW/cm\({}^{2}\) of the pump intensity, see Figs. 2, 3 and 4. Similar values of the Faraday angles were measured in monolayer and multilayer graphene in the terahertz and far-infrared frequency range at external magnetic field \(B_{z}\sim 1\) T in Refs. [42; 43]. The rotation angles can be further increased in high-mobility 2DEG in GaAs/AlGaAs quantum wells with larger values of \(\tau_{1}\), see, e.g., Ref. [44].
### Synthetic magnetic field induced by pump
The action of the circularly polarized pump on the 2DEG can be described in terms of a synthetic magnetic field \(B_{\text{syn}}\). This field equals the external magnetic field that would rotate the polarization plane by the same angle as the pump does. The Faraday angle in the presence of an external magnetic field is given by Eq. (5) with the Hall conductivity \(\sigma_{xy}(B_{z})\), which results in \(\theta_{F}\sim(\omega_{c}\tau_{1})2\pi\sigma_{0}/(c\bar{n})\), where \(\omega_{c}=eB_{z}/mc\) is the cyclotron frequency. By comparison with Eqs. (24) and (26) at \(\Omega\tau_{1}\sim 1\), one can estimate the synthetic magnetic field from \(\omega_{c}\tau_{1}\sim e^{2}|\mathbf{E}_{\Omega}|^{2}\tau_{1}\tau_{0}/(m\varepsilon_{F})\), which yields
\[B_{\text{syn}}\sim\frac{ec|\mathbf{E}_{\Omega}|^{2}\tau_{0}}{\varepsilon_{F}}\;. \tag{27}\]
Note that the value of \(B_{\text{syn}}\) is quite universal, since it does not depend on the electron mobility and energy dispersion. It depends, however, on the energy relaxation time \(\tau_{0}\) and, hence, should increase with decreasing temperature.
The synthetic magnetic field induced by a pump with intensity \(I_{\Omega}=1\) kW/cm\({}^{2}\) at \(\varepsilon_{F}=50\) meV and \(\tau_{0}=10\) ps is \(B_{\text{syn}}\sim 0.1\) T. This value increases with the radiation intensity and may reach 1 T for several kW/cm\({}^{2}\) of terahertz and far-infrared radiation, which is used for spectroscopy of the electron gas in graphene [35; 34]. Note that \(B_{\text{syn}}\) is significantly (several orders of magnitude) larger than the actual magnetic field induced by the orbital currents that are the source of the inverse-Faraday magnetization [41; 13].
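The estimate of Eq. (27) is easy to reproduce. The short sketch below works in Gaussian units and assumes, for simplicity, full pump transmission \(T(\Omega)\approx 1\) and \(n_{2}=1\), so that \(|\mathbf{E}_{\Omega}|^{2}=2\pi I_{\Omega}/c\); both simplifications are ours.

```python
import math

e, c = 4.803e-10, 2.998e10             # electron charge (esu), speed of light (cm/s)
eps_F = 50e-3 * 1.602e-12              # Fermi energy: 50 meV in erg
tau0 = 10e-12                          # energy relaxation time: 10 ps in s
I_pump = 1e3 * 1e7                     # pump intensity: 1 kW/cm^2 in erg/(s*cm^2)

E2 = 2 * math.pi * I_pump / c          # |E_Omega|^2 at the 2DEG, T(Omega)=1 assumed
B_syn = e * c * E2 * tau0 / eps_F      # Eq. (27), result in gauss
print(f"B_syn ~ {B_syn:.0f} G ~ {B_syn / 1e4:.2f} T")
```

This gives a few tenths of a tesla, i.e., the \(\sim 0.1\) T scale quoted above; accounting for the sub-unity transmission of a real interface reduces the number accordingly.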
## V Summary
To summarize, we have studied theoretically the pump-probe Faraday and Kerr rotation due to the orbital magnetization in the two-dimensional electron gas (2DEG). We have shown that the circularly polarized electric field
Figure 4: (a, b) Photoinduced Faraday and Kerr rotation angles \(\theta_{F}\) and \(\theta_{K}\) and (c, d) the accompanying ellipticities \(\epsilon_{F}\) and \(\epsilon_{K}\) of the two-dimensional electron gas in a _free-standing_ graphene. Three curves correspond to three values of the pump frequency: \(\Omega\tau_{1}=0.1,\ 0.5,\ 1\). Sharp resonances at \(\omega\approx\Omega\) occur. The curves are calculated after Eqs. (5), (6) and (22) for the following parameters: \(\tau_{1}(\varepsilon_{F})=0.1\) ps, \(n_{e}=3\times 10^{11}\) cm\({}^{-2}\), \(\tau_{0}=5\) ps, \(v_{0}=10^{8}\) cm/s, \(\tau_{1}=2\tau_{2}\propto\varepsilon^{-1}\) (short-range scatterers), \(I_{\Omega}=1\) kW/cm\({}^{2}\), \(n_{1}=n_{2}=1\) and \(P_{\text{circ}}=1\).
of the terahertz-range pump results in the transverse conductivity \(\sigma_{xy}(\omega,\Omega)\) of 2DEG, which is proportional to the pump intensity and depends on both the probe and pump frequencies \(\omega\) and \(\Omega\), respectively. This pump-induced anisotropy of conductivity results in the circular birefringence and dichroism for a probe field. We have derived analytical expressions for \(\sigma_{xy}(\omega,\Omega)\) and the corresponding Faraday and Kerr rotation angles for parabolic and linear energy dispersion of 2D electrons and arbitrary scattering potential. We have shown that at \(\omega\approx\Omega\) rotation angles are resonantly enhanced, reaching \(0.1^{\circ}-1^{\circ}\) for 1 kW/cm\({}^{2}\) of the pump intensity in graphene samples at \(\Omega\tau_{1}\sim 1\), where \(\tau_{1}\) is the momentum relaxation time. Similar values of the Faraday angles were measured in monolayer and multilayer graphene in the terahertz and far-infrared frequency range in an external magnetic field \(B_{z}\sim 1\) T [42; 43]. The calculated Faraday and Kerr angles are governed by the momentum and energy relaxation of 2D electrons, and hence, can elucidate mechanisms and rates of electron relaxation processes in pump-probe experiments.
###### Acknowledgements.
The author thanks S. A. Tarasenko and M. M. Glazov for fruitful discussions. The work was supported by the Russian Science Foundation (Project No. 21-72-00047).
## Appendix A Transverse photoconductivity of 2DEG with linear energy spectrum
Here, we calculate the pump-induced transverse conductivity for electrons with linear energy dispersion \(\varepsilon=v_{0}p\). We start with the general expression for the current, Eq. (15). Calculating the derivative on its right-hand side, one obtains
\[j_{\omega,y}=e^{2}\nu E_{\omega,x}\sum_{\mathbf{p}}v_{x}v_{y}\varepsilon\left(\frac{\tau_{1\omega}}{\varepsilon}\right)^{\prime}f_{2}\\ +\frac{e^{2}v_{0}^{2}\nu}{2}\sum_{\mathbf{p}}\frac{(\tau_{1\omega})^{\prime}}{\varepsilon}\left(f_{2,\omega-\Omega}E_{\Omega,y}+f_{2,\omega+\Omega}E_{\Omega,y}^{\ast}\right)\\ +\frac{e^{2}\nu}{2}\sum_{\mathbf{p}}\varepsilon\left(\frac{\tau_{1\omega}}{\varepsilon}\right)^{\prime}\left[f_{2,\omega-\Omega}\left(2v_{x}v_{y}E_{\Omega,x}-(v_{x}^{2}-v_{y}^{2})E_{\Omega,y}\right)\right.\\ \left.+f_{2,\omega+\Omega}\left(2v_{x}v_{y}E_{\Omega,x}^{\ast}-(v_{x}^{2}-v_{y}^{2})E_{\Omega,y}^{\ast}\right)\right]\,. \tag{A1}\]
The first contribution in Eq. (A1) proportional to \(f_{2}\) vanishes for a circularly polarized pump. The first-order corrections to the distribution function coincide with the ones given by Eq. (17), whereas the second-order correction \(f_{2,\omega+\Omega}\) has the form
\[f_{2,\omega+\Omega}=\langle f_{2,\omega+\Omega}\rangle+\frac{e^{2}E_{\omega,x}}{2}\tau_{2,\omega+\Omega}\varepsilon\left[\frac{(\tau_{1\Omega}+\tau_{1\omega})f_{0}^{\prime}}{\varepsilon}\right]^{\prime}\\ \times\left[(v_{x}^{2}-v_{y}^{2})E_{\Omega,x}+2v_{x}v_{y}E_{\Omega,y}\right]\,, \tag{A2}\]
where
\[\langle f_{2,\omega+\Omega}\rangle=\frac{e^{2}v_{0}^{2}\tau_{0,\omega+\Omega}}{2\varepsilon}\left[\varepsilon(\tau_{1\omega}+\tau_{1\Omega})f_{0}^{\prime\prime}\right]^{\prime}E_{\omega,x}E_{\Omega,x}\,. \tag{A3}\]
The \(f_{2,\omega-\Omega}\) function is obtained from Eqs. (A2) and (A3) by replacing \(\Omega\) with \(-\Omega\) and using the relations \(\tau_{1,-\Omega}=\tau_{1\Omega}^{\ast}\) and \(\mathbf{E}_{-\Omega}=\mathbf{E}_{\Omega}^{\ast}\). Finally, substituting \(f_{2,\omega\pm\Omega}\) given by Eqs. (A2) and (A3) into Eq. (A1) for the current and calculating the sums, we obtain Eqs. (20) and (22) of the main text.
## Appendix B Transverse photoconductivity at coinciding pump and probe frequencies
In this section, we calculate the third-order response similar to Eq. (1) but at coinciding pump and probe frequencies, \(\omega=\Omega\). The electric field at the 2DEG plane, \(\mathbf{E}(t)=\mathbf{E}\mathrm{e}^{-\mathrm{i}\omega t}+\mathrm{c.c.}\), is a sum of large circularly polarized (pump) and small linearly polarized (probe) contributions:
\[E_{x}=\frac{E_{1}}{\sqrt{2}}+E_{2}\,,\quad E_{y}=\mathrm{i}P_{\mathrm{circ}}E_{1}/\sqrt{2}\,, \tag{B1}\]
where \(P_{\mathrm{circ}}=\pm 1\) and \(E_{2}\ll E_{1}\).
We seek the electron distribution function \(f(\mathbf{p},t)\) in the form
\[f(t)=f_{0}+\left[f_{1}(\mathbf{p})\mathrm{e}^{-\mathrm{i}\omega t}+\mathrm{c.c.}\right]+f_{2}(\mathbf{p})\\ +\left[\tilde{f}_{2}(\mathbf{p})\mathrm{e}^{-2\mathrm{i}\omega t}+\mathrm{c.c.}\right]+\left[f_{3}(\mathbf{p})\mathrm{e}^{-\mathrm{i}\omega t}+\mathrm{c.c.}\right]\,, \tag{B2}\]
where corrections to the distribution function satisfy the following equations
\[-\mathrm{i}\omega f_{1}+e\mathbf{E}\cdot\frac{\partial f_{0}}{\partial\mathbf{p}}=\mathrm{St}\ f_{1}\,, \tag{B3a}\] \[e\left(\mathbf{E}\cdot\frac{\partial f_{1}^{\ast}}{\partial\mathbf{p}}+\mathbf{E}^{\ast}\cdot\frac{\partial f_{1}}{\partial\mathbf{p}}\right)=\mathrm{St}\ f_{2}\,, \tag{B3b}\] \[-2\mathrm{i}\omega\tilde{f}_{2}+e\mathbf{E}\cdot\frac{\partial f_{1}}{\partial\mathbf{p}}=\mathrm{St}\ \tilde{f}_{2}\,, \tag{B3c}\] \[-\mathrm{i}\omega f_{3}+e\mathbf{E}\cdot\frac{\partial f_{2}}{\partial\mathbf{p}}+e\mathbf{E}^{\ast}\cdot\frac{\partial\tilde{f}_{2}}{\partial\mathbf{p}}=\mathrm{St}\ f_{3}\,. \tag{B3d}\]
The transverse electric current is determined by the third-order correction \(f_{3}\) and reads
\[j_{\omega,y}=e\sum_{\mathbf{p}}v_{y}f_{3}=e^{2}\sum_{\mathbf{p}}\left(f_{2}\mathbf{E}+\tilde{f}_{2}\mathbf{E}^{\ast}\right)\cdot\frac{\partial(v_{y}\tau_{1\omega})}{\partial\mathbf{p}}\,. \tag{B4}\]
Taking the derivative on the right-hand side for the case of linear dispersion and simplifying, we obtain
\[j_{\omega,y}=e^{2}v_{0}^{2}\sum_{\mathbf{p}}\left[\frac{\tau_{1\omega}}{\varepsilon}+\frac{\varepsilon}{2}\left(\frac{\tau_{1\omega}}{\varepsilon}\right)^{\prime}\right]f_{2}E_{y}\\ +\frac{e^{2}}{2}\sum_{\mathbf{p}}\varepsilon\left(\frac{\tau_{1\omega}}{\varepsilon}\right)^{\prime}f_{2}\left[2v_{x}v_{y}E_{x}-(v_{x}^{2}-v_{y}^{2})E_{y}\right]\\ +e^{2}v_{0}^{2}\sum_{\mathbf{p}}\left[\frac{\tau_{1\omega}}{\varepsilon}+\frac{\varepsilon}{2}\left(\frac{\tau_{1\omega}}{\varepsilon}\right)^{\prime}\right]\tilde{f}_{2}E_{y}^{\ast}\\ +\frac{e^{2}}{2}\sum_{\mathbf{p}}\varepsilon\left(\frac{\tau_{1\omega}}{\varepsilon}\right)^{\prime}\tilde{f}_{2}\left[2v_{x}v_{y}E_{x}^{\ast}-(v_{x}^{2}-v_{y}^{2})E_{y}^{\ast}\right]\,. \tag{B5}\]
By solving Eqs. (B3b) and (B3c) with the use of Eq. (17), we obtain
\[f_{2}=e^{2}\tau_{2}\text{Re}\left\{\varepsilon\left(\frac{\tau_{1\omega}f_{0}^{\prime}}{\varepsilon}\right)^{\prime}\right\}\left[(v_{x}^{2}-v_{y}^{2})S_{1}+2v_{x}v_{y}S_{2}\right]\\ +e^{2}v_{0}^{2}\tau_{0}S_{0}\text{Re}\left\{\frac{(\varepsilon\tau_{1\omega}f_{0}^{\prime})^{\prime}}{\varepsilon}\right\}\,, \tag{B6}\]
and
\[\tilde{f}_{2}=\frac{e^{2}\tau_{2,2\omega}}{2}\varepsilon\left(\frac{\tau_{1\omega}f_{0}^{\prime}}{\varepsilon}\right)^{\prime}\left[(v_{x}^{2}-v_{y}^{2})s_{1}+2v_{x}v_{y}s_{2}\right]\\ +\frac{e^{2}v_{0}^{2}\tau_{0,2\omega}s_{0}}{2\varepsilon}(\tau_{1\omega}f_{0}^{\prime})^{\prime}\,. \tag{B7}\]
Here, \(S_{0}=|\mathbf{E}|^{2}\), \(S_{1}=|E_{x}|^{2}-|E_{y}|^{2}\), \(S_{2}=E_{x}E_{y}^{*}+E_{x}^{*}E_{y}\) are the Stokes parameters, and \(s_{0}=E_{x}^{2}+E_{y}^{2}\), \(s_{1}=E_{x}^{2}-E_{y}^{2}\), \(s_{2}=2E_{x}E_{y}\). By substituting Eqs. (B6), (B7) and (B1) in Eq. (B5) for the current, performing summation over \(\mathbf{p}\) and simplifying, we finally obtain
\[j_{\omega,y}=-\frac{\text{i}\sigma e^{2}v_{0}^{2}P_{\text{circ}}E_{1}^{2}E_{2}}{\varepsilon_{F}}\left\{\left(\frac{2\tau_{0}}{1+\text{i}\omega\tau_{1}}-\tau_{0,2\omega}\right)A\right.\\ \left.-\frac{\tau_{2}A+\varepsilon_{F}\tau_{2}^{\prime}B}{1+\text{i}\omega\tau_{1}}+\frac{3}{2}\left(\tau_{2,2\omega}A+\varepsilon_{F}\tau_{2,2\omega}^{\prime}B\right)\right\}\,, \tag{B8}\]
where
\[A=\varepsilon_{F}\tau_{1\omega}^{\prime\prime}+\tau_{1\omega}^{\prime}-\frac{\tau_{1\omega}}{\varepsilon_{F}}\,,\quad B=\tau_{1\omega}^{\prime}-\frac{\tau_{1\omega}}{\varepsilon_{F}}\,. \tag{B9}\]
Here, we have kept only the contributions to the current proportional to \(E_{1}^{2}E_{2}\).
Note that, for a simplified relaxation model with energy-independent relaxation times \(\tau_{0}=\tau_{1}=\tau_{2}\), the current given by Eq. (B8) coincides with Eq. (69) of Ref. [31].
|
2307.07513 | An empirical study of using radiology reports and images to improve ICU
mortality prediction | Background: The predictive Intensive Care Unit (ICU) scoring system plays an
important role in ICU management because it predicts important outcomes,
especially mortality. Many scoring systems have been developed and used in the
ICU. These scoring systems are primarily based on the structured clinical data
in the electronic health record (EHR), which may suffer the loss of important
clinical information in the narratives and images. Methods: In this work, we
build a deep learning based survival prediction model with multi-modality data
to predict ICU mortality. Four sets of features are investigated: (1)
physiological measurements of Simplified Acute Physiology Score (SAPS) II, (2)
common thorax diseases pre-defined by radiologists, (3) BERT-based text
representations, and (4) chest X-ray image features. We use the Medical
Information Mart for Intensive Care IV (MIMIC-IV) dataset to evaluate the
proposed model. Results: Our model achieves the average C-index of 0.7829 (95%
confidence interval, 0.7620-0.8038), which substantially exceeds that of the
baseline with SAPS-II features (0.7470 (0.7263-0.7676)). Ablation studies
further demonstrate the contributions of pre-defined labels (2.00%), text
features (2.44%), and image features (2.82%). | Mingquan Lin, Song Wang, Ying Ding, Lihui Zhao, Fei Wang, Yifan Peng | 2023-06-20T15:43:28Z | http://arxiv.org/abs/2307.07513v1 | # An empirical study of using radiology reports and images to improve ICU mortality prediction
###### Abstract
**Background:** The predictive Intensive Care Unit (ICU) scoring system plays an important role in ICU management because it predicts important outcomes, especially mortality. Many scoring systems have been developed and used in the ICU. These scoring systems are primarily based on the structured clinical data in the electronic health record (EHR), which may suffer the loss of important clinical information in the narratives and images.
**Methods:** In this work, we build a deep learning based survival prediction model with multi-modality data to predict ICU mortality. Four sets of features are investigated: (1) physiological measurements of Simplified Acute Physiology Score (SAPS) II, (2) common thorax diseases pre-defined by radiologists, (3) BERT-based text representations, and (4) chest X-ray image features. We use the Medical Information Mart for Intensive Care IV (MIMIC-IV) dataset to evaluate the proposed model.
**Results:** Our model achieves the average C-index of 0.7829 (95% confidence interval, 0.7620-0.8038), which substantially exceeds that of the baseline with SAPS-II features (0.7470 (0.7263-0.7676)). Ablation studies further demonstrate the contributions of pre-defined labels (2.00%), text features (2.44%), and image features (2.82%).
**Conclusions:** Our model achieves a higher average C-index than the traditional machine learning methods under the same feature fusion setting, suggesting that the deep learning methods can outperform the traditional machine learning methods in ICU mortality prediction. These results highlight the potential of deep learning models with multimodal information to enhance ICU mortality prediction. We make our work publicly available at [https://github.com/bionlplab/mimic-icu-mortality](https://github.com/bionlplab/mimic-icu-mortality).
**Keywords:** Mortality prediction, Deep learning, Multimodal fusion
## 1 Introduction
Predictive ICU scoring systems measure disease severity to predict outcomes, typically mortality, of patients in the intensive care unit (ICU) [1]. Such measurements help standardize research and compare the quality of patient care across ICUs. For example, the Acute Physiology and Chronic Health Evaluation [2], Simplified Acute Physiology Score (SAPS) II [3], and Mortality Probability Model [4] were created explicitly for use in the ICU, validated in the critically ill, and rely primarily on physiologic data to predict mortality. These scoring systems have been based primarily on structured clinical data, including the risk factors used by the scoring system, such as demographics, vital signs, and lab tests, which are frequently documented in the electronic health record (EHR).
The recent development of machine learning offers great potential to improve ICU mortality prediction [5, 6, 7, 8]. However, these studies used only structured, coded approaches for data entry, which may result in the loss of significant clinical information typically contained in narratives and images [9, 10]. To overcome this issue, many studies focus on mining unstructured clinical notes for patient mortality prediction [11, 12, 13]. However, most of these works were not compared with the current scoring systems, making it challenging to compare the models fairly.
Moreover, the practice of modern medicine usually relies on multimodal information. Consequently, many feature fusion strategies have been proposed to enhance the performance of prediction algorithms, such as early fusion, late fusion, and joint fusion [14]. Early fusion combines multimodal features into a single vector by concatenating or averaging [15, 16, 17]. Late fusion combines the predictions of multiple models to make the final decision [18, 19, 20]. Joint fusion combines features from the intermediate layers of a neural network with the features of other modalities; the loss during training propagates back to the feature extraction network, thereby creating better feature representations through training iterations [14, 21, 22, 23]. Despite these encouraging findings, we note that most competitive approaches studied classification tasks. Thus, the integration of text and images in a survival analysis framework remains an important yet, to date, insufficiently studied problem.
We, therefore, sought to overcome these limitations by (1) incorporating the potential of natural language processing (NLP) and medical image analysis to identify hidden features of critical illness among ICU patients in the radiology reports and chest X-rays that may not be found in the structured EHR fields [24]; and (2) investigating deep learning models that may provide superior discrimination of ICU mortality compared to traditional machine learning models [25]. Specifically, we first build clinical prediction models to predict ICU mortality using the SAPS-II risk factors, such as demographics, vital signs, and lab tests, obtained in the first 24 hours of ICU admission. We then enrich the model with multimodal features extracted from radiology reports and chest X-rays; the radiology imaging and reports were likewise restricted to the first 24 hours. We hypothesize that including free text and images provides better predictions of ICU mortality than clinical measurements alone. Experiments on the MIMIC-IV dataset [26] show that our multimodal models are substantially more accurate than the unimodal ones.
Our framework has several important strengths. First, we present a method to fuse multimodal data from the EHR for ICU mortality prediction. Second, we demonstrate that our survival analysis model outperforms the existing clinical standard (SAPS-II). Third, we make our work publicly available for reproduction by others.
## 2 Methods
### Task
We first formulate the survival analysis task, which predicts a patient's survival probability in the ICU as a function of their features. We have \(n\) patients \((x_{i},y_{i},\delta_{i})\). Each patient record consists of \(d\) potential covariates \(x_{i}\in R^{d}\), and either the time \(T_{i}\) when death occurred or the time \(C_{i}\) of censoring. Since death and censoring are mutually exclusive, we use the indicator \(\delta_{i}\in\{0,1\}\) and the observed survival time \(y_{i}\), defined as below.
\[y_{i}=\min(T_{i},C_{i})=\begin{cases}T_{i}&\text{if }\delta_{i}=1\\ C_{i}&\text{if }\delta_{i}=0\end{cases} \tag{1}\]
The goal is to estimate the survival probability \(S_{i}(t)=\Pr(T_{i}>t)\), i.e., the probability that patient \(i\) survives beyond time \(t\).
In this study, we use one of the most popular survival analysis models, the Cox model [27], where the survival function is assumed to be
\[S_{i}(t|x_{i})=S_{0}(t)^{e^{\psi(x_{i})}}. \tag{2}\]
In this model, \(S_{0}(t)\) is the baseline survival function that describes the risk for individuals with \(x_{i}=\mathbf{0}\), and \(\psi(x_{i})=x_{i}\beta\) is the relative risk based on the covariates. Note that \(S_{0}(t)\) is shared by all patients at time \(t\). It is NOT associated with any individual covariates. The effect of the covariate values \(x_{i}\) on the survival function is to raise it to a power given by the relative risk.
In the Cox model, \(\psi(x_{i})\) has the form of a linear function, but we can also extend it to a non-linear risk function parameterized by a neural network, giving the DeepSurv-based model. The DeepSurv-based model has three steps: feature extraction, multimodal feature fusion, and survival analysis. The main difference between our model and the DeepSurv model in [28] is that our deep network performs multimodal feature fusion. When there is only a single modality as input, our model is equivalent to the DeepSurv model. The details of the feature-fusion neural network are described in the next section.
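Although the paper does not spell out the training objective, a DeepSurv-style model is conventionally fit by minimizing the negative log Cox partial likelihood of the relative risks \(\psi(x_{i})\). The PyTorch sketch below is a minimal version of that loss under this assumption (Breslow-style handling of tied times is ignored):

```python
import torch

def cox_neg_log_partial_likelihood(risk, time, event):
    """risk: (n,) network outputs psi(x_i); time: (n,) observed times y_i;
    event: (n,) indicators delta_i (1 = death observed, 0 = censored)."""
    order = torch.argsort(time, descending=True)    # longest survivors first
    risk, event = risk[order], event[order]
    # log of the cumulative risk set: log sum_{j: y_j >= y_i} exp(psi_j)
    log_risk_set = torch.logcumsumexp(risk, dim=0)
    return -((risk - log_risk_set) * event).sum() / event.sum()

# toy usage with 6 patients, 4 observed deaths
risk = torch.randn(6, requires_grad=True)
time = torch.tensor([2.0, 5.0, 3.5, 8.0, 1.0, 6.0])
event = torch.tensor([1.0, 0.0, 1.0, 1.0, 1.0, 0.0])
cox_neg_log_partial_likelihood(risk, time, event).backward()
```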
### Neural network via feature fusion
The practices of physicians rely heavily on the synthesis of data from multiple sources. This includes, but is not limited to, structured laboratory data, unstructured text data, and imaging pixel data. Therefore, automated predictive models that can successfully utilize multimodal data may lead to better performance.
In this paper, we expand \(\psi(x_{i})\) by introducing a deep neural network with the fusion of features from multiple sources: SAPS-II risk factors \(x_{saps}\), text features \(x_{text}\), and imaging features \(x_{img}\) (Figure 1). The extracted text features \(x_{text}\) and image features \(x_{img}\) are respectively passed to two separate Multilayer Perceptron (MLP) modules whose output dimensions are equal. We then fuse the two hidden features by element-wise averaging. Finally, we concatenate the fused feature with \(x_{saps}\).
\[x_{i}=Avg(\text{DNN}_{img}(x_{img}),\text{DNN}_{text}(x_{text}))\oplus x_{saps} \tag{3}\]
In terms of fusion strategy, our approach is similar to "early fusion", which refers to combining features from multiple input modalities into a single feature vector before feeding it into the survival model [14]. The difference is that our loss is propagated back to the MLP modules (the DNNs in Eq. (3)) during training, thus refining the fused representation at each training iteration. Our approach is not "joint fusion", however, because the parameters of the pretrained feature extractors are not updated during training.
Figure 1: Multimodal feature fusion network.
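A minimal PyTorch sketch of the fusion network in Eq. (3) is given below. The dimensions follow the Implementation section, while the depth and activations of the MLP modules and the linear risk head are our assumptions for illustration.

```python
import torch
import torch.nn as nn

class FusionSurvivalNet(nn.Module):
    """Eq. (3): map text and image features to a common 32-d space,
    average element-wise, concatenate with the 15 SAPS-II risk factors,
    and output the relative risk psi(x_i)."""
    def __init__(self, d_text=768, d_img=1024, d_saps=15, d_hidden=32):
        super().__init__()
        self.text_mlp = nn.Sequential(nn.Linear(d_text, d_hidden),
                                      nn.ReLU(), nn.Dropout(0.5))
        self.img_mlp = nn.Sequential(nn.Linear(d_img, d_hidden),
                                     nn.ReLU(), nn.Dropout(0.5))
        self.risk = nn.Linear(d_hidden + d_saps, 1)   # risk head (assumed)

    def forward(self, x_text, x_img, x_saps):
        h = 0.5 * (self.text_mlp(x_text) + self.img_mlp(x_img))  # average
        return self.risk(torch.cat([h, x_saps], dim=-1)).squeeze(-1)

# toy usage on a batch of 4 patients
psi = FusionSurvivalNet()(torch.randn(4, 768), torch.randn(4, 1024), torch.randn(4, 15))
```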
### Feature Extraction
#### 2.3.1 SAPS-II score and risk factors
SAPS-II was designed to measure the severity of disease for patients aged 18 or more admitted to ICU [3]. Twenty-four hours after admission to the ICU, the measurements have been completed and resulted in an integer point score between 0 and 163. The score is calculated from 15 routine physiological measurements including information about previous health status, and some information obtained at admission. These measurements are: Age, Heart rate, Blood pressure, Temperature, \(\text{PaO}_{2}/\text{FiO}_{2}\), Blood urea nitrogen, Urine output, Sodium, Potassium, Bicarbonate, Bilirubin, White blood count, Glasgow Coma Scale, Chronic disease, and Admission type.
#### 2.3.2 Text Features
In this work, we investigate three sets of text features.
**Common thorax diseases from radiology reports**. The first set of features consists of 13 pre-defined diseases commonly found in radiology reports (Atelectasis, Cardiomegaly, Consolidation, Edema, Enlarged Cardiomediastinum, Fracture, Lung lesion, Lung opacity, Pleural effusion, Pleural other, Pneumonia, Pneumothorax, Support devices) and Normal [27, 29, 30]. These labels were extracted from radiology reports using NegBio [31] and can be obtained from the MIMIC-CXR website1.
Footnote 1: [https://physionet.org/content/mimic-cxr-jpg/2.0.0/](https://physionet.org/content/mimic-cxr-jpg/2.0.0/)
**Transformer-based features**. The second set of features consists of text embeddings extracted by the BERT model, taking advantage of its pre-training on large-scale biomedical and clinical text corpora. Utilizing clinical texts in survival analysis is difficult because they are largely unstructured, and the above lung disease labels may fail to capture the textual information comprehensively since they are limited in scope. In this work, we therefore use the BERT-based hidden layer representations as text features. For an input report that contains \(m\) tokens, the BERT model produces a \(d\)-dimension embedding vector for each token, resulting in an \(m\times d\) representation matrix of the report in the latent space. We apply average pooling over the token embeddings from the last layer of the BERT model to obtain an aggregate latent report representation.
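A short sketch of this pooling with the HuggingFace transformers API is shown below; the BlueBERT checkpoint name is an assumption, and any compatible BERT model can be substituted.

```python
import torch
from transformers import AutoTokenizer, AutoModel

name = "bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

report = "Large right pleural effusion with adjacent atelectasis."
inputs = tokenizer(report, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state     # (1, m, 768) token embeddings
mask = inputs["attention_mask"].unsqueeze(-1)      # exclude padding from the average
x_text = (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # (1, 768) report embedding
```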
**GCN-based features**. We build a graph convolutional neural network (GCN) to model the inner correlations among radiology concepts. The graph was manually defined by domain experts (Figure 1 in Zhang et al. [32]). Disease findings are defined as nodes in the graph, and correlated findings are connected to influence each other during graph propagation. We take the \(m\times d\) hidden representation vectors from the last layer of the BERT model. To initialize the GCN node features, we apply a 1-dimensional convolution over the text features with kernel size \(k\) and the number of output channels equal to the number of graph nodes. In this way, the graph nodes are initialized by aggregating the hidden features of all the tokens in the report.
The GCN updates its node representations by message passing. We first calculate \(\hat{A}=D^{-1/2}\tilde{A}D^{-1/2}\) in a pre-processing step, where \(\tilde{A}=A+I_{N}\) is the adjacency matrix with added self-connections, \(A\) is the graph adjacency matrix, \(I_{N}\) is the \(N\)-dimensional identity matrix, and \(D=\operatorname{diag}(\sum_{j}\tilde{A}_{ij})\) is the diagonal node degree matrix. Then, following [33], the graph convolution can be expressed as:
\[H^{1}=\mathrm{ReLU}(\hat{A}H^{0}W^{0}+b^{0})\] \[Z=\mathrm{softmax}(\hat{A}H^{1}W^{1}+b^{1})\]
where \(H^{l}\) denotes the node states in the \(l\)-th layer, with \(H^{0}\) initialized using the aggregated report text hidden features, and \(W^{l}\) is a trainable layer-specific weight matrix.
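A compact PyTorch sketch of these two propagation steps might look as follows; the node initialization by 1-dimensional convolution is omitted, and the toy graph and dimensions are placeholders.

```python
import torch

def normalize_adjacency(A):
    """A_hat = D^{-1/2} (A + I_N) D^{-1/2}, the pre-processing step above."""
    A_tilde = A + torch.eye(A.size(0))
    d_inv_sqrt = torch.diag(A_tilde.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ A_tilde @ d_inv_sqrt

class TwoLayerGCN(torch.nn.Module):
    def __init__(self, A, d_in, d_hidden, d_out):
        super().__init__()
        self.register_buffer("A_hat", normalize_adjacency(A))
        self.lin0 = torch.nn.Linear(d_in, d_hidden)   # W^0, b^0
        self.lin1 = torch.nn.Linear(d_hidden, d_out)  # W^1, b^1

    def forward(self, H0):                            # H0: (n_nodes, d_in)
        H1 = torch.relu(self.lin0(self.A_hat @ H0))
        return torch.softmax(self.lin1(self.A_hat @ H1), dim=-1)

A = (torch.rand(14, 14) > 0.7).float()                # toy expert-defined graph
A = torch.max(A, A.t())                               # symmetrize
Z = TwoLayerGCN(A, d_in=768, d_hidden=128, d_out=64)(torch.randn(14, 768))
```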
#### 2.3.3 Image Features
For image feature extraction, we use ChexNet, a DenseNet-121 model pretrained on the CheXpert dataset [29, 34, 35]. For each input image, we extract the image features of dimension \(d_{img}\) from the global average pooling layer of DenseNet-121.
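A sketch of this step with torchvision is given below. We load ImageNet weights as a stand-in, whereas the paper uses a ChexNet checkpoint; the pooling mirrors DenseNet-121's own classifier head.

```python
import torch
from torchvision import models

densenet = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
densenet.eval()

def image_features(x):
    """x: (batch, 3, 224, 224) chest X-rays -> (batch, 1024) pooled features."""
    with torch.no_grad():
        fmap = torch.relu(densenet.features(x))            # (batch, 1024, 7, 7)
        return torch.nn.functional.adaptive_avg_pool2d(fmap, 1).flatten(1)

x_img = image_features(torch.randn(2, 3, 224, 224))        # (2, 1024)
```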
## 3 Experiments
### Study population and patient selection
We use the MIMIC-IV dataset (Medical Information Mart for Intensive Care IV) to evaluate the proposed model [26]. MIMIC-IV is a de-identified clinical database composed of 382,278 patients admitted to the ICUs at Beth Israel Deaconess Medical Center. Of those, we excluded patients who had no chest X-ray studies before the SAPS-II measurements were completed. In total, 9,928 patients were included in this study. Out of these patients, 2,213 (22%) were deceased in the ICU. Table 1 lists the information of the ICU admission group studied in this work. Details of the SAPS-II can be found in Table A1.
### Evaluation metrics
To assess the accuracy of our models, we use the C-index, defined as:
\[L_{s}=\frac{\sum_{i,j}I(T_{i}\geq T_{j})\cdot I(R_{i}\leq R_{j})\cdot d_{j}}{\sum_{i,j}I(T_{i}\geq T_{j})\cdot d_{j}},\]

where \(I(c)=\begin{cases}1&\text{if }c\text{ is true}\\ 0&\text{otherwise}\end{cases}\), \(d_{j}=\begin{cases}1&\text{if }T_{j}\text{ is observed}\\ 0&\text{otherwise}\end{cases}\), \(i,j\in\{1,2,\cdots,N\}\), and \(i\neq j\). \(N\) is the number of samples.
Intuitively, the C-index measures the extent to which the model can assign logical risk scores. An individual with a shorter time-to-event \(T\) should have a higher risk score \(R\) than those with a longer time-to-event. C-index assigns a random model 0.5 and a perfect model 1.
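A direct (quadratic-time) implementation of this definition is sketched below, summing over ordered pairs with \(i\neq j\); tied risk scores are counted loosely.

```python
import numpy as np

def c_index(time, risk, event):
    """Concordance index per the formula above: among comparable pairs
    (patient j has the shorter observed event time), count those where
    j also received the higher risk score."""
    num = den = 0.0
    n = len(time)
    for j in range(n):
        if not event[j]:
            continue                          # d_j = 0: censored, not comparable
        for i in range(n):
            if i == j or time[i] < time[j]:
                continue                      # requires T_i >= T_j
            den += 1.0
            num += float(risk[i] <= risk[j])  # I(R_i <= R_j)
    return num / den

# perfectly anti-ordered risks give C = 1
print(c_index(np.array([5.0, 3.0, 8.0]), np.array([0.2, 0.9, 0.1]), np.array([1, 1, 0])))
```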
### Implementation and Experimental Settings
We perform a grid search to find the optimal hyperparameters based on the metrics and use them for all configurations. The MLP layer for SAPS-II risk factors takes a 15-dimension input and fully connects to 15 output dimensions. The MLP layer for label features fully connects the 14-dimension inputs to the 14-dimension outputs. The MLP layer for report text features fully connects the 768-dimension inputs to the 32-dimension outputs, and the MLP layer for chest X-ray image features fully connects the 1024-dimension inputs to the 32-dimension outputs.
We use 200 bootstrap samples to obtain a distribution of the C-index and report the 95% confidence intervals (CI). For each bootstrap experiment, we sample \(n\) patients with replacement from the whole set of \(n\) patients. We then split the sampled set into training (70%), validation (10%), and test (20%) sets. We iterate the training process for 250 epochs with batch size 72 and early stop if the validation loss does not decrease. The dropout rate is 0.5. The learning rate is 0.001 with an Adam optimizer [36].
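The evaluation loop can be summarized as follows; the training call is stubbed out, and forming the 95% CI from bootstrap percentiles is our assumption about how the interval was computed.

```python
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_boot = 9928, 200

def run_experiment(idx):
    """Stub: split idx 70/10/20, train with early stopping, return test C-index."""
    return 0.78 + 0.01 * rng.standard_normal()   # placeholder value

scores = []
for _ in range(n_boot):
    idx = rng.integers(0, n_patients, size=n_patients)   # resample with replacement
    scores.append(run_experiment(idx))
scores = np.asarray(scores)
lo, hi = np.percentile(scores, [2.5, 97.5])
print(f"C-index {scores.mean():.4f} (95% CI {lo:.4f}-{hi:.4f})")
```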
We obtained the SAPS-II scores using the scripts in the MIMIC-IV repository2. The text embeddings are extracted using BlueBERT [37], which was pre-trained on the PubMed abstracts and MIMIC-III notes. We use pycox3, scikit-survival [38], and
\begin{table}
\begin{tabular}{l r r} \hline \hline & **ICU Discharge** & **ICU mortality** \\ \hline Patient, n & 7,715 & 2,213 \\ Age, mean (SD), y & 61.78 (18.20) & 69.63 (14.84) \\ Gender, male/female, \% & 45/55 & 46/54 \\ Race, \% & & \\ American Indian & 0.27 & 0.18 \\ Asian & 3.25 & 0.39 \\ Black/African American & 11.43 & 11.16 \\ Hispanic/Latino & 3.88 & 2.67 \\ White & 65.37 & 63.76 \\ Other & 5.22 & 4.34 \\ Unable to obtain & 0.73 & 1.04 \\ Unknown & 9.85 & 12.97 \\ Common thorax diseases, \% & & \\ Atelectasis & 19.90 & 21.69 \\ Cardiomegaly & 17.64 & 18.08 \\ Consolidation & 4.19 & 8.00 \\ Edema & 13.45 & 19.52 \\ EC & 3.54 & 4.07 \\ Fracture & 2.68 & 2.21 \\ Lung Lesion & 2.64 & 4.97 \\ Lung Opacity & 26.36 & 36.69 \\ Pleural Effusion & 17.04 & 28.29 \\ Pleural Other & 0.62 & 0.72 \\ Pneumonia & 0.70 & 9.31 \\ Pneumothorax & 2.44 & 2.94 \\ Support Devices & 34.50 & 43.83 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Information of the ICU admission group. EC: Enlarged Cardiomediastinum.
PyTorch to implement the framework. An Intel Core i9-9960X 16-core processor and an NVIDIA Quadro RTX 5000 GPU are used in this work.
## 4 Results
We first compare the baseline ICU scoring model and our models with four different feature settings. The SAPS-II score is an integer point score between 0 and 163 directly obtained from the MIMIC-IV website. The SAPS-II risk factors model is trained using the 15 routine physiological measurements. The SAPS-II risk factors + GCN features model is enriched with the GCN-based features. The SAPS-II risk factors + Image features model is enriched with chest X-ray image features. The multimodal features model is trained using SAPS-II risk factors combined with text features and chest X-ray image features using early average fusion.
Table 2 shows that the ICU scoring model achieves an average C-index of 0.7470 (95% confidence interval, 0.7263-0.7676). Our model with SAPS-II risk factors achieves a mean C-index of 0.7545 (0.7240-0.7849), a 0.75% improvement over the ICU scoring baseline. When combining the SAPS-II risk factors with GCN-based text features or image features, the models obtain average C-indexes of 0.7720 (0.7517-0.7923) and 0.7752 (0.7518-0.7985), respectively, yielding increases of 2.50% and 2.82%. Using the multimodal features, the performance can be boosted further: we obtain an average C-index of 0.7829 (0.7620-0.8038), an improvement of 3.60% over the ICU scoring model. We also train a multimodal features model with SAPS-II risk factors combined with GCN features and chest X-ray image features using early average fusion; its average C-index is 0.7805 (0.7570-0.8040), slightly lower than that of the proposed multimodal features model.
Figure 2 shows more details on bootstrapping. The violin shape reflects the distribution of the C-index: the thicker, the higher frequency. We find that the average C-index associated with the multimodal features model is statistically higher than the other four settings.
Figure 3 shows the C-index results of our SAPS-II risk factors model and multimodal features model, marked in red and blue respectively. Both are trained on the entire dataset and tested on the patients with normal or abnormal chest X-rays. It is clear that our multimodal features model outperforms the SAPS-II risk factors model and the normal subjects can be more accurately predicted by our model. Figure 4 further breaks chest X-ray abnormalities into 13 pre-defined thorax diseases.
\begin{table}
\begin{tabular}{l c c} \hline \hline Model & C-index & (95\% CI) \\ \hline SAPS-II scores (ICU scoring baseline) & 0.7470 & (0.7263-0.7676) \\ SAPS-II risk factors & 0.7545 & (0.7240-0.7849) \\ SAPS-II risk factors + GCN features & 0.7720 & (0.7517-0.7923) \\ SAPS-II risk factors + Image features & 0.7752 & (0.7518-0.7985) \\ Multimodal features & **0.7829** & (0.7620-0.8038) \\ \hline \hline \end{tabular}
\end{table}
Table 2: C-index comparisons of the models using different sets of features.
## 5 Discussion
### Comparison of different types of text features.
First, we compare the results of our model using different types of text features: SAPS-II risk factors + labels, SAPS-II risk factors + transformer features, and SAPS-II risk factors + GCN features. They are trained using the 15 routine physiological measurements combined with the 14 thorax disease labels, the transformer-based features, and the GCN-based features, respectively. Table 3 lists the results of our model under these three feature settings. The mean C-indexes for the three settings are 0.7669 (0.7456-0.7882), 0.7714 (0.7488-0.7941), and 0.7720 (0.7517-0.7923), respectively. Models with
Figure 3: The C-index results of the models trained on the entire dataset and tested on the normal patients or patients with chest X-ray abnormalities.
Figure 2: C-index comparisons of the models using different sets of features. \(*\): \(p-\)value \(\leq\) 0.05; \(**\): \(p-\)value \(\leq\) 0.01.
transformer features or GCN features outperform the model that uses only labels, but there is no significant difference between the transformer and GCN features.
### Contribution of thorax diseases in survival analysis
Next, we analyze the multivariate association of chest X-ray abnormalities with ICU mortality based on the Cox Proportional Hazards (CoxPH) model (Table 4). The p-values of four findings (enlarged cardiomediastinum, fracture, pneumonia, and pneumothorax) are greater than 0.05, indicating no statistically significant association. In other words, these findings do not contribute to mortality prediction.
### Comparison of linear and deep survival models
We then compare the performances of a linear machine learning model and a deep learning model: CoxPH [40] and the DeepSurv-based model.
\begin{table}
\begin{tabular}{l c c} \hline \hline Model & C-index & (95\% CI) \\ \hline SAPS-II risk factors + labels & 0.7669 & (0.7456-0.7882) \\ SAPS-II risk factors + transformer features & 0.7714 & (0.7488-0.7941) \\ SAPS-II risk factors + GCN features & **0.7720** & (0.7517-0.7923) \\ \hline \hline \end{tabular}
\end{table}
Table 3: The C-index results of the models using different types of text features.
Figure 4: The C-index results of the models trained on the entire dataset and tested on the patients with different chest X-ray abnormalities.
Table 5 shows the results for both models under two feature settings. The average C-indexes of the CoxPH model with SAPS-II risk factors and with SAPS-II risk factors + labels are 0.7510 (0.7300-0.7720) and 0.7617 (0.7414-0.7819), respectively, in comparison with 0.7545 (0.7240-0.7849) and 0.7669 (0.7456-0.7882) obtained by our DeepSurv-based model. The results demonstrate that deep learning models outperform CoxPH on high-dimensional features. The p-value comparing the CoxPH and DeepSurv-based models is 0.01 when using SAPS-II risk factors and 1.08e-6 when using SAPS-II risk factors + labels.
### Error Analysis
Error analysis (i.e., examining the reasons behind inaccurate predictions) revealed that the multimodal model accounted for fewer errors. Consider one example case of ICU mortality. According to the physiological measurements, SAPS-II graded patient #1 with a score of 38 and patient #2 with 36. However, patient #1 was deceased at hour 198, while patient #2 was deceased at hour 75. Hence, SAPS-II ranked the two patients incorrectly. In contrast, our multimodal approach correctly assigned a higher survival probability to patient #1 (0.9903) than to patient #2 (0.9562). In one bootstrap sample, we
\begin{table}
\begin{tabular}{l c c c} \hline \hline & Hazard ratio & 95\% CI & \(p\)-value \\ \hline \hline \multicolumn{1}{l}{Atelectasis} & 0.84 & 0.75–0.94 & ** \\ \multicolumn{1}{l}{Cardiomegaly} & 0.85 & 0.76–0.96 & ** \\ \multicolumn{1}{l}{Consolidation} & 1.33 & 1.14–1.55 & *** \\ \multicolumn{1}{l}{Edema} & 1.23 & 1.10–1.38 & *** \\ \multicolumn{1}{l}{Enlarged Cardiomediastinum} & 0.91 & 0.75–1.12 & 0.37 \\ \multicolumn{1}{l}{Fracture} & 0.96 & 0.72–1.28 & 0.77 \\ \multicolumn{1}{l}{Lung Lesion} & 1.37 & 1.13–1.67 & ** \\ \multicolumn{1}{l}{Lung Opacity} & 1.29 & 1.17–1.42 & *** \\ \multicolumn{1}{l}{Pleural Effusion} & 1.13 & 1.02–1.26 & * \\ \multicolumn{1}{l}{Pleural Other} & 0.64 & 0.41–1.00 & * \\ \multicolumn{1}{l}{Pneumonia} & 1.07 & 0.93–1.23 & 0.34 \\ \multicolumn{1}{l}{Pneumothorax} & 1.10 & 0.86–1.41 & 0.45 \\ \multicolumn{1}{l}{Support Devices} & 1.27 & 1.16–1.39 & *** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Multivariate associations of chest X-ray abnormalities to ICU-mortality. *: \(p-\)value \(\leq\) 0.05; **: \(p-\)value \(\leq\) 0.01; ***: \(p-\)value \(<\) 0.001.
\begin{table}
\begin{tabular}{l c c} \hline \hline Model & C-index & (95\% CI) \\ \hline \multicolumn{1}{l}{SAPS-II risk factors} & \\ \multicolumn{1}{l}{CoxPH} & 0.7510 & (0.7300-0.7720) \\ \multicolumn{1}{l}{DeepSurv-based} & **0.7545** & (0.7240-0.7849) \\ \multicolumn{1}{l}{SAPS-II risk factors + labels} & \\ \multicolumn{1}{l}{CoxPH} & 0.7617 & (0.7414-0.7819) \\ \multicolumn{1}{l}{DeepSurv-based} & **0.7669** & (0.7456-0.7882) \\ \hline \hline \end{tabular}
\end{table}
Table 5: The C-index results of the conventional machine learning models and the deep learning models trained and tested on the entire dataset.
observed a total of 40,529 such errors (pairs for which SAPS-II gives the wrong prediction but our multimodal method gives the correct one), involving 1,802 distinct patients, of which 527 have normal chest X-rays and 1,275 have abnormal chest X-rays. Figure 5 shows the distribution of thorax diseases among the 1,275 patients; Lung Opacity (38.98%) contributes most to the ICU mortality prediction.
## 6 Conclusion
In this paper, we propose a deep learning method that combines text and image features, which are not available in structured EHR fields, to improve ICU mortality prediction. Experiments on the MIMIC-IV dataset show that our multimodal feature model (using SAPS-II risk factors and early fusion of text and X-ray image features) obtains a superior average C-index of 0.7829 (0.7620-0.8038) compared with several baselines. This demonstrates that the additional information provided by the multimodal features can improve ICU mortality prediction performance. We also investigate whether deep learning methods are more powerful than traditional machine learning methods in predicting ICU mortality. Through experiments,
Figure 5: Distribution of thorax diseases among patients where our multimodal model made more accurate predictions than SAPS-II.
our model achieves a better average C-index than the CoxPH model under the same feature fusion setting, demonstrating the superior performance of deep learning methods in ICU mortality prediction.
There are several limitations to this work. First, we use a fusion strategy similar to "early fusion" to fuse the text and image features extracted by BlueBERT and ChexNet, respectively, but their parameters are not updated during the training iterations. In the future, we plan to use joint fusion to propagate the loss back to the feature extraction modules during training, which may improve the representation learning performance. Second, a knowledge graph is a popular tool for representing background knowledge, which can improve several aspects of the model. We will explore other domain knowledge and try different ways of incorporating knowledge graphs into ICU mortality prediction. Third, the longitudinal EHR data contain information regarding disease progression that may help ICU mortality prediction, but this is not utilized in this work. In the future, we can employ the longitudinal EHR to assist in predicting ICU mortality. Fourth, there is a risk of selection bias in this study. For instance, we only included in our analysis patients with imaging studies after ICU admission. Imaging studies are usually performed when a patient is sicker, for example, to confirm central line placement. This selection could lead to a sample that is not representative of the ICU population. However, selection bias is a common problem in machine learning [39], statistics [40], and epidemiology [41]; as a result, a number of techniques have been developed to correct it. In the future, we will investigate these techniques. Fifth, machine learning models are vulnerable to adversarial attacks [42]. For example, images can be attacked by adding a small perturbation to the original images, and texts can be attacked by adding a small number of words. These attacks are imperceptible to humans but mislead a model into producing incorrect outputs. Like selection bias, adversarial attacks are a common problem in the medical domain, where accurate diagnostic results are of paramount importance [43]. Previous studies suggest that if a model could eliminate noise in its learned feature representations, it would be more robust against adversarial perturbations [44]. We will study these techniques to improve the robustness of the model in the future.
While our work only scratches the surface of multimodal fusion for survival analysis, we hope it will shed light on the future directions for ICU mortality prediction.
**Supplementary information.**
**Abbreviations.**
* ICU: Intensive Care Unit
* EHR: Electronic Health Record
* SAPS: Simplified Acute Physiology Score
* MIMIC: Medical Information Mart for Intensive Care
* NLP: Natural Language Processing
* GP: Gaussian Process
* CNN: Convolutional Neural Network
* MLP: Multilayer Perceptron
* GCN: Graph Convolutional Network
* CI: Confidence intervals
## Declarations
Ethics approval and consent to participate.The dataset supporting the conclusions of this article is available in the Medical Information Mart for Intensive Care version IV (MIMIC-IV), which is a public de-identified database; thus, informed consent and approval of the Institutional Review Board were waived. Our access to the database was approved after completion of the Collaborative Institutional Training Initiative (CITI program) web-based training.
Consent for publication.N/A
Availability of data and materials.We made our codes publicly available at [https://github.com/bionlplab/mimic-icu-mortality](https://github.com/bionlplab/mimic-icu-mortality). The dataset we are using in this work is Medical Information Mart for Intensive Care IV (MIMIC-IV), which is also publicly available at [https://physionet.org/content/mimiciv/0.4/](https://physionet.org/content/mimiciv/0.4/).
Competing interests.The authors declare that they have no competing interests.
Funding.This work was funded by the National Library of Medicine under award number 4R00LM013001 and Amazon Machine Learning Grant.
Author's contributions.ML and SW implemented the methods, conducted the experiments and wrote the paper. YP advised on all aspects of the work involved in this project and assisted in the paper writing. YD, LZ, and FW advised on the overall direction of the project and edited the paper. All authors read and approved the final manuscript.
\begin{tabular}{l c c c} \hline & **Score** & **ICU Discharge \%** & **ICU Mortality \%** \\ \hline
Heart rate, beats/min & & & \\
40-69 & 2 & 42.28 & 28.11 \\
70-119 & 0 & 34.71 & 32.90 \\
120-159 & 4 & 0.93 & 2.94 \\ \(\geq\)160 & 7 & 1.04 & 4.84 \\
Systolic BP, mmHg & & & \\ \(<\)70 & 13 & 5.51 & 22.01 \\
70-99 & 5 & 62.49 & 59.51 \\
100-199 & 0 & 30.82 & 17.44 \\ \(\geq\)200 & 2 & 1.18 & 1.04 \\
Temperature \(\geq 39^{\circ}\)C & & & \\ No & 0 & 95.58 & 93.76 \\ Yes & 3 & 4.42 & 6.24 \\
PaO\({}_{2}\)/FiO\({}_{2}\), mm Hg & & & \\ \(<\)100 & 11 & 3.47 & 13.29 \\
100-199 & 9 & 7.57 & 13.92 \\ \(\geq\)200 & 6 & 13.66 & 17.40 \\ No ventilation & 0 & 75.29 & 55.40 \\
Blood urea nitrogen, mg/dL & & & \\ \(<\)28 & 0 & 70.28 & 47.40 \\
28-83 & 6 & 26.80 & 46.18 \\ \(\geq\)84 & 10 & 2.92 & 6.42 \\
Urine output, mL/day & & & \\ \(<\)500 & 11 & 7.66 & 29.60 \\
500-999 & 4 & 16.84 & 19.84 \\ \(\geq\)1000 & 0 & 75.50 & 50.56 \\
Sodium, mEq/L & & & \\ \(<\)125 & 5 & 2.20 & 2.58 \\
125-144 & 0 & 89.31 & 81.16 \\ \(\geq\)145 & 1 & 8.49 & 16.27 \\
Potassium, mEq/L & & & \\
3.0-4.9 & 0 & 82.31 & 69.95 \\ \(<\)3.0 or \(\geq\)5.0 & 3 & 17.69 & 30.05 \\
Bicarbonate, mEq/L & & & \\ \(<\)15 & 6 & 4.72 & 16.49 \\
15-19 & 3 & 18.09 & 26.25 \\ \(\geq\)20 & 0 & 77.19 & 67.25 \\
Bilirubin, mg/dL & & & \\ \(<\)4.0 & 0 & 96.64 & 90.24 \\
4.0-5.9 & 4 & 1.24 & 2.76 \\ \(\geq\)6.0 & 9 & 2.11 & 7.00 \\
White blood count, \(\times\)10\({}^{3}\)/mm\({}^{3}\) & & & \\ \hline \end{tabular}
continued on next page |
2301.04394 | Bounds on Embeddings of Triangulations of Spheres | Borcea and Streinu showed that the upper bound of the number of congruence
classes of a minimally $d$-volume rigid $(d+1)$-uniform hypergraph on $n$
vertices in $\mathbb{R}^d$ increases exponentially in $n$ and $d$. We show that
this result also holds for triangulations of $\mathbb{S}^2$ in $\mathbb{R}^2$,
and then find a geometrically motivated bound linear in $n$ for bipyramids. By
the methods used to deduce this bound, we show that, in general, global
$d$-volume rigidity in $\mathbb{R}^d$ is not a generic property of a
$(d+1)$-uniform hypergraph. | Jack Southgate | 2023-01-11T10:39:44Z | http://arxiv.org/abs/2301.04394v3 | # Bounds on Embeddings of Triangulations of Spheres
###### Abstract
Borcea and Streinu [2012] showed that the upper bound of the number of congruence classes of a minimally \(d\)-volume rigid \((d+1)\)-uniform hypergraph on \(n\) vertices in \(\mathbb{R}^{d}\) increases exponentially in \(n\) and \(d\). We show that this result also holds for triangulations of \(\mathbb{S}^{2}\) in \(\mathbb{R}^{2}\), and then find a geometrically motivated bound linear in \(n\) for bipyramids. By the methods used to deduce this bound, we show that, in general, global \(d\)-volume rigidity in \(\mathbb{R}^{d}\) is not a generic property of a \((d+1)\)-uniform hypergraph.
## 1 Introduction
For any natural number \(d\), a \((d+1)\)-uniform hypergraph \(\Theta\) may be realised in \(\mathbb{R}^{d}\) as a framework by representing each of its vertices as a point in \(\mathbb{R}^{d}\). The hyperedges of \(\Theta\) in such a framework specify geometric \(d\)-simplices whose signed \(d\)-volumes may be measured. We say that two frameworks of \(\Theta\) are equivalent if, for each hyperedge of \(\Theta\), the signed \(d\)-volumes of the associated \(d\)-simplices are equal between the two frameworks. We say that two frameworks are congruent if this holds for every \((d+1)\)-tuple of vertices of \(\Theta\). We provide rigorous definition for these concepts in Section 2.
If \((\Theta,p)\) is a rigid framework in \(\mathbb{R}^{d}\) (here \(p\) represents the configuration, ie. the list of points in \(\mathbb{R}^{d}\)), then there are only finitely many frameworks \((\Theta,q)\) equivalent to \((\Theta,p)\), modulo those congruent to \((\Theta,p)\). Therefore it is reasonable to try and study how many such _congruence classes_ a framework admits as a function of \(d\) and constants of \(\Theta\) (for example the number of vertices, hyperedges and so on).
Borcea and Streinu [2012] put the following exponential upper bound in terms of \(d\) and \(n\) (the number of vertices) in the case of minimally \(d\)-volume rigid hypergraphs (ie. generically \(d\)-volume rigid hypergraphs on \(dn-(d^{2}+d-1)\) hyperedges):
**Theorem 1.1**.: _Let \((\Theta,p)\) be a generic framework in \(\mathbb{R}^{d}\) of the \((d+1)\)-uniform
minimally generically rigid hypergraph \(\Theta\) on \(n\) vertices. Then \((\Theta,p)\) has at most_
\[(d(n-d-1))!\prod_{i=0}^{d-1}\frac{i!}{(n-d-1+i)!} \tag{1}\]
_congruence classes._
They prove this similarly to their analogous result in Euclidean graph rigidity in Borcea and Streinu (2002), by setting the upper bound to be the degree of the signed \(d\)-volume measurement variety of \(\Theta\), the complexified, projectified version of which, in our case, is birationally equivalent to the \((d,n-1)\)-Grassmannian variety.
Since then, their Euclidean graph rigidity (upper and lower) bounds have been revisited several times, for instance by Steffens and Theobald (2010). Moreover, in specific cases, it has been improved, such as by Jackson et al. (2006) and Grasegger et al. (2020).
To the best of the author's knowledge, similar follow-up attention has not been paid to Theorem 1.1. In this paper, we will set out to consider it further.
We begin by proving a sharp constant lower bound, noting that a \((d+1)\)-uniform hypergraph is generically globally signed \(d\)-volume rigid in \(\mathbb{R}^{d}\) if every generic framework of it admits only one congruence class:
**Theorem 1.2**.: _Let \(d\geq 1\) and \(n\geq d+1\). There exists a generically minimally rigid \((d+1)\)-uniform hypergraph \(\Theta\) on \(n\) vertices that is generically globally signed \(d\)-volume rigid in \(\mathbb{R}^{d}\)._
In Section 4, we show that, for a \(3\)-uniform hypergraph describing triangulations of \(\mathbb{S}^{2}\), which is one hyperedge away from minimal rigidity, each hyperedge is redundant (in the sense that its inclusion or exclusion does not change the number of congruence classes). Therefore, generic frameworks of such hypergraphs in \(\mathbb{R}^{2}\) should have at most the number of congruence classes stated in Theorem 1.1, with \(d=2\). Meanwhile, we can also go backwards, finding a bound for triangulations of \(\mathbb{S}^{2}\) and tracing it back to a bound for minimally rigid hypergraphs.
For bipyramidal triangulations of \(\mathbb{S}^{2}\), we put forward the following improvement of the upper bound in Theorem 1.1:
**Theorem 1.3**.: _Let \(B_{n-2}\) be the \((n-2)\)-gonal bipyramid, for some \(n\geq 5\), and \((B_{n-2},p)\) a generic framework in \(\mathbb{R}^{2}\). Then \((B_{n-2},p)\) has at most \(n-4\) congruence classes._
As stated in Section 5 and shown in Appendix A, this bound is found by constructing a polynomial based off the geometry and combinatorics of frameworks of bipyramids.
By means of considering this polynomial, we show that global rigidity is not, in general, a generic property of \((d+1)\)-uniform hypergraphs:
**Theorem 1.4**.: _There exists a 3-uniform hypergraph that admits a generic framework with one and a generic framework with more than one congruence class._
Finally, we construct bounds for hypergraphs obtained by gluing two smaller hypergraphs together at a hyperedge.
### Acknowledgments
The author would like to thank Alex Rutar and Louis Theran, who both provided helpful comments on this paper. Moreover, this paper was written under the academic supervision of Louis Theran and many of the results in Sections 2, 5 and 6 were developed with his oversight.
## 2 \(d\)-Volume Rigidity
In this section, we give some definitions of \(d\)-volume rigidity and define the algebro-geometric objects used in Borcea and Streinu (2012). We also introduce pinned frameworks, which will serve as congruence class representatives for the configuration space of generic frameworks.
We will give a more-rigorous-than-strictly-necessary treatment to the definitions and Lemmas here. The reason for doing so is the relative scarcity of published literature on \(d\)-volume rigidity in comparison to Euclidean bar-joint rigidity, and therefore the desire to make sure the fundamentals are covered.
### Preliminary Definitions
Let \(d\in\mathbb{N}\). A _\((d+1)\)-uniform hypergraph_ \(\Theta\) is a pair \((V,H)\) of a set of vertices \(V\) and a set of hyperedges \(H\subseteq\binom{V}{d+1}\). Write \(n=|V|\), \(m=|H|\), and label the vertices \(1,\ldots,n\) and the hyperedges by the ordered \((d+1)\)-tuples \(i_{1}\ldots i_{d+1}\), where \(1\leq i_{1}<\ldots<i_{d+1}\leq n\). We will order the hyperedges lexicographically as above throughout.
We may realise a \((d+1)\)-uniform hypergraph \(\Theta=(V,H)\) in \(\mathbb{R}^{d}\) by pairing it with a _configuration_, a vector \(p=(p(1),\ldots,p(n))\in\mathbb{R}^{dn}\) where each \(p(i)\) is a point in \(\mathbb{R}^{d}\), to form a _framework_\((\Theta,p)\) in \(\mathbb{R}^{d}\). For every configuration \(p\in\mathbb{R}^{dn}\), there is a unique _configuration matrix_
\[C(p)=\begin{bmatrix}1&\ldots&1\\ p(1)_{1}&\ldots&p(n)_{1}\\ \vdots&\ddots&\vdots\\ p(1)_{d}&\ldots&p(n)_{d}\end{bmatrix}.\]
Moreover, for each \((d+1)\)-tuple \(i_{1}\ldots i_{d+1}\) in \((\Theta,p)\), we may specify the submatrix

\[C(i_{1}\ldots i_{d+1},p)=\begin{bmatrix}1&\ldots&1\\ p(i_{1})_{1}&\ldots&p(i_{d+1})_{1}\\ \vdots&\ddots&\vdots\\ p(i_{1})_{d}&\ldots&p(i_{d+1})_{d}\end{bmatrix}.\]
The _d-volume measurement map_ of \(\Theta\) is the polynomial map
\[f_{\Theta}:\mathbb{R}^{dn}\to\mathbb{R}^{m};p\mapsto(\det(C(h,p)):h\in H),\]
that lists the signed volumes of the \(d\)-simplices defined by the hyperedges of \((\Theta,p)\), as \(p\) varies in \(\mathbb{R}^{dn}\).
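As a minimal illustration (not from the paper), the measurement map can be implemented directly from this definition; note that the signed volume here is the bare determinant \(\det(C(h,p))\), which differs from the geometric simplex volume by a factor of \(1/d!\).

```python
# A minimal sketch of the d-volume measurement map f_Theta: each
# hyperedge contributes the determinant of its configuration submatrix.
import numpy as np

def measurement_map(hyperedges, p):
    """p has shape (n, d); returns the signed d-volumes of the hyperedges."""
    vols = []
    for h in hyperedges:
        # C(h, p): a row of ones above the coordinates of the d + 1 points
        C = np.vstack([np.ones(len(h)), p[list(h)].T])
        vols.append(np.linalg.det(C))
    return np.array(vols)

# One triangle in R^2 (d = 2) with signed area determinant 1
p = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(measurement_map([(0, 1, 2)], p))  # [1.]
```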
We say that two frameworks \((\Theta,p)\) and \((\Theta,q)\) in \(\mathbb{R}^{d}\) are _equivalent_ if
\[f_{\Theta}(p)=f_{\Theta}(q),\]
and _congruent_ if
\[f_{K_{n}^{d+1}}(p)=f_{K_{n}^{d+1}}(q),\]
where \(K_{n}^{d+1}=\left(V,\binom{V}{d+1}\right)\) is the _complete \((d+1)\)-uniform hypergraph_.
We say that the framework \((\Theta,p)\) in \(\mathbb{R}^{d}\) is _(d-volume) rigid_ if there exists \(\varepsilon>0\) so that
\[f_{\Theta}^{-1}(f_{\Theta}(p))\cap B_{\varepsilon}(p)=f_{K_{n}^{d+1}}^{-1}(f _{K_{n}^{d+1}}(p))\cap B_{\varepsilon}(p),\]
where
\[B_{\varepsilon}(p)=\{q\in\mathbb{R}^{dn}:d(p,q)<\varepsilon\},\]
for the Euclidean metric \(d:\mathbb{R}^{dn}\times\mathbb{R}^{dn}\to\mathbb{R}\), ie. if, for all frameworks defined by configurations sufficiently close to \(p\), equivalence to \((\Theta,p)\) yields congruence to \((\Theta,p)\); and _(d-volume) globally rigid_ if
\[f_{\Theta}^{-1}(f_{\Theta}(p))=f_{K_{n}^{d+1}}^{-1}(f_{K_{n}^{d+1}}(p)),\]
ie. if, for all configurations, equivalence to \((\Theta,p)\) implies congruence. Finally, if, for all \(\varepsilon>0\)
\[f_{\Theta}^{-1}(f_{\Theta}(p))\cap B_{\varepsilon}(p)\supsetneq f_{K_{n}^{d+1 }}^{-1}(f_{K_{n}^{d+1}}(p))\cap B_{\varepsilon}(p),\]
ie. we can continuously deform \(p\) to configurations yielding frameworks not congruent to \((\Theta,p)\) whilst maintaining equivalence, we say that \((\Theta,p)\) is _(d-volume) flexible_.
Define the _configuration space_ of \((\Theta,p)\) to be the space of configurations yielding frameworks equivalent to \((\Theta,p)\), modulo those yielding frameworks congruent to \((\Theta,p)\). We may express it as the following quotient space:
\[\mathcal{C}(\Theta,p)=f_{\Theta}^{-1}(f_{\Theta}(p))\diagup f_{K_{n}^{d+1}}^{-1}(f_{K_{n}^{d+1}}(p))\]
Note that \((\Theta,p)\) is rigid if and only if \(\mathcal{C}(\Theta,p)\) is \(0\)-dimensional (if \(\mathcal{C}(\Theta,p)\) is also connected then the framework is globally rigid) and flexible otherwise.
Call the elements of \(\mathcal{C}(\Theta,p)\)_congruence classes_ of \((\Theta,p)\) (as they represent equivalence classes of configurations, where the equivalence relation is congruence of frameworks). Then the number of congruence classes of \((\Theta,p)\) is the number of frameworks that are equivalent to \((\Theta,p)\), up to congruence.
The configuration space of \((\Theta,p)\) is a semi-algebraic set (a subset of \(\mathbb{R}^{dn}\) defined as the solution of a set of polynomial equations). Therefore it has a finite number of connected components (see Basu and Pollack [2006] for example). As each of these connected components corresponds to the congruence class of an equivalent framework to \((\Theta,p)\), \((\Theta,p)\) has finitely many congruence classes if and only if \((\Theta,p)\) is \(d\)-volume rigid in \(\mathbb{R}^{d}\).
### \(d\)-Volume Preserving Affine Transformations
An affine transformation of \(\mathbb{R}^{d}\) is a map
\[f:\mathbb{R}^{d}\to\mathbb{R}^{d};x\mapsto Ax+b,\]
where \(A\in\mathbb{R}^{d\times d}\) and \(b\in\mathbb{R}^{d}\). Every affine transformation may be represented as an augmented matrix \(T\in\mathbb{R}^{(d+1)\times(d+1)}\) that acts on points of \(\mathbb{R}^{d}\) written in homogeneous co-ordinates as follows:
\[T\begin{bmatrix}1\\ x\end{bmatrix}=\begin{bmatrix}1&0^{t}\\ b&A\end{bmatrix}\begin{bmatrix}1\\ x\end{bmatrix}=\begin{bmatrix}1\\ b+Ax\end{bmatrix},\]
We say that \(T\) is \(d\)-volume preserving if \(\det(A)\) (equivalently \(\det(T)\)) is equal to \(1\). Indeed, then for any \(d+1\) points \(x^{1},\ldots,x^{d+1}\in\mathbb{R}^{d}\),
\[\det\left(T\begin{bmatrix}1&\cdots&1\\ x^{1}&\cdots&x^{d+1}\end{bmatrix}\right)=\det(T)\det\begin{bmatrix}1&\cdots&1\\ x^{1}&\cdots&x^{d+1}\end{bmatrix}=\begin{vmatrix}1&\cdots&1\\ x^{1}&\cdots&x^{d+1}\end{vmatrix}.\]
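A quick numerical sanity check of this identity, with an arbitrary volume-preserving map built from shears:

```python
# An affine map T with det(A) = 1 leaves every signed simplex
# determinant unchanged; the shear entries below are arbitrary.
import numpy as np

d = 3
A = np.eye(d)
A[0, 1], A[2, 0] = 0.7, -1.3                    # product of shears: det(A) = 1
b = np.array([2.0, -1.0, 0.5])                  # arbitrary translation
assert np.isclose(np.linalg.det(A), 1.0)

rng = np.random.default_rng(1)
x = rng.normal(size=(d + 1, d))                 # d + 1 random points in R^d
det = lambda pts: np.linalg.det(np.vstack([np.ones(d + 1), pts.T]))
print(np.isclose(det(x), det(x @ A.T + b)))     # True
```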
**Lemma 2.1**.: _The space of \(d\)-volume preserving affine transformations, \(\mathcal{V}(d,\mathbb{R})\), is \((d^{2}+d-1)\)-dimensional._
Proof.: The space \(\mathcal{V}(d,\mathbb{R})\) is an algebraic group, isomorphic to the semidirect product \(\operatorname{SL}(d,\mathbb{R})\ltimes\mathbb{R}^{d}\), the factors of which are themselves algebraic groups of dimensions \(d^{2}-1\) and \(d\) respectively. Hence
\[\dim(\mathcal{V}(d,\mathbb{R}))=(d^{2}-1)+d.\]
The next two propositions show the equivalence of rigidity in terms of the \(d\)-volume measurement map with rigidity in terms of affine \(d\)-volume preserving transformations of \(\mathbb{R}^{d}\).
**Proposition 2.2**.: _Let \(\Theta=(V,H)\) be a \((d+1)\)-uniform hypergraph. Two frameworks \((\Theta,p)\) and \((\Theta,q)\) in \(\mathbb{R}^{d}\) are equivalent if and only if there exists a set of \(d\)-volume preserving affine transformations \(\{T_{h}:h\in H\}\) such that \(T_{h}p(i)=q(i)\), for all vertices \(i\) in each hyperedge \(h\)._
A set of points \(P=\{p(1),\ldots,p(n)\}\subset\mathbb{R}^{d}\) is _affinely dependent_ if there exists a set of coefficients \(\{a_{1},\ldots,a_{n}\}\in\mathbb{R}\), not all equal to zero, so that
\[a_{1}\begin{bmatrix}1\\ p(1)\end{bmatrix}+\cdots+a_{n}\begin{bmatrix}1\\ p(n)\end{bmatrix}=\begin{bmatrix}0\\ 0\end{bmatrix},\]
or equivalently, if the affine span of \(P\) (the smallest affine subspace of \(\mathbb{R}^{d}\) containing \(P\)) is a proper affine subspace of \(\mathbb{R}^{d}\). Otherwise \(P\) is _affinely independent_.
Before beginning our proof, we note that affine transformations of \(\mathbb{R}^{d}\) are uniquely defined by their action on a set of \(d+1\) affinely independent points.
Proof of Proposition 2.2.: Suppose that \((\Theta,p)\) and \((\Theta,q)\) are equivalent. Then, for each \((d+1)\)-tuple \(i_{1}\ldots i_{d+1}\in H\):
1. If \(\{p(i_{1}),\ldots,p(i_{d+1})\}\) is affinely dependent, then the volume of \(i_{1}\ldots i_{d+1}\) in \((\Theta,p)\) is \(0\). Moreover, there exist infinitely many \(d\)-volume preserving affine transformations \(T_{i_{1}\ldots i_{d+1}}\) so that \(T_{i_{1}\ldots i_{d+1}}C(i_{1}\ldots i_{d+1},p)=C(i_{1}\ldots i_{d+1},q)\).
2. If \(\{p(i_{1}),\ldots,p(i_{d+1})\}\) is affinely independent, then there exists a unique affine transformation of \(\mathbb{R}^{d}\), \(T_{i_{1}\ldots i_{d+1}}\) for which \(T_{i_{1}\ldots i_{d+1}}C(i_{1}\ldots i_{d+1},p)=C(i_{1}\ldots i_{d+1},q)\). Since \(\det(C(i_{1}\ldots i_{d+1},p))=\det(C(i_{1}\ldots i_{d+1},q))\), it follows that \(\det(T_{i_{1}\ldots i_{d+1}})=1\).
After running over all hyperedges of \(\Theta\), we end up with our set \(\{T_{h}:h\in H\}\).
Next, suppose that such a set \(\{T_{h}:h\in H\}\) exists. Then we know that \(\det(C(h,q))=\det(T_{h})\det(C(h,p))=\det(C(h,p))\), for all \(h\in H\). Hence \((\Theta,p)\) and \((\Theta,q)\) are equivalent.
If all the points of a framework \((\Theta,p)\) lie within a proper affine subspace of \(\mathbb{R}^{d}\), we say that \((\Theta,p)\) is flat.
**Proposition 2.3**.: _Two non-flat frameworks of \(\Theta=(V,H)\) in \(\mathbb{R}^{d}\), \((\Theta,p)\) and \((\Theta,q)\), are congruent if and only if there exists a single \(d\)-volume preserving affine transformation \(T\) so that \(Tp(i)=q(i)\), for all vertices \(i\)._
Proof.: Since \((\Theta,p)\) and \((\Theta,q)\) are not flat, there exists a hyperedge \(h\in H\) so that \(\det(C(h,p))=\det(C(h,q))\neq 0\). Relabel \(V\) if necessary so that \(h=1\ldots(d+1)\).
Then, since \(f_{K_{n}^{d+1}}(p)=f_{K_{n}^{d+1}}(q)\), there exists a unique \(d\)-volume preserving affine transformation \(T\) with \(Tp(i)=q(i)\), for \(1\leq i\leq d+1\). For every vertex \(j\geq d+2\), there are \(d+1\) hyperedges of the form \(1\ldots\hat{i}\ldots(d+1)j\), for \(1\leq i\leq d+1\). These define \(d\) independent hyperplanes, the intersection of which \(q(j)\) lies within. Therefore \(q(j)=Tp(j)\), as both \(q(j)\) and \(p(j)\) are affinely dependent in the same manner on the vertices \(1,\ldots,d+1\).
Now, suppose that there exists a \(d\)-volume preserving affine transformation \(T\) so that \(Tp(i)=q(i)\), for all vertices \(i\in V\). Then \(TC(p)=C(q)\), so \(\det(C(i_{1}\ldots i_{d+1},p))=\det(C(i_{1}\ldots i_{d+1},q))\), for all \(i_{1}\ldots i_{d+1}\in\binom{V}{d+1}\). Therefore \(f_{K_{n}^{d+1}}(p)=f_{K_{n}^{d+1}}(q)\), ie. \((\Theta,p)\) and \((\Theta,q)\) are congruent.
### Flexes and Infinitesimal Rigidity
A _flex_ of \((\Theta,p)\) is a continuous path \(\gamma:[0,1]\to f_{\Theta}^{-1}(f_{\Theta}(p))\). We say that \(\gamma\) is _trivial_ if \(\gamma([0,1])\subset f_{K_{n}^{d+1}}^{-1}(f_{K_{n}^{d+1}}(p))\).
Flexes are continuous deformations of the vertices of a framework to equivalent frameworks. Trivial flexes are continuous deformations of the vertices of a framework to congruent frameworks (so called because they do not leave their congruence class).
**Proposition 2.4**.: _Suppose that \((\Theta,p)\) is a framework in \(\mathbb{R}^{d}\), then \((\Theta,p)\) is flexible if and only if \((\Theta,p)\) admits a non-trivial flex._
In order to prove this, we will use the Curve-Selection Lemma (Milnor (2016)). A helpful formulation is as follows:
**Lemma 2.5** (Curve-Selection Lemma).: _Let \(p\) and \(q\) be two points in a semi-algebraic set \(S\) and let \(U\) be an open neighbourhood of \(p\). If \(q\in S\cap U\), then there exists a path \(\gamma:[0,1]\to S\) so that \(\gamma([0,1])\subset S\cap U\), \(\gamma(0)=p\) and \(\gamma(1)=q\)._
Proof of Proposition 2.4.: Suppose that \((\Theta,p)\) is flexible. Then, for any \(\varepsilon>0\), there exists a configuration \(q\in(f_{\Theta}^{-1}(f_{\Theta}(p))\cap B_{\varepsilon}(p))\setminus(f_{K_{n }^{d+1}}^{-1}(f_{K_{n}^{d+1}}(p))\cap B_{\varepsilon}(p))\). Then, as \(f_{\Theta}^{-1}(f_{\Theta}(p))\) is a semi-algebraic set and \(B_{\varepsilon}(p)\) contains an open neighbourhood \(U\) of \(p\), we may apply the Curve-Selection Lemma to select a flex \(\gamma:[0,1]\to f_{\Theta}^{-1}(f_{\Theta}(p))\cap U\) so that \(\gamma(0)=p\) and \(\gamma(1)=q\). As \(\gamma([0,1])\not\subset f_{K_{n}^{d+1}}^{-1}(f_{K_{n}^{d+1}}(p))\), \(\gamma\) is not trivial.
Suppose that \((\Theta,p)\) admits a non-trivial flex \(\gamma\) and, for the sake of contradiction, that there exists \(\varepsilon>0\) such that \(f_{\Theta}^{-1}(f_{\Theta}(p))\cap B_{\varepsilon}(p)=f_{K_{n}^{d+1}}^{-1}(f_ {K_{n}^{d+1}}(p))\cap B_{\varepsilon}(p)\). Then, as \(\gamma([0,1])\not\subset f_{K_{n}^{d+1}}^{-1}(f_{K_{n}^{d+1}}(p))\), there exist \(0<\tau_{1}<\tau_{2}<1\) arbitrarily close to each other so that \(\gamma(\tau_{1})\in f_{K_{n}^{d+1}}^{-1}(f_{K_{n}^{d+1}}(p))\) and \(\gamma(\tau_{2})\not\in f_{K_{n}^{d+1}}^{-1}(f_{K_{n}^{d+1}}(p))\), therefore, as \((\Theta,\gamma(\tau_{1}))\) is flexible and congruent to \((\Theta,p)\), we have that \((\Theta,p)\) is flexible.
Proposition 2.4 links flexes to our previous notions of volume rigidity. For a framework \((\Theta,p)\), the following are therefore equivalent definitions of rigidity:
* There exists \(\varepsilon>0\) such that \(f_{\Theta}^{-1}(f_{\Theta}(p))\cap B_{\varepsilon}(p)=f_{K_{n}^{d+1}}^{-1}(f_ {K_{n}^{d+1}}(p))\cap B_{\varepsilon}(p)\);
* There exists \(\varepsilon>0\) such that, for all \(q\in B_{\varepsilon}(p)\), if there exists a set of \(d\)-volume preserving affine transformations of \(\mathbb{R}^{d}\), \(\{T_{h}:h\in H\}\), so that \(T_{h}C(h,p)=C(h,q)\), \(\forall h\in H\) then all the \(T_{h}\) are equal;
* Every flex of \((\Theta,p)\) is trivial.
The _rigidity matrix_ of the framework \((\Theta,p)\) in \(\mathbb{R}^{d}\), \(R(\Theta,p)\in\mathbb{R}^{m\times dn}\), is the Jacobian matrix of \(f_{\Theta}\) evaluated at \(p\). Therefore the rank of \(R(\Theta,p)\) is the dimension of the tangent space to the measurement variety of \(\Theta\) at \(f_{\Theta}(p)\)
(definition to follow). The rows of \(R(\Theta,p)\) are indexed by the hyperedges of \(\Theta\), whilst the columns are grouped into \(d\)-tuples, each indexed by the vertices of \(\Theta\). Denote the row indexed by hyperedge \(h\) by \(R(\Theta,p)_{h}\) and the column group indexed by vertex \(i\) by \(R(\Theta,p)^{i}\).
The \(1\times d\) submatrix \(R(\Theta,p)^{i}_{h}\) of \(R(\Theta,p)\) has as its \(k^{th}\) entry the signed \((d-1)\)-volume of the \(d\)-tuple of points in \(\mathbb{R}^{d-1}\): \(\{(p(j)_{1},\ldots,\widehat{p(j)_{k}},\ldots,p(j)_{d}):j\in h\setminus\{i\}\}\), where the hat denotes omission of the \(k^{th}\) co-ordinate. \(R(\Theta,p)^{i}_{h}\) can thus be thought of as the vector based at \(0\) in the direction orthogonal to the affine span of the \(d\)-tuple of points in \(\mathbb{R}^{d}\): \(\{p(j):j\in h\setminus\{i\}\}\), with length the volume of that span.
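Since each coordinate \(p(i)_{k}\) appears linearly in every \(\det(C(h,p))\), central finite differences recover this Jacobian exactly up to rounding; the sketch below (an illustration, not the paper's code) builds \(R(\Theta,p)\) this way for \(K_{4}^{3}\) in the plane.

```python
# An illustrative construction of R(Theta, p) as the numerical Jacobian
# of the measurement map, column by column.
import numpy as np

def measurement_map(H, p):
    return np.array([np.linalg.det(np.vstack([np.ones(len(h)), p[list(h)].T]))
                     for h in H])

def rigidity_matrix(H, p, eps=1e-6):
    n, d = p.shape
    R = np.zeros((len(H), d * n))
    for j in range(d * n):
        dp = np.zeros(d * n)
        dp[j] = eps
        R[:, j] = (measurement_map(H, p + dp.reshape(n, d))
                   - measurement_map(H, p - dp.reshape(n, d))) / (2 * eps)
    return R

# K_4^3 in R^2: four rows, but rank 3 = 2n - (d^2 + d - 1), since the
# four signed areas satisfy one linear relation
H = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
p = np.random.default_rng(2).normal(size=(4, 2))
print(np.linalg.matrix_rank(rigidity_matrix(H, p)))  # 3
```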
Define an _infinitesimal flex_\(\eta\) of \((\Theta,p)\) to be the infinitesimal velocity of a flex \(\gamma\) of \((\Theta,p)\):
\[\eta=\frac{d}{dt}\gamma\bigg{|}_{t=0}.\]
Say that \(\eta\) is _trivial_ if \(\gamma\) is _trivial_.
**Lemma 2.6**.: _The kernel of \(R(\Theta,p)\) is precisely the space of infinitesimal flexes of \((\Theta,p)\)._
Proof.: Firstly, suppose that \(\gamma\) is a flex of \((\Theta,p)\), then, for all \(t\in[0,1]\),
\[f_{\Theta}(\gamma(t))=f_{\Theta}(p),\]
differentiating this with respect to \(t\) gives
\[\frac{d}{dt}f_{\Theta}(\gamma(t)) =\frac{d}{dt}f_{\Theta}(p)\] \[R(\Theta,\gamma(t))\frac{d}{dt}\gamma(t) =0,\]
which, evaluated at \(t=0\), becomes
\[R(\Theta,p)\eta=0,\]
hence, the space of infinitesimal flexes lies within the kernel of \(R(\Theta,p)\).
Next, suppose that \(R(\Theta,p)x=0\), for some \(x=(x(1),\ldots,x(n))\in\mathbb{R}^{dn}\), then each \(x(i)\) is orthogonal to the span of each \(R(\Theta,p)^{i}_{h}\) (ie. the entries of \(R(\Theta,p)\) in column group \(i\) and row \(h\)). As the span of \(R(\Theta,p)^{i}_{h}\) is the line in \(\mathbb{R}^{d}\) orthogonal to the affine hyperplane spanned by \(\{p(j):j\in h\setminus\{i\}\}\), \(x(i)\) is _parallel_ to each \(d\)-hyperedge opposite \(i\) in \((\Theta,p)\). This is an equivalent definition of an infinitesimal flex.
### The Measurement Variety of a Hypergraph
Let \(\Theta=(V,H)\) be a \((d+1)\)-uniform hypergraph. Then we may define the _(d-volume) measurement variety_, \(M_{\Theta}\), of \(\Theta\) as the closure of the image of \(\mathbb{R}^{dn}\) under the \(d\)-volume measurement map:
\[M_{\Theta}=\overline{f_{\Theta}(\mathbb{R}^{dn})}.\]
The _complete measurement variety_ is the measurement variety of \(K_{n}^{d+1}\)
\[M_{K_{n}^{d+1}}=\overline{f_{K_{n}^{d+1}}(\mathbb{R}^{dn})}.\]
The following Lemma follows immediately from the definitions above.
**Lemma 2.7**.: _With notation as laid out above, \(M_{\Theta}=\pi_{H}(M_{K_{n}^{d+1}})\), where the map \(\pi_{H}:\mathbb{R}^{\binom{n}{d+1}}\to\mathbb{R}^{m}\) projects onto the co-ordinates indexed by hyperedges of \(\Theta\)._
The rigidity matrix \(R(\Theta,p)\) is the differential of \(f_{\Theta}\) evaluated at \(p\):
\[R(\Theta,p):T_{p}\mathbb{R}^{dn}\to T_{f_{\Theta}(p)}M_{\Theta},\]
therefore, its rank is the local dimension of \(M_{\Theta}\) at \(f_{\Theta}(p)\).
Now, in order to find the dimensions of \(M_{\Theta}\) and of the fibres of \(f_{\Theta}\), we will prove the \(d\)-volume rigidity theoretic analogue of Asimow and Roth's Theorem for Euclidean bar-joint rigidity of graph frameworks (see Asimow and Roth [1978]).
**Theorem 2.8**.: _Let \(\Theta=(V,H)\) be a \((d+1)\)-uniform hypergraph and let \(p\in\mathbb{R}^{dn}\) be a regular point of \(f_{\Theta}\). Then \((\Theta,p)\) is rigid if and only if \(\operatorname{rank}(R(\Theta,p))=dn-d^{2}-d+1\), moreover, this is the maximum rank that \(R(\Theta,p)\) may achieve, for any \(p\in\mathbb{R}^{dn}\)._
Before we prove Theorem 2.8 (in a similar manner to Asimow and Roth), we note some of its immediate consequences:
1. If \(m<dn-d^{2}-d+1\), then \(\Theta\) will always fail to be \(d\)-volume rigid in \(\mathbb{R}^{d}\);
2. If \((\Theta,p)\) is _flat_, then \((\Theta,p)\) will fail to be \(d\)-volume rigid in \(\mathbb{R}^{d}\).
Proof.: If \((\Theta,p)\) is flat, ie. \(f_{\Theta}(p)=0\in\mathbb{R}^{m}\), for some \(\Theta\) on more than three vertices, then \((\Theta,p)\) is flexible. Indeed, let \(\gamma\) be a flex of \((\Theta,p)\) so that \(\gamma(t)(i)\in A\), for all \(t\in[0,1]\) and all \(i\in V\), where \(A\) is the affine span of all vertices of \((\Theta,p)\). If \(n>3\), then \(\gamma\) may be non-trivial whilst remaining in \(A\) for all \(t\in[0,1]\).
Recall from Lemma 2.1 that \(\dim(\mathcal{V}(d,\mathbb{R}))=d^{2}+d-1\). Consider the map
\[F_{p}:\mathcal{V}(d,\mathbb{R})\to\mathbb{R}^{dn};T\to(Tp(i):i\in V)\]
sending each \(d\)-volume preserving affine transformation \(T\) of \(\mathbb{R}^{d}\) to the configuration whose framework is the image of \((\Theta,p)\) under \(T\). Then \(F_{p}(\mathcal{V}(d,\mathbb{R}))=f_{K_{n}^{d+1}}^{-1}(f_{K_{n}^{d+1}}(p))\). As the kernel of this map is trivial, \(\dim(F_{p}(\mathcal{V}(d,\mathbb{R})))=d^{2}+d-1\).
Now, \((\Theta,p)\) is rigid if and only if there exists \(\varepsilon>0\) such that \(f_{\Theta}^{-1}(f_{\Theta}(p))\cap B_{\varepsilon}(p)=f_{K_{n}^{d+1}}^{-1}(f_ {K_{n}^{d+1}}(p))\cap B_{\varepsilon}(p)\). By the Regular Value Theorem and our assumption of the regularity of \(p\), for small \(\varepsilon>0\), \(f_{\Theta}^{-1}(f_{\Theta}(p))\cap B_{\varepsilon}(p)\) is a connected \((dn-\operatorname{rank}(R(\Theta,p)))\)-dimensional manifold.
As \(f_{K_{n}^{d+1}}^{-1}(f_{K_{n}^{d+1}}(p))\cap B_{\varepsilon}(p)=F_{p}( \mathcal{V}(d,\mathbb{R}))\cap B_{\varepsilon}\subseteq f_{\Theta}^{-1}(f_{ \Theta}(p))\cap B_{\varepsilon}(p)\), equality follows from their dimensions matching, which happens if and only if \(\operatorname{rank}(R(\Theta,p))=dn-(d^{2}+d-1)\).
Finally, since \(\dim(f_{\Theta}^{-1}(f_{\Theta}(p)))\not<\dim(f_{K_{n}^{d+1}}^{-1}(f_{K_{n}^{d+1 }}(p)))\), we cannot have \(\operatorname{rank}(R(\Theta,p))\) exceeding \(dn-(d^{2}+d-1)\).
For a configuration \(p\) (resp. a framework \((\Theta,p)\)), we say that \(p\) (resp. \((\Theta,p)\)) are _generic_ if
\[f\in\mathbb{Q}[x]\setminus\{0\}\implies f(p)\neq 0.\]
**Corollary 2.8.1**.: _A generic framework \((\Theta,p)\) in \(\mathbb{R}^{d}\) is rigid if and only if it admits no non-trivial infinitesimal flexes._
Proof.: Suppose that \((\Theta,p)\) is flexible, then it admits a non-trivial flex, the infinitesimal velocity of which is a non-trivial infinitesimal flex.
Suppose that \((\Theta,p)\) is rigid. By the genericity of \(p\), the rank of \(R(\Theta,p)\) must be \(dn-(d^{2}+d-1)\), so the kernel of \(R(\Theta,p)\) is a \((d^{2}+d-1)\)-dimensional linear space. Since the space of trivial infinitesimal flexes is the kernel of \(R(K_{n}^{d+1},p)\), which is itself a \((d^{2}+d-1)\)-dimensional linear space, and is contained within the kernel of \(R(\Theta,p)\), we have equality of the two, ie. \((\Theta,p)\) admits no non-trivial infinitesimal flexes.
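This yields a simple randomized test for generic rigidity: a configuration sampled from a continuous distribution is generic with probability \(1\), so it suffices to compare \(\operatorname{rank}(R(\Theta,p))\) with \(dn-(d^{2}+d-1)\). A self-contained sketch (illustrative, not the paper's code):

```python
# Randomized generic-rigidity test following Theorem 2.8.
import numpy as np

def rigidity_rank(H, p, eps=1e-6):
    # Jacobian of the measurement map by central differences (exact here,
    # since each coordinate enters every determinant linearly)
    n, d = p.shape
    f = lambda q: np.array([np.linalg.det(np.vstack([np.ones(len(h)), q[list(h)].T]))
                            for h in H])
    cols = [(f(p + e.reshape(n, d)) - f(p - e.reshape(n, d))) / (2 * eps)
            for e in np.eye(d * n) * eps]
    return np.linalg.matrix_rank(np.column_stack(cols))

# K_4^3 minus the hyperedge 123 has m = 3 = 2n - 5 and meets the rank
# bound, so it is (minimally) generically rigid in R^2
H = [(0, 1, 3), (0, 2, 3), (1, 2, 3)]
p = np.random.default_rng(3).normal(size=(4, 2))
print(rigidity_rank(H, p) == 2 * 4 - (2**2 + 2 - 1))  # True
```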
A _property_\(\mathcal{P}\) is a binary (true/false) valued function on some set \(X\). A property \(\mathcal{P}\) is _generic_ over \(X\) if it either holds for all generic points in \(X\) or for none of them (ie. it is constant over the generic points of \(X\)).
**Corollary 2.8.2**.: _Let \(\Theta\) be a \((d+1)\)-uniform hypergraph. The \(d\)-volume rigidity of a framework of \(\Theta\) is a generic property over \(\mathbb{R}^{dn}\)._
ie. Either all generic frameworks of \(\Theta\) in \(\mathbb{R}^{d}\) are \(d\)-volume rigid or none are.
Proof.: Suppose that \((\Theta,p)\) and \((\Theta,q)\) are two generic frameworks in \(\mathbb{R}^{d}\), with \((\Theta,p)\) rigid and \((\Theta,q)\) flexible. Let \(M(x)\) be a \((dn-(d^{2}+d-1))\times(dn-(d^{2}+d-1))\) minor of \(R(\Theta,x)\) (with \(x\) representing an indeterminate vector of length \(dn\)) so that \(M(q)=0\) but \(M(p)\neq 0\). Therefore a non-trivial polynomial with integer coefficients vanishes when evaluated at \(q\) but not at \(p\), both generic points, which contradicts our assumption that \(q\) is generic.
So if the generic frameworks of some hypergraph \(\Theta\) are all rigid in \(\mathbb{R}^{d}\), we may say that \(\Theta\) is rigid in \(\mathbb{R}^{d}\).
Recall the immediate consequences of Theorem 2.8. The first of these states that a necessary condition for the rigidity of \(\Theta\) is that \(m\geq dn-(d^{2}+d-1)\). This yields a natural class of hypergraphs that are rigid but have just \(dn-(d^{2}+d-1)\) hyperedges, so removing any hyperedge will make them flexible. We call such hypergraphs _minimally rigid_.
### Pinning
Let \((\Theta,p)\) be an affinely spanning framework in \(\mathbb{R}^{d}\). If necessary, relabel \(V\) so that \(1\ldots(d+1)\) is a hyperedge that has non-zero \(d\)-volume in \((\Theta,p)\). In
order to study \((\Theta,p)\)_up to its congruent frameworks_, we introduce its _pinning_, ie. the unique framework \((\Theta,\tilde{p})\) congruent to \((\Theta,p)\) with configuration matrix
\[C(\tilde{p})=\begin{bmatrix}1&1&1&\dots&1&1&\dots&1\\ 0&1&0&\dots&0&\tilde{p}(d+2)_{1}&\dots&\tilde{p}(n)_{1}\\ 0&0&1&\dots&0&\tilde{p}(d+2)_{2}&\dots&\tilde{p}(n)_{2}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\dots&\det(C(1\dots(d+1),p))&\tilde{p}(d+2)_{d}&\dots&\tilde{p}(n)_{d}\end{bmatrix}. \tag{2}\]
The _standard pinning_ of \((\Theta,p)\), denoted \((\Theta,\overline{p})\), is obtained by squeezing \((\Theta,\tilde{p})\) by a factor of \(\det(C(1\dots(d+1),\tilde{p}))\) along the \(x_{d}\)-axis. Its configuration matrix \(C(\overline{p})\) is obtained from \(C(\tilde{p})\) by dividing the bottom row by \(\det(C(1\dots(d+1),\tilde{p}))\).
Since \((\Theta,\overline{p})\) is the image of \((\Theta,p)\) under a single affine map, the two frameworks have exactly the same number of congruence classes, as we will show in Lemma 2.9.
The _pinned configuration space_ of \((\Theta,\overline{p})\) is the space of standard pinned configurations equivalent to \((\Theta,\overline{p})\):
\[\overline{\mathcal{C}}(\Theta,\overline{p})=\{\overline{q}:(\Theta,\overline{ q})\text{ equivalent to }(\Theta,\overline{p})\}.\]
Call the elements of \(\overline{\mathcal{C}}(\Theta,\overline{p})\) the _pinned congruence classes_ of \((\Theta,p)\).
**Lemma 2.9**.: _The congruence classes of \((\Theta,p)\) in \(\mathbb{R}^{d}\) and the pinned congruence classes of \((\Theta,\overline{p})\) in \(\mathbb{R}^{d}\) are in one to one correspondence._
Proof.: Define the map \(f:\mathcal{C}(\Theta,p)\to\overline{\mathcal{C}}(\Theta,\overline{p})\) as follows:
\[f([q])=\overline{q},\]
which is well-defined since all congruent frameworks have the same standard pinning. There is also a well-defined inverse, obtained by stretching \((\Theta,\overline{p})\) by a factor of \(\det(C(1\dots(d+1),p))\) along the \(x_{d}\)-axis.
As \(f\) and its inverse described above are continuous (being affine transformations), \(\mathcal{C}(\Theta,p)\) and \(\overline{\mathcal{C}}(\Theta,\overline{p})\) are homeomorphic, and so \((\Theta,p)\) and \((\Theta,\overline{p})\) can be considered interchangeably for our purposes.
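As an illustration, the standard pinning can be computed algorithmically (a sketch assuming, as above, that the pinned hyperedge \(1\ldots(d+1)\) is non-flat): solve for the affine map sending \(p(1),\ldots,p(d+1)\) to the columns prescribed by the configuration matrix (2), then rescale the bottom row. The function name is illustrative only.

```python
# Sketch of the standard pinning of a configuration p in R^d.
import numpy as np

def standard_pinning(p):
    """p has shape (n, d); vertices 0..d span the (non-flat) pinned hyperedge."""
    n, d = p.shape
    X = np.vstack([np.ones(d + 1), p[: d + 1].T])     # C(1...(d+1), p)
    vol = np.linalg.det(X)
    # Targets 0, e_1, ..., e_{d-1}, vol * e_d, as in configuration matrix (2)
    Y = np.vstack([np.ones(d + 1),
                   np.hstack([np.zeros((d, 1)), np.diag([1.0] * (d - 1) + [vol])])])
    T = Y @ np.linalg.inv(X)                          # affine, det(T) = 1 by construction
    q = (T @ np.vstack([np.ones(n), p.T]))[1:].T      # apply T, drop the ones row
    q[:, -1] /= vol                                   # squeeze along the x_d axis
    return q

p = np.random.default_rng(4).normal(size=(5, 2))
print(standard_pinning(p))    # rows 0..2 are (0,0), (1,0), (0,1)
```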
## 3 A Lower Bound for Minimally Rigid Hypergraphs
In this section, we show that a generically globally rigid \((d+1)\)-uniform hypergraph on \(n\) vertices and \(dn-(d^{2}+d-1)\) hyperedges may be found, for all \(d\geq 1\) and \(n\geq d+1\).
To this end, we introduce vertex splits, a family of constructive hypergraph operations.
For \(k\geq d+1\), write \(\kappa=k-d\), and let \(i\in V\) be a vertex with \(\deg_{\Theta}(i)\geq\kappa\), with \(\kappa\) hyperedges incident to \(i\) connected through codimension \(1\) (ie. they may be ordered \(h_{1},\dots,h_{\kappa}\) so that \(|h_{j}\cap h_{j+1}|=d\), and \(i\in h_{j}\cap h_{j+1}\), for all \(1\leq j\leq\kappa-1\)). We may then perform a _\(k\)-vertex split_ at \(i\) by performing the following steps:
1. Delete \(k-d\) hyperedges in the star of \(i\) connected through codimension \(1\) (ie. the hyperedges \(h_{1},\ldots,h_{\kappa}\) described above) to get the hypergraph \(\Theta_{1}=(V,H^{\prime})\);
2. Define a new hypergraph \(\Theta_{2}=(U,G)\), where \(U=\{u\in h_{j}:1\leq j\leq\kappa\}\cup\{i^{*}\}\) as follows: The hyperedges \(h_{1},\ldots h_{\kappa}\) themselves correspond to a simplicial complex \(B\) homeomorphic to a \(d\)-dimensional ball (see Section 4 for a further discussion of this in the \(d=2\) case). Then let \(G\) be the set of hyperedges formed by taking the \(d\)-boundary of \(B\), and appending each element by \(i^{*}\);
3. Glue \(\Theta_{2}\) to \(\Theta_{1}\) by identifying each \(u\in U\setminus\{i^{*}\}\) with its corresponding vertex in \(V\) and taking the induced gluings up through the edges, \(2\)-hyperedges, up to the \((d-1)\)-hyperedges.
The hypergraph obtained by gluing \(\Theta_{2}\) to \(\Theta_{1}\), called \(\Theta^{*}\), is the result of performing a \(k\)-vertex split to \(\Theta\) at \(i\).
Notice that if \(\Theta\) has \(m\) hyperedges, then \(\Theta^{*}\) has \(m+d\) hyperedges. In particular, if \(\Theta\) has \(dn-(d^{2}+d-1)\) hyperedges, \(\Theta^{*}\) has \(d(n+1)-(d^{2}+d-1)\) hyperedges.
We now prove Theorem 1.2 by showing that performing \((d+1)\)-vertex splits preserves global rigidity.
Proof of Theorem 1.2.: We proceed by induction. The base case is that of the \((d+1)\)-uniform hypergraph \(\Theta^{d+1}=([d+1],\{1\ldots d+1\})\) consisting of a single hyperedge. Then \(\Theta^{d+1}\) is generically globally \(d\)-volume rigid by definition. Moreover, \(\Theta^{d+1}\) is minimally \(d\)-volume rigid, as \(dn-d^{2}-d+1=d(d+1)-d^{2}-d+1=1=m\).
Let \(n-1\geq d+1\) and assume that \(\Theta^{n-1}=(V^{n-1},H^{n-1})\) is generically globally \(d\)-volume rigid, after relabelling if necessary so that \(1\ldots(d+1)\in H^{n-1}\). Perform a \((d+1)\)-vertex split at vertex \(1\), removing \(\kappa=1\) hyperedges, \(1\ldots d+1\), say, to get \((\Theta^{n-1})_{1}=(V^{n-1},(H^{n-1})_{1})\), and gluing the hypergraph
\((\Theta^{n-1})_{2}=(\{1,\ldots,d+1,n\},G^{n-1})\), where \(G=\bigcup\limits_{j=1}^{d+1}\{1\ldots\hat{j}\ldots(d+1)n\}\), to it at vertices \(1,\ldots d+1\).
Notice that the \((d+1)\)-vertex split described above is equivalent to subdividing the \(d\)-simplex \(1\ldots(d+1)\), into \(d+1\) smaller simplices. Therefore, if \((\Theta^{*},p^{*})\) is a generic framework of \(\Theta^{*}\) in \(\mathbb{R}^{d}\), then
\[\det(C(1\ldots(d+1),p^{*}))=\sum_{h\in G}\det(C(h,p^{*})). \tag{3}\]
Now, suppose that \((\Theta^{*},p^{*})\) is equivalent to \((\Theta^{*},q^{*})\), then \(\det(C(h,p^{*}))=\det(C(h,q^{*}))\), for all \(h\in H^{n}\), and therefore, by equation 3, for all \(h\in H^{n-1}\), so \((\Theta^{n-1},p^{*})\) is congruent to \((\Theta^{n-1},q^{*})\). Finally, since it subdivides \(1\ldots d+1\), which has equal \(d\)-volume in \((\Theta^{n-1},p^{*})\) and \((\Theta^{n-1},q^{*})\), the positions \(p^{*}(n)\) and \(q^{*}(n)\) satisfy the same affine dependency on the points \(p^{*}(1),\ldots,p^{*}(d+1)\) and \(q^{*}(1),\ldots,q^{*}(d+1)\) respectively. Therefore \((\Theta^{*},p^{*})\) is congruent to \((\Theta^{*},q^{*})\).
Therefore the lower bound for the number of congruence classes of a generic \((d+1)\)-uniform hypergraph framework in \(\mathbb{R}^{d}\) is 1 and this bound is strict.
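Equation 3 is easy to verify numerically: writing the split vertex in the slot of each deleted vertex, so that orientations agree, the \(d+1\) signed cone volumes sum to the signed volume of the subdivided simplex.

```python
# Numerical check of the subdivision identity (equation 3).
import numpy as np

rng = np.random.default_rng(5)
d = 3
pts = rng.normal(size=(d + 1, d))     # the simplex 1...(d+1)
new = rng.normal(size=d)              # the split vertex n

det = lambda P: np.linalg.det(np.vstack([np.ones(len(P)), np.asarray(P).T]))

# Write the new vertex in the deleted slot so the orientations agree
parts = sum(det(np.vstack([pts[:j], [new], pts[j + 1:]])) for j in range(d + 1))
print(np.isclose(det(pts), parts))    # True
```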
## 4 Triangulations of \(\mathbb{S}^{2}\)
Let \(\Theta=(V,H)\) be a 3-uniform hypergraph. There is a unique simplicial complex associated to \(\Theta\), which we will denote by \([\Theta]\), the sets of 0-, 1- and 2- simplices of which are defined respectively as
\[[\Theta]_{0} =\{[i]:i\in V\}\] \[[\Theta]_{1} =\{[ij]:ij\subset h\in H\}\] \[[\Theta]_{2} =\{[h]:h\in H\}.\]
If \([\Theta]\) is a triangulation of \(\mathbb{S}^{2}\), then we call \(\Theta\) a _triangulation of \(\mathbb{S}^{2}\)_.
Since \(\mathbb{S}^{2}\) is a closed 2-dimensional manifold, each 1-simplex of \([\Theta]\) must be contained in precisely two 2-simplices, ie. each edge of \(\Theta\) must be contained in precisely two hyperedges. Meanwhile, each hyperedge, by definition, contains precisely three edges. Therefore
\[3m=2s, \tag{4}\]
where \(s\) is the number of edges of \(\Theta\). Meanwhile, the Euler characteristic of a triangulation of \(\mathbb{S}^{2}\) is constant, so
\[\chi(\Theta)=m-s+n=2. \tag{5}\]
Combining equations 4 and 5 yields \(m=2n-4\) and \(s=3n-6\).
Now, the second homology group of \([\Theta]\), \(H_{2}([\Theta],\mathbb{Z})\) is isomorphic to \(\mathbb{Z}\), as there is a unique (up to uniform scaling) vector \(c=(c_{[h]}:[h]\in[\Theta]_{2})\in\mathbb{Z}^{m}\) so that
\[\partial_{2}\left(\sum_{[h]\in[\Theta]_{2}}c_{[h]}[h]\right)=0,\]
by the above observation, \(c\in\{-1,1\}^{m}\), therefore, for every \([h]\in[\Theta]_{2}\), we may write
\[[h]=c_{[h]}^{-1}\left(\sum_{[h^{\prime}]\in[\Theta]_{2}\setminus\{h\}}c_{[h^{ \prime}]}[h^{\prime}]\right). \tag{6}\]
Now let \((\Theta,p)\) be any framework of the triangulation of \(\mathbb{S}^{2}\), \(\Theta=(V,H)\). There exists a unique map \([p]:[\Theta]\rightarrow\mathbb{R}^{2}\) defined by
\[[p]([i]) =p(i),\,\forall[i]\in[\Theta]_{0},\] \[[p]([ij]) =\mathrm{Conv}\{p(i),p(j)\},\,\forall[ij]\in[\Theta]_{1},\] \[[p]([ijk]) =\mathrm{Conv}\{p(i),p(j),p(k)\},\,\forall[ijk]\in[\Theta]_{2}.\]
Then, by equation 6, each triangle in \([p]([\Theta])\), ie. each hyperedge in \((\Theta,p)\) may be uniquely expressed as a signed sum of all the other hyperedges of \((\Theta,p)\), and so too may its area be.
As a consequence of this, if we remove any hyperedge from \(\Theta\) to get \(\Theta^{\prime}\), then the congruence classes of \((\Theta,p)\) and \((\Theta^{\prime},p)\) are in one-to-one correspondence (as the area of the removed hyperedge in \((\Theta,p)\) is uniquely determined by the hyperedges of \((\Theta^{\prime},p)\)). We therefore say that the missing hyperedge of \(\Theta^{\prime}\) is _globally linked_.
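This relation is easy to check numerically: with a coherent orientation of the hyperedges, the signed areas of any planar framework of a triangulation of \(\mathbb{S}^{2}\) sum to zero, so each area is determined by the rest. The octahedron serves as a small example.

```python
# With a coherent orientation, the signed areas of a triangulated S^2
# mapped to the plane sum to zero (each directed edge cancels).
import numpy as np

# Octahedron: apexes 0 and 5, equator 1,2,3,4, faces oriented coherently
faces = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 1),
         (5, 2, 1), (5, 3, 2), (5, 4, 3), (5, 1, 4)]
p = np.random.default_rng(6).normal(size=(6, 2))

area = lambda f: np.linalg.det(np.vstack([np.ones(3), p[list(f)].T]))
print(np.isclose(sum(area(f) for f in faces), 0.0))   # True
```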
### Triangulations of \(\mathbb{S}^{2}\) are Generically Rigid
In this subsection, we prove Theorem 4.1.
**Theorem 4.1**.: _Let \(\Theta=(V,H)\) be a triangulation of \(\mathbb{S}^{2}\), then \(\Theta\) is generically rigid._
In order to do so, we note that, by the following Lemma of Steinitz, every triangulation of \(\mathbb{S}^{2}\) may be built up by performing a sequence of vertex-splitting operations starting at \(K_{4}^{3}\).
**Lemma 4.2**.: _Steinitz [2013] Let \(\Theta\) be as above, then there exists a sequence of triangulations of \(\mathbb{S}^{2}\)_
\[\Theta_{0}=K_{4}^{3},\Theta_{1},\ldots,\Theta_{N-1},\Theta_{N}=\Theta,\]
_so that \(\Theta_{i}\) is obtained from \(\Theta_{i-1}\) by a vertex split, for each \(1\leq i\leq N\)._
Lemma 4.2 may also be proved by induction, by showing how, at each triangulation of \(\mathbb{S}^{2}\) on more than four vertices, we may perform the inverse operation to vertex splitting, vertex contraction, whilst preserving the property of being a triangulation of \(\mathbb{S}^{2}\).
**Lemma 4.3**.: _Let \((\Theta,p)\) be a rigid generic framework of a triangulation \(\Theta=(V,H)\) of \(\mathbb{S}^{2}\) on \(n>4\) vertices (assuming such a framework exists). Remove \(l>1\) hyperedges of \(\Theta\), connected through codimension \(1\), to obtain \(\Theta^{\prime}=(V,H^{\prime})\). Then \((\Theta^{\prime},p)\) is a flexible framework, with an \((l-1)\)-dimensional space of non-trivial finite flexes._
Proof.: We will study how the rank of the rigidity matrix changes upon removing hyperedges. First of all, \(\operatorname{rank}(R(\Theta,p))=2n-5\), as \((\Theta,p)\) is rigid. Since any \(h\in H\) is generically globally linked, if \(h\in H\setminus H^{\prime}\), and \(\Theta_{1}=(V,H\setminus\{h\})\), then \(\operatorname{rank}(R(\Theta_{1},p))=2n-5\).

Now, \(R(\Theta_{1},p)\) is full-rank, as it has \(2n-5\) rows; therefore, removing any further hyperedges from \(\Theta_{1}\) (and therefore further rows from \(R(\Theta_{1},p)\)) will reduce the rank by one for each hyperedge. Hence \(\operatorname{rank}(R(\Theta^{\prime},p))=2n-5-(l-1)=2n-l-4\).
Therefore, by the regularity of \(f_{\Theta^{\prime}}(p)\), \(M_{\Theta^{\prime}}\) is a \((2n-4-l)\)-dimensional manifold, and so \(f_{\Theta^{\prime}}^{-1}(f_{\Theta^{\prime}}(p))\) is a \((2n-(2n-4-l))=(l+4)\)-dimensional manifold. As a \(5\)-dimensional subspace of \(f_{\Theta^{\prime}}^{-1}(f_{\Theta^{\prime}}(p))\) accounts for the images of \(p\) under trivial flexes, this leaves an \((l-1)\)-dimensional space arising from non-trivial finite flexes.
Proof of Theorem 4.1.: We will proceed by induction on \(n\geq 4\). We know that the only triangulation of \(\mathbb{S}^{2}\) on four vertices is \(K_{4}^{3}\), which is rigid. Now let \(n\geq 4\) and assume that every triangulation of \(\mathbb{S}^{2}\) on \(n\) vertices is rigid.
Let \((\Theta,p)\) be a generic framework of a triangulation \(\Theta=(V,H)\) of \(\mathbb{S}^{2}\) on \(n\) vertices, with its vertices labelled, as usual, \(1,\ldots,n\). Suppose that \(n\) is a vertex of degree greater than or equal to \(k-2\), and is contained in the hyperedges \((n-k+1)(n-k+2)n,(n-k+2)(n-k+3)n,\ldots,(n-2)(n-1)n\). Remove the hyperedges just listed to get \(H^{\prime}\), let \(\Theta^{\prime}=(V,H^{\prime})\). Add the vertex \(n+1\) and hyperedges \((n-k+1)(n-k+2)(n+1),(n-k+1)n(n+1),(n-k+2)(n-k+3)(n+1),\ldots,(n-1)n(n+1)\) to get \(\Theta^{*}=(V^{*},H^{*})\).
Extend \(p\) to \(p^{*}\) generic, to get the generic framework \((\Theta^{*},p^{*})\), we will show that \(\operatorname{rank}(R(\Theta^{*},p^{*}))=2(n+1)-5\). We begin by writing \(R(\Theta^{*},p^{*})\) in block matrix form:
\[R(\Theta^{*},p^{*})=\begin{bmatrix}R^{\prime}&B\\ 0&R(\operatorname{Star}(n+1),p^{*})\end{bmatrix}\in\mathbb{R}^{(2(n+1)-4)\times 2 (n+1)}, \tag{7}\]
where, ordering the columns so that the vertices of \(\operatorname{Star}(n+1)\) come last, \(R^{\prime}\in\mathbb{R}^{(2n-k-2)\times 2(n-k)}\) collects the rows indexed by the hyperedges of \(H^{\prime}\), \(B\in\mathbb{R}^{(2n-k-2)\times 2(k+1)}\) collects the remaining columns of those rows, and \(R(\operatorname{Star}(n+1),p^{*})\in\mathbb{R}^{k\times 2(k+1)}\) collects the rows indexed by the hyperedges containing \(n+1\).
Suppose that \(\eta\in\operatorname{Ker}(R(\Theta^{*},p^{*}))\), let \(U=V(\operatorname{Star}(n+1))\), and let \(\pi_{V}\) and \(\pi_{U}\) denote projection of a vector in \(\mathbb{R}^{2n}\) onto the pairs of entries indexed by \(V\) and \(U\) respectively. Then, \(\pi_{V}(\eta)\in\operatorname{Ker}(R(\Theta^{\prime},p))\), a \((k+2)\)-dimensional space.
Suppose that \(\pi_{V}(\eta)\) is a trivial infinitesimal flex of \((\Theta^{\prime},p)\), then \(\pi_{U}(\eta)\) is a trivial infinitesimal flex of \((\operatorname{Star}(n+1),p^{*})\), as all but one of the vertices are flexed trivially, and the final vertex, \(n+1\), is uniquely determined by the position of all of its neighbours. Therefore, \(\eta\) extends to being a trivial infinitesimal flex of all of \((\Theta^{*},p^{*})\).
Now, for the sake of contradiction, suppose that \(\eta\) is a non-trivial infinitesimal flex of \((\Theta^{*},p^{*})\). Then there exists a vertex \(i\in U\setminus\{n+1\}\) so that vertex \(n+1\) may be contracted to \(i\) to obtain the triangulation \(\Theta^{\vee}\) of \(\mathbb{S}^{2}\) with \(\pi_{V}(\eta)\) a non-trivial infinitesimal flex of \((\Theta^{\vee},p)\). Indeed, if there did not exist such a vertex \(i\), then vertex \(n+1\) would be flexing trivially with respect to its neighbours, and we would be in the situation described in the above paragraph. This contradicts our inductive hypothesis, and so \((\Theta^{*},p^{*})\) is infinitesimally rigid.
Therefore, there exists a generic, infinitesimally rigid framework \((\Theta^{*},p^{*})\), where \(\Theta^{*}\) is obtained from \(\Theta\) by performing a \(k\)-vertex split at vertex \(n\). Then by Theorem 2.8, all generic frameworks of \(\Theta^{*}\) are rigid, and then by induction and Lemma 4.2, all triangulations of \(\mathbb{S}^{2}\) are generically rigid.
As all triangulations of \(\mathbb{S}^{2}\) are rigid, moreover redundantly rigid, with congruence classes of a generic framework of \(\Theta=(V,H)\) in one-to-one correspondence with the congruence classes of the same configuration paired with \(\Theta^{\prime}=(V,H^{\prime})\), where \(|H^{\prime}|=|H|-1\), we obtain the following corollary.
**Corollary 4.3.1**.: _Let \(\Theta\) be a triangulation of \(\mathbb{S}^{2}\) on \(n\) vertices. Then any generic framework of \(\Theta\) in \(\mathbb{R}^{2}\) admits at most_
\[\frac{1}{n-2}\binom{2n-6}{n-3}\]
_congruence classes._
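A small numerical illustration of this redundancy (repeating the finite-difference rank helper from the Section 2 sketch for self-containment): the octahedron's rigidity matrix has \(2n-4=8\) rows but rank \(2n-5=7\), and deleting any single hyperedge leaves the rank unchanged.

```python
# The octahedron is redundantly rigid in R^2: rank 7 with 8 rows, and
# still rank 7 after deleting any one row (hyperedge).
import numpy as np

def rigidity_rank(H, p, eps=1e-6):
    # Numerical rank of the Jacobian of the measurement map at p
    n, d = p.shape
    f = lambda q: np.array([np.linalg.det(np.vstack([np.ones(len(h)), q[list(h)].T]))
                            for h in H])
    cols = [(f(p + e.reshape(n, d)) - f(p - e.reshape(n, d))) / (2 * eps)
            for e in np.eye(d * n) * eps]
    return np.linalg.matrix_rank(np.column_stack(cols))

oct_H = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 1),
         (1, 2, 5), (2, 3, 5), (3, 4, 5), (4, 1, 5)]
p = np.random.default_rng(7).normal(size=(6, 2))
print(rigidity_rank(oct_H, p))                                           # 7
print({rigidity_rank(oct_H[:i] + oct_H[i + 1:], p) for i in range(8)})   # {7}
```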
## 5 Bounds for Bipyramids
In this section, we narrow in on a special family of triangulations of \(\mathbb{S}^{2}\), bipyramids.
The _(\((n-2)\)-gonal) bipyramid_, \(B_{n-2}=(V,H)\) is the \(3\)-uniform hypergraph that is homeomorphic, as a simplicial complex, to \(\mathbb{S}^{2}\) formed by gluing two \((n-2)\)-gonal based pyramids together at their bases, identifying those vertices on the new _equator_ of the graph and deleting the common bases. The labelling of vertices and hyperedges that we will use throughout this paper and in proofs is as follows:
\[V=[n]\] \[H=\{123,12(n-1),134,\ldots,1(n-2)(n-1),\] \[23n,2(n-1)n,34n,\ldots,(n-2)(n-1)n\},\]
and we call the vertices \(2,\ldots,n-1\) the _equatorial vertices_. See figure 1 for an illustration of this labelling.
Whilst the upper bound given above for general triangulations of \(\mathbb{S}^{2}\) increases exponentially in \(n\), Theorem 1.3 yields an upper bound that is linear in \(n\).
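The gap is easy to see numerically; the snippet below compares the bound of Corollary 4.3.1 (a Catalan number) with the linear bound \(n-4\) of Theorem 1.3.

```python
# Comparing the bounds for the (n-2)-gonal bipyramid on n vertices
from math import comb

for n in (6, 8, 10, 15, 20):
    general = comb(2 * n - 6, n - 3) // (n - 2)  # Corollary 4.3.1 (a Catalan number)
    print(n, general, n - 4)                     # n = 20: 129644790 vs 16
```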
The proof of this follows from defining a polynomial \(f\) on \((B_{n-2},p)\) so that congruence classes of \((B_{n-2},p)\) yield roots of \(f\), and can be found in Appendix A.
Figure 1: The hexagonal bipyramid \(B_{6}\), with hyperedge \(123\)_behind_ all the other hyperedges
We can also state an updated lower bound for bipyramids on an even number of vertices.
**Corollary 5.0.1**.: _If \(n\) is even, any generic framework \((B_{n-2},p)\) in \(\mathbb{R}^{2}\) has at least two congruence classes._
Proof.: Since the degree of the polynomial \(f\) is even, and \(f(0)=0\), there must be some other real root of \(f\) corresponding to some real framework equivalent to \((B_{n-2},p)\).
### Global Rigidity of Hypergraphs
Connelly (2005) and Gortler et al. (2010) proved that the Euclidean global rigidity of a graph \(G\) in \(\mathbb{R}^{d}\) is a _generic property_ of \(G\) (ie. it holds for all generic frameworks of \(G\) or for none of them). In this final subsection, we show that the analogous result does not hold in the case of \(d\)-volume rigidity, by way of a small (in terms of \(n\)) example.
Proof of Theorem 1.4.: Consider the pentagonal bipyramid \(B_{5}\), by Theorem 1.3, the defining polynomial of its configuration space is a cubic \(f=at^{3}+bt^{2}+ct\), where \(a,b,c\) are rational functions defined by the pinned configurations of seven points. The discriminant of \(f\), denoted \(\operatorname{disc}(f)\) is a polynomial function of the coefficients of \(f\) (and therefore a rational function of those same pinned configurations), and is, in this case, defined as \(\operatorname{disc}(f)=b^{2}-4ac\). The cubic \(f\) has one real root if and only if \(\operatorname{disc}(f)<0\), and three if and only if \(\operatorname{disc}(f)>0\). It therefore suffices to find \(\overline{p}\) and \(\overline{q}\) non-generic so that \(\operatorname{disc}(f(\overline{p}))<0\) and \(\operatorname{disc}(f(\overline{q}))>0\), for example those with configuration matrices
\[C(\overline{p}) =\begin{bmatrix}1&1&1&1&1&1&1\\ 0&1&0&\frac{1}{5}&\frac{1}{7}&\frac{1}{11}&\frac{1}{2}\\ 0&0&1&\frac{1}{13}&\frac{1}{19}&\frac{1}{17}&\frac{1}{2}\end{bmatrix},\] \[C(\overline{q}) =\begin{bmatrix}1&1&1&1&1&1&1\\ 0&1&0&\frac{1}{7}&\frac{1}{7}&\frac{1}{41}&\frac{1}{2}\\ 0&0&1&\frac{1}{19}&\frac{1}{17}&\frac{1}{13}&20\end{bmatrix}.\]
We then perturb \(\overline{p}\) and \(\overline{q}\) slightly to obtain the pinnings of generic frameworks \(\tilde{p}\) and \(\tilde{q}\) respectively. Since \(\operatorname{disc}(f)\) is continuous, a sufficiently small perturbation does not change the signs, so we obtain two generic frameworks of \(B_{5}\): one globally rigid, and one rigid but not globally rigid.
## 6 Gluing Hypergraphs
Let \(\Theta_{1}=(V_{1},H_{1})\) and \(\Theta_{2}=(V_{2},H_{2})\) be two \(3\)-uniform hypergraphs, with \(i_{1}j_{1}k_{1}\in H_{1}\) and \(i_{2}j_{2}k_{2}\in H_{2}\). Define the hypergraph \(\Theta=(V,H)\) in terms of \(V\) and \(H\) as
\[V =V_{1}\sqcup V_{2}\diagup i_{1}\sim i_{2},\,j_{1}\sim j_{2},\,k_{ 1}\sim k_{2},\] \[H =H_{1}\cup H_{2},\]
then \(\Theta\) is the hypergraph formed by gluing together \(\Theta_{1}\) and \(\Theta_{2}\) at a common hyperedge. If \(n_{i}=|V_{i}|\) and \(m_{i}=|H_{i}|\) (for \(1\leq i\leq 2\)) and \(n=|V|\) and \(m=|H|\), then \(m=m_{1}+m_{2}-1\), so if \(\Theta_{1}\) and \(\Theta_{2}\) are minimally rigid, then \(\Theta\) will have one too many hyperedges to itself be minimally rigid. In order to glue together hypergraphs whilst preserving minimal rigidity, we define \(\Theta^{\prime}=(V,H^{\prime})\), where \(H^{\prime}=H\setminus\{ijk\}\), where \(ijk=i_{1}j_{1}k_{1}\sim i_{2}j_{2}k_{2}\).
Notice that, in \(\Theta^{\prime}\), the two sub-hypergraphs \(\Theta^{\prime}_{1}=(V_{1},H_{1}\setminus\{i_{1}j_{1}k_{1}\})\) and \(\Theta^{\prime}_{2}=(V_{2},H_{2}\setminus\{i_{2}j_{2}k_{2}\})\) lie on either side of the non-hyperedge separating triangle \(ijk\) of \(\Theta\). We may successively glue hypergraphs together in this fashion, ending up with several non-hyperedge separating triangles, as in figure 2.
Notice that, in figure 2, the copies of \(B_{4}^{3}\), with vertex sets \(U_{1}\) and \(U_{2}\) respectively, on either side of the two non-hyperedge separating triangles \(124\) and \(134\) behave _independently_ of each other: There are four congruence classes of the generic framework \((\Theta^{\prime},p)\), represented by
1. \((\Theta^{\prime},p)\);
2. \((\Theta^{\prime},q_{1})\), where \(\pi_{U_{1}}(\overline{q_{1}})=\pi_{U_{1}}(\overline{p})\) and \(\pi_{U_{2}}(\overline{q_{1}})\neq\pi_{U_{2}}(\overline{p})\);
3. \((\Theta^{\prime},q_{2})\), where \(\pi_{U_{1}}(\overline{q_{2}})\neq\pi_{U_{1}}(\overline{p})\) and \(\pi_{U_{2}}(\overline{q_{2}})=\pi_{U_{2}}(\overline{p})\);
4. \((\Theta^{\prime},q_{3})\), where \(\pi_{U_{1}}(\overline{q_{3}})\neq\pi_{U_{1}}(\overline{p})\) and \(\pi_{U_{2}}(\overline{q_{3}})\neq\pi_{U_{2}}(\overline{p})\).

Figure 2: On the left is a tetrahedron \(K_{4}^{3}\); on the right, we have glued an octahedron \(B_{4}^{3}\) to it at the common hyperedge \(124\), which is then deleted to form a triangulation of \(\mathbb{S}^{2}\). This process is repeated at hyperedge \(134\) to get \(\Theta^{\prime}\).
Here \(\pi_{U}\) denotes projection onto the co-ordinates \(\{(x_{u},y_{u}):u\in U\}\).
**Lemma 6.1**.: _Let \(\Theta_{1}=(V_{1},H_{1})\), \(\Theta_{2}=(V_{2},H_{2})\), \(\Theta=(V,H)\) and \(\Theta^{\prime}=(V,H^{\prime})\), with \(H^{\prime}=H\setminus\{ijk\}\), where the hyperedge \(ijk\) is generically globally linked in both \(\Theta_{1}\) and \(\Theta_{2}\), be as above. Let \((\Theta^{\prime},p)\) and \((\Theta^{\prime},q)\) be two generic frameworks. Then \((\Theta^{\prime},p)\) and \((\Theta^{\prime},q)\) are equivalent (resp. congruent) if and only if \((\Theta_{1},\pi_{V_{1}}(p))\) and \((\Theta_{2},\pi_{V_{2}}(p))\) are equivalent (resp. congruent) to \((\Theta_{1},\pi_{V_{1}}(q))\) and \((\Theta_{2},\pi_{V_{2}}(q))\) respectively._
Proof.: Suppose \((\Theta^{\prime},p)\) is equivalent to \((\Theta^{\prime},q)\). Then \(\det(C(h,p))=\det(C(h,q))\) for all \(h\in H^{\prime}\), and therefore \((\Theta^{\prime}_{1},\pi_{V_{1}}(p))\) and \((\Theta^{\prime}_{2},\pi_{V_{2}}(p))\) are equivalent to \((\Theta^{\prime}_{1},\pi_{V_{1}}(q))\) and \((\Theta^{\prime}_{2},\pi_{V_{2}}(q))\) respectively. Since \(ijk\) is generically globally linked, we have that \((\Theta_{1},\pi_{V_{1}}(p))\) and \((\Theta_{2},\pi_{V_{2}}(p))\) are equivalent to \((\Theta_{1},\pi_{V_{1}}(q))\) and \((\Theta_{2},\pi_{V_{2}}(q))\) respectively.
By an analogous argument, if \((\Theta^{\prime},p)\) is congruent to \((\Theta^{\prime},q)\), then \((\Theta_{1},\pi_{V_{1}}(p))\) and \((\Theta_{2},\pi_{V_{2}}(p))\) are congruent to \((\Theta_{1},\pi_{V_{1}}(q))\) and \((\Theta_{2},\pi_{V_{2}}(q))\) respectively.
Suppose that \((\Theta_{1},\pi_{V_{1}}(p))\) and \((\Theta_{2},\pi_{V_{2}}(p))\) are equivalent to \((\Theta_{1},\pi_{V_{1}}(q))\) and \((\Theta_{2},\pi_{V_{2}}(q))\) respectively. Then \(\det(C(h,p))=\det(C(h,q))\) for all \(h\in H_{1}\cup H_{2}=H\), so \((\Theta,p)\) is equivalent to \((\Theta,q)\).
Suppose that \((\Theta_{1},\pi_{V_{1}}(p))\) and \((\Theta_{2},\pi_{V_{2}}(p))\) are congruent to \((\Theta_{1},\pi_{V_{1}}(q))\) and \((\Theta_{2},\pi_{V_{2}}(q))\). Then there exist area-preserving affine transformations \(T_{1}\) and \(T_{2}\) so that \(\pi_{V_{1}}(q)=T_{1}\circ\pi_{V_{1}}(p)\) and \(\pi_{V_{2}}(q)=T_{2}\circ\pi_{V_{2}}(p)\). Since \(T_{1}\) and \(T_{2}\) agree on their actions on the affinely independent triple \(p(i),p(j),p(k)\), \(T_{1}=T_{2}=T\). Hence \(q=T\circ p\), so \((\Theta,p)\) and \((\Theta,q)\) are congruent.
**Theorem 6.2**.: _Let \(\Theta^{\prime}=(V,H^{\prime})\) be a triangulation of \(\mathbb{S}^{2}\) formed by gluing together \(s\) triangulations of \(\mathbb{S}^{2}\), \(\Theta_{i}=(V_{i},H_{i})\), \(1\leq i\leq s\), at common hyperedges, and then removing them. Let \((\Theta^{\prime},p)\) be a generic framework._
1. _Suppose that, for all_ \(1\leq i\leq s\)_,_ \((\Theta_{i},\pi_{V_{i}}(p))\) _has lower and upper bounds_ \(\ell_{i}\) _and_ \(u_{i}\) _respectively, for the number of congruence classes it admits. Then_ \((\Theta^{\prime},p)\) _has lower and upper bounds_ \[\ell=\prod_{i=1}^{s}\ell_{i}\text{ and }u=\prod_{i=1}^{s}u_{i},\] _for the number of congruence classes it admits._
2. _Suppose that, for all_ \(1\leq i\leq s\)_,_ \((\Theta_{i},\pi_{V_{i}}(p))\) _admits_ \(N_{i}\) _congruence classes. Then_ \((\Theta^{\prime},p)\) _admits_ \[N=\prod_{i=1}^{s}N_{i}\] _congruence classes._
Proof.: Parts 1 and 2 follow from Lemma 6.1, as well as our discussion of figure 2. Part 2 can also be obtained by setting \(\ell_{i}=u_{i}=N_{i}\), for all \(1\leq i\leq s\), in part 1. |
2305.19154 | Sparse species interactions reproduce abundance correlation patterns in
microbial communities | During the last decades macroecology has identified broad-scale patterns of
abundances and diversity of microbial communities and put forward some
potential explanations for them. However, these advances are not paralleled by
a full understanding of the dynamical processes behind them. In particular,
abundance fluctuations of different species are found to be correlated, both
across time and across communities in metagenomic samples. Reproducing such
correlations through appropriate population models remains an open challenge.
The present paper tackles this problem and points to sparse species
interactions as a necessary mechanism to account for them. Specifically, we
discuss several possibilities to include interactions in population models and
recognize Lotka-Volterra constants as a successful ansatz. For this, we design
a Bayesian inference algorithm to extract sets of interaction constants able to
reproduce empirical probability distributions of pairwise correlations for
diverse biomes. Importantly, the inferred models still reproduce well-known
single-species macroecological patterns concerning abundance fluctuations
across both species and communities. Endorsed by the agreement with the
empirically observed phenomenology, our analyses provide insights on the
properties of the networks of microbial interactions, revealing that sparsity
is a crucial feature. | José Camacho-Mateu, Aniello Lampo, Matteo Sireci, Miguel Ángel Muñoz, José A. Cuesta | 2023-05-30T15:57:08Z | http://arxiv.org/abs/2305.19154v6 | # Species interactions reproduce abundance correlation patterns in microbial communities
###### Abstract
During the last decades macroecology has identified broad-scale patterns of abundances and diversity of microbial communities and put forward some potential explanations for them. However, these advances are not paralleled by a full understanding of the underlying dynamical processes. In particular, abundance fluctuations over metagenomic samples are found to be correlated, but reproducing these through appropriate models remains an open task. The present paper tackles this problem and points to species interactions as a necessary mechanism to account for them. Specifically, we discuss several possibilities to include interactions in population models and recognize Lotka-Volterra constants as a successful ansatz. We design a Bayesian inference algorithm to obtain sets of interaction constants able to reproduce the experimental correlation distributions much better than the state-of-the-art attempts. Importantly, the model still reproduces single-species, experimental, macroecological patterns previously detected in the literature, concerning the abundance fluctuations across both species and communities. Endorsed by the agreement with the observed phenomenology, our analysis provides insights on the properties of microbial interactions, and suggests their sparsity as a necessary feature to balance the emergence of different patterns.
## I Introduction
Our understanding of the microscopic living world has been recently challenged by the advent of metagenomics [1; 2]. Indeed, DNA sequencing methods unveiled that a large fraction of microbial diversity was missing in laboratory cultures [3; 4; 5]. Moreover, the possibility to collect genetic material directly from its natural environment introduced a new dimension--the set of samples--along which the properties of the biome may vary. This has given rise to the production of the largest datasets ever, allowing microbial communities to be investigated at a much greater scale and detail than before.
To approach this new profusion of data, macroecology--the quantitative analysis of emergent broad-scale patterns--prevailed as a promising point of view [6; 7; 8; 9; 10; 11]. The framework paved the way to assess statistically the variation in abundance and diversity that, despite the complexity of the underlying microscopic behaviours, often portrays distinctive distributions and that sometimes may be explained in terms of basic ecological forces. Specifically, considerable progress has been achieved in the observation of statistical regularities of taxa populations across time [12], spatial samples [13], and species-abundance distributions [14].
Most remarkably, a recent paper by J. Grilli [15] provided an important step towards a macroecological study of microbial communities. Relying on the analysis of data from nine real biomes, the work characterizes some patterns of abundance variation in terms of three macroecological laws (see Fig.1): _i_) the fluctuations in the abundance of any given species across samples follow a gamma distribution; _ii_) the variances of these distributions for different species are proportional to the square of their means (Taylor's law [16]); and _iii_) the mean abundances across species follow a lognormal distribution. These macroecological patterns of species fluctuations and diversity have been parsimoniously explained using the Stochastic Logistic Model (SLM), which endows the traditional logistic equation with a (multiplicative) stochastic term [17; 18] embodying information about environmental variability [19; 20; 21; 15].
Beside the aforementioned patterns, the analysis of empirical data unveils also the existence of non-trivial pairwise correlations in species abundances [15]. In particular, the Pearson's correlation coefficients of all pairs of species in a biome display distributions ranging from anti-correlations to positive correlations, with a peak often located at negative values. These pairwise correlations are not accounted for by the SLM model because it treats species dynamics as independent from each other [22]. Describing correlations in species abundances calls, thus, for introducing some sort of interaction between species.
The existence of species interactions in microbiomes is well documented in a wealth of experimental results that manage to observe and measure them [23; 24; 25; 26]. Indeed, microbial interactions are a key ingredient behind community stability [27; 28], necessary for, e.g., the maintenance of health in human biomes [29; 30; 31] or the control of medical disorders [32; 33; 34; 35]. Though in natural environments species interactions are hard to measure, they can--in principle--be inferred from empirically measured correlations, though this strategy might be plagued with difficulties [36; 37; 38; 39]. In any case, there is a broad consensus on the crucial role of interactions in microbial ecosystems, thus pushing for their implementation in current
mechanistic models.
Interactions can be implemented in at least two ways: (i) _indirectly_, i.e. assuming the diverse environmental noise terms to be correlated with each other, or (ii) _directly_, i.e. introducing a coupling between species abundances, or (iii) using a combination of both. The first route assumes that correlations in the abundance of two species arise from similar or opposite responses of both species to changes in the environment (variation of nutrients, presence of chemicals, changes in temperature or pH, etc.). This approach has been recently used to reproduce an empirical macroecological pattern describing the decay (on average) of species pairwise correlations as a function of phylogenetic distance [22]. The second one follows the long tradition in ecology after the seminal works of Lotka and Volterra.
In this paper, we analyze the previous two possibilities as well as their ability to explain empirically observed correlations. For this, we propose a generalized stochastic Lotka-Volterra model (SLVM) involving pairwise deterministic interactions and environmental (multiplicative) fluctuations. Our analyses reveal that direct interactions are most suitable to model the competition mechanisms detected in real biomes and their interplay with cooperative ones, and with other kinds of interdependencies, besides preserving Grilli's three empirical laws. Understanding the mechanisms behind the emergence of these patterns allows us to get an insight into the properties of interaction networks, pointing to their sparsity as a crucial ingredient.
## Modeling microbiomes
### Environmental noise vs. species interactions
A simple model that couples species in a parsimonious way is the Stochastic Lotka-Volterra Model (SLVM)
\[\dot{x}_{i}=\frac{x_{i}}{\tau_{i}}\left(1+\sum_{j=1}^{S}a_{ij}x_{j}\right)+x_ {i}\xi_{i},\quad i=1,\ldots,S, \tag{1}\]
where \(\tau_{i}\) is the time scale of basal population growth, and \(\xi_{i}\) is a zero-mean, multivariate Gaussian white noise (Ito interpretation) with correlations \(\langle\xi_{i}(t)\xi_{j}(t^{\prime})\rangle=w_{ij}\delta(t-t^{\prime})\). The matrix \(\mathbf{W}=(w_{ij})\) accounts for environmental fluctuations, whereas the off-diagonal terms of matrix \(\mathbf{A}=(a_{ij})\) describe direct, Lotka-Volterra-like interactions between species, and the diagonal terms \(a_{ii}=-1/K_{i}\) incorporate the carrying capacity of the environment for species \(i\).
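As a concrete illustration, the following is a minimal Python sketch of how (1) can be simulated with an Euler-Maruyama scheme (as described in 'Material and methods'). The specific parameter values, and the parametrisation of the lognormal carrying capacities in terms of the underlying normal, are illustrative assumptions, not the exact settings used for the figures.

```python
import numpy as np

rng = np.random.default_rng(1)

S, w, tau = 50, 0.1, 0.1
K = rng.lognormal(mean=np.log(0.1), sigma=0.5, size=S)  # carrying capacities (assumed parametrisation)
A = np.where(rng.random((S, S)) < 0.4,                  # connectance C = 0.4
             rng.normal(0.0, 0.03, (S, S)), 0.0)
np.fill_diagonal(A, -1.0 / K)                           # a_ii = -1/K_i

def simulate(A, tau, w, x0, n_steps=50_000, dt=1e-2):
    """Euler-Maruyama integration (Ito) of Eq. (1) with W = w*I."""
    x = x0.copy()
    traj = np.empty((n_steps, x.size))
    for n in range(n_steps):
        drift = x / tau * (1.0 + A @ x)
        noise = x * np.sqrt(w * dt) * rng.standard_normal(x.size)
        x = np.maximum(x + drift * dt + noise, 1e-12)   # keep abundances positive
        traj[n] = x
    return traj

traj = simulate(A, tau, w, x0=np.full(S, 0.05))
stationary = traj[traj.shape[0] // 2:]                  # discard the transient
```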
When \(\mathbf{W}=w\mathbf{I}\) and \(\mathbf{A}\) is a diagonal matrix, (1) becomes the SLM. By turning on the off-diagonal terms of the noise correlation matrix \(\mathbf{W}\) ("indirect interactions") and/or of the Lotka-Volterra matrix \(\mathbf{A}\) ("direct interactions"), we can study the effect of correlated environmental noise and/or direct species interactions on species
Figure 1: **Infographic of the population dynamics and the resulting macroecological patterns.** Panel **(a)** portrays, as an illustrative example, three individual-species (color coded) time courses at equally spaced times (longitudinal data), resulting from the integration of (1). The fluctuations around the mean abundance of each species (abundance fluctuation distribution, AFD) are well described by a gamma distribution, as shown in panel **(b)** (see Figs. S4 and S5 of the SI). For each species, this distribution is characterized by its mean value \(\bar{x}_{i}\) and its variance \(\sigma_{i}^{2}\). These two magnitudes are linked by Taylor’s law \(\sigma_{i}^{2}\propto\bar{x}_{i}^{2}\) (panel **(b)**). The mean abundances of all species are distributed as a lognormal (mean abundance distribution, MAD) (panel **(b)**). Further details about Taylor’s law and MAD are presented in Figs. S6 and S7 of the SI. Panel **(c)** illustrates the correlations between abundance fluctuations of pairs of species across samples (a point for each sample/realization). The top-left plot illustrates the case of two uncorrelated species whereas the top-right plot illustrates two positively correlated species. The bottom picture shows the distribution of Pearson’s coefficients (cf. (3)) of all pairs of species. Empirically, this distribution is found to generally cover the entire range \(-1\leq\rho_{ij}\leq 1\) and to exhibit a peak at negative values.
pairwise correlations. Ideally, though, the model should contain the "right proportion" of both terms.
At first sight, adding environmental noise (\(\mathbf{W}\)) has an advantage over adding interactions (\(\mathbf{A}\)) in that Grilli's first law is preserved by construction (see Sec. 8A of the Supporting Information). The second law simply amounts to setting \(w_{ii}=w\) for all \(i\). As for the third law, it can be fulfilled if one chooses ad hoc the carrying capacities \(K_{i}\) as lognormal random variables [15]. Obviously, these latter choices do not explain the origin of the second and third laws, but at least render a model that is compatible with them. On the downside though, the fact that \(\mathbf{W}\), by definition, must be symmetric and positive definite severely constrains the kind of abundance correlations that (1) can generate.
If we introduce interactions while keeping \(\mathbf{W}=w\mathbf{I}\), in general the first and second laws do not hold exactly--although they may approximately do so. However, the presence of interactions strongly affects the average abundance of the species. While the SLM (with or without a noise correlation matrix) predicts a stationary population that fluctuates around its carrying capacity, in the presence of coupling, the mean values are the solution of the linear system (see Sec. 2 of the SI)
\[\sum_{j=1}^{S}a_{ij}\bar{x}_{j}=\frac{\tau_{i}w}{2}-1,\quad i=1,\ldots,S, \tag{2}\]
where \(\bar{x}_{j}\) denotes the average abundance of species \(j\). Therefore, interactions shift these average abundances to the extent that, even if all carrying capacities were the same, the \(\bar{x}_{j}\) would split over a range of values. This may not be a full explanation of the third law yet, but it opens the possibility that its origin might lie in a particular structure of the network of interactions.
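In practice, the stationary means can be read off directly from (2) by solving a linear system; a minimal sketch, assuming for simplicity a common time scale \(\tau\) for all species:

```python
import numpy as np

def mean_abundances(A, tau, w):
    """Solve Eq. (2), sum_j a_ij * xbar_j = tau*w/2 - 1, for the stationary
    mean abundances; the community is feasible only if all of them are positive."""
    rhs = np.full(A.shape[0], tau * w / 2.0 - 1.0)
    xbar = np.linalg.solve(A, rhs)
    return xbar, bool(np.all(xbar > 0))

# e.g. with the matrix A built in the simulation sketch above:
# xbar, feasible = mean_abundances(A, tau=0.1, w=0.1)
```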
A quick test to decide which of these two approaches is most promising to model abundance correlations is to generate a large sample of random matrices (either \(\mathbf{W}\) or \(\mathbf{A}\)), and for each of them simulate the stochastic process (1), calculate abundance correlations between pairs of species, and compare the resulting distributions with those empirically obtained from the microbiome datasets with the same number of species (Fig. 1**a** and **c**). Each of these two samples must fulfill some constraints: matrices \(\mathbf{W}\) must all be symmetric and positive definite, and matrices \(\mathbf{A}\) must all lead to a _feasible_ (i.e. \(\bar{x}_{i}>0\) for all \(i\)) [40] and _asymptotically stable_ (i.e. small perturbations must die out [41; 42]) steady state (see Secs. 3 and 4 of the SI).
Figure 2 shows the distribution of Pearson's abundance correlation coefficients
\[\rho_{ij}=\frac{\operatorname{Cov}(x_{i},x_{j})}{\sqrt{\operatorname{Var}(x_ {i})\operatorname{Var}(x_{j})}}, \tag{3}\]
for all \(S(S-1)\) pairs of species \(i\neq j\), as obtained from a typical dataset and using each of these two matrix ensembles. The empirical distribution decays exponentially to the left and to the right, is a bit asymmetrical, and has a peak at slightly negative values of the correlation. The distributions obtained from the \(\mathbf{W}\) samples bear little resemblance to the former--they exhibit very few negative correlations, are strongly asymmetric, and show a peak at zero. On the contrary, distributions obtained from the \(\mathbf{A}\) samples have a wide range of sample-to-sample variability, and some of the realizations are very similar to empirical data, often peaking at negative correlation values.
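The distribution in question is simply the set of off-diagonal entries of the sample correlation matrix; a short sketch of how it can be extracted from an abundance table:

```python
import numpy as np

def pairwise_correlations(X):
    """Pearson coefficients (Eq. (3)) of all pairs of species, from an
    abundance table X of shape (n_samples, S)."""
    rho = np.corrcoef(X, rowvar=False)             # S x S correlation matrix
    return rho[~np.eye(rho.shape[0], dtype=bool)]  # the S*(S-1) off-diagonal entries

# e.g. a histogram of the stationary trajectory from the simulation sketch above:
# counts, edges = np.histogram(pairwise_correlations(stationary), bins=50, density=True)
```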
These analyses suggest that environmental noise by itself seems incapable of generating correlations resembling those observed in real microbiomes, and so interactions have to be included in the model. In any case, the presence of correlated noise cannot be ruled out from these analyses, but in order to keep things simple, we henceforth take \(\mathbf{W}=w\mathbf{I}\) and focus on the effect of interactions in the model.
### Grilli's laws in the presence of interactions
The model described by (1) with \(\mathbf{W}=w\mathbf{I}\) and a non-trivial interaction matrix \(\mathbf{A}\) is not guaranteed to satisfy
Figure 2: Distributions of Pearson’s abundance correlation coefficients (c.f. (3)) as obtained in the model with (left panel) a few samples of the noise correlation matrix \(\mathbf{W}\) (each with a different gray shade) or (right panel) with random samples of the Lotka-Volterra matrix \(\mathbf{A}\). The black solid lines portray in each case the empirical distribution as obtained from the _Seawater_ microbiome (species which appear in less than 50% of the communities have been filtered out), while the blue ones represent the distribution of correlations as obtained from the model without interactions. In the left plot, colored circles show the results for a few samples of matrices \(\mathbf{W}\) (see ‘Material and methods’ for details of the sampling procedure); Lotka-Volterra constants are chosen as \(a_{ij}=-\delta_{ij}/K_{i}\), with carrying capacities \(K_{i}\) sampled from a lognormal distribution with mean 0.1 and standard deviation 0.5—as for the SLM [15]. The results shown in this figure are typical (the SI shows the results for a larger sample). In the right plot, colored circles represent correlations resulting from the SLVM with \(\mathbf{W}=w\mathbf{I}\) and Lotka-Volterra constants \(a_{ij}\) (\(i\neq j\)) sampled from a Gaussian distribution with zero mean and standard deviation 0.03. A random selection of 60% of such constants are set to zero (i.e. the connectance of the interaction matrix is \(C=0.4\)).
any of the three macroecological laws found by Grilli [15]--even if the carrying capacities \(K_{i}\) are sampled from a lognormal distribution, as in the SLM. However, as long as the interactions are a 'small' perturbation to the SLM, one can reasonably expect them to hold, at least approximately. In particular, if one sets all but a fraction \(C\) ('connectance') of the off-diagonal interaction terms to zero, and chooses the rest randomly and independently from a zero-mean normal distribution with standard deviation \(\sigma\), a criterion for the weakness of interactions is that the resulting system remains feasible and asymptotically stable; in other words, \(\sigma\sqrt{SC}K_{max}\ll 1\) (see Sec. 5 of the SI). We will refer to this as the 'weak-interaction regime'.
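A sketch of this sampling recipe together with the feasibility and stability checks. Here stability is assessed from the linearisation of the deterministic part of (1) around the fixed point of (2), neglecting the noise correction, and the lognormal parametrisation of the carrying capacities is again an assumption:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_interactions(S, C, sigma, K):
    """Random interaction matrix: each off-diagonal term is N(0, sigma^2) with
    probability C (the connectance) and zero otherwise; a_ii = -1/K_i."""
    A = np.where(rng.random((S, S)) < C, rng.normal(0.0, sigma, (S, S)), 0.0)
    np.fill_diagonal(A, -1.0 / K)
    return A

def feasible_and_stable(A, tau, w):
    """Feasibility: the solution of Eq. (2) is positive. Stability: the
    linearised deterministic dynamics around that point (noise correction
    neglected) has eigenvalues with negative real part."""
    xbar = np.linalg.solve(A, np.full(A.shape[0], tau * w / 2.0 - 1.0))
    if np.any(xbar <= 0):
        return False
    J = np.diag(xbar / tau) @ A
    return bool(np.all(np.linalg.eigvals(J).real < 0))

S, C, sigma = 50, 0.4, 0.03
K = rng.lognormal(np.log(0.1), 0.5, S)
A = sample_interactions(S, C, sigma, K)
print(sigma * np.sqrt(S * C) * K.max())       # weak-interaction criterion, << 1
print(feasible_and_stable(A, tau=0.1, w=0.1))
```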
Figure 3 shows the compliance with the three macroecological laws for different combinations of parameters within the weak-interaction regime (see Fig. 1**a** and **b** for
Figure 3: Grilli’s three macroecological laws as a function of the interaction parameters. Specifically, the figure shows the abundance fluctuation distribution (AFD) (panels **(a)**–**(c)**), Taylor’s law (panels **(d)**–**(f)**) and the mean-abundance distribution (MAD, panels **(g)**–**(i)**) for different values of the species number \(S\) (panels **(a)**, **(d)**, **(g)**), the connectance \(C\) (panels **(b)**, **(e)**, **(h)**), and standard deviation of the interaction constants \(\sigma\) (panels **(c)**, **(f)**, **(i)**). Results have been averaged over \(N=1000\) realizations of the SLVM (Eq. (1)), each one with a different random interaction matrix. Results including all realizations are depicted as a cloud of gray points, whereas averages are shown as colored bullets. The AFD obtained for a given realization contains the results for all species, represented in terms of rescaled logarithm abundances (\(z=\text{Var}(x)^{-1/2}\log(x/\bar{x})\)). Solid black lines correspond to gamma distributions. MAD plots **(g)**–**(i)** are obtained by properly rescaling the mean abundances, and are fitted by a normalized (zero mean, unit standard deviation) lognormal distribution (black solid line). Similarly, the black straight lines in panels **(d)**–**(f)** describe the relation \(\text{Var}(x_{i})\propto\bar{x}_{i}^{2}\) in logarithmic scale. All simulations are performed with \(S=50\), \(\tau_{i}=0.1\), \(w=0.1\), and carrying capacities sampled from a lognormal distribution (mean 0.1, standard deviation 0.5). Panels **(j)**, **(k)**, **(l)** illustrate the limits of the weak-interaction regime across the set of parameters that characterize species interactions. The plots quantify the compliance with **(j)** a gamma AFD, **(k)** Taylor’s law, and **(l)** a lognormal MAD, within the region where the system is stable and feasible. Each pixel corresponds to a combination of values of the network connectance \(C\) (horizontal axis) and the standard deviation \(\sigma\) of the distribution of interactions (vertical axis). The color of the pixel quantifies the distance from the AFD to a gamma distribution **(j)**, the value of the exponent \(\gamma\) in the relationship \(\text{Var}(x_{i})\propto\bar{x}_{i}^{\gamma}\) **(k)**, and the distance of the MAD to a lognormal distribution **(l)**, averaged over a sample of \(N=100\) realizations. Gray areas mark the region of the parameter space where the resulting systems are neither stable nor feasible.
the sampling procedure). The first row illustrates that fluctuations of the abundance around the mean values still follow a gamma distribution (first law); the second row reveals that \(\text{Var}(x_{i})\propto\bar{x}_{i}^{2}\), according to Taylor's law (second law); and the third row shows that the mean abundances very closely follow a lognormal distribution (third law). Particularly noteworthy is the compliance with the third law, given that the mean abundances are no longer fixed by the carrying capacities (see (2)), which do follow a lognormal distribution.
Importantly, the gamma abundance-fluctuation distribution remains unaffected regardless of the values taken by the interaction parameters (Fig. 3**j**). Moving closer to the boundary of the weak-interaction regime, we can see that Taylor's law still holds, but the exponent gets modified as \(\text{Var}(x_{i})\propto\bar{x}_{i}^{\gamma}\). As this boundary is approached, the exponent decreases down to values around \(\gamma\approx 1.4\) (Fig. 3**k**) and, likewise, the distance between the distribution of mean abundances and a lognormal increases (Fig. 3**l**), although it is never very large.
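The exponent \(\gamma\) can be estimated by an ordinary least-squares fit in log-log space; a minimal sketch:

```python
import numpy as np

def taylor_exponent(X):
    """Fit Var(x_i) ~ xbar_i**gamma in log-log space, from an abundance table
    X of shape (n_samples, S); Taylor's law corresponds to gamma close to 2."""
    mean = X.mean(axis=0)
    var = X.var(axis=0)
    gamma, _ = np.polyfit(np.log(mean), np.log(var), deg=1)
    return gamma

# e.g. with the stationary trajectory from the simulation sketch above:
# print(taylor_exponent(stationary))
```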
It is worth mentioning that the SLVM (Eq. (1)) provides an alternative way to comply with the third law other than sampling the carrying capacities from a lognormal distribution and remaining in the weak-interaction regime. Even if we choose constant carrying capacities (\(K_{i}=K\) for all \(i\)), (2) allows us to seek interaction matrices that shift the mean abundances so as to follow a lognormal distribution. Section 6 of the SI shows that such matrices do actually exist and yield stable and feasible communities. This finding brings species interactions into the long debate about the origin of heavy-tailed abundance distributions, something which, to the best of our knowledge, has been scarcely investigated. This is an issue that goes beyond the aim of the current work and will be explored in a forthcoming publication. Therefore, hereafter we focus on the weak-interaction regime with log-normally distributed carrying capacities.
### Interactions reproduce the distribution of correlations
An analysis of empirical data selected from the _EBI metagenomics_ platform [43] reveals that, on top of the three single-species macroecological laws that we have discussed so far, microbiomes exhibit non-trivial pairwise correlations. As a matter of fact, the distribution of all \(S(S-1)\) Pearson's coefficients (c.f. (3)) of a microbial community has a characteristic pattern (Fig. 1**c**). For all the microbiomes that we have considered, this distribution approximately covers the whole range of values (\(-1\leq\rho_{ij}\leq 1\)), and is very different from the narrow distribution peaked at zero that results from the approach with no species interactions [15; 19; 20] (see Fig. 2, as well as Fig. 4**a**). Worth noticing is the almost exponential decay to both sides of the interval, and the location of the maximum at distinctly negative values of \(\rho_{ij}\).
In order to find a set of interaction matrices **A** which are capable of inducing this type of correlations, while at the same time preserving Grilli's three laws, we have adopted a Bayesian approach. We know from the previous analysis (Fig. 2 right) that matrices inducing correlations distributed similarly to the empirical ones do exist. Thus, we take the empirical distributions as given--within a Gaussian error--and wonder about the posterior probability distribution of interaction matrices **A**. Needless to say, this distribution cannot be computed analytically, so in order to sample matrices **A** out of the ensemble of possible solutions we need to perform a Markov-chain Monte Carlo (MCMC) simulation (see 'Materials and methods' for the details).
As an illustration of the results of this approach, Fig. 4**a** shows the distribution of Pearson's coefficients obtained for five biomes (Fig. S17 of the SI contains the results for all available biomes in our dataset), along with the empirical ones. The figure is very eloquent as it reveals a very precise agreement in all cases. Remarkably, these results are obtained when the interaction matrix **A** is still very sparse, as Fig. 4**b** illustrates--a value for which the three macroecological laws hold in the presence of interactions (see Fig. 3). As a consistency test, we have checked that this is indeed the case for the particular matrices **A** obtained through the MCMC (see Figs. S18, S19, S20 of the SI).
It is worth mentioning that similar results to those of Fig. 4**a** can be obtained for matrices with a higher connectance (see bottom plot of Fig. 4**b**). However, these matrices usually produce a lower exponent in Taylor's (second) law, as well as a distribution of mean abundances that deviates from a lognormal distribution (third law). We have performed a different MCMC in which the empirical mean-abundance distribution is taken as given, and indeed we then can fix the first and third law, but not the exponent of the second. While we cannot rule out the possibility that highly connected matrices can achieve similar accuracy in reproducing the correlations without spoiling the macroecological laws, our results strongly suggest that a low connectance of the network of interactions might be an important feature of real microbiomes. This seems to be consistent with existing experimental evidence [23; 26].
## Discussion
The recent discovery of universal large-scale patterns in natural bacterial communities opens the way to empirically validate possible ecological models [15]. While the properties of species-abundance fluctuations constrain the models to include stochastic environmental fluctuations, the pattern of pairwise correlations can reveal the nature and structure of species interactions. This is a promising direction because it is well documented that species interactions play a fundamental role in the behaviour of microbial communities. As a matter of fact, they may underlie the critical features associated, e.g.,
with health disorders [32], such as Crohn's disease [33] or other forms of inflammatory bowel syndrome [34], and many current treatments rely on competition among bacteria [31, 35].
However, neither the SLM nor any other single-species approach is able to account for the abundance correlation patterns that these communities exhibit. It is true that the existence of a correlation between a pair of species does not necessarily imply a direct interaction between them--it may be caused by similar or opposite responses to environmental fluctuations or external driving forces. As a matter of fact, a recent work has shown that environmental filtering, i.e. the presence of correlated external noise, is the main driver of pairwise species correlations at small phylogenetic distances (i.e. with specific taxa) [22]. Nevertheless, species correlations emerge also between diverse taxa and hence, other ecological interactions must also be at play. Furthermore, the widespread presence of negative correlations renders this explanation incomplete.
In this paper, we have shown that a generalized Lotka-Volterra model comprising species pairwise interactions and stochastic environmental fluctuations can reproduce the empirical distribution of species correlations. This conclusion is also supported by a recent work where microbial communities are investigated within the framework of a consumer-resource model, with the conclusion that competition for resources can account for diverse
Figure 4: Abundance correlation distributions for real and simulated communities. In **(a)**, different colored bullets correspond to different biomes selected from the _EBI metagenomics_ platform [43] (namely Seawater, River, Lake, Glacier and Sludge communities). Black dashed lines portray the distribution of Pearson’s coefficients for the abundance correlation of all pairs of species resulting from the SLM. Gray curves show the same distributions as obtained from the SLVM (c.f. (1)), with the Lotka-Volterra interaction constants inferred using the Bayesian algorithm described in Material and methods. The top panel of **(b)** shows the Euclidean distance between the log-distributions of Pearson’s correlation coefficients of pairs of species, as obtained from simulations and from empirical data (_Seawater_ using only species appearing in at least 50% of the samples), as a function of the iterations of the MCMC. The bottom panel shows the connectance \(C\) of the inferred interaction matrices as a function of iterations. The inset illustrates the distributions after 2000 iterations—where the distance is still large—and after 10000 iterations—where the distance has dropped below 10% of the initial distance. Remarkably, this iteration corresponds to a network with a connectance \(C\approx 0.1\), suggesting that a small fraction of interactions is enough to reproduce the distribution of correlations. See Figs. S21 and S22 of the SI for similar results in the rest of the considered biomes.
statistical patterns across different microbiomes [44].
Furthermore, we have shown that the model including species interactions complies with Grilli's three macroecological laws as well as with the empirical pattern of pairwise abundance correlations, simultaneously. In particular, we have found that sparse interaction matrices are able to achieve all these goals, in agreement with the empirically reported sparsity of microbial interaction networks [23; 26]. More specifically, we have observed that the set of interaction matrices leading to correlations compatible with the empirical ones nearly overlaps with the set of matrices rendering the model feasible and asymptotically stable, provided one accepts that the exponent of Taylor's law can be less than 2. Whether this is a weakness of the model or a prediction that some communities may exhibit this extended version of Taylor's law remains to be explored.
We are aware that some modelling choices can be questioned. For instance, the use of pairwise interactions in a Lotka-Volterra-like fashion. Apart from its long tradition in theoretical ecology, recent works [45; 46] show that, within some limits, it is a reasonable choice. Nevertheless, it has been argued that higher-order interactions may be crucial in the correct assessment of community stability and the understanding of its conflicting relationship with species diversity [40; 47]. Microbiomes are extraordinarily complex communities where processes involving more than two species may be prevalent [48]. Therefore, including higher-order interaction terms in (1) is a generalization worth exploring.
Aside from these obvious limitations, our analysis offers many possibilities to reach a deeper understanding of microbial communities and their emerging ecological patterns. For instance, whereas the origin of the gamma abundance fluctuations distribution--and its cousin, Taylor's law--is related to the multiplicative nature of the noise (the larger the abundance the larger its fluctuations), we still lack a good explanation for the appearance of a lognormal mean-abundance distribution. Both in the SLM and the present SLVM it has been imposed by purposely tailoring the carrying capacities of the species. But through (2), the SLVM offers the possibility that a special choice of the interaction constants--away from what we have termed the weak-interaction regime--may induce a lognormal mean abundance distribution in some self-organized way. Preliminary analyses show that this can indeed happen (see Sec. 6 of the SI), placing the explanatory burden on the nature of the interaction networks. This brings the network theory of species interactions into the long-lasting debate [49] about the origin of mechanistic processes behind the emergence of heavy-tailed species-abundance distributions--something that, to the best of our knowledge, has been scarcely explored so far (see [50] for an exception). More generally, our work indicates that empirical correlations and fluctuation patterns can be reproduced by a large diversity of possible interaction matrices. In future works we will characterize such a matrix ensemble, with the ambitious goal of extrapolating its minimal structural features, allowing one to understand and simulate in-silico complex microbial communities.
Perhaps the most important message of the present work is that direct interactions between species are as relevant in microbiomes as they are in other more traditional ecosystems--such as animal-plant communities or food webs. In this regard, our analysis brings the study of microbes closer to the well-established framework of community ecology, where generalized Lotka-Volterra models play a central role. This paves the way to testing theoretical laws in ecology through experiments performed in microbial communities. The test of the stability-diversity relationship carried out in Ref. [51] is an excellent example of this idea. In view of the usual scarcity of data for traditional ecosystems, the overwhelming amount of microbial data provided by metagenomics opens an avenue of unprecedented possibilities for ecology.
## Material and Methods
### Numerical solution of the SLVM
Equation (1) was solved numerically using an Euler-Maruyama integration scheme [52]. For each species, the solution depicts a noisy logistic trajectory, with the stationary mean population set by the interaction properties. In this framework, the population of a given species in different samples may be recovered, once the dynamics have reached the stationary state, by either selecting abundances at different times (longitudinal data) or considering the abundances of different realizations at the same time (cross-sectional data). Both ways lead to identical results (i.e., the system is ergodic). Further details are discussed in Sec. 1 of the SI.
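A sketch of the two equivalent read-out schemes, reusing the simulate function from the integration sketch above (the sample spacings and counts below are illustrative choices):

```python
import numpy as np

# longitudinal samples: one long run, read out at well-separated times
long_traj = simulate(A, tau, w, x0=np.full(S, 0.05), n_steps=100_000)
longitudinal = long_traj[50_000::500]

# cross-sectional samples: independent realizations, read out at the final time
cross_sectional = np.stack([
    simulate(A, tau, w, x0=np.full(S, 0.05), n_steps=20_000)[-1]
    for _ in range(100)
])

# ergodicity: the two abundance tables should give compatible statistics,
# e.g. strongly correlated per-species means
print(np.corrcoef(longitudinal.mean(axis=0), cross_sectional.mean(axis=0))[0, 1])
```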
### Environmental noise matrix sampling
To produce a random, positive definite, symmetric matrix \(\mathbf{W}\) we factor it as \(\mathbf{W}=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{t}\), where \(\mathbf{U}\) is an \(S\times S\) orthogonal matrix (\(\mathbf{U}\mathbf{U}^{t}=\mathbf{U}^{t}\mathbf{U}=\mathbf{I}\)) and \(\mathbf{\Lambda}\) is a diagonal matrix whose diagonal elements are random, non-negative real numbers (see Sec. S5 of the SI for a full account). The orthogonal matrix \(\mathbf{U}\) can be generated by randomly sampling from a Haar distribution (generated using the Python function ortho_group from the SciPy package [53]).
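A minimal sketch of this construction; the uniform law for the eigenvalues is an illustrative assumption, since the text only requires them to be random and non-negative:

```python
import numpy as np
from scipy.stats import ortho_group

def sample_noise_matrix(S, lam_max=0.2, seed=3):
    """Random symmetric positive definite W = U diag(lam) U^T, with U
    Haar-distributed orthogonal and strictly positive random eigenvalues."""
    rng = np.random.default_rng(seed)
    U = ortho_group.rvs(dim=S, random_state=rng)
    lam = rng.uniform(1e-6, lam_max, size=S)   # assumed eigenvalue law
    return U @ np.diag(lam) @ U.T

W = sample_noise_matrix(S=50)
assert np.allclose(W, W.T)                     # symmetric by construction
```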
### Bayesian approach
The posterior distribution of matrix \(\mathbf{A}\), given the correlation distribution \(\rho\), is obtained as
\[P(\mathbf{A}|\rho)=\frac{P(\rho|\mathbf{A})P(\mathbf{A})}{P(\rho)}. \tag{4}\]
In order to sample matrices \(\mathbf{A}\) from the posterior distribution \(P(\mathbf{A}|\rho)\) we apply a Metropolis-Hastings algorithm [52]. This amounts to replacing those samples by samples of a purposely tailored Markov chain. At each step \(n\) of this chain, a pair of species \((i,j)\) is randomly selected and its corresponding interaction constant gets modified as \(a_{ij}^{(n+1)}=a_{ij}^{(n)}+\eta\), where \(\eta\) is a random variable sampled from a uniform distribution in \((-\epsilon,\epsilon)\). This change is accepted with probability \(\min(1,H_{n})\)--otherwise rejected--where the Hastings factor (using Bayes's formula (4)) is obtained as
\[H_{n}=\frac{P(\rho|\mathbf{A}^{(n+1)})P(\mathbf{A}^{(n+1)})}{P(\rho|\mathbf{A }^{(n)})P(\mathbf{A}^{(n)})}.\]
The likelihood is computed as
\[P(\rho|\mathbf{A})\propto\exp\left\{-\frac{1}{2\Delta^{2}}\sum_{i}\log^{2} \left(\frac{\rho(x_{i})}{\hat{\rho}(x_{i})}\right)\right\},\]
where \(\rho(x)\) is the empirical distribution of Pearson's coefficients, \(\hat{\rho}(x)\) is the one computed using matrix \(\mathbf{A}\), and \(\Delta\) is the error in the empirical data (experiments have been performed by considering different values for this quantity in the range \(0.2\leq\Delta\leq 1\), with similar results). As for the prior \(P(\mathbf{A})\), we choose it to be zero if \(\mathbf{A}\) leads to an unstable or unfeasible community, and constant otherwise. Finally, \(\epsilon\) is selected so as to keep an acceptance ratio along the Markov chain of \(\sim\)30%.
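Putting the pieces together, one Monte Carlo update might look as follows. This is a schematic sketch that reuses the simulate, pairwise_correlations and feasible_and_stable helpers from the earlier sketches; the binning and clipping details are illustrative, and \(\Delta\), \(\epsilon\) correspond to delta and eps below.

```python
import numpy as np

def mh_step(A, ll_old, rho_emp, bins, eps, delta, tau, w, rng):
    """One Metropolis-Hastings update of the interaction matrix. The flat prior
    (zero outside the feasible/stable set) cancels in the Hastings factor, so
    the acceptance test reduces to a likelihood ratio."""
    S = A.shape[0]
    i = int(rng.integers(S))
    j = int(rng.integers(S))
    while j == i:                                   # perturb an off-diagonal pair
        j = int(rng.integers(S))
    A_new = A.copy()
    A_new[i, j] += rng.uniform(-eps, eps)
    if not feasible_and_stable(A_new, tau, w):      # prior is zero here: reject
        return A, ll_old
    X = simulate(A_new, tau, w, x0=np.full(S, 0.05))
    rho_model, _ = np.histogram(pairwise_correlations(X[X.shape[0] // 2:]),
                                bins=bins, density=True)
    ratio = np.maximum(rho_emp, 1e-9) / np.maximum(rho_model, 1e-9)
    ll_new = -np.sum(np.log(ratio) ** 2) / (2.0 * delta ** 2)
    if np.log(rng.uniform()) < ll_new - ll_old:     # accept with prob. min(1, H_n)
        return A_new, ll_new
    return A, ll_old
```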
###### Acknowledgements.
This work has been supported by (i) grants PGC2018-098186-B-I00 (BASIC), PID2021-128966NB-I00, and PID2020-113681GB-I00 of the Spanish Ministry and Agencia Estatal de Investigacion (MCIN/AEI/10.13039/501100011033), and by the European Regional Development Funds (ERDF) "A way of making Europe", and (ii) project B-FQM-366-UGR20 (ERDF) of the Consejeria de Conocimiento, Investigacion y Universidad, Junta de Andalucia and Universidad de Granada. We also thank Jacopo Grilli for a critical reading of the manuscript.
|
2303.14926 | Continuous Intermediate Token Learning with Implicit Motion Manifold for
Keyframe Based Motion Interpolation | Deriving sophisticated 3D motions from sparse keyframes is a particularly
challenging problem, due to the demands of continuity and exceptional skeletal precision.
The action features are often derivable accurately from the full series of
keyframes, and thus, leveraging the global context with transformers has been a
promising data-driven embedding approach. However, existing methods often take
as inputs intermediate frames interpolated from the keyframes with basic
interpolation methods for continuity, which results in a trivial local minimum
during training.
latent motion manifolds with keyframe-based constraints, from which the
continuous nature of intermediate token representations is considered.
Particularly, our proposed framework consists of two stages for identifying a
latent motion subspace, i.e., a keyframe encoding stage and an intermediate
token generation stage, and a subsequent motion synthesis stage to extrapolate
and compose motion data from manifolds. Through our extensive experiments
conducted on both the LaFAN1 and CMU Mocap datasets, our proposed method
demonstrates both superior interpolation accuracy and high visual similarity to
ground truth motions. | Clinton Ansun Mo, Kun Hu, Chengjiang Long, Zhiyong Wang | 2023-03-27T05:53:01Z | http://arxiv.org/abs/2303.14926v1 | Continuous Intermediate Token Learning with Implicit Motion Manifold for Keyframe Based Motion Interpolation
###### Abstract
Deriving sophisticated 3D motions from sparse keyframes is a particularly challenging problem, due to the demands of continuity and exceptional skeletal precision. The action features are often derivable accurately from the full series of keyframes, and thus, leveraging the global context with transformers has been a promising data-driven embedding approach. However, existing methods often take as inputs intermediate frames interpolated from the keyframes with basic interpolation methods for continuity, which results in a trivial local minimum during training. In this paper, we propose a novel framework to formulate latent motion manifolds with keyframe-based constraints, from which the continuous nature of intermediate token representations is considered. Particularly, our proposed framework consists of two stages for identifying a latent motion subspace, _i.e._, a keyframe encoding stage and an intermediate token generation stage, and a subsequent motion synthesis stage to extrapolate and compose motion data from manifolds. Through our extensive experiments conducted on both the LaFAN1 and CMU Mocap datasets, our proposed method demonstrates both superior interpolation accuracy and high visual similarity to ground truth motions.
## 1 Introduction
Pose-to-pose keyframing is a fundamental principle of character animation, and animation processes often rely on key pose definitions to efficiently construct motions [6, 11, 30]. In computer animation, keyframes are temporally connected via interpolation algorithms, which derive intermediate pose attributes to produce smooth transitions between key poses. However, human motion is often complex and difficult to be effectively represented by sparse keyframe sequences alone. While this can be addressed by producing denser sequences of key poses, this approach is laborious for animators, thereby increasing the cost of keyframed animation processes. Even with Motion Capture (MoCap) workflows, artists must often resort to keyframing in order to clean artifacts, impose motion constraints, or introduce motion features irreplicable by motion capture performers.
Learning-based motion interpolation methods have recently been proposed as an acceleration of the keyframed animation process, by automatically deriving details within keyframe transitions as shown in Figure 1. Various machine learning methods have been explored to enable more realistic interpolation solutions from high quality MoCap databases, e.g. by using recurrent networks [14, 15, 40] or transformer-based approaches [10, 26, 31]. Guiding data-driven interpolation with real motions is particularly attractive for keyframe animation workflows, as realistic motions often require the greatest amount of keyframing, by virtue of their subtle motion details and physical constraints.
Naturally, as a sequence in-painting problem, motion interpolation can be formulated as a masked sequence-to-sequence task, which recent, popular transformer approaches are expected to learn effectively [4, 38, 42, 43]. However, sequential learning of masked continuous attributes with transformers is largely impaired by the conventional masked tokens for intermediate data. A token is defined as an individual data element on the extendable axis of a sequence, namely the temporal axis for motions. In cur
Figure 1: An example of motion interpolation by our method (first row), given the keyframes of a hopping motion (in blue), compared with the ground truth (second row).
rent sequence modelling formulations, a token is usually represented by a one-hot vocabulary vector to specify individual words or masked elements, which poses a limitation on continuous attributes. Since continuous attributes can be assigned any real value, there exists no value by which a masking token can be defined without corresponding to an otherwise valid input. To work around this issue, previous approaches have employed transformer decoder-level mask tokens, and linear interpolation (LERP)-based tokens have also been explored [10, 16, 31]. However, these approaches have innate incompatibilities with the transformer architecture. Singular mask token representations, regardless of their point of introduction, result in discontinuous hidden representations, which are antithetical to the evaluation of continuous motion data. On the other hand, the use of LERP as a pre- or post-processing step necessarily introduces an accurate starting estimate to the solution, which transformer models are prone to becoming over-reliant on [24, 45]. To fully address these limitations, we propose a novel transformer-based framework that learns to model keyframe sequences into latent motion manifold representations for intermediate tokens, which reflects the smooth and continuous nature of human motion.
As illustrated in Figure 2, our proposed framework incorporates three stages with transformers to convert a keyframe sequence into a complete motion sequence: Stage-I is a _keyframe encoding stage_ to formulate the overall motion patterns from the keyframe sequence into keyframe context tokens as a guidance for further modelling; Stage-II is an _intermediate token generation stage_, where temporal indices are mapped into intermediate token representations with the keyframe context tokens, which serve as an implicit latent motion manifold constraint; and Stage-III, a _motion synthesis stage_, takes the obtained intermediate tokens by injecting them within the keyframe token sequence, and interpolating them to derive a refined motion sequence estimation.
With this framework, our transformer-based approach exhibits two key advantages over existing approaches that enable its high-quality motion interpolation: a) Manifold learning allows our framework to establish temporal continuity in its latent representation space, and b) The latent motion manifold constrains our transformer model to concentrate its attention exclusively towards motion keyframes, as opposed to intermediate tokens derived from non-keyframe poses, such as those derived from LERP, thereby forcing a necessary alignment between the known and unknown tokens adaptively.
In addition, we identify an adverse link between continuous features and normalisation methods with per-token re-centering. Specifically, layer normalisation (LayerNorm) [1], which is commonly used in transformer architectures, constrains the biases of token features based on their individual distributions. Though this is well-known to be effective with linguistic models [25, 42], continuous data inherently contain biases that should be leveraged at sequence level. Therefore, we introduce a sequence-level re-centering (Seq-RC) technique, where positional pose attributes of keyframes are recentred based on their distribution throughout a motion sequence, and root-mean-square normalisation (RMSNorm) [47] layers are then employed to perform magnitude-only normalisation. Though RMSNorm was initially proposed as only a speedup to LayerNorm, our observations demonstrate that Seq-RC leads to superior performance in terms of accuracy and visual similarity to MoCap sequences.
In summary, our paper's key contributions are threefold:
1. We propose a novel transformer-based architecture consisting of three cooperative stages. It constrains the evaluation of unknown intermediate representations of continuous attributes to the guidance of keyframe context tokens in a learned latent manifold.
2. We devise sequence-level re-centering (Seq-RC) normalisation to effectively operate with real scalar attributes with minimal accuracy loss.
3. Extensive comparisons and ablation results obtained on LaFAN1 and CMU Mocap strongly demonstrate the superiority of our method over the state-of-the-art.
## 2 Related work
In this section, we explore existing machine learned methods by which motion keyframes, or key tokens of other mediums, can be used to generate full sequences. We also review known methods for intermediate data prediction.
### Motion synthesis and completion
The necessities of motion interpolation techniques have existed since the early days of computer animation. A key advantage of computer animation over traditionally drawn animation is its ability to automatically evaluate a smooth motion from a keyframe representation. The widely accepted method for applying interpolation to motion keyframes is through the use of function curves (F-Curves), by which various mathematical functions can be defined between intervals. The common functions used today for keyframed motion representation are often based on Bezier spline curves [18, 37], though any function can technically be employed for this purpose. In addition, inverse kinematics-based constraints [34] have been used jointly with Markov chain methods [23] and decision graphs [21] to generate constrained keyframe or motion sequence transitions in interactive applications such as 3D video games.
More recently, deep learning methods have enabled effective motion completion from sparse keyframe representations. By formulating motion interpolation as a motion synthesis problem with keyframe constraints, a neural
network-assisted keyframed animation approach is emerging as a more effective alternative to the current approaches. Recurrent neural networks have been able to derive realistic motion details from motion keyframe compositions [15, 22, 48] and real-time control schemes [40]. In addition, sequence masking approaches such as BERT-based and autoencoder-based methods [7, 10, 19, 26, 29, 31] have enabled full keyframe sequence analysis for completing motions with a more comprehensive context. However, these methods are severely affected by the tendency of transformer-based networks toward a trivial local minimum, given an initial LERP starting point. This limitation in transformers has been thoroughly observed and documented as the result of early gradient instabilities in attention weights [24, 42, 45, 2].
Learned pose manifolds have become a prominent approach to synthesise plausible human poses, which restrain pose attributes to a specified space [32, 41]. These methods mainly focus on dense poses from complete motion sequences. Our work extends this concept for an incomplete and sparse scenario, by using the keyframes as constraints to derive an implicit motion manifold.
### Transformer-based temporal in-painting
Inter-frame video interpolation is a similar task to motion interpolation, due to its common goal of predicting transitions between frames on the temporal axis. Like motion in-painting methods, temporal transformers for video interpolation use blended inputs as masks for continuous attributes [28]. Alternatively, when interpolating with an individual interval, the mask tokens can be copied from the previous keyframe, with positional encoding being the sole difference between input tokens [27]. While the feature extraction process of video data can take full advantage of the global context afforded by using transformers, other mechanisms such as convolutions can be integrated with a transformer for video in-painting as well [36]. For skeleton-based motion data, existing studies suggest that graph convolutional networks can improve analytic and synthetic performance [8, 30, 26, 13]; however, the application of pure convolutional approaches for data synthesis is only valid when the interval length between keyframes is constant.
### Masked data modelling
Various mask-based machine learning techniques have been proposed to estimate missing values of incomplete sequences based on their known values. The use of masked tokens is well known to be highly effective for producing pre-trained linguistic transformer models [9, 25, 38, 42]. However, due to the aforementioned limitations on token representations for continuous attributes, adapting transformer-based masked data modelling for computer vision tasks has largely focused on masking schemes. Discretised tokens for visual data are a proposed workaround [3, 44]; however, the information loss of tokenisation renders this technique unfeasible for precise mediums like motion data. Masked auto-encoders assign mask tokens to an encoded latent space [16, 44], and adopt them to masked sequences at the decoder level. This allows the transformer encoder to learn solely from known tokens; however, the monolithic definition of masked tokens results in a discontinuous sequence representation in latent space.
Additionally, masked convolutional networks have been a consistently effective approach for image-based data. The popular U-Net structure of convolutional models [33] involves a data bottleneck, which elicits behaviours for ignoring masked image regions [17, 39, 46]. However, the pooling technique of convolutional models restrains the learned features to a fixed scale, and struggles to extrapolate global features with increased data scales, e.g. higher image resolutions or longer videos. Conversely, transformer-based methods are particularly effective with long-range feature learning, and as such, we believe them to be the more suitable for keyframe-guided sequential motion learning.
## 3 Methodology
We formulate the motion interpolation task as a data imputation problem from a sparse representation. The primary objective of our solution is to define motion sequence representations as a function of motion keyframes, which we learn using a transformer-based model. As shown in Figure 2, our solution comprises of three cooperative stages to learn a latent motion manifold using transformers to convert a sequence \(K=\{x_{0},x_{t_{1}},x_{t_{2}},...,x_{N-1}\}\) at frames \(T_{K}=\{0,t_{1},t_{2},...,N-1\}\) into a complete motion sequence \(Y=\{y_{0},y_{1},y_{2},...,y_{N-1}\}\) with temporal indices \(T=\{0,1,2,...,N-1\}\) (i.e. \(T_{K}\subset T\)):
* Stage-I, a **keyframe encoding stage** formulates the keyframe sequence \(K\) into keyframe context tokens \(\Phi^{\text{key}}(K)=\{\phi_{0},\phi_{t_{1}},\phi_{t_{2}},...,\phi_{N-1}\}\), which serves as the motion encoding for further modelling.
* Stage-II, an **intermediate token generation stage** maps the temporal indices \(t\in T\backslash T_{K}\) into intermediate tokens \(\Phi^{\text{imd}}(t|\Phi^{\text{key}}(K))\) with the guidance of keyframe context, which constrains the latent space to obtain an implicit motion manifold.
* Stage-III, a **motion synthesis stage** injects the intermediate tokens between the keyframe tokens \(\Phi^{\text{key}}(K)\), and decodes the resulting token sequence with a transformer \(\Phi^{\text{syn}}\) to derive a motion sequence \(\hat{Y}=\{\hat{y}_{0},\hat{y}_{1},...,\hat{y}_{N-1}\}\) as an estimation of \(Y\). A schematic sketch of this pipeline follows the list.
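A minimal PyTorch sketch of how the three stages compose, assuming hypothetical module interfaces for \(\Phi^{\text{key}}\), \(\Phi^{\text{imd}}\) and \(\Phi^{\text{syn}}\) (the paper does not prescribe these exact signatures):

```python
import torch

def interpolate(keyframes, key_times, n_frames, stage1, stage2, stage3):
    """Compose the three stages; stage1/stage2/stage3 stand in for the
    transformers Phi_key, Phi_imd and Phi_syn (interfaces are illustrative)."""
    key_tokens = stage1(keyframes, key_times)                 # (Nk, d) context tokens
    mid_times = [t for t in range(n_frames) if t not in set(key_times)]
    mid_tokens = stage2(torch.tensor(mid_times), key_tokens)  # (N - Nk, d) manifold points

    d = key_tokens.shape[-1]
    seq = torch.empty(n_frames, d)
    seq[torch.tensor(key_times)] = key_tokens                 # inject keyframe tokens
    seq[torch.tensor(mid_times)] = mid_tokens                 # inject intermediate tokens
    return stage3(seq)                                        # full pose sequence estimate
```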
### Keyframe poses to motion sequence context
Our pose representations comprise seven elements per joint: \(P_{t}\in\mathbb{R}^{J\times 3}\) values for global 3D positions produced through forward kinematics (FK), and \(q_{t}\in\mathbb{R}^{J\times 4}\) values for unit quaternion representations of local rotations, that is, \(x_{t}=[P_{t},q_{t}]\in\mathbb{R}^{J\times 7}\), where \(J\) represents the number of joints. In motion capture, the local 3D position of each joint is generally constant, determined by its positional offset from its parent joint, and does not require explicit representation in the input. The sole exception is the root joint, whose global position is its local position.
In Stage-I, we encode the keyframe sequence \(K\) into a learned keyframe context token representation \(\Phi^{\text{key}}(K)\). It is used as a feature map for both the intermediate token generation and motion synthesis stages. First, we project the pose data \(x_{t}\), \(t\in T_{K}\), of each keyframe into a pose embedding vector \(x^{\prime}_{t}\in\mathbb{R}^{d}\) using a linear layer, where \(d\) is the token embedding dimension. Next, we apply an \(n\)-dimensional sinusoidal positional encoding (PE) [42] to \(x^{\prime}_{t}\). Unlike natural language processing (NLP) approaches, we do not add the PE to our token representations, but instead concatenate a fixed-length PE, for two key reasons (a minimal sketch of this scheme follows the list below):
* PE acts as a sliding binary vector, and thus can represent \(2^{n}\) positions using \(n\) elements. For our task, we set \(n\) to 16, allowing our PE vector to produce \(2^{16}\) positions, which is sufficient for our purposes.
* Additive PE introduces minor disruptions in token representations. While the discrete data of NLP can benefit from slight variations in token representations, such variations act as a hindrance for continuous data, where precision is essential.
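To make the concatenation scheme concrete, the following is a minimal PyTorch sketch. Whether the PE slice is appended to the \(d\)-dimensional pose embedding (as here) or reserved within it is an implementation detail we leave as an assumption.

```python
import math
import torch
import torch.nn as nn

def sinusoidal_pe(frames: torch.Tensor, n: int = 16) -> torch.Tensor:
    """n-dimensional sinusoidal PE for integer frame indices of shape (num_tokens,)."""
    i = torch.arange(n // 2, dtype=torch.float32)
    freqs = torch.exp(-math.log(10000.0) * 2 * i / n)   # (n/2,)
    angles = frames.float().unsqueeze(-1) * freqs       # (num_tokens, n/2)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

class KeyframeEmbedding(nn.Module):
    """Project keyframe poses, then concatenate (rather than add) the PE."""
    def __init__(self, pose_dim: int, d: int = 512, n: int = 16):
        super().__init__()
        self.proj = nn.Linear(pose_dim, d)  # pose embedding x'_t
        self.n = n

    def forward(self, poses: torch.Tensor, frames: torch.Tensor) -> torch.Tensor:
        # poses: (num_keyframes, pose_dim); frames: (num_keyframes,)
        return torch.cat([self.proj(poses), sinusoidal_pe(frames, self.n)], dim=-1)
```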
### Sequence-level re-centering normalisation
Our Stage-I transformer \(\Phi^{\text{key}}\) comprises a number of layers based on multi-head self-attention, each followed by a GELU-activated feed-forward network (FFN). We replace all instances of LayerNorm in the transformer encoders with RMSNorm [47], which does not involve a re-centering step. This avoids the token-level re-centering present in LayerNorm, which we observed to be detrimental for our regression tasks. We believe that this is due to feature biases that are present in, and crucial to, continuous attributes; e.g. a pose with high root position values will result in all joints having similarly high position values. Therefore, we introduce a sequence-level re-centering scheme for normalisation at the transformer input level only, re-centering the root positions of the input based on the mean root positions of all keyframes in the input sequence.
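Below is a minimal sketch of the two normalisation components, assuming PyTorch; the tensor layouts and the exact point at which the offset is restored are our own assumptions rather than the definitive implementation.

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """RMSNorm [47]: rescales features without the re-centering step of LayerNorm."""
    def __init__(self, dim: int, eps: float = 1e-8):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = x.pow(2).mean(dim=-1, keepdim=True).sqrt()
        return self.scale * x / (rms + self.eps)

def recenter_sequence(root_pos: torch.Tensor):
    """Sequence-level re-centering applied at the input level only.

    root_pos: (num_keyframes, 3) root positions of the input keyframes.
    Returns the re-centred positions and the offset to restore after synthesis.
    """
    offset = root_pos.mean(dim=0, keepdim=True)  # mean keyframe root position
    return root_pos - offset, offset
```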
### Motion manifold with context-guided attention
In Stage-II, our intermediate token generation transformer \(\Phi^{\text{imd}}\) aims to learn motion manifolds navigable by temporal indices \(t\in\mathbb{N}\) under the guidance of the keyframe context from \(\Phi^{\text{key}}(K)\). In detail, we accomplish this through a keyframe context-guided attention mechanism, where the attention derives its key and value mappings exclusively from linear transformations of \(\Phi^{\text{key}}(K)\). For each intermediate token, the query is simply a sinusoidal embedding of the token's temporal position.
We constrain our manifold implicitly using two mechanisms. Firstly, the intermediate tokens are sourced entirely as a product of \(\Phi^{\text{key}}(K)\) value transformations, which inherently limits the range of latent representations in the manifold. Secondly, the 1D convolutional layers in Stage-III entail feature dependencies between temporally adjacent tokens, which enables the disparately obtained \(\Phi^{\text{key}}\) and \(\Phi^{\text{imd}}\) tokens to converge towards coordination.
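A single-head sketch of this context-guided attention follows, assuming PyTorch; multi-head splitting, residual paths, and the subsequent FFN are omitted, and `sinusoidal_pe` from the earlier sketch is assumed to be expanded to the token width \(d\).

```python
import torch
import torch.nn as nn

class ContextGuidedAttention(nn.Module):
    """Keys and values come only from the keyframe context tokens, while each
    query is a positional embedding of a missing frame index."""
    def __init__(self, d: int):
        super().__init__()
        self.to_k = nn.Linear(d, d)
        self.to_v = nn.Linear(d, d)
        self.scale = d ** -0.5

    def forward(self, query_pe: torch.Tensor, key_ctx: torch.Tensor) -> torch.Tensor:
        # query_pe: (num_missing, d) embeddings of t in T \ T_K (assumed width d)
        # key_ctx:  (num_keyframes, d) tokens from Phi_key(K)
        k, v = self.to_k(key_ctx), self.to_v(key_ctx)
        attn = torch.softmax(query_pe @ k.transpose(0, 1) * self.scale, dim=-1)
        return attn @ v  # intermediate tokens, bounded by the context values
```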
### Interpolated motion synthesis
Figure 2: Overview of our transformer architecture and its three main components: (I) Encoding transformer \(\Phi^{\text{key}}\), (II) Intermediate token generation transformer \(\Phi^{\text{imd}}\), and (III) Motion synthesis transformer \(\Phi^{\text{syn}}\).

In Stage-III, a transformer \(\Phi^{\text{syn}}\) is introduced to take the intermediate token representations \(m_{t}\) and estimate the complete motion sequence \(\hat{Y}\). Since the primary role of the keyframe context tokens \(\Phi^{\text{key}}(K)\) is to derive the intermediate token embeddings, an additional projection is performed using a single FFN \(\texttt{FFN}(\Phi^{\text{key}}(K))\) to reformulate the keyframe context tokens into keyframe tokens that adhere to the motion manifold. The intermediate tokens obtained from \(\Phi^{\text{imd}}\) can be used directly. To this end, we can construct the resulting motion manifold \(\hat{M}\) with a token sequence \(\{\hat{m}_{0},\hat{m}_{1},...,\hat{m}_{N-1}\}\) as follows:
\[\hat{m}_{t}=\begin{cases}\texttt{FFN}(\Phi^{\text{key}}(K)_{t}),&\text{if }t\in T_{K}\\ m_{t},&\text{otherwise}\end{cases} \tag{1}\]
where \(\Phi^{\text{key}}(K)_{t}\) is the context token representation for the keyframe of index \(t\) in the sequence.
Before applying the transformer \(\Phi^{\text{syn}}\), we feed the tokens of \(\hat{M}\) through a 1D convolutional layer of kernel size 3. In each layer of \(\Phi^{\text{syn}}\), a self-attention function is followed by an FFN. The output token sequence of \(\Phi^{\text{syn}}\) is fed through another 1D convolutional layer before the final linear projection. The final output \(\hat{Y}_{t}\) consists of a root position estimation \(\hat{p}_{t,0}\in\mathbb{R}^{3}\) of \(p_{t,0}\) and local quaternions \(\hat{q}_{t}\in\mathbb{R}^{J\times 4}\). Note that \(p_{t,0}\) and \(q_{t}\) can jointly be used to compute the global position \(P_{t}\) and rotation \(Q_{t}\) information via FK.
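The following sketch illustrates this synthesis pipeline under several assumptions: batch-first PyTorch tensors, a dense keyframe-token tensor with valid values only at keyframe slots, a stock `nn.TransformerEncoder` (with LayerNorm) standing in for \(\Phi^{\text{syn}}\), and an output width of \(3+4J\) with \(J=22\) assumed.

```python
import torch
import torch.nn as nn

class MotionSynthesis(nn.Module):
    """Stage-III sketch: assemble manifold tokens (Eq. 1), smooth them with 1D
    convolutions, decode with self-attention, and project to pose outputs."""
    def __init__(self, d: int = 512, num_layers: int = 8, out_dim: int = 91):
        super().__init__()
        # out_dim = 3 root coordinates + 4J quaternion values (J = 22 assumed)
        self.key_ffn = nn.Sequential(nn.Linear(d, d), nn.GELU(), nn.Linear(d, d))
        self.conv_in = nn.Conv1d(d, d, kernel_size=3, padding=1)
        layer = nn.TransformerEncoderLayer(d, nhead=8, dim_feedforward=4 * d,
                                           activation="gelu", batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.conv_out = nn.Conv1d(d, d, kernel_size=3, padding=1)
        self.head = nn.Linear(d, out_dim)

    def forward(self, key_tokens, imd_tokens, key_mask):
        # key_tokens, imd_tokens: (B, N, d); key_mask: (B, N) bool, True at keyframes
        m = torch.where(key_mask.unsqueeze(-1), self.key_ffn(key_tokens), imd_tokens)
        m = self.conv_in(m.transpose(1, 2)).transpose(1, 2)   # temporal smoothing
        m = self.encoder(m)
        m = self.conv_out(m.transpose(1, 2)).transpose(1, 2)
        return self.head(m)  # (B, N, out_dim)
```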
### Loss functions
The stages in our method are trained jointly in an end-to-end manner using a set of loss functions. To determine the loss for a motion sequence of length \(N\), we use \(\ell_{1}\) distance for the following features obtained from \(\hat{Y}\) and \(Y\):
* **Local/root position loss**: With predicted and real coordinate values of the root joint at the \(t\)-th frame as \(\hat{p}_{t,0}\in\mathbb{R}^{3}\) and \(p_{t,0}\in\mathbb{R}^{3}\) respectively, \[L_{root}=\frac{1}{N}\sum_{t=0}^{N-1}||\hat{p}_{t,0}-p_{t,0}||_{1}.\] (2)
* **Local rotation loss**: With predicted (pre-normalised) and expected (unit-normalised) quaternion values of all joints at frame \(t\) as \(\hat{q}_{t,j}\in\mathbb{R}^{4}\) and \(q_{t,j}\in\mathbb{R}^{4}\) respectively, \[L_{quat}=\frac{1}{NJ}\sum_{t=0}^{N-1}\sum_{j=0}^{J-1}||\hat{q}_{t,j}-q_{t,j}|| _{1}.\] (3)
* **Global position loss**: With predicted and real FK-derived coordinate values of all joints at frame \(t\) as \(\hat{P}_{t}\in\mathbb{R}^{J\times 3}\) and \(P_{t}\in\mathbb{R}^{J\times 3}\) respectively, \[L_{FK_{p}}=\frac{1}{NJ}\sum_{t=0}^{N-1}||\hat{P}_{t}-P_{t}||_{1}.\] (4)
* **Global rotation loss**: With predicted and real FK-derived quaternion values of all joints at frame \(t\) as \(\hat{Q}_{t}\in\mathbb{R}^{J\times 4}\) and \(Q_{t}\in\mathbb{R}^{J\times 4}\) respectively, \[L_{FK_{q}}=\frac{1}{NJ}\sum_{t=0}^{N-1}||\hat{Q}_{t}-Q_{t}||_{1}.\] (5)
Although quaternions are unit-normalised in practice, we found that calculating \(L_{quat}\) with non-normalised predictions resulted in improved gradient stability during the training process and, in turn, faster training convergence.
In summary, our training loss function \(L\) is as follows:
\[L=\alpha_{l}(L_{root}+L_{quat})+\alpha_{g}(L_{FK_{p}}+L_{FK_{q}}), \tag{6}\]
where \(\alpha_{l}\) and \(\alpha_{g}\) are the local and global feature loss scaling parameters, respectively. The accuracy of local attributes is best prioritised over that of global attributes, since normalised quaternions remain in use for deriving global features, which can lead to gradient instability [10, 15].
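A compact sketch of Equation (6), assuming the predicted and ground truth features are collected in dictionaries of PyTorch tensors:

```python
import torch

def interpolation_loss(pred: dict, gt: dict, alpha_l: float = 1.0,
                       alpha_g: float = 1.0) -> torch.Tensor:
    """Combined loss of Eq. (6); assumed shapes:
    root (N, 3), quat (N, J, 4), P (N, J, 3), Q (N, J, 4)."""
    # L1 over the last axis, averaged over frames (and joints where present)
    l1 = lambda a, b: (a - b).abs().sum(dim=-1).mean()
    L_root = l1(pred["root"], gt["root"])  # Eq. (2)
    L_quat = l1(pred["quat"], gt["quat"])  # Eq. (3); predictions unnormalised
    L_FKp = l1(pred["P"], gt["P"])         # Eq. (4)
    L_FKq = l1(pred["Q"], gt["Q"])         # Eq. (5)
    return alpha_l * (L_root + L_quat) + alpha_g * (L_FKp + L_FKq)
```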
## 4 Experiments and results
### Datasets and metrics
We benchmark our method in the motion interpolation task against both the state-of-the-art RNN-based network [15] and BERT-based network [10] in motion transition generation. To evaluate the effectiveness of each model, they are to complete the following motion interpolation task:
* **Input**: The model is provided with the keyframes \(K\) of an \(N\)-frame ground truth motion. While keyframes can be defined for any combination of frames, we place keyframes evenly every 5, 15, or 30 frames for consistency, starting with the first frame, e.g., \(K=\{x_{0},x_{5},x_{10},...\}\) for 5-frame intervals.
* **Expected output**: Each model is to output an \(N\)-frame motion \(\hat{Y}=\{\hat{y}_{0},\hat{y}_{1},...,\hat{y}_{N-1}\}\) given the keyframes \(K\). This output is compared with the ground truth with the L2P and L2Q metrics used in state-of-the-art comparisons [10, 15]. These metrics measure the average \(\ell_{2}\) errors of all positional and rotational attributes respectively for each pose. In addition, we apply the Normalised Power Spectrum Similarity (NPSS) [12] metric to measure visual similarities between the estimated and actual motion outputs.
We source our motions for both training and evaluation from the Ubisoft La Forge Animation (LaFAN1) dataset [15] and the Carnegie-Mellon University Motion Capture (CMU Mocap) dataset. The CMU Mocap motions are resampled from their original 120 frames per second (FPS) down to 30 FPS, both to reduce the computational cost and to match the frame rate of the LaFAN1 dataset. While LaFAN1 focuses largely on motions with visibly dynamic details such as locomotion and sports actions, CMU Mocap provides a larger variety of motions, many of which exhibit more minute details and ambiguous motion trajectories. From a data-level perspective, this suggests that root
position accuracy is more important in the LaFAN1 dataset compared to the CMU Mocap dataset. We employ both datasets to demonstrate our method's ability to adapt with different levels of motion dynamics.
For our model, the positional data of each motion sample is re-centred around the \(XYZ\) means of the keyframed root positions. Given that unit quaternion values are restricted to the range \([-1,1]\), while positional values can take any scalar value, we rescale positional data such that \(L_{\text{root}}\approx L_{\text{quat}}\) with the initial model weights. During training, we randomly select between \(\lfloor\frac{|Y|}{24}\rfloor\) and \(\lfloor\frac{|Y|}{4}\rfloor\) keyframes for each sampled motion \(Y\), with the first and last frames always stipulated as keyframes. Each motion sample batch has a random length of \(|Y|\in[72,144]\).
### Implementation details
We implement our method with 8 layers for each transformer, a token embedding size of \(d=512\), and an FFN size of \(4\times d\). Each multi-head attention layer is split between 8 attention heads. Since our method relies on coordination between Stage-I and Stage-II, the training stability of deeper models benefits greatly from larger batch sizes. For our 8-layer setting, we found that a batch size of 64 motions is sufficient for convergence.
We train our model using the Adam optimiser [20] for 50,000 epochs with a scheduled learning rate. Specifically, we employ both a warm-up and decay strategy for our learning rate using the following strategy [42]:
\[lr(e)=4\text{e-4}\times\text{min}(e^{-\frac{1}{2}},e\times 1000^{-\frac{3}{2}}), \tag{7}\]
where \(e\) denotes the current training epoch. In addition, we linearly scale \(\alpha_{g}\in[0,1]\) for 1,000 epochs after warm-up, in order to avoid conflicting gradients caused by ambiguous quaternion polarity when backpropagating through FK functions, i.e. \(FK(Q)=FK(-Q)\) for any set of joint rotation quaternions \(Q\). We set \(\alpha_{l}=1\) throughout the training process.
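For reference, Equation (7) can be implemented directly; the epoch-zero guard is our own assumption:

```python
def learning_rate(e: int, base: float = 4e-4, warmup: int = 1000) -> float:
    """Warm-up followed by inverse-square-root decay, as in Eq. (7)."""
    e = max(e, 1)  # assumption: guard against e = 0
    return base * min(e ** -0.5, e * warmup ** -1.5)
```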
### Comparison to the state-of-the-art
We benchmark the performance of our method against state-of-the-art models by evaluating their L2P, L2Q, and NPSS metrics on the testing datasets. Table 1 compares the performance of our architecture against the BERT-based motion in-painting transformer [10], the encoder-decoder-based \(\Delta\)-interpolator [31], the RNN-based approach TGcomplete [15], and the masked auto-encoder (MAE) architecture [16].

Table 1: L2P, L2Q, and NPSS comparison for keyframe intervals of 5, 15, and 30 frames, across motion categories of the evaluation datasets, between LERP, BERT, \(\Delta\)-interpolator, TGcomplete, MAE, and our method.
The quantitative performance of our model is greatly improved over all existing methods, as well as LERP, in the large majority of keyframing scenarios. The improvement of our method over LERP increases with the length of the keyframe intervals, as learning-based methods provide the opportunity to reconstruct non-linear motion details. Note that for a short keyframe interval, a linear estimation (\(f(x+\Delta x)=f(x)+f^{\prime}(x)\Delta x+o(\Delta x)\)) of a continuous motion (function) can be relatively accurate, which explains the similar performance between LERP and our method in the 5-frame interval setting, while other existing methods are significantly worse than LERP. Our model notably outperforms both TGcomplete and MAE with its single token mask in every scenario. Thus, a clear motion interpolation improvement can be observed from our decoupling strategy with the motion manifold technique, compared to the RNN model.
The BERT-based method [10] exhibits a clear performance disadvantage due to its reliance on LERP-based input mask tokens. By deriving the mask token embeddings from a sub-optimal estimation, the self-attention mechanisms tend to converge towards reproducing the input token rather than composing more realistic poses, as this is close to a trivial local minimum. Consequently, such models never learn to fully consider the keyframe tokens as their main source of information. Figure 3 highlights the near-identical latent manifolds of the LERP output and the BERT-based evaluation. We observe a similar behaviour with the \(\Delta\)-interpolator model, where LERP-based transformations are applied as a post-processing step [31]. While its \(\Delta\)-mode strategy allows the model to achieve marginal improvements over LERP more frequently, it is still heavily reliant on the performance of LERP, which does not bode well for longer keyframe intervals. From this, we deduce that realistic interpolation is difficult for LERP-reliant solutions. Conversely, our manifold learning approach fully considers the continuous joint positions and rotations of the input keyframes, and is able to converge upon a significantly more optimal solution.
Table 2 documents an ablation study for each of our architecture's contributions. Major improvements to the architecture's L2P, L2Q, and NPSS performance can be observed for the inclusion of manifold self-attention in Stage-III, the replacement of LayerNorm with our sequence-level re-centering normalisation scheme, and the concatenation (rather than addition) of PE.
Figure 4: Sample motion manifolds obtained by t-SNE.
Figure 5: Performance improvement of our architecture by joints in L2P + L2Q, compared to existing methods.
\begin{table}
\begin{tabular}{|l|c|c c c|} \hline
 & \# Params & \multicolumn{3}{c|}{\(|Y|\)} \\
 & & 31 & 61 & 121 \\ \hline
LERP & - & 0.0330 & 0.0360 & 0.0370 \\
TGcomplete & 15.6M & 0.5001 & 1.0272 & 1.9774 \\
BERT & 29.3M & 0.0541 & 0.0570 & 0.0596 \\
MAE & 54.9M & 0.0755 & 0.0793 & 0.0820 \\
Ours & 83.2M & 0.0793 & 0.0830 & 0.0850 \\ \hline
\end{tabular}
\end{table}
Table 3: Inference time in seconds and parameter count for different motion lengths \(|Y|\). Keyframes of each evaluation were evenly placed every 15 frames, starting from the first frame.
Figure 3: An example of motion interpolation by each of the tested methods, compared with the ground truth motion (first row). The green curves indicate the offsets regarding the \(\ell_{1}\) distance between the interpolation and ground truth manifolds. Less turbulent offsets indicate more visually similar motion predictions.
It should be noted that the Stage-I and Stage-II only variant of our model (i.e., (a) in Table 2) is structurally identical to the \(\Delta\)-interpolator model with \(\Delta\)-mode disabled [31]. In addition, we demonstrate the use of \(\Phi^{\text{key}}\) token representations over separately trained keyframe embeddings in Stage-III, which leads to improved convergence of deeper architecture settings. We further demonstrate the importance of such deeper settings, which provide a significant boost to our model's evaluation accuracy.
Figure 4 visualises the latent motion manifolds in 2D and 3D spaces for our method, LERP, and the ground truth using t-SNE analysis. The manifold of our method is obtained from the inputs of Stage-III, while those of LERP and the ground truth are obtained from the pose data. It can be observed that the lower-dimensional curves (i.e., manifolds) represent the higher-dimensional motion in a smooth manner, and ours is very close to the one associated with the ground truth, compared with LERP. This indicates the superiority of our method in deriving a high-quality latent motion space. Figure 3 illustrates an example of a hopping motion interpolated by different methods with quantitative metrics. The ground truth data is reduced to a motion manifold with t-SNE. The offset of the manifold from each interpolation method compared to the ground truth is obtained by \(\ell_{1}\) distance for visualisation. In particular, the offset values are enlarged for observation purposes. Our method yields the best motion manifold, with the least offset from the ground truth manifold.
Figure 5 dissects the L2P and L2Q improvements of our method into individual joints. We can clearly observe that the main improvements of our method over LERP lie within the global positions and rotations of the foot joints, whereas improvements are more widely spread compared to the MAE and RNN-based methods. On average, our approach brings an L2P and L2Q benefit to all joints.
### Inference latency
The inference time of different motion interpolation methods was evaluated in our experiments, as low visual latency for keyframe adjustments is important for efficient animation workflows as well as real-time applications. We implemented these methods with PyTorch on an AMD Ryzen 9 3950X processor and an NVIDIA GeForce RTX 3090 GPU. Table 3 shows that the parallelism provided by the transformer is highly beneficial to our method when interpolating complete motion sequences. Our method shows a similar order of time complexity to LERP, consistently taking around \(2.5\times\) the LERP inference time, and is significantly faster than the RNN-based approach. In addition, our method's inference time is stable across different sequence lengths, whilst the RNN-based approach shows significantly increasing latency from 31-frame to 121-frame sequences.
### Extension to motion completion
Though our model is designed for sparse keyframe interpolation, it can additionally perform motion completion, since completion can be posed as interpolation over a specifically constructed keyframe set. Table 4 compares the performance of our model against linear interpolation and the state-of-the-art models for motion completion. With the benefit of the motion context, our model outperforms LERP, but does not match the efficacy of the RNN-based [15] and transformer-based [31, 10] models.
### Limitations and future work
One limitation of our approach is that its maximum motion length is limited by the length of its training samples. Unlike most transformer-based solutions that can trivially employ relative positional encodings [35, 5], our method relies on continuous positional vectors, such as sinusoidal encodings, for its manifold representations, and thus cannot employ the same model to accept inputs of arbitrary length. Further research for a compatible relative position representation would allow our approach to be seamlessly applied in keyframed animation workflows for longer sequences.
## 5 Conclusion
This paper presents a three-stage transformer-based motion interpolation method. We begin by producing learned motion keyframe context tokens in Stage-I. With context-guided attention, Stage-II generates embeddings for intermediate tokens by inferencing an implicitly constrained latent motion manifold under the guidance of the keyframe context tokens. Stage-III takes both the keyframe tokens and the intermediate tokens to compose the interpolated motion sequence. In addition, we introduce a novel sequence-level re-centering technique to address the feature biases that are prevalent in sequences of continuous attributes. We demonstrate the superior interpolation accuracy of our approach compared with existing RNN and masked transformer methods. As our architecture is designed for any masked sequence-to-sequence task with continuous attributes, we believe that its applications extend beyond motion interpolation.
## Acknowledgment
This research was in part supported by Australian Research Council (ARC) grant #DP210102674.
\begin{table}
\begin{tabular}{|c|c c c|c c c|c c c|}
 & \multicolumn{3}{c|}{**L2P**} & \multicolumn{3}{c|}{**L2Q**} & \multicolumn{3}{c|}{**NPSS**} \\ \hline
Interval & 5 & 15 & 30 & 5 & 15 & 30 & 5 & 15 & 30 \\ \hline
LERP & 0.35 & 1.28 & 2.46 & 0.22 & 0.66 & 1.17 & 0.0021 & 0.0430 & 0.2663 \\
TGcomplete & 0.22 & 0.64 & 1.25 & 0.17 & 0.45 & 0.68 & 0.0019 & 0.0247 & 0.1298 \\
BERT & 0.22 & 0.60 & 1.14 & 0.15 & 0.38 & 0.60 & 0.0106 & 0.2051 & 0.1270 \\
\(\Delta\)-interpolator & 0.16 & 0.53 & 1.05 & 0.12 & 0.33 & 0.59 & 0.0015 & 0.0238 & 0.1272 \\ \hline
Ours & 0.30 & 0.71 & 1.26 & 0.21 & 0.40 & 0.63 & 0.0019 & 0.0284 & 0.1393 \\ \hline
\end{tabular}
\end{table}
Table 4: Motion completion performance of our method, based on the Harvey et al. (2020) [15] setup. |
2303.11686 | Learning a 3D Morphable Face Reflectance Model from Low-cost Data | Modeling non-Lambertian effects such as facial specularity leads to a more
realistic 3D Morphable Face Model. Existing works build parametric models for
diffuse and specular albedo using Light Stage data. However, only diffuse and
specular albedo cannot determine the full BRDF. In addition, the requirement of
Light Stage data is hard to fulfill for the research communities. This paper
proposes the first 3D morphable face reflectance model with spatially varying
BRDF using only low-cost publicly-available data. We apply linear shininess
weighting into parametric modeling to represent spatially varying specular
intensity and shininess. Then an inverse rendering algorithm is developed to
reconstruct the reflectance parameters from non-Light Stage data, which are
used to train an initial morphable reflectance model. To enhance the model's
generalization capability and expressive power, we further propose an
update-by-reconstruction strategy to finetune it on an in-the-wild dataset.
Experimental results show that our method obtains decent rendering results with
plausible facial specularities. Our code is released
\href{https://yxuhan.github.io/ReflectanceMM/index.html}{\textcolor{magenta}{here}}. | Yuxuan Han, Zhibo Wang, Feng Xu | 2023-03-21T09:08:30Z | http://arxiv.org/abs/2303.11686v1 | # Learning a 3D Morphable Face Reflectance Model from Low-cost Data
###### Abstract
Modeling non-Lambertian effects such as facial specularity leads to a more realistic 3D Morphable Face Model. Existing works build parametric models for diffuse and specular albedo using Light Stage data. However, diffuse and specular albedo alone cannot determine the full BRDF. In addition, the requirement of Light Stage data is hard to fulfill for the research communities. This paper proposes the first 3D morphable face reflectance model with spatially varying BRDF using only low-cost publicly-available data. We apply linear shininess weighting into parametric modeling to represent spatially varying specular intensity and shininess. Then an inverse rendering algorithm is developed to reconstruct the reflectance parameters from non-Light Stage data, which are used to train an initial morphable reflectance model. To enhance the model's generalization capability and expressive power, we further propose an update-by-reconstruction strategy to finetune it on an in-the-wild dataset. Experimental results show that our method obtains decent rendering results with plausible facial specularities. Our code is released here.
## 1 Introduction
3D Morphable Face Models (3DMM) [4, 22] have attracted much attention in the past two decades, as they provide a powerful and compact statistical prior of 3D face geometry and appearance with dense point-to-point correspondence for various downstream applications like face reconstruction [17, 25, 58, 61, 62], rendering [16, 59, 63, 64, 77], and animation [3, 8, 11, 23, 24, 74]. Existing works [58, 60, 61] have demonstrated promising results for improving the generalization capability and expressive power of 3DMM under the assumption that faces are Lambertian surfaces. However, it remains challenging to model non-Lambertian effects such as facial specularity in a 3DMM, even though doing so would lead to a more realistic face model.
A few recent works [40, 56] involve non-Lambertian facial reflectance in the morphable face model. Using a Light Stage [14, 27, 43], they capture diffuse and specular albedo maps of tens of participants. Then, they model the diffuse and specular albedo by training a PCA model [56] or a deep generative network [40] on the acquired data. However, the diffuse and specular albedo alone cannot determine the complete Bidirectional Reflectance Distribution Function (BRDF). Thus, other works [18, 19, 20] set the remaining reflectance parameters (_e.g._ the specular exponent for the Blinn-Phong BRDF [5], or the roughness for the Torrance-Sparrow BRDF [65]) of all face vertices to a reasonable value to characterize specular shininess and obtain the complete BRDF. As shown in Figure 7, these spatially uniform parameters lead to unpleasing rendering results since face reflectance is inherently spatially varying [71]. Besides, the
requirement of Light Stage data is hard to fulfill since building a Light Stage is quite difficult, and no publicly available Light Stage dataset is sufficient to construct a 3DMM.
To overcome these limitations, we propose and train the first morphable face reflectance model with spatially varying BRDF from low-cost publicly-available data. Inspired by previous works [45, 46], we represent face reflectance as a Lambertian BRDF combined with a linear combination of several Blinn-Phong BRDFs corresponding to different predefined specular exponents. Thus, the reflectance parameters of each face vertex include an RGB color for the Lambertian BRDF and a set of weights for the Blinn-Phong BRDFs. As illustrated in Figure 2, our representation can naturally modulate specular intensity and shininess by adjusting the absolute and relative scales of the linear combination weights, respectively. Compared to previous works [40, 56] that do not model specular shininess, this representation defines a complete BRDF within the 3DMM. Compared to the traditional Blinn-Phong BRDF, which models specular intensity and shininess in a nonlinear formulation [5], our linear representation (Equation (2)) makes it much easier to reconstruct the reflectance parameters from recorded images. With this linear reflectance representation, we develop an inverse rendering approach to estimate the spatially varying reflectance parameters for the 128 selected identities in Multi-PIE [28], a public dataset with face images captured under controlled camera views and light directions. Then, we learn a PCA model for the estimated reflectance parameters as our initial morphable face reflectance model.
Considering that the Multi-PIE dataset only contains 128 identities, which is far from sufficient to capture the variability of human faces, we propose to finetune the initial model on a large-scale in-the-wild dataset, FFHQ [32], to improve its generalization capability and expressive power. As the inputs are in-the-wild images with unknown lighting information, it is not easy to reconstruct accurate reflectance from them. Our key observation is that, on the one hand, we already have an initial parametric reflectance model that can better formulate the reflectance reconstruction from in-the-wild images; on the other hand, the reconstructed reflectance from in-the-wild data can provide feedback to enhance the face prior knowledge in our morphable reflectance model. Based on this observation, we jointly reconstruct the face reflectance coefficients and update the parameters of our morphable face reflectance model (the mean and bases). Another challenge here is to predict high-order spherical harmonics (SH) lighting [48] for in-the-wild images, which is crucial for updating the high-frequency information of our non-Lambertian reflectance model [49]. To solve this problem, we build another PCA model for real-world environment lighting in SH coefficient space, which largely reduces the search space of the high-order SH coefficients. During face reconstruction, we first predict the parameters of the PCA lighting model and then retrieve the high-order SH coefficients from it. Finally, the in-the-wild images are well reconstructed with our parametric reflectance model, and the model itself is gradually updated in this process to achieve high generalization capability and expressive power.
In summary, our contributions include:
* We propose the first 3D morphable face reflectance model with spatially varying BRDF and a technique to train the model with low-cost publicly-available data.
* We apply linear shininess weighting into parametric face modeling to represent spatially varying specular shininess and intensity and to ease the process of reconstructing reflectance from images.
* We propose an update-by-reconstruction strategy to finetune our face reflectance model on an in-the-wild dataset, improving its generalization capability and expressive power.
## 2 Related Work
**3D Morphable Face Model.** The original 3DMM, proposed by Blanz and Vetter [4], learns a PCA model to represent 3D face shape and texture from 200 scans. This seminal work has motivated substantial follow-ups over the past two decades [22]. Paysan et al. [47] propose the Basel Face Model (BFM), the first 3DMM available to the public. However, BFM and the original 3DMM can only model neutral faces. To handle expression variation, Li et al. [41] propose the FLAME model with additive expression bases trained from 4D scans. Cao et al. [9] build a bilinear expression model from a database with multi-expression scans of the same person. Another class of works attempts to better capture human face variation by scaling up the number of scans for 3DMM training. Dai et al. [13] and Booth et al. [6] learn large-scale 3DMMs from 1.2k and 10k subjects, respectively. However, all of these previous works approximate the face as a Lambertian surface and ignore the modeling of non-Lambertian reflectance. Recently, some works [40, 56] build morphable models for diffuse and specular albedo using Light Stage scans [57, 27, 43]. However, specular shininess is ignored in their models. In addition, their requirement for Light Stage scans is hard to fulfill for the research community. Our method can represent both spatially varying specular intensity and shininess while only using low-cost publicly-available data as the training set.
More recently, some works [66, 67, 58, 60, 61] propose to learn 3DMM from large-scale 2D datasets by jointly performing face reconstruction and face model learning. These methods can learn a 3DMM that generalizes well across the population. Tewari et al. [58] learn linear face shape and texture models from videos by designing novel loss functions to handle depth ambiguity. A follow-up work [60] learns a complete 3DMM, including shape, expression, and
texture, from videos and neutral face images. Tran et al. [66, 67] learn a non-linear 3DMM from 2D image collections, using deep neural networks to model face shape and texture. Inspired by these works, we finetune our initial face reflectance model on an in-the-wild face image dataset to improve its generalization capability and expressive power.
**Face Appearance Capture.** Existing methods for face appearance capture [35] fall into two categories: image-based methods and model-based methods. The key idea of the image-based method is to capture a set of images to sample the light transport function, so that novel appearances can be obtained by linearly recombining these images. To fulfill this, Debevec et al. [14] construct the Light Stage to capture the light transport function by programmatically activating one light at a time (OLAT). Using the captured OLAT data as the training set, Mallikarjun et al. [44] propose a learning-based method to infer the whole light transport function from a monocular input face image, and Kumar et al. [36] build a statistical model for the light transport function at a fixed frontal viewpoint. By directly modeling the light transport function, these image-based methods can represent specularities, sub-surface scattering, and other high-order effects caused by the complex interaction between light and the face surface. However, they cannot export geometry or material assets for further uses like material editing or animation.
Model-based methods capture the parameters of a reflectance model and utilize the rendering equation [30] to synthesize novel appearances. Previous works [57, 27, 43] use polarised illumination to directly capture the diffuse and specular albedo of the human face. Using the captured data from [57, 43, 27], recent works train a neural network to map a monocular face image into its diffuse and specular albedo map [38, 73] or build a morphable model for these maps [56, 40]. Another class of works adopts an inverse rendering framework to estimate the face reflectance parameters from images. Weyrich et al. [71] develop a novel reflectance model for faces and estimate its parameters from dense OLAT images captured by a Light Stage [14]. Riviere et al. [51] leverage images captured by a lightweight single-shot system for inverse rendering. In our method, we estimate the reflectance parameters for each identity in the Multi-PIE dataset from the provided OLAT images via an inverse rendering framework and use these parameters to build an initial morphable face reflectance model.
More recently, some works attempt to capture face appearance using a low-cost setup, such as a single selfie video of the subject rotating under the sun [68], a recorded video of the subject illuminated by a desktop monitor [54], or co-located capture sequences [2, 55]. However, all of these previous works are person-specific. Our goal is to build a generic morphable reflectance model using low-cost data.
## 3 Method
In this Section, we first introduce the representation of our morphable face reflectance model (Section 3.1), then propose a method to learn this model from low-cost publicly-available data. Specifically, we first learn an initial model from the Multi-PIE dataset (Section 3.2) and then finetune it on the FFHQ dataset to improve its generalization capability and expressive power (Section 3.3).
### Morphable Face Reflectance Model
Our goal is to design a morphable model to represent the spatially varying BRDF of the human face across the population. To this end, we build our model upon the BFM09 [47] geometry model and assign the spatially varying reflectance parameters to its vertices. We employ a linear model for the reflectance parameters of the human face:
\[R=\bar{R}+\mathrm{M}_{\mathrm{R}}\cdot\beta \tag{1}\]
Here, \(\bar{R}\in\mathbb{R}^{kV}\) and \(\mathrm{M}_{\mathrm{R}}\in\mathbb{R}^{kV\times N_{R}}\) are the mean and bases of face reflectance parameters, respectively; \(N_{R}\) is the number of bases; \(k\) is the number of reflectance model parameters for each vertex; \(V\) is the number of vertices of the BFM09 geometry model; \(\beta\in\mathbb{R}^{N_{R}}\) is the morphable model coefficients. Note that previous works [58, 26, 60] represent face reflectance as the Lambertian BRDF. So in their scenario \(k=3\) to represent the RGB diffuse color.
Next, we first introduce our reflectance representation. Then, we illustrate our efficient shading technique for this representation under directional or environmental illumination, which can accelerate the model learning process detailed in later Sections.
**Reflectance Representation.** To model non-Lambertian effects such as facial specularity, we incorporate a diffuse term and a specular term in our face reflectance representation \(f_{r}\). We instantiate them as the Lambertian BRDF and the linear combination of several Blinn-Phong BRDFs [5] with different predefined specular exponents, respectively:
\[f_{r}(\mathbf{l},\mathbf{v},\mathbf{n})=\frac{c}{\pi}+\sum_{i=1}^{k_{bp}}w_{i}\cdot f_{i}\cdot\frac{\langle\mathbf{h},\mathbf{n}\rangle^{p_{i}}}{\langle\mathbf{l},\mathbf{n}\rangle} \tag{2}\]
Figure 2: Our reflectance representation can naturally modulate specular intensity and shininess by adjusting the absolute or relative scales of the linear combination weights. Here we show an example using the linear combination of 2 Blinn-Phong BRDFs with specular exponents \(p_{1}\!=\!8\) and \(p_{2}\!=\!64\), respectively. Note the changes in specular intensity and shininess under different linear combination weights \(w_{1}\) and \(w_{2}\).
Here, **l**, **v**, and **n** indicate the incident light direction, viewing direction, and normal direction, respectively; \(c\) is the RGB diffuse color; \(w_{i}\) are the linear combination weights; \(p_{i}\) are the predefined specular exponents; \(k_{bp}\) is the number of Blinn-Phong BRDFs; \(f_{i}=\frac{p_{i}+2}{4\pi\cdot(2-2^{-\frac{p_{i}}{2}})}\) are the energy normalization factors [1] such that each Blinn-Phong lobe integrates to 1; \(\langle\cdot,\cdot\rangle\) is the clamped cosine function; \(\textbf{h}=\frac{\textbf{v}+\textbf{l}}{||\textbf{v}+\textbf{l}||_{2}}\) is the half vector [1].
In our scenario, the reflectance parameters for each face vertex are the diffuse color \(c\) and the \(k_{bp}\) linear combination weights \(w_{i}\). Thus, each face vertex has \(k=k_{bp}+3\) reflectance parameters attached to it. Note that the specular exponents \(p_{i}\) are predefined and shared by all face vertices; they are hyper-parameters in our model. Our representation can naturally modulate the specular intensity and shininess. As illustrated in Figure 2, doubling all the weights would double the specular intensity, while adjusting the aspect ratio between weights would change the specular shininess. Moreover, our reflectance representation reduces to previous works [18, 19, 20] with spatially varying specular albedo and a global specular exponent when \(k_{bp}=1\).
**Efficient Shading.** For _directional illumination_, by denoting the incoming irradiance from direction **l** as \(E\), we can directly obtain the shading:
\[s=E\cdot(\frac{c}{\pi}\cdot\langle\textbf{l},\textbf{n}\rangle+\sum_{i=1}^{k_ {bp}}w_{i}\cdot f_{i}\cdot\langle\textbf{h},\textbf{n}\rangle^{p_{i}}) \tag{3}\]
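As a concrete reading of Equation (3) (and hence of the reflectance representation in Equation (2)), the following NumPy sketch evaluates the shading of one surface point under a directional light; the array shapes and the function name are our own assumptions.

```python
import numpy as np

def directional_shading(c, w, p, l, v, n, E=1.0):
    """Shading of one surface point under a directional light, per Eq. (3).

    c: (3,) diffuse RGB; w, p: (k_bp,) mixture weights and exponents;
    l, v, n: unit light, view, and normal directions (assumed shapes).
    """
    h = (l + v) / np.linalg.norm(l + v)                 # half vector
    cos_ln = max(float(np.dot(l, n)), 0.0)              # clamped cosines
    cos_hn = max(float(np.dot(h, n)), 0.0)
    f = (p + 2.0) / (4.0 * np.pi * (2.0 - 2.0 ** (-p / 2.0)))  # normalisation [1]
    return E * (c / np.pi * cos_ln + np.sum(w * f * cos_hn ** p))
```

With the paper's setting (\(k_{bp}=3\) and exponents 1, 8, and 64; see Section 4.1), `w` would hold the three per-vertex weights.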
For _environmental illumination_, we denote the incoming radiance from direction **l** as \(E(\textbf{l})\). According to the rendering equation [30], we have:
\[s=\int_{\textbf{l}\in\Omega^{+}}E(\textbf{l})\cdot f_{r}(\textbf{l},\textbf{v},\textbf{n})\cdot\langle\textbf{l},\textbf{n}\rangle\mathrm{d}\textbf{l} \tag{4}\]
Here, \(\Omega^{+}\) is the upper hemisphere centered by **n**. Substituting Equation (2) into Equation (4), we then separate the shading \(s\) into a diffuse part \(s_{d}\) and a specular part \(s_{s}\):
\[s =s_{d}+s_{s},\mathrm{where} \tag{5}\] \[s_{d} =\int_{\textbf{l}\in\Omega^{+}}\frac{c}{\pi}\cdot E(\textbf{l}) \cdot\langle\textbf{l},\textbf{n}\rangle\mathrm{d}\textbf{l}\] (6) \[s_{s} =\int_{\textbf{l}\in\Omega^{+}}\sum_{i=1}^{k_{bp}}w_{i}\cdot f_{i }\langle\textbf{h},\textbf{n}\rangle^{p_{i}}\cdot E(\textbf{l})\mathrm{d} \textbf{l} \tag{7}\]
We can efficiently compute \(s_{d}\) and \(s_{s}\) in the frequency space [49]:
\[s_{d} =\frac{c}{\pi}\cdot\sum_{l=0}^{L}\sum_{m=-l}^{l}A_{l}\cdot K_{lm}\cdot Y_{lm}(\textbf{n}), \tag{8}\] \[s_{s} =\sum_{i=1}^{k_{bp}}\sum_{l=0}^{L}\sum_{m=-l}^{l}w_{i}\cdot B_{l}^{i}\cdot K_{lm}\cdot Y_{lm}(\textbf{r}). \tag{9}\]
Here, \(L\) is the SH order, \(A_{l}\), \(B_{l}^{i}\), and \(K_{lm}\) are the SH coefficients of the Lambertian BRDF, Blinn-Phong BRDF with specular exponent \(p_{i}\), and the environmental illumination, respectively; \(Y_{lm}(\cdot)\) are the SH basis functions; \(\textbf{r}=\frac{2(\textbf{n}\cdot\textbf{v})\textbf{n}-\textbf{v}}{||2( \textbf{n}\cdot\textbf{v})\textbf{n}-\textbf{v}||_{2}}\) is the specular reflect direction [1].
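As a concrete reading of Equations (8) and (9), the NumPy sketch below evaluates the shading for one vertex, assuming precomputed \(A_{l}\), \(B_{l}^{i}\), and \(K_{lm}\) tables and SH basis values flattened in \((l,m)\) order; these layouts are our own assumptions.

```python
import numpy as np

def sh_shading(c, w, A, B, K_lm, Y_n, Y_r):
    """Frequency-space shading per Eqs. (8)-(9) for a single vertex (sketch).

    A: (L+1,) Lambertian BRDF SH coefficients; B: (k_bp, L+1) Blinn-Phong
    BRDF SH coefficients; K_lm: ((L+1)**2, 3) lighting SH coefficients;
    Y_n, Y_r: ((L+1)**2,) SH basis values at the normal / reflect directions.
    """
    L = A.shape[0] - 1
    # degree index l for each flattened (l, m) slot: 2l + 1 entries per degree
    deg = np.repeat(np.arange(L + 1), 2 * np.arange(L + 1) + 1)
    s_d = c / np.pi * np.sum(A[deg, None] * K_lm * Y_n[:, None], axis=0)  # Eq. (8)
    s_s = np.sum(w[:, None, None] * B[:, deg, None] * K_lm[None]
                 * Y_r[None, :, None], axis=(0, 1))                       # Eq. (9)
    return s_d + s_s
```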
### Initial Model Learning
In this part, we propose a method to learn the mean \(\bar{R}\) and bases \(\mathrm{M_{R}}\) of our morphable face reflectance model from the publicly-available Multi-PIE dataset [28]. Specifically, we first estimate the reflectance parameters for each identity in the dataset via inverse rendering, and then train a PCA model for them.
**Dataset Preprocessing.** The Multi-PIE dataset contains 337 identities captured under 15 viewpoints and 19 illuminations. We first exclude subjects with facial accessories or hair occlusions, resulting in 128 identities. Then, we manually select 9 viewpoints and 12 illuminations (including 11 directional flash images and 1 room light image) with good color consistency to train our model. By removing the room light effect in the flash images, we obtain 11 OLAT images per viewpoint. We adopt a simple model-based approach to simultaneously reconstruct the BFM09 geometry coefficients of each identity, the camera parameters of each viewpoint, and the position of each flash. See more implementation details in our _Supplementary Material_.
**Reference Parameter Estimation.** For a specific identity, we estimate all the reflectance parameters in UV space, including the diffuse color map \(C\in\mathbb{R}^{3\times H\times W}\) and the linear combination weight map \(W\in\mathbb{R}^{k_{bp}\times H\times W}\). We unwarp all the OLAT images into UV space and denote the one captured under the \(i\)-th viewpoint and the \(j\)-th directional flash illumination as \(I_{ij}^{uv}\in\mathbb{R}^{3\times H\times W}\). From the reconstructed face geometry and scene information, we precompute the incident light direction \(\textbf{l}_{j}^{uv}\) for each flash, the view direction \(\textbf{v}_{i}^{uv}\) for each camera, the normal direction \(\textbf{n}^{uv}\) for the face geometry, and the shadow mask\({}^{1}\) \(M_{ij}^{uv}\) for each OLAT image in the UV space. By predefining a reasonable incoming irradiance \(E\) from the directional flash\({}^{2}\), we obtain the reconstructed OLAT image \(\hat{I}_{ij}^{uv}\) using the efficient shading technique under directional illumination presented in Equation (3):

\[\hat{I}_{ij}^{uv}=E\cdot(\frac{C}{\pi}\cdot\langle\textbf{l}_{j}^{uv},\textbf{n}^{uv}\rangle+\sum_{k=1}^{k_{bp}}W_{k}\cdot f_{k}\cdot\langle\textbf{h}_{ij}^{uv},\textbf{n}^{uv}\rangle^{p_{k}}) \tag{10}\]

Footnote 1: We obtain the shadow masks via ray tracing.

Footnote 2: If the lighting is unknown, there is an inevitable global scale between the reflectance parameters estimated by the inverse rendering method and the ground truth. See more theoretical analysis in [49].
Here, \(\textbf{h}_{ij}^{uv}\) is the half vector UV map obtained by \(\textbf{l}_{j}^{uv}\) and \(\textbf{v}_{i}^{uv}\). We optimize the diffuse color map \(C\) and linear combination weight map \(W\) with loss:
\[\operatorname*{arg\,min}_{C,W}\mathcal{L}_{recon}+w_{reg}\mathcal{L}_{reg} \tag{11}\]
\(\mathcal{L}_{recon}\) is the weighted L1 reconstruction loss:
\[\mathcal{L}_{recon}=\sum_{i,j}\langle\mathbf{l}_{j}^{uv},\mathbf{n}^{uv}\rangle\cdot||M_{ij}^{uv}\cdot(\hat{I}_{ij}^{uv}-I_{ij}^{uv})||_{1}, \tag{12}\]
\(\mathcal{L}_{reg}\) is designed to restrict the reflectance parameters to be non-negative:
\[\mathcal{L}_{reg}=-M_{C}\cdot C-M_{W}\cdot W \tag{13}\]
Here, \(M_{C}\) and \(M_{W}\) are the masks that indicate negative values in \(C\) and \(W\), respectively. During parameter estimation, we randomly horizontal flip \(C\) and \(W\) to introduce symmetric constraints [72] to the reflectance parameter map.
Compared to the traditional parameterization of the Blinn-Phong BRDF by specular intensity and exponent, our linear representation is much easier to use for parameter estimation in practice.
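The following PyTorch sketch outlines this per-identity optimization (Equations (10)-(13)); the function name, the initial values, and the omission of the random horizontal flip are assumptions made for brevity.

```python
import torch

def estimate_reflectance(I_uv, M_uv, cos_ln, cos_hn, p,
                         E=1.0, steps=2000, w_reg=100.0, lr=5e-3):
    """Per-identity inverse rendering sketch for Eqs. (10)-(13).

    I_uv:  (K, 3, H, W) unwarped OLAT images (one per view/flash pair)
    M_uv:  (K, 1, H, W) shadow masks
    cos_ln, cos_hn: (K, 1, H, W) precomputed clamped cosines <l,n> and <h,n>
    p:     (k_bp,) predefined specular exponents
    """
    _, _, H, W = I_uv.shape
    C = torch.full((3, H, W), 0.5, requires_grad=True)        # diffuse colour map
    Wm = torch.full((len(p), H, W), 0.1, requires_grad=True)  # weight map
    f = (p + 2) / (4 * torch.pi * (2 - 2.0 ** (-p / 2)))      # energy factors
    opt = torch.optim.Adam([C, Wm], lr=lr)
    for _ in range(steps):
        spec = (Wm[None] * f.view(1, -1, 1, 1)
                * cos_hn ** p.view(1, -1, 1, 1)).sum(dim=1, keepdim=True)
        pred = E * (C[None] / torch.pi * cos_ln + spec)       # Eq. (10)
        recon = (cos_ln * M_uv * (pred - I_uv).abs()).mean()  # Eq. (12)
        # Eq. (13): penalise only negative values, i.e. relu(-x)
        reg = torch.relu(-C).mean() + torch.relu(-Wm).mean()
        loss = recon + w_reg * reg                            # Eq. (11)
        opt.zero_grad(); loss.backward(); opt.step()
    return C.detach(), Wm.detach()
```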
**Model Learning.** With the estimated reflectance parameter maps for each identity, we can build our initial morphable face reflectance model. Similar to AlbedoMM [56], we learn a PCA model only for the diffuse albedo. Then, we transfer it to the specular weights by using the same linear combinations of the training samples to form the bases. Thus, we can use the same coefficients \(\beta\) for the diffuse and specular reflectance parameters, as in Equation (1), while keeping the orthonormality of the diffuse bases so that users can use our diffuse model independently.
### Model Finetuning
To improve the generalization capability and expressive power of our initial morphable face reflectance model, we finetune it on an in-the-wild face image dataset, FFHQ [32], by jointly performing face reconstruction and model updating.
**Dataset Preprocessing.** Before model finetuning, we use an off-the-shelf method [17] to estimate the BFM09 geometry coefficients and head pose for each image in the dataset. To further improve the geometry reconstruction accuracy, we apply an offline optimization using the same loss functions as [17]. Finally, we obtain the shape coefficients \(\alpha\), expression coefficients \(\delta\)\({}^{3}\), and head pose \(\mathbf{R},\mathbf{t}\) for each image. Similar to [17], we use the perspective camera model with a reasonable predefined focal length to represent the 3D-2D projection \(\Pi\).
**Network Architecture.** As illustrated in Figure 3, given a single face image \(I\) as input, our face reconstruction network \(E_{\theta}(\cdot)\) predicts the reflectance model coefficients \(\beta\) and the SH lighting. Combined with the geometry parameters \(\alpha,\delta\), head pose \(\mathbf{R},\mathbf{t}\), and the projection \(\Pi\), we can obtain the reconstructed image \(\hat{I}\) via a differentiable rasterizer [37, 50] using the efficient shading technique presented in Equations (8) and (9).
Footnote 3: The expression bases are adapted from FaceWarehouse [9] since BFM09 does not model expression. See more details in [17].
To update the high-frequency information in our non-Lambertian reflectance representation, we need to predict high-order SH lighting [49]. We adopt 8-order SH lighting with 273 parameters in our method, as in [20, 39]. However, if handled naively, the network cannot predict reasonable SH lighting due to the large search space of the high-order SH coefficients, as shown in Figure 5. To constrain the search space, we build a PCA model for real-world environmental lighting in SH coefficient space, inspired by [21]. Specifically, we utilize a real-world HDRI environment map dataset [70] and apply rotation augmentation to it. For each environment map, we compute its SH coefficients up to the 8-\(th\) order. We then divide them by the 0-\(th\) order coefficient for normalization. Note that each color channel is normalized independently. Next, we learn a PCA model for these normalized SH coefficients. We use the first \(N_{L}\) bases for lighting prediction.
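A minimal sketch of building this lighting PCA model, assuming NumPy and scikit-learn; `project_to_sh` is a hypothetical helper that projects one HDRI map onto the SH basis.

```python
import numpy as np
from sklearn.decomposition import PCA

def build_lighting_pca(env_maps, project_to_sh, n_bases=80):
    """PCA lighting model over normalised SH coefficients (sketch).

    `project_to_sh` is a hypothetical helper returning the per-channel SH
    coefficients of one HDRI map as an (n_coeff, 3) array, orders 0..8.
    """
    samples = []
    for env in env_maps:                # rotation-augmented environment maps
        sh = project_to_sh(env)         # (n_coeff, 3)
        sh = sh / sh[0:1]               # normalise each channel by its order-0 term
        samples.append(sh.reshape(-1))  # order-0 entries become the constant 1
    pca = PCA(n_components=n_bases)     # mean + the first N_L = 80 bases
    pca.fit(np.stack(samples))
    return pca
```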
During finetuning, together with the reflectance model coefficients \(\beta\), our network predicts \(\gamma\in\mathbb{R}^{N_{L}}\) as the lighting PCA model coefficients and \(z\in\mathbb{R}^{3}\) to represent the scale of the 0-\(th\) order SH coefficient for each color channel. From this, we first use \(\gamma\) to recover the SH coefficients from the PCA lighting model and then apply the predicted scale \(z\) to them. We adopt the ResNet-50 [29] architecture as the reconstruction network \(E_{\theta}(\cdot)\) and modify the last fully-connected layer to \(N_{R}+N_{L}+3\) neurons. We adopt the Softplus activation for \(z\) to ensure a non-negative prediction and linear activation for \(\beta\) and \(\gamma\).

Figure 3: Model finetuning pipeline overview. Given a single input face image, we apply an encoder \(E_{\theta}\) to estimate its lighting scale \(z\), lighting coefficients \(\gamma\), and reflectance coefficients \(\beta\). Combined with the precomputed geometry parameters \(\alpha\), \(\delta\), \(\mathbf{R}\), \(\mathbf{t}\), and \(\Pi\), we can obtain the reconstructed face image via a differentiable renderer to compute self-supervised loss and jointly update \(E_{\theta}\) and our reflectance model.
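A sketch of the prediction head described above, assuming torchvision's ResNet-50; the exact head layout is our own assumption.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class ReconstructionNet(nn.Module):
    """E_theta sketch: ResNet-50 with an (N_R + N_L + 3)-way output head."""
    def __init__(self, n_r: int = 80, n_l: int = 80):
        super().__init__()
        self.backbone = resnet50()
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, n_r + n_l + 3)
        self.n_r, self.n_l = n_r, n_l

    def forward(self, img: torch.Tensor):
        out = self.backbone(img)                      # (B, N_R + N_L + 3)
        beta = out[:, :self.n_r]                      # reflectance coefficients
        gamma = out[:, self.n_r:self.n_r + self.n_l]  # lighting PCA coefficients
        z = nn.functional.softplus(out[:, -3:])       # non-negative RGB scale
        return beta, gamma, z
```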
**Loss Function.** In model finetuning, the learnable parameters are the morphable model parameters, including the mean \(\bar{R}\) and bases \(\mathrm{M_{R}}\), and the face reconstruction network parameters \(\theta\). We optimize them with the combination of a reconstruction loss \(\mathcal{L}_{rec}\) and a regularization loss \(\mathcal{L}_{reg}\):
\[\operatorname*{arg\,min}_{\bar{R},\mathrm{M_{R}},\theta}\mathcal{L}_{rec}+ \mathcal{L}_{reg} \tag{14}\]
\(\mathcal{L}_{rec}\) is the combination of an L1 term \(\mathcal{L}_{l1}\) and a perceptual term \(\mathcal{L}_{per}\); see more details in our _Supplementary Material_. In our regularization loss \(\mathcal{L}_{reg}\), we design \(\mathcal{L}_{upd}\) to constrain the updating of our morphable reflectance model:
\[\mathcal{L}_{upd}=||\bar{R}-\bar{R}_{0}||_{1}+||\mathrm{M_{R}}-\mathrm{M_{R_ {0}}}||_{1} \tag{15}\]
In addition, we adopt \(\mathcal{L}_{light}\) to encourage monochromatic environment lighting, as in [15], to resolve the color ambiguity between albedo and lighting, and \(\mathcal{L}_{coef}\) to constrain the predicted PCA coefficients \(\beta\) and \(\gamma\); see more details in our _Supplementary Material_.
## 4 Experiments
### Implementation Details
**Initial Model Learning.** For reflectance parameter estimation, we set \(w_{reg}{=}100\) and adopt the Adam optimizer [34] to minimize the loss function, with learning rate 5e-3. We use 3 Blinn-Phong BRDFs with specular exponents 1, 8, and 64 in our method, _i.e._\(k_{bp}{=}3\). Thus, there are 6 reflectance parameters for each face vertex. We use the first 80 PCA bases in our initial model, _i.e._\(N_{R}{=}80\).
**Model Finetuning.** For the lighting PCA model, we also use the first 80 PCA bases, _i.e._\(N_{L}{=}80\). The weights for \(\mathcal{L}_{l1}\), \(\mathcal{L}_{per}\), \(\mathcal{L}_{coef}\), \(\mathcal{L}_{upd}\), and \(\mathcal{L}_{light}\) are set to \(2,0.1,0.001,10,10\), respectively. We finetune our model on the FFHQ dataset [33], which contains 70000 high-fidelity single-view face images, and crop them to \(224{\times}224\) before inputting them to our reconstruction network \(E_{\theta}\). We first pretrain \(E_{\theta}\) using \(\mathcal{L}_{l1}\), \(\mathcal{L}_{per}\), and \(\mathcal{L}_{coef}\) for 20 epochs, with learning rate 1e-4, to ensure that it outputs reasonable reflectance and lighting coefficient predictions. Then, we use the full loss function to simultaneously update the parameters of \(E_{\theta}\) and our morphable face reflectance model for 2 epochs, with learning rate 1e-5. We adopt the Adam optimizer [34].
A global specular exponent cannot capture this phenomenon. With the spatially varying linear combination weights of different Blinn-Phong BRDFs, our method can naturally represent spatially varying specular intensity and shininess. In Table 1, we report the SSIM [69], PSNR, and LPIPS [76] scores in the face region to quantitatively measure the discrepancy between the re-rendered face and the ground truth. Again, our method achieves better results than the global exponent counterpart.
**Model Visualization.** We visualize the first 3 principal components of our morphable model in Figure 1, including the diffuse albedo and the weights for the Blinn-Phong BRDFs with specular exponents 1, 8, and 64, from left to right; we multiply the weights by 3 for better visualization. It shows that our model learns to assign a large specular shininess to the tip of the nose and a small value to the cheek. See more visualizations in our _Supplementary Material_.
**Ablation Study.** As shown in Figure 5 and Table 2, the proposed finetuning strategy can improve the generalization capability and expressive power of our initial morphable face reflectance model, leading to better face reconstruction quality, especially around the mouth and eyes. We then verify the effectiveness of our lighting PCA model. As illustrated in Figure 5, directly predicting the 273 coefficients of the 8-order SH (w/o Light PCA) leads to unreasonable results; our lighting PCA model obtains better lighting predictions by constraining the search space of the SH coefficients.
### Comparisons
**Baselines.** We compare our method with BFM09 [47] and AlbedoMM [56]. BFM09 is a diffuse-only model built from 3D scans; AlbedoMM is a morphable model for diffuse and specular albedo built from Light Stage data. To ensure a fair comparison, we use the same CNN-based framework (see Section 3.3) to implement the competitors. We train the reconstruction network for them on the FFHQ dataset, but we do not update their morphable model parameters. As we only focus on appearance, the reconstruction network only predicts reflectance coefficients and lighting parameters and uses fixed precomputed geometry during training. For BFM09, we adopt the same geometry parameters as ours. For AlbedoMM, we use the same steps as mentioned in Section 3.3 to obtain its geometry parameters. Akin to [56], we adopt the Blinn-Phong BRDF for AlbedoMM and set the global shininess to 20.
**Face Reconstruction.** We evaluate our method and the competitors on the CelebA-HQ [31] dataset. As illustrated in Table 2, our method obtains better photometric face reconstruction scores than the competitors since we finetune it on an in-the-wild dataset to improve its generalization capability and expressive power, while the competitors are built from a limited number of scans. As shown in Figure 6, our method can reconstruct the input image well; compared to AlbedoMM trained from Light Stage scans, our method trained from low-cost data can also disentangle the diffuse and specular shading in a plausible way.
**Face Relighting.** We evaluate the relighting performance of our method and the competitors on the Multi-PIE dataset [28]. Specifically, given an input image, we first obtain its geometry parameters as described in Section 3.2 and
\begin{table}
\begin{tabular}{l c c c} \hline \hline & LPIPS \(\downarrow\) & SSIM \(\uparrow\) & PSNR \(\uparrow\) \\ \hline BFM09 & 0.114 & 0.893 & 23.51 \\ AlbedoMM & 0.116 & **0.901** & 23.69 \\ Ours & **0.110** & 0.896 & **24.21** \\ \hline Ours w/o finetune & 0.1256 & 0.886 & 23.26 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Photometric face reconstruction comparison between our method and competitors on 1000 images randomly sampled from the CelebA-HQ dataset.
Figure 5: Qualitative ablation study of model finetuning and the use of our lighting PCA model.
Figure 6: Qualitative comparison of face reconstruction and shading contributions between our method and AlbedoMM.
reconstruct its reflectance parameters using our CNN-based face reconstruction network (columns 1 and 3 in Figure 7). Then, we re-render the image under a new point light source and compare it to the corresponding OLAT image with exactly the same light position obtained from the preprocessed Multi-PIE dataset in Section 3.2 (columns 2 and 4 in Figure 7). Since we do not have the ground truth light color and intensity information of the Multi-PIE dataset, we render our method and the competitors using the same white point light source for a fair comparison; _please ignore the color difference and only focus on the distribution of facial specularities in Figure 7_. Compared to BFM09, our method successfully renders plausible facial specularities since we adopt a non-Lambertian reflectance representation. Compared to AlbedoMM, our method achieves more realistic results, especially around the tip of the nose, since we can model both spatially varying specular intensity and shininess. See the video comparisons on our _project page_ for a better demonstration.
**Reflectance Reconstruction.** Although our goal is not to model physically-accurate reflectance parameters, we compare our method with AlbedoMM on 23 Light Stage scans with ground-truth diffuse and specular albedo captured under neutral expression from the 3D-RFE database [57]. We adopt the sum of the 3 linear combination weights in our reflectance representation as the specular albedo; this quantity shares the same meaning as the specular albedo, _i.e._, the specular shading under a spatially-uniform environment lighting with unit radiance. As shown in Figure 8, our method can reconstruct plausible reflectance maps. However, as shown in Table 3, our method obtains inferior quantitative results to AlbedoMM on specular albedo reconstruction. We attribute this to two reasons: _i)_ AlbedoMM uses the 3D-RFE database to build their model [56] while our method has never seen these scans, and _ii)_ our method is built from low-cost data without lighting information, so there exists a global scale between our reflectance parameters and the ground truth, although we try to mitigate it by setting a reasonable lighting color in Section 3.2.
## Appendix A More Implementation Details
### Multi-PIE Dataset Preprocessing
We select 9 viewpoints (09_0, 08_0, 13_0, 14_0, 05_1, 05_0, 04_1, 19_0, and 20_0) and 11 flashes (03, 04, 05, 06, 07, 08, 09, 10, 11, 14, and 18) for reflectance parameter estimation. Please refer to [28] for the detailed configuration of the viewpoints and flashes. We develop a model-based method to reconstruct the camera parameters and the BFM09 [47] geometry coefficients for each identity. According to the Multi-PIE dataset [28], each selected viewpoint has one selected flash attached to it4. Hence, we approximate the flash position as the camera position.
Footnote 4: We use the viewpoints 08_1 and 19_1 to solve for the positions of the flashes 14 and 18. However, we do not use the images captured by 08_1 and 19_1 since there exists apparent color inconsistency between these two viewpoints and the other selected 9 viewpoints.
We use the room-light images [28] for reconstruction. Specifically, we first adopt a CNN-based single-view face reconstruction method [17] to obtain the BFM09 coefficients, illumination coefficients, and head pose for each room-light image of a given identity. Then, we apply an offline optimization using the same loss function as [17] to improve the reconstruction accuracy. During the offline optimization, each room-light image shares the same BFM09 coefficients since they are the multi-view images of the given identity, and we initialize them as the average of the coefficients of all views predicted by the face reconstruction CNN. Similar to [17], we use the perspective camera model with a reasonable predefined focal length to represent the 3D-2D projection. After reconstruction, we can compute the camera parameters from the head pose **R** and **t** for each viewpoint:
\[\textbf{R}_{cam}=\textbf{R}^{\mathrm{T}},\quad\textbf{t}_{cam}=-\textbf{R} ^{\mathrm{T}}\cdot\textbf{t} \tag{16}\]
Here, \(\textbf{R}_{cam}\) and \(\textbf{t}_{cam}\) are the camera rotation and translation in the BFM09 canonical space, respectively. We repeat the steps above for all the identities in the Multi-PIE dataset.
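As a sketch, Eq. (16) in code (the array shapes are assumed):

```python
import numpy as np

def head_pose_to_camera(R, t):
    """Eq. (16): convert the head pose (R: 3x3 rotation, t: 3-vector) of the
    reconstructed face into the camera rotation and translation in the BFM09
    canonical space."""
    R_cam = R.T
    t_cam = -R.T @ t
    return R_cam, t_cam
```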
Before reflectance parameter estimation, we obtain the OLAT image by removing the effect of the room light in the flash image. Specifically, we subtract the room-light image from the flash image in linear space with a reasonable mapping function5:
Footnote 5: We empirically find that performing image differencing in linear space leads to better reflectance parameter estimation than in non-linear space.
\[I_{OLAT}=(I_{flash})^{1.2}-(I_{roomlit})^{1.2} \tag{17}\]
Here, \(I_{OLAT}\) is the OLAT image in linear space, and \(I_{flash}\) and \(I_{roomlit}\) are the flash and room-light images provided by the Multi-PIE dataset, respectively. We then estimate the reflectance parameters from \(I_{OLAT}\) and build our morphable face reflectance model in linear space. To synthesize a face image in nonlinear space, we convert the shading \(s\) to pixel color \(c\) using the inverse mapping:
\[c=s^{\frac{1}{1.2}} \tag{18}\]
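A sketch of this preprocessing step, combining Eqs. (17) and (18); the clamping of negative residuals, which can arise from sensor noise, is our own addition:

```python
import numpy as np

GAMMA = 1.2  # empirical exponent of the linearization mapping (Eq. 17)

def flash_to_olat(i_flash, i_roomlit):
    """Approximate the OLAT image in linear space by differencing the flash
    and room-light images after the power mapping (Eq. 17). Inputs are float
    images in [0, 1]."""
    olat = i_flash ** GAMMA - i_roomlit ** GAMMA
    return np.clip(olat, 0.0, None)  # clamp negative residuals from noise

def shading_to_pixel(s):
    """Inverse mapping from linear-space shading to nonlinear pixel color
    (Eq. 18)."""
    return np.clip(s, 0.0, None) ** (1.0 / GAMMA)
```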
**Demographics.** Our initial morphable face reflectance model is built from a total of 128 manually selected individuals from the Multi-PIE dataset. We release the IDs of the selected individuals in our _code repository_.
**Feasibility of Reflectance Parameter Estimation.** The RGB diffuse color and 3 linear combination weights are the only unknowns in our reflectance representation. Theoretically, the ambiguity can be resolved with 6 independent equations. We have 99 light-view direction pairs (the combination of 9 viewpoints and 11 light directions) in total, and even after accounting for visibility, most of the vertices retain 50+ light-view direction pairs. Different light-view direction pairs give independent equations. Thus, it is theoretically feasible to estimate the BRDF parameters.
Practically, light-view direction pairs that do not hit the lobe of the BRDF lead to a low activation value, and thus solving the reflectance parameters from these equations is highly ill-posed. In our setup, we find that this ill-posed scenario only happens on very few face vertices on the side of the face or with normal directions pointing downward, such as the nares. For most face vertices, our setup provides enough well-conditioned equations whose light-view direction pairs hit the lobe. Thus, it is feasible to estimate the BRDF parameters in practice.
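To illustrate, a sketch of the per-vertex linear system that these light-view direction pairs provide, solved in the least-squares sense; this omits the \(w_{reg}\) regularization term and assumes unit white light, so it is a simplification of the full estimation:

```python
import numpy as np

EXPONENTS = np.array([1.0, 8.0, 64.0])

def solve_vertex_brdf(obs_rgb, n_dot_l, n_dot_h):
    """Least-squares solve for one vertex's 6 reflectance parameters.

    obs_rgb : (N, 3) observed linear-space intensities, one row per visible
              light-view direction pair
    n_dot_l : (N,) clamped normal-light cosines for each observation
    n_dot_h : (N,) clamped normal-halfvector cosines for each observation
    Returns (diffuse_rgb, spec_weights).
    """
    N = obs_rgb.shape[0]
    spec_basis = n_dot_h[:, None] ** EXPONENTS[None, :]        # (N, 3)
    A = np.zeros((3 * N, 6))
    b = np.zeros(3 * N)
    for c in range(3):                      # one block of rows per channel
        rows = slice(c * N, (c + 1) * N)
        A[rows, c] = n_dot_l                # diffuse term, channel c only
        A[rows, 3:] = n_dot_l[:, None] * spec_basis  # weights shared by RGB
        b[rows] = obs_rgb[:, c]
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]
```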
### Model Finetuning
Recall that in model finetuning, the learnable parameters are the morphable model parameters, including the mean \(\bar{R}\) and bases \(\mathrm{M_{R}}\), and face reconstruction network parameters \(\theta\). We optimize them with the combination of a reconstruction loss \(\mathcal{L}_{rec}\) and a regularization loss \(\mathcal{L}_{reg}\):
\[\operatorname*{arg\,min}_{\bar{R},\mathrm{M_{R}},\theta}\mathcal{L}_{rec}+ \mathcal{L}_{reg} \tag{19}\]
\(\mathcal{L}_{rec}\) is the combination of an L1 term \(\mathcal{L}_{l1}\) and a perceptual term \(\mathcal{L}_{per}\):
\[\mathcal{L}_{rec} =\omega_{l1}\cdot\mathcal{L}_{l1}+\omega_{per}\cdot\mathcal{L}_{ per},\mathrm{where} \tag{20}\] \[\mathcal{L}_{l1} =M_{skin}\cdot||\hat{I}-I||_{1}\] (21) \[\mathcal{L}_{per} =1-\langle\phi_{feat}(\hat{I}),\phi_{feat}(I)\rangle \tag{22}\]
Here, \(M_{skin}\) is the mask indicating the skin region, obtained by an off-the-shelf face parsing method [75]; \(\langle\cdot,\cdot\rangle\) is the inner product operation; \(\phi_{feat}\) is a pretrained FaceNet architecture [53] for feature extraction. Note that we directly compute the reconstruction loss \(\mathcal{L}_{rec}\) in linear space. Although \(\phi_{feat}\) is trained on images in nonlinear space, we empirically find that it can still provide a reasonable supervision signal when the input image is in linear space.
In our regularization loss \(\mathcal{L}_{reg}\), we first adopt \(\mathcal{L}_{coef}\) to constrain the predicted PCA coefficients \(\beta\) and \(\gamma\):
\[\mathcal{L}_{coef}=\sum_{i=1}^{N_{R}}(\frac{\beta_{i}}{\sigma_{\beta_{i}}})^{2 }+\sum_{i=1}^{N_{L}}(\frac{\gamma_{i}}{\sigma_{\gamma_{i}}})^{2} \tag{23}\]
Here, \(\sigma_{\beta}\) and \(\sigma_{\gamma}\) are the standard deviations of the initial morphable face reflectance model and the lighting PCA model, respectively. Then, to constrain the updating of our morphable reflectance model, we design \(\mathcal{L}_{upd}\) as:
\[\mathcal{L}_{upd}=||\bar{R}-\bar{R}_{0}||_{1}+||\mathrm{M}_{\mathrm{R}}- \mathrm{M}_{\mathrm{R}_{0}}||_{1} \tag{24}\]
Here, \(\bar{R}_{0}\) and \(\mathrm{M}_{\mathrm{R}_{0}}\) are the mean and bases of our initial morphable face reflectance model built from the Multi-PIE dataset. To resolve the color ambiguity between albedo and lighting, we include \(\mathcal{L}_{light}\) to encourage monochromatic environment lighting, following [15]:
\[\mathcal{L}_{light}=||l-l_{mean}||_{2}^{2} \tag{25}\]
Here, \(l\) is the retrieved 8th-order SH coefficients; \(l_{mean}\) is the mean of \(l\) over the color channel dimension, representing the monochromatic counterpart of \(l\). Thus, our regularization loss \(\mathcal{L}_{reg}\) can be written as:
\[\mathcal{L}_{reg}=\omega_{coef}\cdot\mathcal{L}_{coef}+\omega_{upd}\cdot \mathcal{L}_{upd}+\omega_{light}\cdot\mathcal{L}_{light} \tag{26}\]
In our experiments, we set \(\omega_{l1}\), \(\omega_{per}\), \(\omega_{coef}\), \(\omega_{upd}\), \(\omega_{light}\) to 2, 0.1, 0.001, 10, 10, respectively.
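A PyTorch-style sketch of this regularization loss (Eqs. 23-26); tensor shapes and names are illustrative assumptions:

```python
import torch

def finetune_reg_loss(beta, sigma_beta, gamma, sigma_gamma,
                      R_mean, M_R, R_mean_0, M_R_0, sh_rgb,
                      w_coef=0.001, w_upd=10.0, w_light=10.0):
    """Sketch of L_reg (Eqs. 23-26).

    beta, gamma             : predicted reflectance / lighting PCA coefficients
    sigma_beta, sigma_gamma : per-basis standard deviations of the two models
    R_mean, M_R             : current morphable-model mean and bases
    R_mean_0, M_R_0         : their initial (Multi-PIE) values
    sh_rgb                  : (3, K) retrieved SH lighting coefficients,
                              K basis functions per color channel
    """
    l_coef = ((beta / sigma_beta) ** 2).sum() + ((gamma / sigma_gamma) ** 2).sum()
    l_upd = (R_mean - R_mean_0).abs().sum() + (M_R - M_R_0).abs().sum()
    # penalize deviation from the monochromatic (channel-mean) lighting
    l_light = ((sh_rgb - sh_rgb.mean(dim=0, keepdim=True)) ** 2).sum()
    return w_coef * l_coef + w_upd * l_upd + w_light * l_light
```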
## Appendix B More Results
### Model Visualization
In Figure 10 and Figure 11, we visualize our model by showing random samples drawn from it before and after finetuning, respectively. The images are rendered in nonlinear space with a white frontal point light.
### Face Reconstruction
**More Reconstruction Results.** We show more face reconstruction results on in-the-wild face images in Figure 9, including diverse ethnic groups and challenging cases with facial occlusions and makeup. We multiply the linear combination weights (columns 3, 4, and 5 in Figure 9) by 3 for better visualization.
Thanks to the model-finetuning process, our method is robust in handling diverse input images and predicts plausible reflectance attributes. However, it shares the limitations of previous in-the-wild face reconstruction methods [60, 17, 58]: _i)_ the global skin tone cannot be disentangled from the illumination due to the scale ambiguity between lighting and reflectance (row 5), and _ii)_ shadows cast by external geometry (the hat in row 9) bake into the reflectance channels.
**Evaluation on Geometry Reconstruction.** Although our goal is not to better reconstruct face shape from images, we compare our method and BFM09 [47] on the validation set of the NoW challenge [52] to help the readers better understand our model. Note that both methods use the same BFM09 geometry model; we do not compare to AlbedoMM since AlbedoMM [56] is built on top of the BFM17 [26] geometry model.
In this experiment, we adopt a similar network architecture to [17], simply modifying the number of neurons in the last fully-connected layer of \(E_{\theta}(\cdot)\) from \(N_{R}+N_{L}+3\) to \(N_{S}+N_{E}+N_{P}+N_{R}+N_{L}+3\) to additionally predict the shape and expression coefficients and the head pose. We use the first 80 and 64 bases of the BFM09 shape and expression morphable model, respectively; thus, \(N_{S}=80\) and \(N_{E}=64\). For the head pose, we use Euler angles to represent rotation and a 3D vector to represent translation; thus, \(N_{P}=6\). To train the network for geometry reconstruction, we add a landmark loss term akin to previous works [61, 62, 17, 52]:
\[\mathcal{L}_{ldm}=\sum_{n=1}^{68}||\hat{q_{n}}-q_{n}||_{2}^{2} \tag{27}\]
Here, \(q_{n}\) are the 2D landmarks obtained from an off-the-shelf landmark detector [7]; \(\hat{q_{n}}\) are the 2D projections of the 3D landmarks defined on the reconstructed shape. In addition, we modify \(\mathcal{L}_{coef}\) to add constraints on the shape and expression coefficients:
\[\mathcal{L}_{coef}=\sum_{i=1}^{N_{S}}(\frac{\alpha_{i}}{\sigma_{\alpha_{i}}}) ^{2}+\sum_{i=1}^{N_{E}}(\frac{\delta_{i}}{\sigma_{\delta_{i}}})^{2}+\sum_{i=1} ^{N_{R}}(\frac{\beta_{i}}{\sigma_{\beta_{i}}})^{2}+\sum_{i=1}^{N_{L}}(\frac{ \gamma_{i}}{\sigma_{\gamma_{i}}})^{2} \tag{28}\]
Here, \(\alpha\in\mathbb{R}^{N_{S}}\) and \(\delta\in\mathbb{R}^{N_{E}}\) are the predicted shape and expression coefficients, respectively; \(\sigma_{\alpha}\) and \(\sigma_{\delta}\) are the standard deviations of the shape and expression morphable models, respectively. Our full loss function for geometry reconstruction can be written as:
\[\mathcal{L} =\omega_{l1}\cdot\mathcal{L}_{l1}+\omega_{per}\cdot\mathcal{L}_{ per}\] \[+\omega_{coef}\cdot\mathcal{L}_{coef}+\omega_{light}\cdot \mathcal{L}_{light}+\omega_{ldm}\cdot\mathcal{L}_{ldm} \tag{29}\]
In the geometry reconstruction experiments, we set \(\omega_{l1}\), \(\omega_{per}\), \(\omega_{coef}\), \(\omega_{light}\), \(\omega_{ldm}\) to 2, 0.2, 0.001, 10, 0.002, respectively. We train the geometry reconstruction network on the FFHQ [32] dataset for 20 epochs.
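As a sketch, the landmark term of Eq. (27) in code (shapes assumed):

```python
import torch

def landmark_loss(pred_lmk, gt_lmk):
    """Eq. (27): squared L2 distance between the 2D projections of the 68
    reconstructed 3D landmarks and the 68 detected 2D landmarks. Both
    tensors have shape (68, 2)."""
    return ((pred_lmk - gt_lmk) ** 2).sum(dim=-1).sum()
```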
\begin{table}
\begin{tabular}{l c c c} \hline \hline & Median (mm) \(\downarrow\) & mean (mm) \(\downarrow\) & std (mm) \(\downarrow\) \\ \hline BFM09 & 1.44 & 2.06 & 2.51 \\ Ours & 1.51 & 2.15 & 2.61 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Quantitative face geometry reconstruction error on the validation set of the NoW challenge.
Figure 9: Face reconstruction results on diverse in-the-wild face images.
As shown in Table 4, our method obtains quantitative results similar to BFM09 under the same CNN-based face geometry reconstruction pipeline. However, we believe that our model has the potential to achieve better geometry reconstruction results with advances in lighting estimation and differentiable ray tracing.
### Face Relighting and OLAT Rendering
See our _project page_ for the video results.
## Appendix C Limitations and Discussions
Our method still has several limitations. We adopt the Lambertian BRDF to represent diffuse reflectance. Thus, we cannot model the subsurface scattering effect. Integrating a more complicated reflectance representation [71] into our morphable face reflectance model to improve face rendering realism is an interesting direction.
Our model cannot faithfully represent the specularities around the eyes. We tried a straightforward remedy, adding more mirror-like specular terms to our reflectance representation, but found that it does not work. We attribute this to two reasons: _i)_ the reconstructed geometry is inaccurate around the eyes during inverse rendering, and _ii)_ our BRDF reflectance representation cannot model the complex optical properties of eyes (e.g. refraction).
During model finetuning, we use a differentiable rasterizer with an efficient local shading technique to render the reconstructed image, without modeling global illumination effects like self-shadowing, since the illumination in most in-the-wild images is soft and self-shadows are insignificant. We believe that using a differentiable ray tracer [42] would slightly improve the current results, as demonstrated in existing works [18, 19, 20]. Moreover, leveraging a multi-view in-the-wild face image dataset [10] or video dataset [12] could improve the face reconstruction results, as demonstrated by previous works [23, 58]. We leave these as future work.
In addition, there is an inevitable global scale between the reflectance parameters in our model and the ground truth since the low-cost data does not provide lighting information [49].
|
2307.03775 | Eccentric Binaries in Retrograde Disks | Modern numerical hydrodynamics tools have recently enabled detailed
examinations of binaries accreting from prograde circumbinary disks. These have
re-framed the current understanding of binary-disk interactions and disk driven
orbital evolution. We present the first full-domain grid-based hydrodynamics
simulations of equal-mass, eccentric binaries accreting from retrograde
circumbinary disks. We study binary eccentricities that span $e=0.0$ to $e =
0.8$ continuously, and explore the influence of retrograde accretion on the
binary orbital response, disk morphology, and observational properties. We find
that, at all eccentricities, retrograde accretion shrinks the binary semi-major
axis and pumps its eccentricity leading to the previously identified
possibility of highly eccentric mergers. Contrary to past studies and models,
we observe gravitational forces to dominate the binary's orbital evolution as
opposed to the physical accretion of mass and momentum. Retrograde accretion
variability also differs strongly from prograde solutions. Preeminently,
binaries with $e > 0.55$ reveal a unique two-period, double-peaked accretion
signature that has not previously been identified. We additionally find
evidence for the emergence of retrograde Lindblad resonances at large
eccentricities in accordance with predictions from linear theory. Our results
suggest that some astrophysical binaries for which retrograde accretion is
possible will experience factors-of-a-few times faster orbital decay than in
prograde disks and will have their eccentricities pumped beyond the limits
found from prograde solutions. Such effects could lead to rapid inward
migration for some young stellar binaries, the detection of highly-eccentric
LISA mergers, and the tentatively observed turnover at the low-frequency end of
the gravitational wave background. | Christopher Tiede, Daniel J. D'Orazio | 2023-07-07T18:00:07Z | http://arxiv.org/abs/2307.03775v2 | # Eccentric Binaries in Retrograde Disks
###### Abstract
Modern numerical hydrodynamics tools have recently enabled detailed examinations of binaries accreting from prograde circumbinary disks that have re-framed the current understanding of binary-disk interactions and disk driven orbital evolution. We present the first full-domain grid-based hydrodynamics simulations of equal-mass, eccentric binaries accreting from retrograde circumbinary disks. We study binary eccentricities that span \(e=0.0\) to \(e=0.8\) continuously, and explore the influence of retrograde accretion on the binary orbital response, disk morphology, and observational properties. We find that, at all eccentricities, retrograde accretion shrinks the binary semi-major axis and pumps its eccentricity leading to the previously identified possibility of highly eccentric mergers. Contrary to past studies and models, we observe gravitational forces to dominate the binary's orbital evolution as opposed to the physical accretion of mass and momentum. Retrograde accretion variability also differs strongly from prograde solutions. Preeminently, binaries with \(e>0.55\) reveal a unique two-period, double-peaked accretion signature that has not previously been identified. We additionally find evidence for the emergence of retrograde Lindblad resonances at large eccentricities in accordance with predictions from linear theory. Our results suggest that some astrophysical binaries for which retrograde accretion is possible will experience factors-of-a-few times faster orbital decay than in prograde disks and will have their eccentricities pumped beyond the limits found from prograde solutions. Such effects could lead to rapid inward migration for some young stellar binaries, the detection of highly-eccentric LISA mergers, and the tentatively observed turnover at the low-frequency end of the gravitational wave background.
keywords: hydrodynamics - software:simulations - black hole mergers - gravitational waves - quasars:general
## 1 Introduction
Accretion onto a binary from a surrounding circumbinary disk (CBD) is important for the evolution and observation of many types of astrophysical binaries ranging from protoplanetary systems, to binary stars, to massive black hole binaries (Kley and Nelson, 2012; Orosz et al., 2012; Barnes and Hernquist, 1996; Mayer, 2013). A plethora of work has been completed in recent years to determine the effect of prograde CBD's on the orbital evolution of the inner binary and on the associated observational characteristics (MacFadyen and Milosavljevic, 2008; Cuadra et al., 2009; Shi et al., 2012; D'Orazio et al., 2013; Shi and Krolik, 2015; Munoz and Lai, 2016; Miranda et al., 2017; Tang et al., 2017; Moody et al., 2019; Tiede et al., 2020; Duffell et al., 2020; Zrake et al., 2021; D'Orazio and Duffell, 2021; Dittmann and Ryan, 2022; Franchini et al., 2022; Siwek et al., 2023). A primary result from the majority of these studies is that for the fiducial setup of an equal mass, circular binary embedded in an \(\alpha\)-disk (Shakura and Sunyaev, 1973) with characteristic scale height \(h/r=0.1\) and turbulent viscosity parameter \(\alpha=0.1\), the CBD delivers angular momentum to the binary and drives binary outspiral (Munoz et al., 2019). The most recent studies have largely focused on subsequently filling out the parameter space beyond this reference model by varying the binary mass ratio (Duffell et al., 2020; Siwek et al., 2023), the binary eccentricity (Zrake et al., 2021; D'Orazio and Duffell, 2021; Siwek et al., 2023), the disk scale-height (Tiede et al., 2020; Heath and Nixon, 2020; Dittmann and Ryan, 2022; Penzlin et al., 2022; Franchini et al., 2022), and the disk cooling (Sudarshan et al., 2022; Wang et al., 2022, 2023). Nearly all of these studies have found that both the direction and magnitude of binary inspiral depends non-trivially on each of these parameters; see Lai and Munoz (2022) for a recent review.
Many of these works have also detailed observational characteristics of these disks through measured accretion rates onto the binary as a proxy for the emitted accretion flux (_e.g._ Farris et al., 2015; Tang et al., 2018). A primary result from prograde disks is that binaries accrete at a rate that is on average equal to that of a single central object. The underlying accretion modulations occur near the binary orbital frequency (and its harmonics) as well as on longer timescales at \(\sim 5\times\) the orbital period due to periodic over-feeding from a lopsided cavity formed in circular and near-circular binaries (MacFadyen and Milosavljevic, 2008; Shi et al., 2012; D'Orazio et al., 2013; Farris et al., 2014; Tang et al., 2017; Duffell et al., 2020; Dittmann and Ryan, 2022). Eccentric binaries in prograde disks, however, lose the longer \(\sim 5\)-orbit periodicity and instead have their accretion variability dominated by the binary orbital frequency (Hayasaki et al., 2007; Dunhill et al., 2015; Miranda et al., 2017; Zrake et al., 2021; Westernacher-Schneider et al., 2022).
Of primary relevance to this study, both D'Orazio and Duffell (2021) and Zrake et al. (2021) found that equal-mass binaries that start
with small eccentricity \(e\lesssim 0.1\) have their eccentricity damped towards circular orbits--at which they are driven apart by the gas--but all other initial binary eccentricities \(e\gtrsim 0.1\) are driven towards an equilibrium eccentricity \(e\sim 0.4-0.45\). Further, Siwek et al. (2023) demonstrated that this general behavior holds for all binary mass ratios \(q>0.1\), and the equilibrium eccentricity varies between \(0.25\lesssim e_{\rm eq}\lesssim 0.5\). Such an equilibrium eccentricity could manifest in populations of stellar binaries that have undergone gas accretion phases as well as in residual eccentricity measurements of merging super-massive black hole binaries (SMBHB's) in gravitational waves with the space-based interferometer LISA.
Nearly all modern numerical studies have focused on prograde CBD's; but in some astrophysical systems it is not clear a priori what the angular momentum of the CBD ought to be relative to the binary. In particular, SMBHB's that undergo active accretion phases in the late stages of their evolution following a galaxy merger may receive gas injections isotropically, and it is possible that retrograde gas feeding onto a SMBHB is comparably likely to prograde configurations (King and Pringle, 2006; Nixon et al., 2011; Hobbs et al., 2011). It is also possible that a non-negligible fraction of young stellar binaries born in dense, chaotic star forming regions acquire retrograde CBD's (\(e.g.,\sim 10\%\) in Elsender et al., 2023; Bate et al., 2010). This has motivated past numerical studies of retrograde CBD's around circular binaries using viscous smoothed particle hydrodynamics (Nixon et al., 2011; Nixon and Lubow, 2015) and inviscid 3D magnetohydrodynamics (Bankert et al., 2015). Roedig and Sesana (2014) have also simulated retrograde, self-gravitating disks around eccentric binaries for a few eccentricities up to \(e=0.8\), and Amaro-Seoane et al. (2016) examined the effect of different sink prescriptions for binaries with mass ratio \(q=1/10\). These results have also motivated a series of approximate analytic models for retrograde binary accretion (Nixon et al., 2011; Roedig and Sesana, 2014; Schnittman and Krolik, 2015; Amaro-Seoane et al., 2016). We review these works and their findings in more detail in § 2, but generally speaking, they all find that a retrograde CBD causes accreting binaries at all eccentricities and mass ratios to shrink their separation (inspiral) and to pump their eccentricity (above some small threshold).
That said, the simulations performed for these studies were either focused on a few specific parameter setups or did not simulate the full multi-scale domain including both the entire CBD and the binary with its intra-orbit material1. Moreover, the developed analytic models of retrograde binary orbital evolution relied on these simulations and typically assumed the dominant contribution to be inelastic collisions with parcels of gas at binary apocenter.
Footnote 1: This has been demonstrated to be integral to converging on a solution in prograde scenarios (Farris et al., 2014; Tang et al., 2017; Munoz et al., 2019), with possible exceptions in confined regions of parameter space (_e.g.,_ Tiede et al., 2022).
In addition to being as astrophysically viable as prograde solutions, retrograde circumbinary systems are analytically interesting because at low binary eccentricity they lack the Lindblad resonances that strongly affect--and possibly dominate (see, however, Mahes et al., 2023)--the prograde scenario (Goldreich and Tremaine, 1979, 1980). Thus, retrograde CBD's offer an informative insight into the relevant dynamics of accretion onto eccentric binaries by comparison to their prograde counterparts.
In this paper we present the first full-domain, grid-based simulations of equal-mass binaries accreting from retrograde CBD's and expand upon recent works exploring the orbital evolution of eccentric binaries in prograde configurations - this work can be directly compared with the recent prograde results of D'Orazio and Duffell (2021); Zrake et al. (2021). § 2 reviews the previous literature on retrograde binary accretion. § 3 lays out the numerical techniques used in this study. § 4 includes our numerical results for binary evolution (§ 4.1), details of the disk morphology and how it changes with eccentricity (§ 4.2), an analysis of the disk-mediated eccentricity evolution and its primary drivers (§ 4.3, § 4.4), the appearance of retrograde resonances (§ 4.5), and variability signatures from retrograde accretion (§ 4.6). We discuss how our results compare with previous works and explore observational implications in § 5, and summarize in § 6. The Appendix includes an additional study on how our results depend on the specifics of the sink prescription and gravitational softening.
## 2 Retrograde Results in the Literature
The initial motivation for the study of retrograde CBD's was as an astrophysically plausible mechanism to shepherd SMBHB's from the canonical dynamical friction stalling radius (Milosavljevic and Merritt, 2003) down to the sub-parsec separations where gravitational waves could merge the binary within a Hubble time. At the time, analytic and numerical works had suggested that prograde circumbinary disks would absorb angular momentum from the binary, becoming either decretion disks (Webbink, 1976; Pringle, 1991) (that transfer mass outwards) or unstable (Lodato et al., 2009) halting binary migration.
The angular momentum transport in the prograde interaction was thought to be dominated by Lindblad resonances where the disk angular velocity \(\Omega(r)\) was equal to the combination of binary orbital frequency \(\omega_{b}\) and positive integer mode number \(m=1,2,\ldots\) as \(\Omega(r)=m\;\omega_{b}/(m\pm 1)\). However, this is only valid for \(|\Omega|>|\omega_{b}|\) such that these resonances are not present when \(\Omega\) and \(\omega_{b}\) have opposite sign. Nixon et al. (2011) explored the possibility that such retrograde CBD's could be a promising mode for shrinking an accreting SMBHB toward a GW dominated regime and coalescence. They employ a toy model where the binary-disk interaction is assumed to only occur at apocenter and pericenter where the binary semi-major axis \(a\) and eccentricity \(e\) evolve assuming inelastic collisions between the smaller, secondary binary component of mass \(M_{2}\) and Keplerian gas parcels of some mass \(\Delta M\). Their model finds that all binaries shrink their semi-major axis due to the loss of energy and angular momentum and note that eccentricity is pumped at binary apocenter and damped at pericenter. As a result, binaries that start with small eccentricity tend to remain circular because of the relative balance between effects at pericenter and apocenter; but binaries that start with eccentricity greater than the characteristic scale-height of the disk \(e\gtrsim h/r\) will have their evolution dominated by interactions at apocenter and have their eccentricity pumped until it is near unity and the binary coalesces. They compare their toy model with a limited set of 3D viscous smoothed particle hydrodynamics (SPH) simulations and confirm that secondary-gas interactions at apocenter drive eccentricity and at pericenter damp eccentricity for a binary mass ratio of \(q=1/10\). They also include one simulation with \(q=0.5\) and very small initial eccentricity and demonstrate that it stays near circular; but they do not include an example simulation of the case with initial eccentricity \(e_{0}>h/r\) where they would expect rapid eccentricity growth towards binary coalescence. Nixon et al. (2011) also point out that such an interaction ought to drive eccentricity in the interacting gas parcels and could potentially drive an eccentric CBD.
Roedig and Sesana (2014) simulated a more extensive set of eccentric binaries in retrograde CBD's using 3D SPH with \(\beta\)-cooling and self-gravity (instead of a typical viscosity). All of their simulations were for near-equal mass binaries with \(q=1/3\), and similar to Nixon et al. (2011a), they find that near-circular binaries in retrograde CBD's remain nearly circular while shrinking their semi-major axis (at a rate comparable to a prograde, \(e=0\) control). For eccentricities above a critical value \(e>0.04\), they find that both \(\{\dot{a},\,\dot{e}\}(e)\propto e\) such that eccentricity grows exponentially until the binary merges at eccentricities near unity. Additionally, they amend the toy model of Nixon et al. (2011a) based on an impact-interaction at binary apocenter to include terms \(\propto 1+q\) for when \(q\ll 1\) is not satisfied and find an overall qualitative agreement; although they note that their model slightly underestimates both the binary semi-major axis shrinking and the eccentricity growth for \(e\gtrsim 0.3\).
It is worth mentioning that one major discrepancy between the simulations of Nixon et al. (2011a) and Roedig and Sesana (2014) was that the former observed the presence of material around each binary component in the form of circum-single "minidisks", while the latter did not. Roedig and Sesana (2014) attributed this to their cooling prescription.
The first grid-based simulations of retrograde circumbinary accretion were performed by Bankert et al. (2015) in 3D Newtonian MHD. These simulations were for equal-mass binaries (\(q=1\)) fixed on a circular orbit where the grid had an inner boundary at \(r=0.8a\) (with \(a\) the binary separation) such that hydrodynamics in direct proximity of the binary components themselves were not resolved. This study used time- and azimuthally-averaged radial torque density profiles from \(r>0.8a\) to conclude that very little angular momentum is transferred gravitationally between the binary and the disk and that the dominant effect in the orbital evolution of the binary is from the direct accretion of mass and retrograde angular momentum; in general agreement with the modelling of Nixon et al. (2011a); Roedig and Sesana (2014). Bankert et al. (2015) also found that the retrograde CBD does not become eccentric and maintains its general axisymmetry unlike its prograde counterpart, but that the retrograde MHD disk around a circular binary does still exhibit spiral density perturbations despite the lack of standard Lindblad and co-rotation resonances.
Schnittman and Krolik (2015) used the conclusion from Bankert et al. (2015) that the binary evolution is dominated by the direct accretion of mass and momentum to develop an analytic model for binary separation and eccentricity evolution as a function of both binary eccentricity and mass-ratio. This model was again based around the "impact approximation" that all energy and angular momentum exchange occurs at apocenter; however, their model is built around a value of the specific angular momentum exchange measured from Bankert et al. (2015) and includes the possibility of differential accretion between the two binary components. Consistent with others, their model predicted semi-major axis shrinking for all eccentricities and mass ratios, but in contrast with previous modelling posited that eccentricity driving is maximal at small \(e\) and decreases with growing eccentricity.
Using 2D viscous hydrodynamics, Amaro-Seoane et al. (2016) studied the interaction between a binary with mass ratio \(q=1/10\) and a retrograde CBD. They considered both circular setups and configurations with an initial eccentricity \(e=0.6\). They point out that because of the large relative velocities between the secondary and retrograde gas, it is important to use a sink radius--for the removal of gas at the location of the black hole (see Sec. 3 for more precise definition and details)--that is smaller than the black hole's sphere of influence; and determine a sink radius of 2% of the secondary's Roche lobe radius to be sufficient for a \(q=1/10\) secondary. Notably, they point out that this is because gas near to the secondary black hole can exert strong gravitational torques that alter the orbit, and too large a sink radius removes this gas from the domain resulting in erroneous estimates of the evolutionary effects. Accordingly, they develop an analytic model for the evolution of binary semi-major axis and eccentricity for small mass ratios \(M_{2}\ll M_{1}\) based on both accretion and gas-dynamical friction from nearby, but unbound material. The outcome of this model depends on the steepness of the CBD density profile, but they find that most eccentric binaries will continue to have their eccentricity pumped \(e\to 1\) by the retrograde CBD. However, for near-circular binaries they find that relatively flat density profiles give eccentricity growth, but for slightly steeper profiles with \(\rho\propto r^{-n}\) and \(n>3/2\), near-circular binaries stay near-circular while shrinking their semi-major axis.
Each of the studies above has noted that because of the lack of binary-disk resonances, the retrograde CBD is not truncated away from the binary, but rather extends all the way down until it interacts directly with the binary orbit at apocenter. Nixon and Lubow (2015) pointed out, however, that while this lack of Lindblad resonances is true for circular binaries, there exist components of the potential expansion for eccentric binaries that rotate retrograde to the binary allowing for resonant torques in eccentric, retrograde binaries. They find that these torques are generally weak, but for sufficiently high eccentricity, can become strong enough to drive spiral waves in the CBD and possibly carve a cavity (similar to prograde solutions). They corroborate their analytic calculations with a suite of SPH simulations and find a critical eccentricity \(e\sim 0.6\) where retrograde Lindblad resonances become comparable to viscous torques in the CBD for \(\alpha\)-viscosity value \(\alpha=0.05\). They observe these resonances to drive spiral density waves into the CBD and to possibly carve a circumbinary-cavity (although these can be challenging to resolve in SPH simulations because of the extreme low-densities associated with circumbinary cavities).
## 3 Numerical Methods
For comparison and increased confidence in results, this study utilizes two separate grid-based Newtonian hydrodynamics codes: the moving-mesh code Disco in cylindrical coordinates (Duffell, 2016), and the GPU-accelerated code Sailfish in Cartesian coordinates (see Westernacher-Schneider et al., 2022, 2023). Both codes solve the Navier-Stokes equations for viscous, locally isothermal hydrodynamics in two dimensions. We refer the reader to an upcoming code-comparison paper, Duffell et al. (2023), for implementation-specific details as well as a direct comparison of prograde CBD solutions.
### Problem setup
We initialize the disk in two ways: In Sailfish, the vertically-averaged surface density is initially uniform \(\Sigma/\Sigma_{0}=1\) with a Keplerian rotation profile \(v_{\phi}=\sqrt{GM/\kappa}\). In Disco, a depleted central cavity of minimum density \(\delta_{0}=10^{-5}\) is imposed on the otherwise uniform initial surface density profile \(\Sigma/\Sigma_{0}=(1-\delta_{0})e^{-(2.5/r)^{12}}+\delta_{0}\), and the initial angular velocity of the gas is set by a Keplerian profile plus binary quadrupole and pressure corrections far from the binary, denoted \(\Omega_{0}\) (see, _e.g._, Miranda et al., 2017), down to \(r=a\) at which it saturates to \(\Omega=\Omega_{b}\equiv\sqrt{GM/a^{3}}\) for \(r<a\): \(\Omega(r)=\left[\Omega_{0}(r)^{-4}+\Omega_{b}^{-4}\right]^{-1/4}\). As both sets of initial conditions capture steady-state disk solutions far from the binary, and because we allow the disk to relax for \(\gtrsim 500\) binary orbits before measurement
(see § 3.3), these small differences in setup do not have a significant impact on the results and diagnostics discussed below.
The disks are subject to the time-varying gravitational potential of a binary with separation \(a\), orbital frequency \(\Omega_{b}\), total mass \(M\), and mass ratio \(q\). \(\Sigma_{0}\) is an arbitrary density scaling (=1 in code units), and \(\kappa=\sqrt{r^{2}+r_{s}^{2}}\) is the softened radial coordinate to prevent divergences at the origin (the softening radius is fiducially set to 5% of the binary separation, \(r_{s}=0.05\,a\)). The disk has constant kinematic viscosity \(\nu=10^{-3}\,a^{2}\Omega_{b}\) which induces radial inflow in the disk and implies a steady-state accretion rate at infinity \(\dot{M}_{0}=3\pi\nu\Sigma_{0}\).
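For reference, a sketch of these initial profiles and the implied steady-state accretion rate in code units (\(GM=a=\Omega_{b}=1\)); in the retrograde setup the azimuthal velocity carries the opposite sign to the binary's orbital motion, which we leave implicit here:

```python
import numpy as np

NU = 1e-3        # kinematic viscosity, units of a^2 * Omega_b
SIGMA0 = 1.0     # surface density scale (code units)
DELTA0 = 1e-5    # floor density of the initial cavity (Disco setup)

def sigma_init(r):
    """Disco initial surface density: uniform disk with a depleted cavity."""
    return (1.0 - DELTA0) * np.exp(-(2.5 / r) ** 12) + DELTA0

def omega_init(r):
    """Initial angular speed: Keplerian far from the binary, saturating to
    the binary frequency Omega_b = 1 inside r < a (the quadrupole and
    pressure corrections to Omega_0 are omitted in this sketch)."""
    omega_0 = r ** -1.5        # Keplerian Omega for GM = 1
    omega_b = 1.0
    return (omega_0 ** -4 + omega_b ** -4) ** -0.25

MDOT0 = 3.0 * np.pi * NU * SIGMA0  # steady-state accretion rate at infinity
```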
We consider the disk to be in the thin-disk limit with Mach number \(\mathcal{M}=10\) implying a disk aspect ratio \(h/r\sim\mathcal{M}^{-1}=0.1\). The locally isothermal condition is applied via the sound speed definition \(c_{s}^{2}=-\Phi_{g}/\mathcal{M}^{2}\) with \(\Phi_{g}\) the binary potential
\[\Phi_{g}=\Phi_{1}+\Phi_{2}\,,\qquad\Phi_{j}=-\frac{GM_{j}}{\epsilon_{j}}, \tag{1}\]
and \(\epsilon_{j}\) the smoothed distance to binary component \(j\in\{1,2\}\). For this study we only consider equal mass binaries \(M_{1}=M_{2}=M/2\) (or equivalently, binary mass ratio \(q=1\)). Moreover, we take the disk mass \(M_{d}\sim\Sigma_{0}a^{2}\) to be much smaller than the total binary mass \(M_{d}\ll M\) such that the timescale for altering the binary orbit is long compared to the orbital time, and that we may ignore the disks self-gravity (_e.g._, the disk Toomre parameter \(Q\sim\mathcal{M}^{-1}(M/M_{d})\gg 1\)).
### Source terms and boundary conditions
The mass and momentum conservation equations solved in both codes can be written as
\[\partial_{t}\Sigma+\nabla\cdot(\Sigma\mathbf{v})=S_{\Sigma} \tag{2}\] \[\partial_{t}(\Sigma\mathbf{v})+\nabla\cdot(\Sigma\mathbf{v}\otimes\mathbf{v}+P\mathbf{I}-\tau)=\mathbf{S_{J}}+\mathbf{F_{g}} \tag{3}\]
where \(\mathbf{v}\) is the mid-plane fluid velocity, \(P=c_{s}^{2}\Sigma\) is the vertically averaged gas pressure, \(\tau\) is the viscous stress tensor, \(\mathbf{F_{g}}\) is the gravitational force density from the potential in Eq. 1, and \(S_{\{\Sigma,J\}}\) are mass and momentum sinks, respectively. The viscous stress tensor is calculated as
\[\tau_{ij}=\nu\Sigma(\nabla_{i}v_{j}+\nabla_{j}v_{i}-\delta_{ij}\nabla_{k}v_{k}) \tag{4}\]
with the covariant derivatives taken in the relevant coordinate system. The sink terms are included to mimic the accretion of material onto each binary component because we do not resolve our solutions down to the physical accretion boundaries. The mass and momentum sinks are given respectively as
\[S_{\Sigma} =-\gamma\Omega_{b}\Sigma\sum_{i}w_{i} \tag{5}\] \[\mathbf{S_{J}} =-\gamma\Omega_{b}\Sigma\sum_{i}\mathbf{v}_{i}^{\ast}\,w_{i} \tag{6}\]
where \(\gamma\) is a dimensionless sink-rate which--unless specified otherwise--is set to 1, and \(\Omega_{b}=\sqrt{GM/a^{3}}\) is the Keplerian orbital frequency of the binary. \(w_{i}\) is a window function defining the strength of the sink for a gas parcel distance \(r_{i}\) away from black hole \(i\) in terms of some characteristic sink radius that is fiducially set equal to the softening radius \(r_{s}\):
\[w_{i}=\exp\left[-(r_{i}/r_{s})^{4}\right]\,. \tag{7}\]
Different sink behaviors are achieved through calculation of the adjusted gas-velocity \(\mathbf{v_{i}^{\ast}}\) associated with each fluid element removed. For nearly all runs presented in this study we have adopted "torque-free" (Dempsey et al., 2020)--or _spinless_--sinks such that no spin angular momentum is accreted by each binary component--only orbital angular momentum. To accomplish this, the adjusted velocity is calculated as
\[\mathbf{v_{i}^{\ast}}=\left[(\mathbf{v}-\mathbf{v_{i}})\cdot\mathbf{\hat{r}_{i}}\right]\mathbf{\hat{r}_{i}}+\mathbf{v_{i}}, \tag{8}\]
where \(\mathbf{v}\) is the gas velocity in the inertial frame, \(\mathbf{v_{i}}\) is the binary component velocity, and \(\mathbf{\hat{r}_{i}}\) is the radial unit-vector in a coordinate system centered on binary component \(i\). In the appendix we briefly discuss a separate "acceleration-free" sink prescription (also sometimes referred to as a "standard sink"; Dittmann and Ryan 2021) in which \(\mathbf{v_{i}^{\ast}}=\mathbf{v}\) and the momentum sink reduces to \(\mathbf{S_{J}}=S_{\Sigma}\mathbf{v}\).
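A vectorized sketch of both sink prescriptions applied around a single binary component (the array layout and names are our assumptions):

```python
import numpy as np

GAMMA_SINK = 1.0   # dimensionless sink rate
R_SINK = 0.05      # sink radius in units of a (= softening radius)

def window(r_i):
    """Sink window function (Eq. 7)."""
    return np.exp(-(r_i / R_SINK) ** 4)

def sink_terms(sigma, v_gas, pos_gas, pos_bh, v_bh, omega_b=1.0,
               torque_free=True):
    """Mass and momentum sink densities for one component (Eqs. 5-6, 8).

    sigma   : (N,) surface densities of the gas cells
    v_gas   : (N, 2) gas velocities in the inertial frame
    pos_gas : (N, 2) gas cell positions
    pos_bh  : (2,) black hole position;  v_bh : (2,) black hole velocity
    """
    d = pos_gas - pos_bh
    r_i = np.linalg.norm(d, axis=1)
    r_hat = d / np.maximum(r_i, 1e-12)[:, None]
    w = window(r_i)
    if torque_free:
        # keep only the radial component of the relative velocity so the
        # sink removes no spin angular momentum about the black hole
        v_star = np.sum((v_gas - v_bh) * r_hat, axis=1)[:, None] * r_hat + v_bh
    else:
        v_star = v_gas         # "acceleration-free" / standard sink
    s_sigma = -GAMMA_SINK * omega_b * sigma * w
    s_mom = s_sigma[:, None] * v_star
    return s_sigma, s_mom
```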
In both codes the outer boundary is set to enforce the steady-state accretion rate \(\dot{M}_{0}\) so that the solution mimics that of an infinite accretion disk. In Disco this is accomplished with outer ghost-cells fixed to the setup initial conditions. In Sailfish this is attained via an additional "buffer" source term that drives the solution back towards the initial condition. This "buffer" prevents artifacts from the square domain propagating into the solution. The details of the buffer prescription can be found in § 3 of Westernacher-Schneider et al. (2022).
### Adiabatic eccentricity variation
The primary calculations presented in this study fix the binary mass ratio \(q=1\) and slowly vary the binary eccentricity from \(e_{0}=0\) up to \(e_{f}=0.8\). In the process we measure the rate of change of both the binary semi-major axis \(\dot{a}(e)\) and eccentricity \(\dot{e}(e)\) as in D'Orazio and Duffell (2021) (see also Duffell et al. (2020) for the same procedure performed on \(q\) at fixed-\(e\)). The runs that vary the binary eccentricity are initialized from the output of a simulation with eccentricity fixed at \(e=0\) for 500 binary orbits. The eccentricity growth is performed linearly over \(n\) binary orbits as
\[e(t)=e_{0}+\frac{e_{f}}{2\pi\,n}\,t\,. \tag{9}\]
We take \(n=5\times 10^{3}\) and \(10^{4}\) and have verified that further increasing \(n\) does not meaningfully change our results. The time rate of change of the binary orbital elements \(\dot{a}\), \(\dot{e}\) are calculated in the same way as is detailed in D'Orazio and Duffell (2021), except with additional full accounting for all momentum accreted by the sinks (the mass deposition effect is included as previously).
The fiducial grid resolution in Salifish is taken to be \(\delta=0.0067a\). In Disco, the grid is hybrid-logarithmic in radius such that the resolution is \(\delta=0.0127a\) at \(r=0.5\,a\), slightly higher inside \(r<0.5\,a\), and decreases as one moves toward the outer boundary at \(r=50a\). In Salifish, the resolution is constant everywhere and the outer boundary is taken to be \(r=10a\) because initial simulations (and the previous numerical work discussed in SS2) showed that the retrograde CBD becomes near completely axisymmetric at \(r>\mathrm{few}\times a\), with possible exception at large eccentricity to be discussed later.
## 4 Numerical Results
### Orbital evolution: \(\dot{a}(e)\) and \(\dot{e}(e)\)
Figure 1 shows calculations of \(\dot{a}(e)\) (_top panel_) and \(\dot{e}(e)\) (_bottom panel_) from the simulations that vary eccentricity adiabatically from \(e=0\) to \(e=0.8\). Results from Disco are displayed in green and results from Sailfish are given in pink. Both codes find that the retrograde CBD shrinks the binary semi-major axis \(\dot{a}<0\) and pumps
the binary eccentricity \(\dot{e}>0\) for all eccentricities. The shrinking of the binary orbit and pumping of the binary eccentricity is in general agreement with previous studies of binary evolution in retrograde CBD's, but unlike Nixon et al. (2011) and Roedig & Sesana (2014) these simulations show that near-circular binaries are not in fact driven back towards circularity, but rather have their eccentricity driven up by the retrograde CBD. The eccentricity driving rate grows linearly towards a maximum of \(de/d\log M\approx 3\) at \(e\approx 0.15\) and settles to a near-constant growth rate \(de/d\log M\approx 2.25\) by \(e\gtrsim 0.4\). The shrinking of the binary \(\dot{a}\) shows almost no dependence on binary eccentricity and always retains a value \(d\log a/d\log M\approx-10\).
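For clarity, the dimensionless rates quoted above can be formed from the raw measurements as in the following sketch; normalizing by the binary's relative accretion rate \(\dot{M}/M\) is our assumed convention:

```python
def dimensionless_rates(adot, edot, a, mdot_over_m):
    """Convert raw orbital-element rates into d(log a)/d(log M) and
    de/d(log M) by normalizing with the binary's relative accretion
    rate Mdot/M (assumed convention)."""
    return (adot / a) / mdot_over_m, edot / mdot_over_m
```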
To sanity check these results we have also run a series of high-resolution \(\delta=0.005\,a\) fixed-eccentricity runs in Sailfish shown as grey crosses. These runs were done for 2000 binary orbits and we report the rate of change to the orbital elements over the final 500 orbits. These show near exact agreement with the Sailfish variable eccentricity run and very close agreement with the Disco run. At high eccentricities \(e>0.7\) we observe a notable growth in the fluctuations of \(\dot{a}\) and \(\dot{e}\) in the Sailfish runs alone. We attribute this growth in variation in one code and not the other to the fact that the Cartesian grid of Sailfish evolves the fluid linear momenta, and thus, angular momentum is not perfectly conserved. Disco by contrast explicitly evolves and conserves the fluid angular momentum by construction. We have confirmed this effect by lowering the resolution of the Sailfish run and verifying that the variations become more extreme, that they set in at lower eccentricity, and that the solutions begin to diverge at large eccentricities.
Apart from these variations, both simulations show a notable increase in variation at \(e\sim 0.55\). We attribute this growth in variability to the emergence of a two-orbit periodic switching in the circumbinary disk structure (discussed further in § 4.2). This growth in variability also appears to be coincident with the emergence of two-armed spiral density waves that extend from the CBD cavity edge, suggesting the possible realization of retrograde circumbinary resonances in accordance with the predictions of Nixon & Lubow (2015).
### Disk morphology
Figure 2 shows a single snapshot for a circular binary (\(e=0\)) after 2000 orbits that highlights many of the general features of retrograde solutions. The colormap shows the log-density of the accretion flow, and the overlaid arrows show the direction of the gas velocity. Similar to previous studies, the circumbinary disk extends all the way to the binary orbital radius, and remains almost completely axisymmetric everywhere except for the inner-most \(r<a\) (in contrast with the large, lopsided cavities observed in circular, prograde scenarios). The binary carves a low-density cavity inside of its
Figure 1: Time rates of change of binary semi-major axis (_top_) and eccentricity (_bottom_) from a retrograde circumbinary disk as a function of binary eccentricity. Results from Sailfish are shown in pink and from Disco in green. The grey crosses show single, fixed-eccentricity runs computed with Sailfish for comparison to the results from the eccentricity-varying runs.
Figure 2: Snapshot of the steady-state flow pattern for a circular binary in a retrograde CBD. The arrows show the direction of the fluid velocity. The binary is orbiting counter-clockwise. The minidisks are retrograde in accordance with the bulk flow. The minidisks have persistent wakes that also feed low-angular momentum material into the standing-bridge between the binary components.
orbital radius, but does not entirely empty it of material, and rather is always orbiting in some ambient medium. The binary components capture retrograde material into circum-single "minidisks" that retain angular momentum opposite to the orbital angular momentum of the binary. Rather than becoming perfectly circular, the minidisks have bow-shock-like structures on their leading edge from the ram-pressure of incident counter-rotating material, as well as trailing "tadpole wakes" of low angular momentum material that falls almost radially toward the binary barycenter in both directions. This creates a persistent standing "retrograde-bridge" of material between the binary components. This retrograde-bridge is quasi-steady--it does wobble slightly--in circular retrograde solutions, but becomes more transitory for eccentric binaries as will be demonstrated later. We also note that all future surface density plots will use the same colorbar indicated in Figure 2 and the same \([-2a,2a]\) Cartesian extent (unless otherwise indicated).
Figure 3 displays snapshots of the disk log-surface-density for eccentricities \(e=\{0.0,0.1,0.3,0.5,0.7\}\) with Cartesian extent \([-3a,3a]\). The _top two rows_ sample from the eccentricity varying runs while the _bottom row_ uses results of the fixed-eccentricity runs. All snapshots are taken at binary pericenter. There is generally very good agreement in disk morphology between the eccentricity varying runs (Disco sweep and Sailfish sweep; _rows 1 and 2_) and with the fixed-eccentricity runs (_row 3_). The only discernible differences are in slight variations to the minidisk density and the exact shape of the retrograde-bridge between the binary components. We investigate the sensitivity of these wakes and bridges to numerical choices in the Appendix.
The time-dependence of the disk evolution has three separate regimes: (_i_) steady-state in the co-rotating frame near circularity, (_ii_) the driving of axisymmetric density waves for eccentric binaries \(0.025\leq e\leq 0.55\), and (_iii_) the forcing of non-axisymmetric spiral density waves at large eccentricities \(e\gtrsim 0.55\). Circular binaries (_i_) with \(e\lesssim 0.025\) resemble Figure 2 and show no phase dependence in their evolution. The systems in regime (_ii_) acquire a "breathing-mode" that drives axisymmetric density waves into the CBD. This process is shown in Figure 4. At pericenter, the binary has carved a fully-depleted, axisymmetric cavity. As it approaches apocenter (true anomaly \(0<f<\pi\); hereon we refer to this as "binary waxing"), the minidisks circularize in the relative vacuum of the cavity, but CBD material encroaches on the cavity as the orbit expands and tidally redirects material through the domain center. This redirected material from each component collides forming the retrograde-bridge, and by binary apocenter the cavity has been replenished with low-density gas. The ram-pressure of this ambient material disrupts the minidisks, compressing the leading edge and stripping off a spiral wake. As the binary accelerates towards pericenter (\(\pi<f<2\pi\); "binary waning"), it expels the material inside of its orbit driving an axisymmetric density ring into the CBD. This ring can be seen forming in the final panel of Figure 4 (\(f=3\pi/2\)), as circularized in the first panel (at pericenter), and propagating through the CBD as a sound wave in the second and third panels. We illustrate \(e=0.3\) as an example, but all eccentricities \(0.025\leq e\leq 0.55\) were observed to have the same phase-dependent behavior. We note that the driving of an axisymmetric density wave sets in at very small eccentricity \(e\approx 0.025\), but the carving of a depleted cavity around pericenter occurs more gradually. A fully-depleted cavity is effectively formed once per orbit starting at \(e\approx 0.1\).
Binaries in regime (_iii_) become qualitatively different as they acquire a two-orbit periodicity in the phase-dependence of their flow. These two orbits are illustrated in Figure 5. Similar to regime (_ii_), at the first pericenter (selected arbitrarily, but shown in the top-left panel of Figure 5) the binary has carved a fully-depleted, mostly axisymmetric cavity with the exception of two weak spiral arms extending from the cavity wall. In the first orbit (_top row_), during binary waxing, the minidisks similarly circularize and material is tidally re-oriented into the depleted cavity. At apocenter, the cavity is replenished with gas, the retrograde-bridge has formed, and the minidisks are strongly
Figure 3: Density maps at \(e=\{0.0,0.1,0.3,0.5,0.7\}\) from the eccentricity varying runs with both Disco (_top row_) and Sailfish (_middle row_) as well as the fixed-eccentricity runs (_bottom row_), all taken at pericenter.
perturbed from the collisions with encroaching CBD material. In the approach to pericenter, the accelerating binary again expels gas from within its orbit, but instead of driving an axisymmetric density wave (as in the lower \(e\) case), it propels two spiral density waves that propagate into the disk. Because of this, at second-pericenter (bottom left panel), the binary has carved a depleted cavity, but the cavity wall is no longer circular as it is dominated by the \(m=2\) spirals. In the second orbit (_bottom row_), the same processes occur, but the non-axisymmetric cavity at pericenter is less efficient at refilling the cavity during binary waxing. The minidisks are less perturbed at second-apocenter (in the bottom row) as a result of these lower cavity densities; and while the binary still creates a two-armed spiral structure upon expelling its intra-orbit material during binary waning, the lower densities weaken the response, and the resulting pericenter cavity is left essentially circular. This two-orbit periodicity also appears as power in the accretion rate time series and periodogram (see Figures 10 and 11).
We emphasize that at all eccentricities, these 2D Newtonian, isothermal hydrodynamics simulations show the formation of binary minidisks and a retrograde-bridge. The presence of these minidisks--and their visible asymmetries throughout the binary orbit--underscores the importance of including the binary and the centralmost \(r<a\) regions of the accretion flow in order to accurately ascertain the gravitational forces on the binary and its resultant orbital evolution.
### Gravity vs. accretion
A central assumption of most previous studies of orbital evolution from retrograde CBDs was that the evolution is dominated by the direct accretion of both mass and angular momentum due to collisions between retrograde fluid elements and the binary at apocenter; the exception is Amaro-Seoane et al. (2016), who pointed out that gas near the binary orbit can exert very strong gravitational forces before physically accreting. In order to quantify the relative effects of gravitational and accretion forces, we separated both \(\dot{a}\) and \(\dot{e}\) into their components due solely to gravitational forces and those due only to the accretion of mass and momentum. Figure 6 shows this decomposition by plotting the total \(\dot{a}\) (_purple_) and \(\dot{e}\) (_orange_), and the components due to gravitational forces alone \(\dot{a}_{\rm grav}\), \(\dot{e}_{\rm grav}\) (illustrated as _light-purple_ and _light-orange_). The top panel shows this decomposition for the Disco run and the bottom panel shows it for the Sailfish simulation. The primary conclusion from these breakdowns is that--contrary to previous studies and models--gravitational forces are the dominant component of the binary's
Figure 4: Phase-dependent disk morphology for an \(e=0.3\) orbit at four values of the true anomaly \(f=\{0,\pi/2,\pi,3\pi/2\}\). Overlaid arrows show the direction of the fluid velocity.
Figure 5: Phase-dependent disk morphology for two contiguous \(e=0.7\) orbits at four values of the true anomaly showing the two-orbit periodicity in the flow behavior. The first row illustrates the first orbit in this two-orbit behavior, and the second row shows the second. Overlaid arrows show the direction of the fluid velocity.
orbital evolution for both \(\dot{a}\) and \(\dot{e}\), as their components due to gravity alone near perfectly describe the full orbital evolution curves.
The effect from the physical accretion of mass and momentum is almost entirely negligible for the eccentricity evolution in both codes and only represents a \(\sim 10\%\) contribution to the change in semi-major axis. We posit that this--just as was pointed out in Amaro-Seoane et al. (2016) for \(q=1/10\) binaries--is because material captured from the CBD at binary apocenter is not directly added to the binary via accretion, but rather, is transferred onto orbits around the individual binary components. Moreover, these circum-single, "minidisk" structures are not symmetric around each binary component, but instead have small wakes that trail each component, exerting gravitational torques on and removing energy from the binary orbit. Even larger wakes and non-symmetric features are formed when the minidisks are partially disrupted from the impact with the CBD at each apocenter passage, as discussed in §4.2.
To compare the extent to which gravitational forces from this intra-orbit material (material within \(r\leq a\)) dominate over those from the outer CBD (\(r>a\)), Figure 7 shows the average magnitudes of the unit-less torque \(|(\mathbf{r_{b}\times f_{g}})/\sqrt{GMa(1-e^{2})}|\) and power \(|(\mathbf{f_{g}\cdot v_{b}})/(GM/2a)|\), with \(r_{b}\) the binary separation, exerted on the binary from each of the fixed-eccentricity comparison runs. The average torque is shown by crosses and the average power by circles. The components from the intra-orbit material with \(r\leq a\) are given in red and those from the outer CBD in blue. At all eccentricities, both the gravitational torque and power exerted on the binary are dominated by the intra-orbit material. Further, the components of the average torque and power from the outer CBD (\(r>a\)) are almost entirely negligible at eccentricities below the threshold for retrograde resonances to drive spiral density waves into the CBD, \(e\lesssim 0.55\). At \(e=0.7\) we once again see the effect of persistent spiral density waves in the CBD as the average torque from the outer disk is no longer negligible--and instead accounts for approximately 25% of the total torque on the binary.
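These diagnostics are straightforward to reproduce from simulation output. The sketch below is a minimal version, assuming time series of the binary separation vector, relative velocity, and gas gravitational force per unit reduced mass; the array names and shapes are our own, not Disco or Sailfish output conventions.

```python
import numpy as np

def dimensionless_torque_power(r_b, v_b, f_g, G, M, a, e):
    """Unit-less gravitational torque |r_b x f_g| / sqrt(G M a (1 - e^2))
    and power |f_g . v_b| / (G M / 2a) exerted on the binary.

    r_b, v_b, f_g : (N, 2) arrays of binary separation, relative velocity,
    and gas gravitational force per unit reduced mass (assumed inputs).
    """
    torque = r_b[:, 0] * f_g[:, 1] - r_b[:, 1] * f_g[:, 0]  # (r_b x f_g)_z
    power = np.einsum("ij,ij->i", f_g, v_b)                 # f_g . v_b
    ell = np.sqrt(G * M * a * (1.0 - e**2))                 # ang. mom. scale
    eps = G * M / (2.0 * a)                                 # energy scale
    return np.abs(torque) / ell, np.abs(power) / eps
```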
### Eccentricity driving
A primary component of past modelling for binaries accreting from retrograde disks was the assumption that eccentricity pumping is focused around binary apocenter (and damping--if included--occurs around pericenter). We examine this hypothesis by plotting the averaged change in binary eccentricity \(\Delta e=\dot{e}\delta t\) per binary phase--the orbital mean anomaly--with \(\dot{e}\) the instantaneous eccentricity driving rate and \(\delta t\) the associated timestep in code units. We show the total effect from both gravitational forces and the accretion of mass and momentum as solid black curves as well as the decomposed contributions from gravitational torque (_orange dashed_ curves) and gravitational power (_green dash-dotted_ curves) given respectively as
\[\dot{e}_{\rm power}=-\frac{\dot{\epsilon}}{\epsilon}\left(\frac{1-e^{2}}{2e}\right)=\frac{\mathbf{f_{g}}\cdot\mathbf{v_{b}}}{GM/a}\left(\frac{1-e^{2}}{e}\right) \tag{10}\]

\[\dot{e}_{\rm torque}=-2\frac{\dot{\ell}}{\ell}\left(\frac{1-e^{2}}{2e}\right)=-\frac{\mathbf{r_{b}}\times\mathbf{f_{g}}}{\sqrt{GMa(1-e^{2})}}\left(\frac{1-e^{2}}{e}\right), \tag{11}\]
such that \(\dot{e}_{\rm grav}=\dot{e}_{\rm power}+\dot{e}_{\rm torque}\) (see D'Orazio & Duffell 2021 Equations 6-8 for more detailed discussion of these terms). These phased eccentricity driving curves are shown in Figure 8 for four values of eccentricity \(e=\{0.02,0.1,0.3,0.7\}\) to encapsulate each of the three morphologic disk regimes: \(e=0.02\) in regime (\(i\)), \(e=0.1,\ 0.3\) in regime (\(ii\)), and \(e=0.7\) in regime (\(iii\)) (see SS4.2). Of primary note, eccentricity pumping is found to occur during binary waxing (pericenter \(\rightarrow\) apocenter) for all eccentricities, peaking near a mean anomaly of \(\pi/2\) (or equivalently at \(P_{b}/4\)) at small eccentricities and shifting nearer to pericenter with growing \(e\). Eccentricity is correspondingly damped during binary waning (apocenter \(\rightarrow\) pericenter), and the effect at pericenter and apocenter is minimal at all eccentricities considered. The net integrated effect over one full orbit yields the \(\dot{e}_{\rm grav}\) curves shown in Figure 6.
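For concreteness, Equations (10) and (11) can be evaluated directly from a measured gas force. The sketch below assumes the same 2D input conventions as above and a nonzero eccentricity; it is illustrative, not code from either simulation.

```python
import numpy as np

def edot_decomposition(r_b, v_b, f_g, G, M, a, e):
    """Power (Eq. 10) and torque (Eq. 11) contributions to the gravitational
    eccentricity driving rate; edot_grav = edot_power + edot_torque."""
    power = np.einsum("ij,ij->i", f_g, v_b)                 # f_g . v_b
    torque = r_b[:, 0] * f_g[:, 1] - r_b[:, 1] * f_g[:, 0]  # (r_b x f_g)_z
    edot_power = power / (G * M / a) * (1.0 - e**2) / e
    edot_torque = -torque / np.sqrt(G * M * a * (1.0 - e**2)) * (1.0 - e**2) / e
    return edot_power, edot_torque
```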
One major distinguishing characteristic between regime (\(i\)) and regimes (_ii_, _iii_) is that in regime (\(i\)) the binary is always orbiting through ambient intra-orbit material and as a result forms persistent tadpole wakes that trail each binary component. In regimes (_ii_ & _iii_), the binary expels all intra-orbit material once per orbit during its waning phase such that it evolves in a fully-depleted cavity for at least half its orbit until material is once again redirected within the orbital radius around apocenter (see Figures 4 & 5). We identify two separate eccentricity driving modes associated with these disk morphologic regimes: (1) a _wake-driven_ mode when the binary evolves through persistent intra-orbit material and never carves a fully depleted cavity (regime \(i\); see Figure 2). The associated eccentricity driving is shown in the top panel of Figure 8 for \(e=0.02\), where torques from the tadpole wakes pump eccentricity at all binary phases. The associated power from the wakes always acts with opposite effect, damping the binary's eccentricity; but the torque-mediated pumping wins out in aggregate.
The second eccentricity driving mode, (2) a _cavity-mediated_ mode, occurs once the binary has begun carving a fully depleted cavity and orbits in relative vacuum for a significant portion of its orbit (regimes _ii_ & _iii_). This mode is observed in the \(e=0.1,\ 0.3,\ 0.7\) panels of Figure 8 where the total effect on \(\Delta e\) is dominated by, and near perfectly tracks, the power contribution. Once the binary begins
Figure 6: Orbital evolution from Disco (_top_) and Sailfish (_bottom_) separated into the total effect and the component from gravitational forces alone. The semi-major axis and eccentricity evolution are both dominated by the gravitational pull of the gas.
carving a depleted cavity, gravitational power continues to damp eccentricity during binary waning as the binary expels its intra-orbit material. However, the power effect switches to eccentricity pumping during waxing, when the binary and its minidisks evolve through relative vacuum. This transition occurs gradually, but has mostly set in by \(e\approx 0.1\). The effect of torque on \(\Delta e\) during this mode is negligible at all phases, except near apocenter when the binary has refilled its cavity and temporarily evolves through ambient material with the associated tadpole wakes. This effect, though, becomes less significant with growing eccentricity.
We attribute the initially small eccentricity driving at small eccentricities \(e\sim\mathcal{O}(10^{-2})\) in Figure 1 to the small net difference between the eccentricity-changing effects of gravitational torque and power from the wakes before the binary begins to carve a depleted cavity once per orbit. The slow linear growth in \(\dot{e}\) with \(e\) reflects the gradual transition to a cavity-carving binary; and once the binary is effectively depleting its orbit of material around pericenter--despite the amplitude of all eccentricity-modifying effects decreasing with \(e\)--the net balance remains relatively constant with \(e\).
### Retrograde resonances
As discussed in §2, Nixon & Lubow (2015) pointed out that eccentric binaries have components of the potential expansion that rotate retrograde to the binary, admitting retrograde Lindblad resonances; and their simulations find that these resonances become strong enough to drive persistent spiral density waves into the disk at eccentricities \(e\geq 0.6\). The simulations presented in this study also show evidence for retrograde resonances at eccentricities \(e\gtrsim 0.55\). Figure 3 shows the first signs of non-axisymmetric density waves at \(e=0.5\), and at \(e=0.7\) reveals the presence of two-armed, \(m=2\) spiral density waves originating at the CBD inner edge and propagating into the CBD.
In traditional linear analyses of resonant torques on a CBD, the forces associated with a harmonic mode \(m\) of the gravitational potential will drive spiral density waves into the CBD with the same azimuthal mode number. Moreover, for equal-mass binaries, odd components of the potential expansion disappear, so only even-\(m\) resonant torques should act on the CBD. To check indirectly for the presence of such resonances, we calculate time series of the azimuthal density modes \(m=\{1,2,3,4\}\) as
\[\Psi_{m}(t)=\int_{a}^{r_{\rm out}}dr\int_{0}^{2\pi}r\,d\phi\,\Sigma(r,\phi,t)\,e^{im\phi}. \tag{12}\]
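A minimal evaluation of Equation (12) on a uniform polar grid might look as follows; the grid layout and variable names are assumptions on our part.

```python
import numpy as np

def azimuthal_mode_amplitudes(sigma, r, phi, a, modes=(1, 2, 3, 4)):
    """Azimuthal density mode amplitudes Psi_m of Eq. (12).

    sigma : (Nr, Nphi) surface-density snapshot on a polar grid
    r     : (Nr,) cell-center radii;  phi : (Nphi,) uniform azimuths
    Only radii outside the binary orbit (r >= a) contribute.
    """
    mask = r >= a
    dr = np.gradient(r)[mask]           # radial cell widths
    dphi = phi[1] - phi[0]              # uniform azimuthal spacing assumed
    amps = {}
    for m in modes:
        kernel = np.exp(1j * m * phi)[None, :]               # e^{i m phi}
        integrand = sigma[mask] * kernel * r[mask][:, None]  # r Sigma e^{imphi}
        amps[m] = np.abs(np.sum(integrand * dr[:, None]) * dphi)
    return amps
```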
Figure 9 shows the time-averaged strength of these azimuthal density modes \(\tilde{\Psi}_{m}\) from the fixed-eccentricity runs. At low eccentricities \(e\leq 0.1\) the disk is almost perfectly axisymmetric, but an \(m=2\)
Figure 8: Averaged change in eccentricity \(\Delta e\) per binary phase (mean anomaly) at four eccentricities \(e=\{0.02,0.1,0.3,0.7\}\). The top panel—representative of small eccentricities \(e\sim\mathcal{O}(10^{-2})\) in a _wake-driven_ mode—shows different phase-dependent eccentricity driving behavior than all other eccentricities \(e\gtrsim 0.1\) in a _cavity-mediated_ mode.
Figure 7: Magnitude of gravitational torque and power for fixed-eccentricity runs with Sailfish. We see that gravitational forces from the inner-most region of the flow dominate the binary evolution at all eccentricities. At high eccentricities \(e\gtrsim 0.5\), the binary drives spiral wakes into the outer CBD, leading to growth in the torque component from \(r>a\); although this component remains subdominant by a factor of a few.
structure begins to appear at \(e=0.3\). This is not visually apparent in Figure 3, but we posit that it is the result of asymmetries in the innermost CBD that are momentarily created at binary apocenter (_e.g._, in the third panel--at apocenter--in Figure 4). At eccentricities \(e\geq 0.5\), the \(m=2\) density mode continues to grow with binary eccentricity in accordance with the observation of spiral density waves. Of note, for these larger eccentricities \(e\geq 0.5\), an \(m=4\) mode also appears--but it is weaker than the lower-\(m\) mode--and there are no odd-\(m\) azimuthal density perturbations, in alignment with linear theory.
### Accretion rate
To determine periodicity structure in the accretion rate, we compute a 2D periodogram of the accretion rate time series as measured onto both binary components. To do so we utilized the entire \(10^{4}\) binary orbit accretion rate time series from Disco, spanning from \(e=0-0.8\) (similar results are found with Sailfish, see below). As in Duffell et al. (2020), we convolve a Gaussian window in time (and so also binary eccentricity) with the inner product of the accretion rate and Fourier vector with angular frequency \(\omega\) in frequency space,
\[\mathcal{P}(e,\omega)=\frac{1}{\sqrt{2\pi\sigma^{2}}}\int_{t(e_{0})}^{t(e_{f})}\mathrm{e}^{-\frac{1}{2}\frac{(t(e)-\tau)^{2}}{\sigma^{2}}}\,\dot{M}(\tau)\,\mathrm{e}^{i\omega\tau}\,d\tau, \tag{13}\]
where eccentricity dependence comes through \(t(e)\), the inverse of Eq. (5). We choose \(\sigma=30P_{b}\) and compute Eq. (13) over a \(300\times 300\) grid of values of \(e\) ranging from \(0.0\) to \(0.8\) and \(\omega\) ranging from \(0.1\Omega_{b}\) to \(2.5\Omega_{b}\), corresponding to variability timescales between \((0.1-2.5)P_{\mathrm{orb}}\).
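A direct (unoptimized) sketch of this windowed periodogram is given below, assuming the accretion-rate time series and a callable inverse sweep t(e); all names are illustrative, not Disco output conventions.

```python
import numpy as np

def windowed_periodogram(t, mdot, t_of_e, e_grid, omega_grid, sigma):
    """Gaussian-windowed periodogram of Eq. (13) over an (e, omega) grid.

    t, mdot : accretion-rate time series; t_of_e : callable mapping binary
    eccentricity to time (the inverse of the eccentricity sweep).
    """
    dt = np.gradient(t)
    norm = 1.0 / np.sqrt(2.0 * np.pi * sigma**2)
    power = np.empty((len(e_grid), len(omega_grid)))
    for i, e in enumerate(e_grid):
        window = np.exp(-0.5 * ((t_of_e(e) - t) / sigma) ** 2)
        for j, w in enumerate(omega_grid):
            power[i, j] = np.abs(norm * np.sum(window * mdot
                                               * np.exp(1j * w * t) * dt))
    return power
```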
Figure 10 plots contours of the log-base-10 power computed over this grid. The most prominent power is concentrated in narrow bands at the orbital time and its higher frequency harmonics. In addition, a wider band is concentrated at twice the binary orbital period (half the orbital frequency), appearing for eccentricities above \(e\approx 0.55\). The latter appearance of a twice-orbital periodicity is notably coincident with the appearance of retrograde resonances as argued in §4.5. The twice-orbital periodicity derives from the alternating process of cavity-clearing described in §4.2.
In Figure 11 we further explore the time series accretion rates for representative binary eccentricities. In this figure we plot the Disco results in thick black lines and also the Sailfish results in grey lines for comparison. We first describe the Disco accretion-rate time series. For \(e=0\), the accretion rate is steady. As \(e\) increases from zero, the accretion rate becomes strongly modulated at the orbital period with peaks just following apocenter, and with amplitude growing with \(e\) until \(e\sim 0.2\). From \(e\sim 0.2-0.5\), the amplitude of accretion rate modulations saturates to \(\sim 50\%\) of the mean, and the time of peak accretion rate drifts from just after apocenter towards just before pericenter with increasing \(e\). At \(e\gtrsim 0.3\), a second peak in the accretion rate develops, occurring just after apocenter and before the other peak, which, for \(0.3\lesssim e\lesssim 0.5\), occurs just before pericenter. For \(e\gtrsim 0.55\), the twice-per-orbit periodicity appears, and the amplitude of modulations begins to again grow with \(e\). For \(e=0.6\) and \(e=0.7\), the twice-orbital-period periodicity manifests as the double-peaked accretion rate modulation becoming a factor of \(\sim 2\) higher every other orbit, but otherwise similar in shape, due again to the alternating cavity structure seen in Figure 5 and described in §4.2. For \(e=0.8\), the higher accretion-rate modulation occurring every other orbit is punctuated by a large, factor-of-3.5 accretion rate spike just following apocenter.
The Sailfish accretion-rate time series are similar to the Disco results in periodicity structure and in the magnitude of accretion rate modulations, but exhibit a number of differences. Primarily, the double-peaked structure does not manifest until \(e\gtrsim 0.6\) for the Sailfish runs. Rather, at intermediate eccentricities, what appears as a double-peaked structure in the Disco runs appears instead as a low accretion rate kink in the grey Sailfish time series. At high eccentricities Sailfish also exhibits large spikes following apocenter, but starting at \(e\approx 0.7\) compared to \(e\approx 0.8\) for Disco. We note that the Sailfish accretion rate time series in the final \(e=0.8\) panel of Figure 11 is likely exhibiting spurious behavior due to the non-explicit conservation of angular momentum. Hence, while periodicity timescales and the magnitude of accretion-rate variations are robust between the codes, small differences in the shape of the accretion rate time series, such as the observed double-peaked structure, are not, and caution should be taken in applying these to observable features of accreting binaries (in addition to further complications in the conversion of accretion rates to luminosities).
## 5 Discussion
### Comparison with previous studies
The results presented in this study generally agree qualitatively with previous numerical and analytic works on retrograde binary accretion, but there exist pertinent differences. Analytic modelling of retrograde accretion scenarios from Nixon et al. (2011b); Roedig and Sesana (2014); Schnittman and Krolik (2015) was generally founded upon the assumption that binary evolution is dominated by the physical accretion of counter-rotating gas at binary apocenter (and sometimes at pericenter). On the whole, these models correctly predict that the binary will shrink its semi-major axis and grow its eccentricity at nearly all eccentricities. However, Nixon et al. (2011a) forecasted that at small eccentricities \(e<h/r\), the binary would actually shrink its eccentricity and remain near circular due to opposing accretion effects at binary apocenter and pericenter. This study does not corroborate this prediction and finds that the binary grows its
Figure 9: Time averaged strength of azimuthal density modes with mode numbers \(m=\{1,2,3,4\}\). We see the emergence of only even-\(m\) azimuthal modes in accordance with linear theory of binary-disk resonances for equal mass binaries. The strong growth of the \(m=2\) mode at \(e=0.7\) reflects the observed \(m=2\) spiral density waves in the associated surface density plots and is evidence for the emergence of retrograde resonances. The non-zero even-\(m\) mode strengths at \(e<0.55\) are likely due to the temporary non-axisymmetry of the CBD around binary apocenter once every orbit.
eccentricity in all cases. On the contrary, the modelling of Schnittman and Krolik (2015) anticipates binary eccentricity growth at all eccentricities, but quantitatively predicts the effect to be largest at small eccentricities and to decline as eccentricity grows. The results in Figure 1 show the opposite. Eccentricity growth is smallest at small \(e\) and grows to a peak value at \(e\sim 0.15\), after which it stabilizes at \(de/d\log M\sim 2\).
These studies (along with Bankert et al., 2015; Nixon and Lubow, 2015) have additionally either verified or motivated their models with particle- and grid-based numerical studies of accreting retrograde systems. These simulations generally corroborated the postulate that the orbital evolution was dominated by the physical capture of gas at binary apocenter, and that gravitational forces were a subdominant effect. We have tested this directly in our full-domain solutions and found that the opposite is true: the binary orbital evolution is almost entirely explained by gravitational forces on the binary, and the physical accretion of mass and momentum is a secondary effect. Moreover, the dominant phase for driving eccentricity occurs during orbital waxing, between pericenter and apocenter, and the effect at apocenter is small. Similar to prograde scenarios, the evolution of the binary is controlled by time-dependent, non-axisymmetric features in the disk morphology that exert strong gravitational forces. We hypothesize that this had previously been missed because older simulations did not include the inner-most regions of the accretion flow either by construction (Bankert et al., 2015) or because of the specifics of their numerical scheme and setup (Nixon et al., 2011; Roedig and Sesana, 2014; Nixon and Lubow, 2015). We have demonstrated that the strongest forces come from the intra-orbital material that passes within the semi-major axis of the binary \(r<a\) (a fact that had additionally been noted for small mass-ratio binaries \(q<0.1\) by Amaro-Seoane et al., 2016).
### Observational implications
The presence of circumbinary disks can leave observable imprints in both electromagnetic and gravitational wave radiation from accreting binaries, and the effects of such accretion phases leave a lasting impact on populations of binaries that have undergone gas-mediated phases. Such effects have been discussed in detail for prograde accretion scenarios (_e.g._, Farris et al., 2015; Ryan and MacFadyen, 2017; Bortolas et al., 2021; Major Krauth et al., 2023), and many retrograde effects have been discussed in previous works (cf. Schnittman and Krolik, 2015). We expand on these by discussing the implications of this study in the context of recent prograde results.
_Disk-mediated decay_ -- In prograde solutions, it is possible for accretion-mediated phases of binary evolution to both expand the binary orbit and to facilitate binary inspiral. Namely, near-circular-orbit binaries tend to circularize and expand their orbits due to accretion and interaction with a CBD for disk scale-heights \(h/r\geq 0.05\); however, prograde binaries with initial eccentricity \(e\geq 0.08\) evolve towards an equilibrium eccentricity of \(e_{\rm eq}\sim 0.4\) (or some value \(0.25\lesssim e_{\rm eq}\lesssim 0.5\) for \(q<1\); Siwek et al., 2023) where they shrink their semi-major axis at a rate \(\xi\equiv d\log a/d\log M\) that is order unity (Zrake et al., 2021; D'Orazio and Duffell, 2021). In contrast, our results--in accordance with previous retrograde accretion studies--show that retrograde accretion scenarios facilitate binary decay at all eccentricities, and do so at a rate \(\xi\approx-10\); a factor of \(2-3\) faster than prograde disks at \(e_{\rm eq}\). For black holes accreting at their Eddington rate, this implies a retrograde gas-mediated inspiral timescale \(\tau_{\rm d}=a/\dot{a}=-\xi^{-1}M/\dot{M}\sim 4.5\,\)Myr, where we've used \(M/\dot{M}\approx 4.5\times 10^{7}\,\)yr as the Eddington-limited mass doubling time of an accreting black hole--or Salpeter time--with an accretion efficiency of 0.1. This is shorter than the expected \(10-100\,\)Myr lifetimes of quasars (Martini, 2004).
In contrast with prograde solutions, binaries accreting from retrograde disks have their eccentricity pumped at all eccentricities. Our \(\dot{e}(e)\) solution is nearly linear for \(e\lesssim 0.1\), with an estimated form \(30e\,\dot{M}/M\), and approximately constant for \(e\geq 0.1\). Therefore, for initial eccentricities \(e_{0}\lesssim 0.1\), the binary will grow its eccentricity exponentially, with an e-folding timescale of \((\sim 30\dot{M}/M)^{-1}\), quickly driving initially small eccentricities into the constant \(\dot{e}\) regime, where eccentricity grows at approximately twice the mass doubling rate. Hence, for any initial binary eccentricity, retrograde circumbinary disks will drive eccentricities \(e\to 1\) in approximately half of a Salpeter time, \(\sim 20\) Myr. Comparing the mutual evolution of \(\dot{a}\) and \(\dot{e}\), this implies that \(a\) will decrease by 5 e-foldings \(a_{0}{\rm e}^{-5}\) in the time required for \(e\to 1\).
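These quoted rates can be integrated directly to check the timescales. The sketch below assumes the constant driving rates stated above (\(de/d\ln M\approx 30e\) below \(e=0.1\) and \(\approx 2\) above, with \(\xi=-10\)) and Eddington-limited accretion; it is a back-of-the-envelope estimate, not a substitute for the simulations.

```python
import numpy as np

XI = -10.0            # d ln a / d ln M measured from the retrograde runs
T_SALPETER = 4.5e7    # yr, Eddington-limited mass-doubling (Salpeter) time

def evolve_orbit(a0, e0, dlnM=1e-4):
    """Integrate a and e against accreted mass, for e0 > 0, using the
    estimated rates de/dlnM ~ 30 e (e < 0.1) and ~ 2 (e >= 0.1)."""
    a, e, t = a0, e0, 0.0
    while e < 0.99:
        dedlnM = 30.0 * e if e < 0.1 else 2.0
        a *= np.exp(XI * dlnM)
        e += dedlnM * dlnM
        t += dlnM * T_SALPETER
    return a, e, t

# e.g. evolve_orbit(1.0, 0.01) reaches e ~ 1 after t ~ 2e7 yr (about half a
# Salpeter time), with a reduced by roughly five e-folds, as quoted above.
```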
A retrograde disk will simultaneously pump binary eccentricity and shrink the semi-major axis until the effects of energy and angular momentum loss by GWs begin to dominate at either high-\(e\) or small-\(a\). In the large eccentricity limit, there exists some eccentricity \(e_{\star}\) at which GWs will begin to damp binary eccentricity at a rate faster than the disk can pump it. We can estimate a _lower-bound_ for the eccentricity achieved from retrograde accretion before GWs take over by finding \(e_{\star}\) such that
\[\dot{e}_{\rm GW}(M,a_{0}{\rm e}^{-5},e_{\star})=-\dot{e}_{\rm CBD}(e_{\star}) \tag{14}\]
as a function of binary mass \(M\) and initial separation \(a_{0}\) (assuming \(q=1\)). This lower bound is likely the more realistic scenario, but we also compute an _upper-bound_ for binary eccentricity at the onset of a GW dominated regime by using \(a_{0}\) instead of \(a_{0}{\rm e}^{-5}\). This ignores the commensurate shrinking of semi-major axis during eccentricity pumping. These upper- and lower-bounds on the disk-driven eccentricity are shown as contours of \(\log_{10}(1-e_{\star})\) in \(a_{0}-M\) space in Figure 12 (the _left_ and _right_ panels respectively). The light shaded region in the upper right corner denotes where \(a_{0}\) would not fit into a gravitationally stable, steady-state thin disk (using Equation 16 in Haiman et al., 2009), and the dark shaded region in the bottom right
Figure 10: Power in the total binary accretion-rate periodogram computed via Equation 13. The dominant power at all eccentricities is at the orbital period and its higher frequency harmonics. For eccentricities \(e\geq 0.55\), power also emerges in a wide band around twice the orbital period.
corner illustrates where a binary of given mass is within the component ISCOs. The black-dashed lines show initial semi-major axes associated with the initial binary period \(P_{0}\).
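Equation 14 can be solved numerically. A sketch is given below, pairing the constant disk pumping rate \(\dot{e}_{\rm CBD}\approx 2\dot{M}/M\) with the standard Peters (1964) GW eccentricity damping rate for an equal-mass binary; the cgs constants and root bracket are our own choices, and the bracket assumes GW damping exceeds the pumping somewhere below \(e=1\).

```python
import numpy as np
from scipy.optimize import brentq

G, C = 6.674e-8, 2.998e10                    # cgs
MSUN, PC, YR = 1.989e33, 3.086e18, 3.156e7
T_SALP = 4.5e7 * YR                          # Salpeter time in seconds

def edot_gw(M, a, e):
    """Peters (1964) GW eccentricity damping rate for an equal-mass binary."""
    m = 0.5 * M                              # component masses, q = 1
    return -(304.0 / 15.0) * e * G**3 * m * m * M / (C**5 * a**4) \
        * (1.0 + (121.0 / 304.0) * e**2) / (1.0 - e**2) ** 2.5

def e_star(M_msun, a0_pc, shrink=np.exp(-5.0)):
    """Eccentricity where GW damping balances disk pumping (Eq. 14).
    shrink = e^-5 gives the lower bound; shrink = 1 the upper bound."""
    M, a = M_msun * MSUN, a0_pc * PC * shrink
    edot_cbd = 2.0 / T_SALP                  # de/dt ~ 2 Mdot / M at Eddington
    return brentq(lambda e: edot_gw(M, a, e) + edot_cbd, 1e-8, 1.0 - 1e-10)
```

Sweeping this over \(M\) and \(a_{0}\) reproduces the qualitative trend of Figure 12: at fixed \(a_{0}\), lighter binaries reach larger \(e_{\star}\) before GWs take over.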
Massive binaries that undergo periods of prograde accretion before merging in the LISA band are predicted to be driven to an equilibrium eccentricity \(e_{\rm eq}=0.4\). Binaries that enter a GW-dominated regime at \(e_{\rm eq}\) will retain only a sub-percent eccentricity very near the LISA detection threshold (Cuadra et al., 2009; Zrake et al., 2021). Massive binaries that have their eccentricity pumped to near unity by retrograde accretion, however, may retain significantly more eccentricity upon entering the LISA band. Assuming the lower-bound as the more realistic scenario in Figure 12, we still observe that phases of retrograde accretion could drive very large eccentricities \(e_{\star}\gtrsim 0.9\) into massive binaries that are eventually detected by LISA, if accretion commences at large enough separations \(a_{0}\sim 10^{-2}\)pc. Moreover, because we observe retrograde minidisks, binaries that have undergone retrograde accretion will acquire spins that are counter-aligned with the binary orbital angular momentum and would thus present with a negative effective spin parameter \(\chi_{\rm eff}<0\) in the analysis of the gravitational wave signal.
Gravitational wave emission at such large eccentricities will also shift GW power from the standard, circular \(2f_{b}\)--with \(f_{b}\) the binary frequency--to higher frequencies \(f=f_{b}(1+e)^{1/2}/(1-e)^{3/2}\). Rapid environment driven coalescence will additionally diminish GW power at frequencies where such effects are active. For the most massive binaries \(M>10^{8}\)\(M_{\odot}\), like those posited to source the low-frequency gravitational wave background (Agazie et al., 2023), the effect of a retrograde accretion phase that both pumps eccentricity and rapidly shrinks the binary orbit would be to reduce GW power at the low-frequency end of the background, as was tentatively observed in the NANOGrav 15-year dataset (Agazie et al., 2023). However, Figure 12 suggests that eccentricity pumping for such massive binaries may not be as effective as for smaller systems, and that self-gravitating disks, where our results break down (c.f. Franchini et al., 2021), likely become relevant. Further modeling of the effects of retrograde accretion on low-frequency gravitational wave backgrounds is warranted.
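For reference, the quoted frequency shift is a one-line function; the numeric example below just evaluates the formula from the text.

```python
def gw_peak_frequency(f_b, e):
    """Peak GW frequency of an eccentric binary, f = f_b (1+e)^0.5 (1-e)^-1.5."""
    return f_b * (1.0 + e) ** 0.5 / (1.0 - e) ** 1.5

# gw_peak_frequency(1.0, 0.9) ~ 43.6: at e = 0.9 the peak sits roughly 44
# binary frequencies above f_b, far above the circular-orbit emission at 2 f_b.
```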
It is worth mentioning that varying the disk scale-height \(h/r\) can dramatically alter the standard circular prograde picture presented above, as disks with \(h/r\lesssim 0.05\) recover binary inspiral (Tiede et al., 2020; Penzlin et al., 2022) and show tentative evidence that those with values approaching the expected theoretical limit of \(h/r\sim 0.01-0.001\) for geometrically thin disks around massive black holes possess orders of magnitude faster inspiral rates \(\xi\approx\mathcal{O}(10^{2})\) (Dittmann and Ryan, 2022). The dependence of eccentricity evolution for such thin disks, however, is yet to be explored, and it remains unclear
Figure 11: The total accretion rate measured onto the binary components from the Disco sweep (black) and, for comparison, the Sailfish sweep (grey). In each panel, accretion rate time series are shown for 10 orbits starting from where the eccentricity sweep (Equation 5) reaches the binary orbital eccentricity \(e\), indicated in each panel. Vertical dotted blue (red) lines denote pericenter (apocenter). While there are small differences between codes, especially at high eccentricities, a few robust qualitative features persist: For lower eccentricities (\(e\lesssim 0.55\)) the accretion rate time series is modulated at the orbital period, with a double-peaked (for Disco) or kinked (for Sailfish) structure at intermediate eccentricities. For \(e\gtrsim 0.55\), a twice-orbital period modulation arises, as indicated by Figure 10. For the highest eccentricities probed in this study, a large accretion rate spike is induced following apocenter.
how changing the scale-height in retrograde scenarios changes the solutions.
_Electromagnetic observations_ -- While 2D isothermal hydrodynamics simulations cannot self-consistently produce lightcurves of the accretion flow, the accretion rate can serve as an approximation for the variability in the system luminosity, and recent magnetohydrodynamics simulations have shown that the variability of Poynting fluxes from component jets tracks the binary accretion rate (_e.g._ Combi et al., 2022). One of the most prominent features of prograde accretion solutions is the presence of a five-orbit accretion variability associated with an \(m=1\) density mode that orbits the inner edge of the prograde CBD for near-circular binaries. By contrast, the accretion rate for circular retrograde binaries shows almost no variability, except for small fluctuations at the orbital frequency. As documented in §4.2 and §4.5, retrograde solutions also retain a high degree of axisymmetry for \(e<0.55\), and as such are strongly modulated only at the orbital frequency of the binary; and the periodic variability for eccentricities in this range is almost indistinguishable, with the exception of a possible double peak emerging for \(e=0.4-0.5\), making them hard to distinguish in time-domain observational data. At large eccentricities \(e>0.55\), however, retrograde binary accretion manifests a two-orbit double-peaked accretion signature characterized by a large flare, followed by a flare of approximately half the magnitude. This kind of behavior has not been observed in any prograde configurations and could serve as a unique observational signature of retrograde binary accretion. At the highest eccentricities this "double flaring" feature could manifest as quasi-periodic (or periodic if temporally and flux resolved) eruptions on timescales of super-massive black hole binary orbits (days to years).
Beyond variability in the accretion rate and system luminosity, the existence of axisymmetric density waves propagating through the CBD for \(0<e\lesssim 0.55\) may also have observational implications. The higher density rings, with possibly higher temperature when considering non-isothermal equations of state, would cause a time dependent variation of the disk spectra compared to that of a steady disk, causing the opposite of a spectral notch proposed for prograde disks (Gultekin and Miller, 2012; Roedig et al., 2014), but also varying periodically in time. In addition, these retrograde ring waves may manifest in images of circumbinary disks around stellar binaries, resolvable with instruments such as the Atacama Large Millimeter Array (ALMA, _e.g._, Alves et al., 2019). Synthetic observations of retrograde systems should explore this possibility (_e.g._, Ragusa et al., 2021), as recent theoretical work predicts 10% of stellar circumbinary disk systems to form retrograde (Elsender et al., 2023).
### Numerical limitations
In order to densely sample the binary eccentricity and accurately calculate the orbital evolution, we have made a number of simplifying assumptions. Foremost, these simulations have employed an isothermal treatment of the gas thermodynamics where any heat generated from viscosity or shocks is assumed to instantly leave the system as radiation. In future work it would be informative to include an equation of state that accounts for such heating processes and subsequently cools the disk on an appropriate timescale. We have also restricted ourselves to only two dimensional, co-planar configurations, but possible out of plane effects have been previously reported (Nixon et al., 2011; Roedig and Sesana, 2014). We have additionally ignored magnetic fields and considered only a simple constant-\(\nu\) model for the disk viscosity. A more detailed treatment would include electromagnetic effects and resolve the magneto-rotational turbulence that would self-consistently govern angular momentum transport in the disk.
The other major simplification of this study is that it focused exclusively on equal mass-ratio binaries. Because of the complexity and importance of the inner-most regions of the accretion flow to our results, we posit that varying \(q\) may produce substantially different results and plan to explore this in future work.
Figure 12: Upper- and lower-bounds (_left_ and _right_ respectively) to the eccentricity achieved by MBHBs accreting from a retrograde disk determined according to Equation 14. The light shaded region in the upper-right of each panel shows the region where a binary of that mass and initial semi-major axis does not fit in a gravitationally stable disk. The dark grey shaded region in the bottom right corner shows where the binary does not fit outside the component ISCOs for a given mass. The dashed black lines illustrate initial semi-major axes \(a_{0}\) (in parsec) associated with the initial binary periods \(P_{0}\) shown on the y-axes.
## 6 Conclusions
We have conducted a number of simulations of equal mass binaries of varying eccentricity accreting from retrograde circumbinary disks using the grid-based hydrodynamics codes Disco and Sailfish. We have continuously characterized the effects of such retrograde accretion on the binary's orbital elements, documented the morphology and phase-dependent behavior of retrograde accretion flows at numerous eccentricities, and determined observational signatures associated with retrograde solutions.
In accordance with most theoretical expectation and previous analytic work, we find that retrograde accretion scenarios shrink the binary semi-major axis at all eccentricities with a near-constant rate \(d\log a/d\log M=-10\), and simultaneously pump eccentricity for any \(0<e<0.8\) (Figure 1). The latter point, however, is in contrast with previous SPH simulations and impulse-approximation based models that predicted near-circular binaries would have their eccentricity damped, remaining effectively circular. Moreover, we find that the dominant contribution to the binary orbital evolution is gravitational forces from the retrograde binary minidisks and retrograde-bridge that form in the inner-most \(r<a\) of the accretion flow (Figures 6 and 7). Asymmetries in these features yield gravitational forces that act at all phases of the binary orbit, and the primary contributions to orbital eccentricity pumping occur during binary waxing (Figures 4, 5, and 8).
We found that the morphology of the retrograde CBDs possesses three separate regimes as we vary binary eccentricity (see Figure 3). First, near-circular binaries \(e<0.01\) display a predominantly time-invariant structure in the co-rotating frame of the binary, with the exception of small wiggles in the retrograde-bridge (Figure 2). In regime (\(ii\)), binaries with eccentricities \(0.025<e<0.55\) yield phase-dependent disk behavior that repeats every binary orbit and is characterized by an axisymmetric density wave that is driven into the disk once per orbit during binary waning (Figure 4). Lastly, binaries with \(e>0.55\) in regime (\(iii\)) manifest two-orbit periodic disk oscillations characterized by the forcing of non-axisymmetric \(m=2\) spiral density waves in the "first" orbit and a predominantly axisymmetric density ring (with comparatively weak spiral arms) in the "second" orbit (Figure 5). The emergence of even-azimuthal-mode spiral density waves at \(e>0.55\) is consistent with previous predictions that disk resonances can manifest in viscous, retrograde disks at large eccentricities (see also §4.5 and Figure 9).
We have additionally analyzed the characteristic variability of the accretion rates for binaries of many different eccentricities in retrograde CBDs (Figure 11). We observed that circular, retrograde solutions are dramatically different from their prograde counterparts as they exhibit almost no accretion variability due to their functionally steady-state configurations. Binaries with eccentricities in retrograde regime (\(ii\)) all exhibit strong accretion rate modulation at the orbital frequency with the possible emergence of a kinked or double-peaked structure at \(e=0.4-0.5\). For eccentricities in regime (\(iii\)), \(e>0.55\), accretion rate modulations start varying at both the binary period and twice the binary period as the accretion oscillates between minor and major spikes.
In light of modern numerical studies that have begun characterizing the behavior of prograde circumbinary accretion across system parameters like eccentricity, mass ratio, and disk thickness, we have revisited the complementary retrograde scenario. Occurrences of retrograde accretion can be an important contributor to binary orbital decay and eccentricity growth, and they exhibit unique observational features for binary searches in large-scale surveys. We have demonstrated that high resolution, full-domain treatments are required to accurately quantify these effects and how they contrast with prograde solutions. As such, retrograde investigations should continue alongside their prograde counterparts as the community continues to characterize these systems.
## Acknowledgements
The authors sincerely thank Jonathan Zrake and Paul Duffell for making their codes Sailfish and Disco available for use in this work. C.T. and D.J.D. also extend their gratitude to all members of the GW-Astro group at the NBIA and Jeff J. Andrews for useful discussions. D.J.D. received funding from the European Union's Horizon 2020 research and innovation programme under Marie Sklodowska-Curie grant agreement No. 101029157. D.J.D. and C.T. received support from the Danish Independent Research Fund through Sapere Aude Starting Grant No. 121587.
## Data Availability
The data used for this study can be shared upon reasonable request to the corresponding author.
|
2306.14120 | Object Detection based on the Collection of Geometric Evidence | Artificial objects usually have very stable shape features: persistent
geometric properties that can provide evidence for object recognition. Shape
features are more stable and more distinguishing than appearance, color,
grayscale, or gradient features. The difficulty with object recognition based
on shape features is that objects may differ in color, lighting, size,
position, pose, and background interference, and it is not currently possible
to predict all possible conditions. This variety of objects and conditions
renders object recognition based on geometric features very challenging. This
paper provides a method based on shape templates, which involves the
selection, collection, and combinatorial discrimination of geometric evidence
from the edge segments of images, to accurately locate the target object
against the background; it can also identify the semantic attributes of each
line segment of the target object. In essence, the method solves a global
optimal combinatorial optimization problem. Although the complexity of this
problem seems very high, there is no need to define complex feature vectors
and no need for any expensive training process. The method has very good
generalization ability and environmental adaptability, and a more solid basis
in cognitive psychology than other methods. The process of collecting
geometric evidence, which is simple and universal, shows considerable promise
for practical use. The experimental results prove that the method has great
advantages in responding to changes in the environment, invariant recognition,
pinpointing the geometry of objects, search efficiency, and efficient
computation. This attempt contributes to the understanding of some types of
universal processing during object recognition. | Hui Wei, Fu-yu Tang | 2023-06-25T04:16:50Z | http://arxiv.org/abs/2306.14120v1 | # Object Detection based on the Collection of Geometric Evidence
###### Abstract
Artificial objects usually have very stable shape features: persistent geometric properties that can provide evidence for object recognition. Shape features are more stable and more distinguishing than appearance, color, grayscale, or gradient features. The difficulty with object recognition based on shape features is that objects may differ in color, lighting, size, position, pose, and background interference, and it is not currently possible to predict all possible conditions. This variety of objects and conditions renders object recognition based on geometric features very challenging. This paper provides a method based on shape templates, which involves the selection, collection, and combinatorial discrimination of geometric evidence from the edge segments of images, to accurately locate the target object against the background; it can also identify the semantic attributes of each line segment of the target object. In essence, the method solves a global optimal combinatorial optimization problem. Although the complexity of this problem seems very high, there is no need to define complex feature vectors and no need for any expensive training process. The method has very good generalization ability and environmental adaptability, and a more solid basis in cognitive psychology than other methods. The process of collecting geometric evidence, which is simple and universal, shows considerable promise for practical use. The experimental results prove that the method has great advantages in responding to changes in the environment, invariant recognition, pinpointing the geometry of objects, search efficiency, and efficient computation. This attempt contributes to the understanding of some types of universal processing during object recognition.
Template-based method, object detection, evidence accumulation reasoning
## I Introduction
In daily life, we almost always need to recognize the objects that we see. Object recognition typically occurs so effortlessly that it is hard to believe it is actually a rather complex achievement. In the area of cognitive psychology, experts believe that there are three key processes involved in object recognition [6]:
First, there are usually numerous different overlapping objects in the visual environment, and the viewer must somehow decide where one object ends and the next starts. This is a difficult task.
Second, objects can be recognized accurately over a wide range of viewing distances and orientations. The apparent size and shape of an object do not change despite large variations in the size and shape of the retinal image.
Third, the viewer must allocate diverse visual stimuli to the same category of objects. Objects, such as cups, vary enormously in their visual properties (e.g., color, size, and shape), but viewers are still able to recognize them without any apparent difficulty.
Over the years, there have been plenty of theories about how we observe and recognize objects. Irving Biederman maintained that viewers recognize some basic geometries first and then later identify the object [7]. Besides, when we observe or imagine an object, we tend toward a perspective slightly above the object, looking down and offset a little to the right or left. This has been dubbed the canonical perspective. Almost no one prefers a viewpoint from directly above the object. Objects are recognized most quickly at the canonical perspective. From some special perspectives, such as directly above the object, we may take much longer to recognize an object, and sometimes we cannot recognize it at all (e.g., some cylindrical objects) [8][9].
Recognizing objects by their geometric features is a natural idea, and it conforms to human cognitive behavior. In daily life, many objects (e.g., cups, basins, and buckets) differ in color and texture, and the key means of identifying them tends to involve their geometric features. Geometric features are also the most intuitive and easiest to understand, and recognition based on them is simple to implement and highly efficient. According to Biederman's theory, object recognition depends on edge information rather than on surface information (e.g., color). To test this, participants were presented with line drawings or full-color photographs of common objects for between 50 and 100 ms. Performance was comparable with the two types of stimuli: mean identification times were 11 ms faster with the colored objects, but the error rate was slightly higher. Even objects for which color would seem to be important (e.g., bananas) showed no benefit from being presented in color [6][7]. Sanocki et al. (1998) also pointed out that edge-extraction processes are more likely to lead to accurate object recognition when objects are presented on their own rather than in the context of other objects [30].
However, there are still some difficulties in object recognition based on geometric features. These difficulties correspond to the three key processes of object recognition described in cognitive
psychology. First, objects may be located in different scenes, and it is hard to decide where one object ends and the next starts. Second, an object can be observed from different perspectives and distances, which causes various changes in its geometric features. Third, although objects in the same category may have similar shapes, those shapes may still vary. In addition, as with human observers, it may be hard or even impossible to recognize objects from some special perspectives.
To be as consistent as possible with the results of psychological experiments and to help overcome those difficulties, we use template methods that treat the geometric features of an object in some major perspectives as templates, and recognize objects by matching them against the corresponding templates. Here are some advantages of template methods:
First, geometric templates contribute to overcoming interference from various changes in lighting, color, and texture.
Second, geometric templates contribute to invariant recognition under transformations such as translation, rotation, and scaling, and strengthen the adaptability of each template.
Third, geometric templates contain geometric and topologic features. These features are stable.
Fourth, we need only a few major templates to recognize an object, and these do not require an expensive training process.
Fifth, the recognition processes using geometric templates are closely connected to other cognitive processes, such as inductive learning, knowledge representation, reasoning, attention selection, and hypothesis testing.
Sixth, recognition using geometric templates is simple to implement, and it is also efficient.
In section 2, some typical template methods for object recognition are introduced. In section 3, the template is defined and preprocessing is performed. In section 4, the evidence used for object recognition is defined and a search for such evidence is conducted. In section 5, template matching is performed and then the results are filtered. Finally, the experimental results and conclusion are shown in sections 6 and 7.
## II Overview
Object recognition methods based on templates can be categorized as follows:
The first category contains methods based on contours or shapes, which have been described in many previous works [2][3][17][18][19][20][21][25]. Methods in this category generally obtain templates based on contours or shapes from sample images or other sources, and then compare these templates with the contours or shapes in the target image. Changes in lighting and color do not much affect these methods. However, they are sensitive to interference lines in the background. If there are many interference lines in the background, the recognition rate and efficiency decrease greatly.
The second category includes methods based on color or grayscale [1][11][12][26]. These methods work at the pixel level, consider both the position and intensity of pixels, and then find the difference between templates and target images. These methods perform well on objects that have significant color and grayscale features, such as faces. However, they are sensitive to changes in lighting and color. Therefore, they do not perform well for objects that change in lighting and color.
The third category includes methods based on texture and gradient [4][22][27][28][29]. These methods, which consider the textural features of objects, compare the textures of templates and target images for object recognition. They perform well on objects that have significant textural features. However, they are also sensitive to changes in lighting and color.
There are also other methods, such as those based on histograms of receptive field responses [13][14][15][16]. However, all methods need templates of some sort to perform object recognition. An object can be observed from any perspective or at any distance, so it may be necessary to collect multiple templates for the same object. The retinal image of an object may differ, so new shapes can be produced through any type of transformation, such as translation, rotation, or changes in scale. In general, traditional template methods only recognize objects that have shapes similar to those of their templates. To perform object recognition for a variety of shapes, new templates would have to be constructed for any new shapes produced through transformation, which is obviously undesirable. The reason is that these methods do not make full use of the geometric features of objects and templates. The templates of objects can also undergo two-dimensional transformations, such as translation, rotation, and changes in scale. However, these methods usually consider only translation and scale, or fail to consider transformations at all. In this way, these methods require many templates because of the low adaptability of each template. Based on this issue, this paper describes a new method that makes fuller use of the geometric features of objects and templates to improve the flexibility and adaptability of each template. This method acts on artificial objects that are composed primarily of line segments. Such objects are very common, and they can be seen everywhere in our daily life.
## III Template definition and data pre-processing
The method described here is based on geometric features of objects, which include points, lines, angles, position, topological location, etc. All of these features can be represented and calculated using points and lines. Here, a set of line segments serves as a template for the target object. Fig. 1 shows a template of an F117 viewed tilted and from slightly above, based on a set of line segments. This type of template is not only simple and intuitive but also reflects the geometric features of objects very well.
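For concreteness, such a template can be stored as nothing more than a list of endpoint pairs. The sketch below is our own minimal representation, not code from the original system.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class Segment:
    """A line segment defined by its two endpoints in image coordinates."""
    p1: Point
    p2: Point

# A shape template is simply a set of segments tracing the object's
# silhouette in one canonical perspective (e.g., the F117 of Fig. 1).
Template = List[Segment]
```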
Similarly, the preprocessing for the original images is very simple, and it includes the following steps:
The first step: perform edge detection on the original image, using, e.g., Canny edge detection or Berkeley edge detection [23].
The second step: extract line segments from the edge map, using, e.g., the Hough transform or the edge-linking method [24].
The third step: merge line segments that are positionally continuous into longer line segments.
Fig. 2 shows the results of Berkeley edge detection followed by the edge-linking method.
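A minimal version of the first two steps can be written with OpenCV, substituting the Canny detector and the probabilistic Hough transform for the Berkeley detector and edge-linking method used here; all thresholds are illustrative.

```python
import cv2
import numpy as np

def extract_segments(image_path):
    """Edge detection followed by line-segment extraction (steps one and two)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=30,
                            minLineLength=15, maxLineGap=3)
    # Each entry is (x1, y1, x2, y2); step three (merging positionally
    # continuous segments into longer ones) is omitted here for brevity.
    return [] if lines is None else [tuple(seg[0]) for seg in lines]
```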
## IV Evidence searching
### _Hypothesis generation_
After getting the line segments of the template (Fig. 1) and the target image (Fig. 2), the problem of object recognition becomes a problem of combining line segments. This shifts object recognition from the pixel level to the geometric line segment level. At this level, all geometric features of objects are well represented, and they can be calculated very simply and efficiently. In general, an image includes thousands of pixels. At the pixel level, it is not efficient to perform recognition, and it is also difficult to extract useful information, such as geometric and topologic features. However, at the line segment level, a target image usually includes merely hundreds of line segments, and a template can include fewer than 100 line segments. It is not only easier to find and calculate geometric features but also possible to perform more complex calculations, such as geometric transformation and hypothesis testing. We need only select a series of line segments from the target image, based on the geometric features of the template, and then combine them so that they correspond with the line segments of the template.
For this purpose, the line segments of the template should be aligned with the line segments of the target image. That means we need to perform transformations, such as translation, rotation, and changes in scale, on the line segments of the template, to produce template line segments that correspond to the target image, and then project them onto the line segments of the target image to produce hypotheses regarding the position, size, and shape of the object. As shown in Fig. 3, a line segment is selected randomly from the template and marked with a red or blue ellipse (Fig. 3(a)). A line segment is also selected randomly from the target image and marked with a red or blue ellipse (Fig. 3(b)). We assume that the two line segments marked with ellipses of matching color may correspond to each other; the line segments of the template can then be transformed and projected onto the target image (Fig. 3(c) and 3(d)).
If the transformation factors are known in advance, we can easily translate, rotate, and rescale the line segments of the template and then project them onto the target image. However, the transformation factors are often unknown, and finding suitable transformation factors is a difficult task. Here, we introduce a simple and efficient algorithm based on hypothesis generation:
The first step: \(\forall\ m1\in\) the line segments of the template, such as the line segment marked with a red/blue ellipse, as shown in Fig. 3(a).
The second step: \(\forall\ l1\in\) the line segments of the target image, such as the line segment marked with a red/blue ellipse, as shown in Fig. 3(b).
The third step: let \(t1\) denote the transformation factor, and define \(transform(m1,t1)\), a function of the two variables \(m1\) and \(t1\), as the result of transforming \(m1\) by \(t1\). Assume \(transform(m1,t1)=l1\), and solve for \(t1\).
The fourth step: according to \(t1\), transform the line segments of the template and then project them onto the target image, as
Fig. 1: A template of F117 based on a set of line segments.
Fig. 3: Projection of the line segments from the template onto the target image.
Fig. 2: Preprocessing of the original image.
shown in Fig. 3(c) and 3(d).
Here we use the geometric relationship between the line segments of the template and the target image to enumerate transformation factors, which yields many projected results; Fig. 3(c) and 3(d) show two such results. What we care about is whether an ideal projected result exists among them. Thanks to the enumeration of transformation factors, if an ideal projected result exists between the template and the target image, and the target image contains an almost complete corresponding line segment, then this ideal projected result (or a very similar one) is among our projected results. Therefore, our method can almost always produce ideal transformation factors. In our experiments, enumerating all possible hypotheses and verifying them typically takes about 1 second; if there are too many line segments in the background, it may take a few seconds.
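To make the third and fourth steps concrete, the sketch below solves for the similarity transform (rotation, uniform scale, translation) that maps \(m1\) onto \(l1\) and applies it to the whole template. Since endpoint correspondence is ambiguous, each segment pair in practice yields two candidate transforms (one per endpoint ordering); we show one.

```python
import numpy as np

def solve_transform(m1, l1):
    """Similarity transform (scaled rotation R, translation t) such that
    R @ p + t maps segment m1 onto l1; segments are ((x1, y1), (x2, y2))."""
    a, b = np.asarray(m1, float), np.asarray(l1, float)
    va, vb = a[1] - a[0], b[1] - b[0]
    s = np.linalg.norm(vb) / np.linalg.norm(va)                  # scale
    ang = np.arctan2(vb[1], vb[0]) - np.arctan2(va[1], va[0])    # rotation
    R = s * np.array([[np.cos(ang), -np.sin(ang)],
                      [np.sin(ang),  np.cos(ang)]])
    return R, b[0] - R @ a[0]                                    # translation

def project_template(template, R, t):
    """Apply the hypothesised transform to every segment of the template."""
    return [(R @ np.asarray(p1, float) + t, R @ np.asarray(p2, float) + t)
            for p1, p2 in template]
```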
### _Hypothesis verification_
After generating so many projected results, we need to verify them to find the ideal projected result(s). During verification, the key step is deciding how to select line segments from the target image to correspond with the template. The projected template is considered a hypothesis, and we need to find evidence to support it. We call this process evidence collection, and call the selected line segments evidence line segments, as they support the hypothesis. During this process, we need to select line segments from the target image and determine which of them can be considered evidence line segments for a given template line segment. We consider the angle, distance, and projected position between line segments of the template and the target image, making full use of geometric features. Fig. 4 shows the process. Here \(CD\) is a line segment from the target image, and \(AB\) is a template line segment.
Combined with the process in Fig. 4, here is the formula to determine whether \(CD\) is an evidence line segment of \(AB\):
\[D_{1}=\sin\theta;\qquad D_{2}=\frac{d}{|AB|};\qquad D_{3}=\begin{cases}0.5,&\text{if }t_{C}=t_{D}=0\ \text{or}\ t_{C}=t_{D}=1\\ 0,&\text{if }0<t_{C}<1,\ 0<t_{D}<1\\ 0,&\text{if }t_{C}\leqslant 0,\ t_{D}\geqslant 1\ \text{or}\ t_{C}\geqslant 1,\ t_{D}\leqslant 0\end{cases}\]

where \(\theta\) is the angle between \(CD\) and \(AB\), \(d\) is the distance from \(CD\) to \(AB\), and \(t_{C}\), \(t_{D}\) are the normalized positions of the projections of \(C\) and \(D\) onto \(AB\). These terms are combined into the evidence score \(dis_{CD\to AB}\), and \(CD\) is accepted as an evidence line segment of \(AB\) when the score falls below the threshold.
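A sketch of this test is given below, under our reading of the formula above: \(d\) is taken as the distance from the midpoint of \(CD\) to the line through \(AB\), the three terms are summed into \(dis_{CD\to AB}\), and the remaining (partial or no overlap) projection cases receive a penalty of 1. These choices are our assumptions where the source is ambiguous; the code also assumes non-degenerate segments.

```python
import math

def evidence_distance(A, B, C, D):
    """Score how well target segment CD supports template segment AB,
    per the D1/D2/D3 terms above; lower is better."""
    ax, ay = A; bx, by = B; cx, cy = C; dx, dy = D
    abx, aby = bx - ax, by - ay
    ab_len = math.hypot(abx, aby)
    # D1: sine of the angle between AB and CD.
    cdx, cdy = dx - cx, dy - cy
    D1 = abs(abx * cdy - aby * cdx) / (ab_len * math.hypot(cdx, cdy))
    # D2: distance d from CD's midpoint to the line through AB, over |AB|.
    mx, my = (cx + dx) / 2, (cy + dy) / 2
    D2 = abs(abx * (my - ay) - aby * (mx - ax)) / ab_len ** 2
    # D3: where C and D project onto AB (t in (0, 1) means inside AB).
    t_C = ((cx - ax) * abx + (cy - ay) * aby) / ab_len ** 2
    t_D = ((dx - ax) * abx + (dy - ay) * aby) / ab_len ** 2
    if (0 < t_C < 1 and 0 < t_D < 1) or \
       (t_C <= 0 and t_D >= 1) or (t_C >= 1 and t_D <= 0):
        D3 = 0.0
    elif t_C == t_D == 0 or t_C == t_D == 1:
        D3 = 0.5
    else:
        D3 = 1.0  # assumed penalty for the remaining overlap cases
    return D1 + D2 + D3
```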
## V Template matching
### _Calculation of similarity_
For each projected result, a set of evidence line segments are found for the hypothetical line segments. Here, we denote the hypothetical line segment as _h1_, and the set of evidence line segments for _h1_ as _Ev(h1)_, to serve as evidence to support this hypothetical line segment. We notice that a line segment of the target image can serve as the evidence for at most one hypothetical line segment. However, a hypothetical line segment can have one or more evidence line segments or none at all.
The closer \(dis_{CD\to AB}\) is to 0, the more similar the two line segments are. Thus, if a line segment of the target image can become evidence for more than one hypothetical line segment, it should serve as evidence for the most similar hypothetical line segment, i.e., the one for which \(dis_{CD\to AB}\) has the lowest value. We denote the set of hypothetical line segments as {_h1, h2,..., hM_}, the set of line segments of the target image as {_l1, l2,..., lN_}, and the threshold of evidence selection as _TH_. The pseudo-code of the algorithm used to find the sets of evidence line segments is as follows:
```
Let Ev(h1) = Ev(h2) = ... = Ev(hM) = ∅
for i ← 1 to N do
    find h* ∈ {h1, ..., hM} minimizing dis(li → h*)
    if dis(li → h*) < TH then
        Ev(h*) ← Ev(h*) ∪ {li}
    end if
end for
```
### _Comparison with other methods_
#### V-B1 Fan-shape model
In CVPR 2012, a Fan-shape model for object detection [31] was proposed, and we compare against it. First, besides the well-known ETHZ dataset, we designed a new dataset, the F117 dataset, which contains 50 images for training (our method does not need training; these are for the Fan-shape model) and another 121 images for testing. Fig. 7 shows two samples from this dataset. The Fan-shape model was trained on this new dataset.
After training, we expected this shape-based model to work well on clean, standard images, such as images with only a perfect contour (right of Fig. 8). Pure contour information is generally thought to be sufficient for shape recognition, but in fact the Fan-shape model failed completely, i.e., it found nothing in these two almost perfect test images. This is because the model actually depends heavily on SIFT features, and line images cannot provide gray or color gradient information. Another disproof is shown in the third row of Fig. 8: a giraffe image on which the Fan-shape model functioned normally. When we replaced this original image with its grayscale and color-inverted versions, the Fan-shape model failed. These results further call into question the generality of this method.
Fig. 6: Detection results of F117s.
Fig. 7: Two pairs of image samples in the newly created dataset, F117 dataset.
Because the performances of different methods on the classical ETHZ dataset are mature, we turned to the new F117 dataset and collected statistics. Fig. 9 shows the detection rate and time cost comparisons among our method, the Fan-shape model [31], and the DCLE module [32][33]. In Fig. 9, the detection criterion means a detection is deemed correct if the intersection of the detected bounding box and the ground truth over the union of the two bounding boxes is larger than the x-axis value. From these curves, we can see that our method not only performs better but also takes much less time.
#### V-B2 Discriminative combinations of line segments and ellipses (DCLE)
A.S. Chia et al. described a method based on discriminative combinations in CVPR 2010 [32] and PAMI 2012 [33]. This method is characterized by the use of line segment primitives, ellipse primitives, and SIFT features. The method pairs a reference primitive with its neighboring primitives to construct more discriminative shape-tokens, and then obtains the codebook of the target object (adding SIFT features if necessary). After that, it learns from the codebook and produces a series of discriminative codeword combinations for object recognition. In Fig. 10, the left panel shows some results for images of F117 based on this method, and the right panel shows results for re-organized images, in which objects are cut into small pieces and recombined randomly. In theory, the method should not be able to detect the F117 in re-organized images. In fact, however, the method still detected the F117 in its original position.
Fig. 8: Fan-shape model fails on pure contour image and this contradicts its shape-based claim.
Fig. 9: Performance comparisons among our method, the Fan-shape model, and DCLE module.
Fig. 10: Results of the method based on discriminative combinations.
Our method is able to work well on pure shape images, so there is no need for any SIFT features. In order to compare the two methods fairly, we experiment on images of the edge segments. The experimental results are shown in Fig. 11.
### _Results on UIUC car datasets_
In order to compare with more methods, we test our method on the UIUC car datasets. Images in these datasets are quite small, with fuzzy object boundaries. Besides, the images are gray and it is difficult to extract shape boundaries. Therefore, this is a challenge for our algorithm. We used a side-view template to test the 170 single-scale images of the UIUC datasets. Fig. 12 shows some experimental results, and we can see that our algorithm detected the cars very accurately.
To test detection accuracy, for each test image we regard those results with high ranking and similarity above a certain threshold as detections. If the intersection area between a detection and a ground truth was more than half, the result was considered to have correctly found the car at the corresponding position. We calculated two precision-recall values: one with maximum recall and the other with maximum precision. Fig. 13 shows the precision-recall results of our algorithm and some other methods.
As shown in Fig. 13, we calculated the recall-precision equal error rate (EER) for our algorithm on the UIUC car datasets. Table 1 shows the EER of our algorithm and some advanced methods, and we can see that our algorithm, based on geometric features, produces results as good as those of other methods.
Fig. 11: Comparison between our method and the method based on discriminative combinations.
Figure 12: Our results on UIUC car datasets.
Figure 13: Recall-precision curves for the UIUC datasets.
## VII Conclusions
At present, the most common method of object recognition is the pattern classification method based on machine learning. This method has the following problems. First, the feature descriptors of images have poor generality, and the feature vectors often have very high dimensions, which creates a huge computational load. Second, the machine learning algorithms are complex, with strong application and dataset specificity, and the classifiers are also hard to generalize. Third, the classification criteria produced by machine learning algorithms are obscure and have poor declarative semantics; these criteria are often implicit and hard to present as knowledge. Finally, it reduces the recognition problem to a classification problem and conceals many inner processes. Based on the above viewpoint, this paper seeks a simpler and more natural method of recognition. We believe that search and optimization have such attributes.
An outstanding advantage of our method is the simplicity of the algorithm. In principle, we use the template method. In the image representation, we only use line segments, with no need for other complex, high-dimensional feature descriptors. In the recognition process, we collect the contour line segments that satisfy the geometric constraints as evidence, with no need for expensive machine learning processes to train and produce a classifier. The numerical calculations involve only simple geometry and algebra, which require little computation. Despite the simplicity of these components, our method still performs very well.
Purely shape-based methods provide more generalization. This paper uses only the geometric features of objects to perform object recognition, because geometric features are the most stable features of an object and tolerate changes in environment, color, lighting, scale, and rotation well. Given that changes in the environment are unlimited and unpredictable, methods based on physical feature vectors and statistical learning have difficulty producing comprehensive statistics. However, methods based on the representation of shapes and contours can cover all possible poses using only a few templates, with no need for expensive learning and training processes. Most importantly, shape-based representations describe the structural features of objects well, and their semantic representation of objects is more accurate than that of classifiers based on positive and negative rules.
Evidence accumulation has a simpler and more general implementation than other methods. Object recognition based on geometric evidence is more similar to the human cognitive process, which mainly includes evidence collection, hypothesis formation, and hypothesis verification. This process is simpler and more general, and it includes the most basic steps of recognition. Although this paper does not consider color, texture, or depth features, these can be added to improve the efficiency of the combination process by serving as clues for how to combine line segments.
|
2304.12328 | Virus2Vec: Viral Sequence Classification Using Machine Learning | Understanding the host-specificity of different families of viruses sheds
light on the origin of, e.g., SARS-CoV-2, rabies, and other such zoonotic
pathogens in humans. It enables epidemiologists, medical professionals, and
policymakers to curb existing epidemics and prevent future ones promptly. In
the family Coronaviridae (of which SARS-CoV-2 is a member), it is well-known
that the spike protein is the point of contact between the virus and the host
cell membrane. On the other hand, the two traditional mammalian orders,
Carnivora (carnivores) and Chiroptera (bats) are recognized to be responsible
for maintaining and spreading the Rabies Lyssavirus (RABV). We propose
Virus2Vec, a feature-vector representation for viral (nucleotide or amino acid)
sequences that enable vector-space-based machine learning models to identify
viral hosts. Virus2Vec generates numerical feature vectors for unaligned
sequences, allowing us to forego the computationally expensive sequence
alignment step from the pipeline. Virus2Vec leverages the power of both the
\emph{minimizer} and position weight matrix (PWM) to generate compact feature
vectors. Using several classifiers, we empirically evaluate Virus2Vec on
real-world spike sequences of Coronaviridae and rabies virus sequence data to
predict the host (identifying the reservoirs of infection). Our results
demonstrate that Virus2Vec outperforms the predictive accuracies of baseline
and state-of-the-art methods. | Sarwan Ali, Babatunde Bello, Prakash Chourasia, Ria Thazhe Punathil, Pin-Yu Chen, Imdad Ullah Khan, Murray Patterson | 2023-04-24T08:17:16Z | http://arxiv.org/abs/2304.12328v1 | # Virus2Vec: Viral Sequence Classification Using Machine Learning
###### Abstract
Understanding the host-specificity of different families of viruses sheds light on the origin of, e.g., SARS-CoV-2, rabies, and other such zoonotic pathogens in humans. It enables epidemiologists, medical professionals, and policymakers to curb existing epidemics and prevent future ones promptly. In the family Coronaviridae (of which SARS-CoV-2 is a member), it is well-known that the spike protein is the point of contact between the virus and the host cell membrane. On the other hand, the two traditional mammalian orders, Carnivora (carnivores) and Chiroptera (bats) are recognized to be responsible for maintaining and spreading the Rabies Lyssavirus (RABV). We propose Virus2Vec, a feature-vector representation for viral (nucleotide or amino acid) sequences that enable vector-space-based machine learning models to identify viral hosts. Virus2Vec generates numerical feature vectors for unaligned sequences, allowing us to forego the computationally expensive sequence alignment step from the pipeline. Virus2Vec leverages the power of both the _minimizer_ and position weight matrix (PWM) to generate compact feature vectors. Using several classifiers, we empirically evaluate Virus2Vec on real-world spike sequences of Coronaviridae and rabies virus sequence data to predict the host (identifying the reservoirs of infection). Our results demonstrate that Virus2Vec outperforms the predictive accuracies of baseline and state-of-the-art methods.
## Data and Code Availability
We extracted the labeled spike protein sequences for the COVID-19 hosts dataset from GISAID 1 and the labeled nucleotide genome sequences for the rabies virus hosts dataset from RABV-GLUE 2, where the label is the name of the host for which we are classifying the sequences. Our preprocessed dataset and code are available online 3.
Footnote 1: [https://www.gisaid.org/](https://www.gisaid.org/)
2. [http://rabv-glue.cvr.gls.ac.uk/#/home](http://rabv-glue.cvr.gls.ac.uk/#/home)
3. [https://github.com/sarwanpasha/Virus2Vec](https://github.com/sarwanpasha/Virus2Vec)
## 1 Introduction
The global COVID-19 pandemic has drawn the attention of researchers to understanding the origin of (zoonotic) viruses in humans. In the case of the coronaviruses (the family Coronaviridae), it has been established that SARS was transmitted to humans from civets and MERS-CoV from dromedary camels (Reusken et al., 2014). In contrast, it is widely thought that SARS-CoV-2 (which causes COVID-19) originated from bats (Zhou et al., 2020). However, numerous zoonotic diseases have been around for a while, and medical professionals have been attempting to combat them better. Rabies is one such illness, with a near 100% death rate after symptoms appear (Taylor and Nel, 2015). All mammal species are
susceptible to rabies. However, domestic dog bites account for up to 99% of human rabies cases (WHO -- Rabies, 2021). There is frequent spillover from dogs into other carnivores, but typically this only results in transient chains of transmission. Therefore, it is crucial to locate and monitor any potential wildlife reservoirs (Worsley-Tonks et al., 2020). It is equally important to understand the origins of such diseases in order to create effective prevention and mitigation measures as well as vaccines and therapeutics. Pathogen sequence data are readily available, and genomic monitoring is being used more and more frequently. Genomic tools and classification algorithms need to be updated to account for this. New genomic technologies (Gigante et al., 2020) and machine learning-based classification can improve disease control and epidemic response (Ali et al., 2023; Chourasia et al., 2023b).
The coronaviruses (CoVs) are grouped into five genera, infecting different hosts, including humans, palm civets, bats, dogs, and monkeys, among others (Li et al., 2006). CoVs are known to mutate quickly and adapt to new environments. They have shown a capacity for animal-to-human, human-to-animal, and animal-to-animal transmission (Graham and Baric, 2010). There have been accounts of cross-species transmission and alterations in viral tropism resulting in new diseases in different hosts (Shi and Hu, 2008; Vijgen et al., 2006). The surface (S) protein, or spike protein, of different CoVs is key to the binding and entry of the virus into the host cell and determines the range of host specificity. It is composed of the receptor-binding domain, or S1 subunit, and the S2 subunit (see Figure 1), which harbors sequences for viral fusion to the cell membrane (Li et al., 2006). The spike proteins of CoVs recognize different receptors across different hosts. Also, the sequences of the S1 subunit of CoVs have been reported to show differences across genera (Li, 2016).
The rabies Lyssavirus (RABV) belongs to the genus Lyssavirus in the Rhabdoviridae family. Rhabdoviruses are simple viruses that encode five proteins and appear as bullet-shaped, enveloped virions with glycoprotein spikes on the surface. These virions have a helical nucleocapsid within the envelope that is symmetrically coiled into a cylindrical structure. The nucleocapsid is composed of one molecule of negative-sense, single-stranded RNA about 12kb long. Even with only five proteins encoded, the virus can protect itself from ribonuclease digestion and retain a shape ideal for transcription. Five proteins (N, P, M, G, and L) are produced, as shown in Figure 2.
Traditional methods based on phylogenetic tree construction are computationally expensive and do not scale to the large volume of sequence data (Hadfield et al., 2018). Employing machine learning on sequencing data is a viable alternative (Ali and Patterson, 2021). However, some of the existing sequence classification methods, such as the one proposed in (Kuzmin et al., 2020), require the sequences to be aligned (sequence characters must be in one-to-one correspondence). Aligning large volumes of sequences is computationally expensive (if required) and relies on expert knowledge that can potentially introduce bias into the data (Golubchik et al., 2007). Alignment-free embedding techniques may aid in planning future endemic/pandemic protection measures on time (Ali et al., 2023b). They will be helpful in swiftly implementing machine learning solutions and serve as excellent tools for healthcare professionals.
This paper proposes a feature vector generation method named Virus2Vec. We represent the spike protein sequences of SARS-CoV-2 and the rabies virus nucleotide sequence data using Virus2Vec, which enables improved host identification and downstream clustering and classification tasks. Virus2Vec combines _minimizers_ and the position weight matrix (PWM) for a compact, alignment-free representation of amino acid sequences. Although the notion of minimizer has previously been used in metagenomics (Girotto et al., 2016; Chourasia et al., 2023a), it has not (to the best of our knowledge) been used for viral sequence classification. The main contributions of this work are:
1. We propose Virus2Vec, a compact alignment-free embedding approach based on minimizers and the position weight matrix to generate a feature vector representation of different coronaviruses and rabies virus sequence data.
Figure 1: The coronavirus genome is 26–32kb in length. The structural genes include spike (S), envelope (E), membrane (M), and nucleocapsid (N). S region encodes the spike protein.
Figure 2: The rabies genome is 12kb in length and encodes five proteins Nucleoprotein (N), Phosphoprotein (P), Matrix Protein (M), Glycoprotein (G), and Polymerase (L).
2. Our method eliminates the need for the sequence alignment step (multiple sequence alignment is an NP-Hard problem (Chatzou et al., 2016)) from the classification pipeline (unlike (Kuzmin et al., 2020; Ali et al., 2022a)) while maintaining the performance of the underlying classifiers.
3. We show that without aligning the sequences and using a fraction of the information as compared to a more traditional \(k\)-mers based approach (Ali et al., 2023d), we are still able to outperform the baselines and state-of-the-art (SOTA) methods.
4. Virus2Vec is a compact sequence representation scheme that is scalable to "Big Data" and can also be used for many other sequence analysis tasks.
Our manuscript is organized as follows: Section 2 reviews previous work on sequence classification. Section 3 details our proposed alignment-free method for sequence classification. Section 4 contains the experimental setup and the dataset collection and statistics. The results for our proposed method are in Section 5. Finally, we conclude the paper in Section 6.
## 2 Related Work
After the spread of COVID-19, efforts have been made to study the virus's behavior by applying machine-learning approaches to biological sequences. Using a trait-based approach, authors in (Worsley-Tonks et al., 2020) identified candidate wildlife species that may contribute to the transmission and maintenance of rabies Lyssavirus (RABV). This approach has a problem, since the domestic dog (Canis lupus familiaris) is regarded as a key reservoir in many developing nations, particularly in African and Asian countries (Cleaveland and Hampson, 2017). Because of the vast number of canine cases and the absence of standard wildlife monitoring systems (Vercauteren et al., 2012) or diagnostic assays, most wildlife species' contributions to the preservation of certain RABV variants are still largely unknown (Cordeiro et al., 2016). Among modern approaches, some work has been done using sequence data for machine learning-based solutions.
Authors in (Ali et al., 2023b) use \(k\)-mers and a kernel-based approach to classify the spike sequences. However, it cannot scale to big data because of memory inefficiency. Authors in (Kuzmin et al., 2020) propose using one-hot encoding to classify the viral hosts of coronavirus using spike sequences only. Although they achieved higher predictive performance, authors in (Ali et al., 2023c) show that the \(k\)-mers-based approach outperforms the one-hot encoding. Authors in (Taslim et al., 2023; Ali et al., 2023a) propose a faster method for embedding generation, but it mainly focuses on faster implementation rather than on generating a compact and effective embedding.
For the embedding generation of short-read data, authors in (Chourasia et al., 2023a) advise using a minimizer-based technique. Additionally, the classification of metagenomic data has been suggested in (Wood and Salzberg, 2014a; Kawulok and Deorowicz, 2015). To obtain accurate read binning for metagenomic data, the authors in (Girotto et al., 2016) use probabilistic sequence signatures. According to the theoretical work on minimizers (Zheng et al., 2020), there is a close relationship between universal hitting sets and minimizer schemes, where efficient (low-density) minimizer schemes correspond to small universal hitting sets. The main issue with all of these methods is that they are intended for short-read data and cannot be used in real-world situations with millions of sequences, because they do not scale to larger datasets.
Sequence analysis, motif predictions, and identification investigations have effectively used position weight matrix (PWM) based techniques. A number of well-known software programs or web servers, such as the PWM-scan software package (Ambrosini et al., 2018), and PSI-BLAST (Bhagwat and Aravind, 2007), have been developed based on the implementation of PWMs. The development of a PWM-based approach for protein function prediction and a justification for the PWM and its related characteristics' high potential for protein sequence analysis are presented in (cheol Jeong et al., 2010). Although the aforementioned approaches are effective in these fields, they do not offer a universal approach for designing a feature embedding for the underlying sequence, which would contain rich information about the sequence and serve as input to various machine learning algorithms.
In another work a position weight matrix (PWM) based approach is proposed in (Ali et al., 2022a), which generates a fixed-length representation of spike sequences based on weights of \(k\)-mers computed using a PWM. However, their method only works with aligned sequence data. Authors in (Girotto et al., 2016) propose the use of minimizers for metagenomic data. Since metagenomic data contains short reads, each can be represented by a single minimizer (\(m\)-mer) (Chourasia et al., 2022b,
2023a). Their approach is not directly applicable to our scenario.
## 3 Proposed Approach
This section discusses our proposed alignment-free methods based on minimizers and the position weight matrix (PWM) to design a better feature vector representation from spike amino acid sequences and rabies virus nucleotide sequences. The problem of sequence classification is challenging due to the following points.
1. Sequences can have different lengths. Designing a fixed-length numerical representation without loss of information becomes challenging.
2. Mutations (changes in the sequence) do not happen randomly but rather due to selection pressures. For example, disproportionately many mutations happen in the spike region of coronaviruses due to its importance in interfacing with the host. Designing a model to capture those variations is challenging.
3. Some of the existing methods require sequence alignment as a preprocessing step. Designing a scalable alignment-free method without compromising predictive performance is challenging.
### Virus2Vec
Although \(k\)-mers-based frequency vectors are proven to be efficient and perform better than the traditional one-hot encoding on aligned sequences (Ali et al., 2023b), a major problem with \(k\)-mers is that too many (similar) \(k\)-mers are generated for a given sequence (Wood and Salzberg, 2014b). Counting these similar \(k\)-mers can be an expensive -- and redundant -- task, as for each \(k\)-mer we need to check into which "bin" of the frequency vector it falls. Another issue is storing all \(k\)-mers in memory, especially for longer sequences. Hence, we need a memory-efficient approach to make the overall algorithm scalable. For a given \(k\)-mer, a _minimizer_ of length \(m\) (\(m<k\)) is the \(m\)-mer that is lexicographically smallest in both the forward and reverse order of the \(k\)-mer.
**Remark 1**: _Authors in (Singh et al., 2017) considered the first \(m\) characters from \(k\)-mers (to design \(m-mers\)) rather than selecting the lexicographically smallest \(m\) characters. However, we noted that we were getting better results by considering the smallest \(m\) characters lexicographically. Thus, we use this approach._
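A minimal Python sketch of the minimizer computation is shown below; the convention of also scanning the substrings of the reversed \(k\)-mer is our reading of "forward and reverse order" in the definition above.

```python
def get_minimizer(kmer: str, m: int) -> str:
    """Return the lexicographically smallest m-mer of the k-mer,
    considering it in both forward and reverse order."""
    candidates = [kmer[i:i + m] for i in range(len(kmer) - m + 1)]
    rev = kmer[::-1]
    candidates += [rev[i:i + m] for i in range(len(rev) - m + 1)]
    return min(candidates)

# Example with the spike fragment of Figure 3: one minimizer per k-mer.
seq, k, m = "MDPEGRKMLSVBSLRDS", 9, 3
minimizers = [get_minimizer(seq[i:i + k], m) for i in range(len(seq) - k + 1)]
```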
See Figure 3 for an example of a set of \(k\)-mers and the corresponding minimizers. To compute a minimizer, a sliding window is used on the \(k\)-mer to collect candidate \(m\)-mers from both directions (forward and reverse), and the lexicographically smallest candidate is selected as the minimizer for that \(k\)-mer. In this way, minimizers ignore many amino acids in each \(k\)-mer, preserving only a fraction of the \(m\)-mers, so binning these \(m\)-mers becomes much more efficient. Using the minimizers for \(m=3\), which is decided empirically using the standard validation set approach (Devijver and Kittler, 1982), we generate a fixed-length feature vector of length \(|\Sigma|^{m}\). For each minimizer, we compute a weight using the "Position Weight Matrix" (PWM) method. Figure 4 shows the flow diagram for Virus2Vec.
Figure 4 consists of steps (a)-(g), explained next. Given an input spike protein sequence, in Figure 4(a) we extract the minimizers (\(m\)-mers) of length \(3\) (decided using a standard validation set approach (Devijver and Kittler, 1982)). A Position Frequency Matrix (PFM) is generated, as shown in Figure 4(b), which contains the frequency count of each character at each position. In our experiments, since we have \(20\) unique amino acids in the spike protein sequence dataset, our PFMs have 20 rows and \(m=3\) columns, whereas for the rabies virus sequence dataset we have \(4\) unique nucleotides, so the PFMs have 4 rows and \(m=3\) columns. In Figure 4(c), we normalize the PFM to create a Position Probability Matrix (PPM) containing the probability of each amino acid at each position.
It is possible that the frequency (hence the probability in the PPM) of a character at a certain position is 0. To avoid zeros, we add a Laplace estimator or pseudocount to each value in the position probability matrix, as shown in Figure 4(d). We use a pseudocount of 0.1 in our experiments (Nishida et al., 2009). A position weight matrix (PWM) is then computed from the adjusted probability matrix (after adding the pseudocount). We make the PWM by computing the log-likelihood of each amino acid character \(c\), i.e., \(c\in\{A,C,\ldots,Y\}\) for spike sequences or \(c\in\{A,C,G,T\}\) for rabies virus sequences, appearing at each position \(i\) according to the following expression:
Figure 3: Example of \(k\)-mers and their corresponding minimizers in a spike amino acid sequence “MDPEGRKMLSVBSLRDS”.
\[W_{c,i}=\log_{2}\frac{p(c,i)}{p(c)} \tag{1}\]
where \(c\in\{A,C,\ldots,Y\}\) (amino acids) or \(c\in\{A,C,G,T\}\) (bases), and
\[p(c)=\frac{n(c)}{61} \tag{2}\]
The \(n(c)\) is the number of codons for each amino acid (i.e., 1 for M and W; 2 for C, F, Y, H, Q, N, K, D, and E; 3 for I; 4 for V, P, T, A, and G; and 6 for L, R, and S), and \(61\) is the number of sense codons.
As shown in Figure 4(e), using Equation 1, a scalar weight is assigned to each amino acid at each position of the \(m\)-mer. After obtaining the PWM, we use it to compute the absolute score of each individual minimizer generated from the sequence, namely the sum of the per-position weights of its characters.
After getting the score for each \(m\)-mer, in the final step, shown in Figure 4(g), we generate a vector of length \(|\Sigma|^{m}\). We add the score of each \(m\)-mer (computed using the PWM-based approach) to the corresponding bin to get the final feature vector representation. The pseudocode for Virus2Vec is given in Algorithm 1.
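A condensed sketch of steps (b)-(g) for a single sequence is given below. It assumes the set of minimizers has already been extracted, and the variable names are ours rather than those of Algorithm 1; the codon counts follow the list above.

```python
import numpy as np
from itertools import product

# Codon counts per amino acid, as listed above (they sum to 61 sense codons).
N_CODONS = {**{c: 1 for c in "MW"}, **{c: 2 for c in "CFYHQNKDE"},
            "I": 3, **{c: 4 for c in "VPTAG"}, **{c: 6 for c in "LRS"}}

def virus2vec(minimizers, alphabet="ACDEFGHIKLMNPQRSTVWY", m=3, pc=0.1):
    """Sketch of steps (b)-(g) for one sequence: PFM -> PPM -> PWM, then
    accumulate each minimizer's PWM score into a |alphabet|^m vector."""
    idx = {c: i for i, c in enumerate(alphabet)}
    pfm = np.zeros((len(alphabet), m))
    for mm in minimizers:                       # step (b): position counts
        for pos, c in enumerate(mm):
            pfm[idx[c], pos] += 1
    ppm = pfm / pfm.sum(axis=0) + pc            # steps (c)-(d): probabilities
    p_c = np.array([N_CODONS[c] for c in alphabet]) / 61.0
    pwm = np.log2(ppm / p_c[:, None])           # step (e): Eq. (1)
    bins = {"".join(t): i for i, t in enumerate(product(alphabet, repeat=m))}
    v = np.zeros(len(alphabet) ** m)
    for mm in minimizers:                       # steps (f)-(g): score and bin
        v[bins[mm]] += sum(pwm[idx[c], pos] for pos, c in enumerate(mm))
    return v
```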
**Remark 2**: _Note that steps b to f in Figure 4 are the same as given in PWM2Vec (Ali et al., 2022a). Our method differs in that it works with minimizers instead of k-mers (our input differs, as given in step a). The idea of using minimizers is that they are proven to work better than \(k\)-mers in the metagenomic domain (Girotto et al., 2016); however, their use for full-length sequences is not well explored. Similarly, our feature embedding is "general" and works for both aligned and unaligned sequences (see step g), unlike PWM2Vec, which only works with aligned sequences. Our methods also differ in the computation of the likelihood weight for each amino acid, where we consider \(\log_2\frac{p(c,i)}{p(c)}\) rather than an equal probability for each amino acid (which is \(\frac{1}{\text{Unique AA Count}}\)), as in the PWM2Vec method._
```
Input: Set of spike sequences S, alphabet Σ, k-mer length k, m-mer length m
Output: Weighted frequency matrix V

function CompFreqVector(S, Σ, k, m)
    V = []                                ▷ weighted frequency matrix
    for i ← 1 to |S| do
        A = CompMinimizer(S[i], k, m)     ▷ minimizers (m-mers) of sequence i
        PFM = 0 * [|Σ|][m]
        for p ← 0 to m do
            PFM[:, p] = GetAlphabetCount(A[:, p])
        end for
        PC = 0.1                          ▷ pseudocount
        PPM = CompProbability(PFM) + PC
        p(c) = n(c) / 61
        PWM = log2(PPM / p(c))            ▷ Eq. (1)
        W = []
        for u ← 0 to |A| do
            W.append(CompMmersScore(A[u]))
        end for
        combos = GenAllCombinations(Σ, m)
        v = [0] * |Σ|^m
        for j ← 1 to |A| do
            idx = combos.index(A[j])      ▷ find bin of j-th m-mer
            v[idx] ← v[idx] + W[j]
        end for
        V.append(v)
    end for
    return V
end function
```
**Algorithm 1** Virus2Vec Overall Computation
## 4 Experimental Setup
This section describes the experimental setup, followed by the dataset statistics in Section 4.1. We also give a visual representation of the data using t-SNE plots in Section 4.2. We then introduce the baseline models in Section 4.3, with a detailed description in Section 4.4, followed by a discussion of the ablation study in Section 4.5.
All experiments are conducted using an Intel(R) Xeon(R) CPU E7-4850 v4 @ \(2.10\)GHz running 64-bit Ubuntu (\(16.04.7\) LTS Xenial Xerus) with 3023 GB memory. The algorithm is implemented in Python, and the code is available online for reproducibility 4. We use several algorithms and metrics for classification, as shown in Table 4. We use the area under the receiver operating characteristic curve (ROC-AUC) because it is prone to overestimation and pulls the false positive rate towards zero (Sofaer et al., 2019). Finally, we report each classifier's training time (in seconds). We split the data into \(70-30\%\) training and testing (held-out) sets, respectively. We run experiments with \(5\) random initializations for train-test splits and report average results. We use \(5\)-fold cross-validation on the training set for hyperparameter tuning.
Figure 4: Virus2Vec Flow Diagram.
Footnote 4: [https://github.com/sarwanpasha/Virus2Vec](https://github.com/sarwanpasha/Virus2Vec)
### Dataset Collection and Statistics
In this work, we use two datasets: spike sequences from the SARS-CoV-2 virus, and rabies virus sequence data. The statistics and details are provided in Table 1. Multiple sequence alignment (MSA) is performed with the MAFFT software to obtain aligned sequences for the Coronavirus Host Data, which we use to evaluate the baseline approaches, some of which require aligned sequences.
### Data Visualization
In order to see whether there is any natural (hidden) clustering in the data, we use t-distributed stochastic neighbor embedding (t-SNE) (Van der Maaten and Hinton, 2008), which maps input sequences to a 2D representation. Particularly in the natural sciences, t-SNE is popular because of its ability to handle vast volumes of data and its use for dimensionality reduction while maintaining the structure of the data (Chourasia et al., 2022). Here, we use t-SNE to analyze and contrast the ability of various embeddings to preserve structure. The t-SNE plots of SARS-CoV-2 spike sequences for the different embedding methods are shown in Figure 5 for Spike2Vec, Approx. Kernel, MFV, PWM2Vec, and Virus2Vec, respectively. We observe that with Virus2Vec, t-SNE preserves the structure of the data as well as it does with the other embedding methods, showing that Virus2Vec preserves the overall structure of the data. Similarly, Figure 6 shows the t-SNE plots for the rabies virus data for the same embeddings, with results similar to the spike data: Virus2Vec does not disturb the structure and even provides better clusters than the baseline embeddings.
### Baseline Models
A brief description of the baselines and their comparison with the proposed model is provided in Table 3. A detailed description is given in Section 4.4. We use \(3\) neural network models, whose configurations are provided in Table 2.
### Baseline Methods Description
One-Hot Encoding (OHE): A fixed-length numerical feature vector, called OHE, is proposed in (Kuzmin et al., 2020). It generates a binary (0-1) vector based on each character's position in the sequence, given \(\Sigma\). Since the length of each spike sequence (after alignment) in our data is \(3498\), the length of the OHE vector for a spike sequence is \(3498\times 20=69,960\).
Spike2Vec: The spike sequence classification method Spike2Vec is recently proposed in (Ali and Patterson, 2021). Given a sequence, Spike2Vec computes \(N\) \(k\)-mers, where \(N=L-k+1\) (\(L\) is the length of the spike sequence and \(k=3\) as given in (Ali and Patterson, 2021)). After generating the \(k\)-mers for a spike sequence, the count of each \(k\)-mer is used to get the frequency vector.
Approximate Kernel: A kernel-based method for sequence classification is proposed in (Ali et al., 2022) (for reference, we call this method "Approx. Kernel"). It computes the kernel value between two sequences using the dot product based on the matches and mismatches among the \(k\)-mers spectrum. This kernel matrix is then used in our case as input to kernel PCA (Hoffmann, 2007) to obtain the feature vector representation.
PWM2Vec: PWM2Vec (Ali et al., 2022) assigns different weights to each \(k\)-mer in the feature vector depending on the values of the characters in the position weight matrix. The feature vector length equals the total number of \(k\)-mers in an aligned sequence; since the number of \(k\)-mers differs between (unaligned) sequences of different lengths, this method only applies to aligned sequences. As the length of sequences in our dataset is \(3498\) (after alignment), the feature vector length for PWM2Vec is \(3490\) (the total number of \(k\)-mers in a sequence, with \(k=9\) as mentioned in (Ali et al., 2022)).
Spaced \(k\)-mers: The feature embeddings generated by \(k\)-mers suffer from sparsity and the curse of dimensionality, which negatively impacts analytical performance. To address these issues, the concept of spaced \(k\)-mers was introduced (Singh et al., 2017). Spaced \(k\)-mers are a set of non-contiguous substrings of length \(k\), also known as
\(g\)-mers. To generate an embedding using spaced \(k\)-mers, the \(g\)-mers of the sequence are first computed, and then \(k\)-mers are extracted from these \(g\)-mers. In our experiments, we use \(k=4\) and \(g=9\), which were selected using a standard validation set approach.
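A minimal sketch of this extraction is shown below. Since the text leaves the exact extraction ambiguous, we assume that only the leading \(k\)-mer of each \(g\)-mer is kept, which is one common reading of the construction; the gap \(g-k\) then thins out the set of \(k\)-mers relative to ordinary \(k\)-mers.

```python
def spaced_kmers(sequence: str, k: int = 4, g: int = 9):
    """Compute the g-mers of the sequence, then keep the k-mer at the start
    of each g-mer (k=4, g=9 as in the text)."""
    return [sequence[i:i + k] for i in range(len(sequence) - g + 1)]

# Example: spaced 4-mers of a short spike fragment.
print(spaced_kmers("MDPEGRKMLSV", k=4, g=9))  # ['MDPE', 'DPEG', 'PEGR']
```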
To build the neural network (NN) models (for end-to-end training on the sequences), we transform the raw sequence data into a numerical form that machines can process. We use the one-hot encoding method (with data padding), considering the 20 amino acids in the data. The NN models are described below.
\begin{table}
\begin{tabular}{l l l c c c c c c} \hline \hline Name & Type & Source & Count & Classes & Min & Max & Avg & Mode \\ \hline Coronavirus Host Data & Spike protein sequences for COVID-19 hosts & GISAID & 5558 & 22 & 9 & 1584 & 1272.4 & 1273 \\ Rabies Virus Host Data & Nucleotide genome sequences for rabies virus hosts & RABV-GLUE & 20051 & — & — & — & — & — \\ \hline \hline \end{tabular}
\end{table}
Table 1: Data Statistics. The four rightmost columns report the sequence length.
Figure 5: t-SNE plots for **Coronavirus Host data** (\(5558\) sequences) for different feature embeddings. The figure is best seen in color.
Figure 6: t-SNE plots for **Rabies Virus** (\(20051\) sequences) for different feature embeddings. The figure is best seen in color.
#### 4.4.1 Long Short-Term Memory (LSTM)
The LSTM architecture (Hochreiter and Schmidhuber, 1997) consists of an embedding layer (embedding dimension \(500\)), an LSTM layer with \(200\) memory units, a LeakyReLU layer with alpha = \(0.05\), another LSTM layer with \(200\) memory units followed by another LeakyReLU layer, a dropout of \(0.2\), a dense layer of dimension \(500\) followed by a LeakyReLU layer, and finally an output layer with a sigmoid activation function. We use the ADAM optimizer (Diederik P. Kingma, 2015).
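A minimal Keras sketch matching this description is shown below (written against TF 2.x). The vocabulary size, padded length, number of classes, and loss function are placeholders chosen by us for illustration, not values stated in the paper.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder values (not from the paper): 21 tokens for the 20 amino acids
# plus padding, padded length 3498, and 22 host classes.
vocab_size, seq_len, n_classes = 21, 3498, 22

model = keras.Sequential([
    layers.Embedding(vocab_size, 500, input_length=seq_len),
    layers.LSTM(200, return_sequences=True),   # first LSTM, 200 memory units
    layers.LeakyReLU(alpha=0.05),
    layers.LSTM(200),                          # second LSTM, 200 memory units
    layers.LeakyReLU(alpha=0.05),
    layers.Dropout(0.2),
    layers.Dense(500),
    layers.LeakyReLU(alpha=0.05),
    layers.Dense(n_classes, activation="sigmoid"),
])
model.compile(optimizer="adam",                 # ADAM, as in the text
              loss="categorical_crossentropy",  # assumed; not stated above
              metrics=["accuracy"])
```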
#### 4.4.2 Gated Recurrent Unit (GRU)
The GRU model (Cho et al., 2014) consists of an embedding layer (embedding dimension \(500\)), a GRU layer with \(200\) memory units, a LeakyReLU layer with alpha = \(0.05\) followed by a dropout layer of \(0.2\), and finally a dense output layer with a sigmoid activation function. We also use the ADAM optimizer in the GRU architecture.
#### 4.4.3 Convolutional Neural Network (CNN)
The CNN model (Lee et al., 2017) comprises an embedding layer (embedding dimension \(500\)), a 1-D convolution layer (Conv1D) with \(128\) filters and a kernel size of \(5\), a LeakyReLU layer with alpha = \(0.05\), a batch normalization layer, another Conv1D layer with \(128\) filters and a kernel size of \(5\), a LeakyReLU layer with alpha = \(0.05\) followed by batch normalization, a max pooling layer (pool size \(2\)), a dense layer of dimension 500 followed by a LeakyReLU layer with alpha = \(0.05\), and finally an output dense layer with a sigmoid activation function. For optimization, we use the ADAM optimizer.
### Ablation Study
MFV: The minimizer-based feature vector (MFV) is a method in which, for a given \(k\)-mer, a _minimizer_ of length \(m\) (\(m<k\)) is computed: the \(m\)-mer that is lexicographically smallest in the forward and reverse order of the \(k\)-mer (see Figure 3 for an example). A fixed-length frequency vector is then built from the set of minimizers, containing each minimizer's count in the corresponding bin. The pseudocode for generating the frequency vectors is given in Algorithm 2. Given \(\Sigma\) and minimizer length \(m\) (where \(m=3\)), the length of each vector is \(|\Sigma|^{m}\).
```
Input: Set S of k-mers or minimizers (m-mers) over alphabet Σ,
       mer size n (k for k-mers, m for minimizers)
Output: Frequency vector V

function GetFrequencyVector(S, n, Σ)
    combos = GenAllPossibleNMers(Σ, n)
    V = [0] * |Σ|^n                 ▷ zero vector
    for i ← 1 to |S| do
        idx = combos.index(S[i])    ▷ find bin of i-th mer
        V[idx] ← V[idx] + 1         ▷ increment bin by 1
    end for
    return V
end function
```
**Algorithm 2** Build Frequency vector from \(k\)-mers and/or Minimizers
PSWM2Vec is a variant that flattens the (pseudocount-adjusted) position probability matrix of the minimizers into a feature vector, rather than scoring bins with the PWM. The pseudocode for PSWM2Vec is given in Algorithm 3. The results for MFV, PSWM2Vec, and Virus2Vec on aligned data are given in Table 6 and on unaligned data in Table 4. Virus2Vec outperforms the other two embeddings for all but one evaluation metric.
```
Input: Set of spike sequences S, alphabet Σ, k-mer length k, m-mer length m
Output: Weighted frequency matrix V

function ComputeFreqVector(S, Σ, k, m)
    V = []                              ▷ weighted frequency matrix
    for i ← 1 to |S| do
        mList = CompMinimizer(S[i], k, m)
        PFM = 0 * [|Σ|][m]
        for p ← 0 to m do
            PFM[:, p] = GetAlphbtCnt(mList[:, p])
        end for
        PC = 0.1                        ▷ pseudocount
        PPM = CompProbability(PFM) + PC
        v = flat(PPM)                   ▷ flatten PPM to get vector v
        V.append(v)
    end for
    return V
end function
```
**Algorithm 3** PSWM2Vec algorithm pseudocode
## 5 Results and Discussion
In this section, we present results for proposed embeddings and compare their performance with the baseline and SOTA methods (with and without sequence alignment).
Our results show that Virus2Vec not only outperforms the SOTA methods but also preserves data structure, as seen in the subjective evaluation using t-SNE plots. The low runtime needed to generate the embeddings is another strong factor favoring Virus2Vec over other embeddings.
Table 4 shows the results of the different embedding methods (on unaligned data) with different classification algorithms. We can observe that Virus2Vec outperforms the SOTA methods in terms of the different evaluation metrics. Note that since OHE and PWM2Vec work with aligned data only, they are not included in the unaligned-data results. The performance of Virus2Vec is not very different between aligned and unaligned data. Other methods, such as the Approx. Kernel, have better ROC-AUC values on aligned data than on unaligned data; Spike2Vec, on the other hand, generalizes better on unaligned data. Virus2Vec outperforms not only the feature-engineering-based baselines but also the neural network classifiers. The NN models are known for their application to image data, and since the dataset is of smaller size, it is likely that the NN models failed to learn the patterns in the sequences. The findings are corroborated by their visualization counterparts as well: as we saw in the t-SNE plots, Virus2Vec does not disrupt the general structure of the data, since t-SNE retains the structure with Virus2Vec as well as it does with the other embedding methods.
We show the runtime comparison for computing the different feature embeddings in Table 5. The results show that Virus2Vec takes the least time to generate for the \(5558\) sequences compared to the other methods: \(4\) times less than the Approximate Kernel method and 15 times less than PWM2Vec, which are its closest competitors in accuracy. Similarly, for the rabies virus host dataset, embedding generation is much faster with Virus2Vec. Although one-hot encoding takes less time, the preprocessing required for sequence alignment is not counted here, which makes it expensive overall.
\begin{table}
Columns: Embedding; Alignment Free; Low-Dimensional Vectors; Vector Length (Spike/Rabies); Space Efficient; Runtime Efficient. Rows: One-Hot Encoding; Spike2Vec; Approx. Kernel; PWM2Vec; LSTM; GRU; CNN; Virus2Vec (proposed).
\end{table}
Table 3: Baseline and proposed methods: advantages and disadvantages.
Table 6 shows the results of the different embeddings (on aligned data) with different classifiers. Here too, Virus2Vec outperforms the feature-engineering-based baselines and the neural network classifiers. We can see a significant improvement in accuracy compared with the Approximate Kernel and PWM2Vec. This is certainly an advantage, but the reduced runtime for embedding generation, compared with the closest comparable embeddings, is also substantial and makes Virus2Vec a better choice.
## 6 Conclusion
We propose an efficient sequence embedding approach named Virus2Vec, which uses an alignment-free method based on minimizers and the PWM to classify the hosts of different coronaviruses using spike sequences. Virus2Vec not only performs better than methods requiring sequence alignment but is also preferable because it is alignment-free. We show that our approach for unaligned sequences is efficient to generate compared to popular alignment-free methods, with comparable predictive performance and better runtimes. In the future, we will focus on collecting more data to evaluate the scalability of Virus2Vec. Such an approach could also work on _unassembled_ (short-read) data (not just unaligned), similarly to how it works for metagenomics.
|
2307.05304 | Evidence of social learning across symbolic cultural barriers in sperm
whales | We provide quantitative evidence suggesting social learning in sperm whales
across socio-cultural boundaries, using acoustic data from the Pacific and
Atlantic Oceans. Traditionally, sperm whale populations are categorized into
clans based on their vocal repertoire: the rhythmically patterned click
sequences (codas) that they use. Among these codas, identity codas function as
symbolic markers for each clan, accounting for 35-60% of codas they produce. We
introduce a computational method to model whale speech, which encodes rhythmic
micro-variations within codas, capturing their vocal style. We find that vocal
style-clans closely align with repertoire-clans. However, contrary to vocal
repertoire, we show that sympatry increases vocal style similarity between
clans for non-identity codas, i.e. most codas, suggesting social learning
across cultural boundaries. More broadly, this subcoda structure model offers a
framework for comparing communication systems in other species, with potential
implications for deeper understanding of vocal and cultural transmission within
animal societies. | António Leitão, Maxime Lucas, Simone Poetto, Taylor A. Hersh, Shane Gero, David Gruber, Michael Bronstein, Giovanni Petri | 2023-07-07T16:46:25Z | http://arxiv.org/abs/2307.05304v2 | # Social learning across symbolic cultural barriers in non-human cultures
###### Abstract
Social learning is key in the development of both human and non-human animal societies. Here, we provide quantitative evidence that supports the existence of social learning in sperm whales across socio-cultural barriers, based on acoustic data from locations in the Pacific and Atlantic Oceans. Sperm whale populations have traditionally been partitioned into clans based on their _vocal repertoire_ (_what_ they say) of rhythmically patterned clicks (codas), and in particular their _identity_\(\mathit{ads}\), which serve as symbolic markers for each clan. However, identity codes account for between 35% and 60% of all codas vocalized depending on the different clans. We introduce a computational modeling approach that recovers clan structure and shows new evidence of social learning across clans from the internal temporal structure of _non-identity codas_, the remaining fraction of codas. The proposed method is based on _vocal style_, which encodes _how_ sperm whales assemble individual clicks into codas. Specifically, we modeled clicking pattern data using generative models based on variable length Markov chains, producing what we term "subcode trees". Based on our results, we propose here a new concept of _vocal identity_, which consists of both vocal repertoire and style. We show that (i) style-delimited clans are similar to repertoire-delimited clans, and that (ii) symmetry increases vocal style similarity between clans for non-identity codas, but has no significant effect on identity codas. This implies that different clans who geographically overlap have similar styles for most codas, which in turn implies social learning across cultural boundaries. More broadly, the proposed method provides a new framework for comparing communication systems of other animal species, with potential implications for our understanding of cultural transmission in animal societies.
## I Introduction
Cultural transmission is defined as the transmission of information or behaviors between individuals of the same species by means of social learning [1]. While humans represent the pinnacle of such capacity, cultural transmission has been observed in a wide variety of animals, including cetaceans [2; 3], songbirds [4], non-human primates [5], and insects [6].
When animals have the capacity for social learning, group-specific differences can arise and remain stable when they become distinguishable by symbolic markers: arbitrary group-identity signals that are recognizable by both members of the group itself and by members of other groups [7; 8]. In humans, symbolic markers can take a myriad of forms, ranging from visible signs, such as tattoos or garments, to communication cues or signals, such as idiomatic sentences or accents [7; 8; 9]. In animals, however, quantitative evidence of symbolic markers is remarkably scarce, one exception being recent results on the use of _identity codas_ in sperm whale social communication [10].
Sperm whales live in multi-tiered societies and have a complex vocal communication system [11]. They communicate through rhythmic patterns (_codas_) of short broad-band sounds (_clicks_), which have traditionally been classified into a finite set of _coda types_ based on the total number of clicks, their rhythm, and their tempo [12; 13].
The combination of the set of coda types used (_coda usage_) and how frequently each is used (_coda frequency_) makes up a _vocal repertoire_. While there is evidence of individual variation in vocal repertoires [14; 15; 16], sperm whales belonging to the same social unit--a stable, matrilineally-based group of whales--share a similar vocal repertoire which is stable across years [15; 16; 17]. Social units that share substantial parts of their repertoire are said to be part of the same _vocal clan_ [18; 19]. There is clear social segregation between members of different clans, even when living in sympatry, and thus clans mark a higher level of social organization, which appears to be defined on the basis of cultural vocal markers [10; 18; 19].
The clan specificity and redundant usage of certain coda types, termed _identity codas_[10; 13], align with the expectations for symbolic markers of group membership [8]. Furthermore, quantitative evidence that sperm whales themselves use identity codas as such markers has
recently emerged: the more two clans overlap in geographic space, the more different their identity coda usage repertoires are [10]. This is consistent with computational models [8] of the evolution of symbolic marking, which predict that differences between cultural norms will be starkest when inter-group interactions are more common (e.g., in boundary or overlap regions).
All remaining coda types have been referred to as _non-identity_ (non-ID) _codas_ and constitute a very large fraction of sperm whales' total number of coda utterances. In fact, the total number of emitted non-ID codas accounts for more than 6 out of 10 codas (see SM Section 1.1 for the counts per clan and per coda type). This begets the question: if ID codas are used as clan identity signals, what can be said about the remaining 65% of codas?
Here we develop a novel descriptive framework that focuses on the temporal _subcoda structure_, the variations of intervals between successive clicks within codas. We find that these modulations within codas characterize an individual's social unit and clan, effectively fingerprinting coda repertoires from the same clan with something comparable to a clan "accent." We call the way in which individual codas are assembled _vocal style_, as opposed to the _vocal repertoire_.
Thus, we propose a new concept for _vocal identity_ of sperm whales that comprises both style and repertoire.
Crucially, we find that the vocal style of non-ID codas is more similar for more sympatric (i.e., spatially overlapping) clans. In contrast, we do not find an effect of sympatry on the similarity of vocal styles when studying only ID codas. This suggests that geographic overlap induces vocal styles to become more similar between clans, without jeopardizing each clan's acoustic identity signals. Our results strengthen previous results on the use of ID codas as symbolic markers, while providing strong evidence of cultural transmission and social learning of vocalizations among whales of _different_ clans, as predicted by theoretical models [20].
## Results
### Subcoda structure captures variability in sperm whale communication
We model the internal structure of codas at the level of clicks by using variable length Markov chains (VLMCs). Our analytical pipeline is illustrated in Fig. 1. We build each VLMC in two main steps. We first convert codas, naturally represented as sequences of continuous, absolute, inter-click intervals (ICIs), to sequences of discrete ICIs (dICIs), by discretizing time into bins (Fig. 1A). In this way, each dICI represents a narrow range of possible ICI values. The bins have a fixed width \(\delta t\) and thus implicitly correspond to the temporal resolution of our representation (see Methods for details on the optimal choice of \(\delta t\)). Note that although ICIs have units of time (seconds), dICIs are (unit-less) integers, representing multiples of \(\delta t\). Hence, each coda (a sequence of ICIs) is mapped to a string of integers (a sequence of dICIs).
Figure 1: **Statistical modeling of sperm whale communication: our analytical pipeline.** Sperm whale communication consists of rhythmic sequences of clicks, called codas. **A** First, we represented codas as sequences of discrete Inter-Click intervals (dICIs), by fitting absolute ICIs into bins (A, B, C,...). Here, the waveform of a five-click coda is shown. **B** Then, we modeled these sequences with variable-length Markov chains, which can be visualized as _subcoda trees_. These trees can be built for an individual speaker or a group of speakers, and capture the internal structure of codas and hence of sperm whale communication. Whale social units are divided into clans according to their coda usage and repertoire. Each clan has coda types that are used to identify their clan, called ID codas [10]. Only about 35% of all emitted codas are ID codas. All other codas are non-ID. **C** The distance between trees can be calculated to compare the similarity of the communication between (groups of) speakers, and compared with other factors, such as geographical clan overlap (see Methods).
The second part of the pipeline focuses on modeling how dICI sequences are assembled from shorter to longer sequences, up to full codas. Essentially, we want to estimate transition probabilities from a dICI sub-sequence to the next dICI (Fig. 1B). A standard way would be to describe this using \(k\)-order Markov chain models, which encode information on previous sub-sequences up to \(k\) steps in the past of the sequence. However, it is possible that different sequences of dICIs contain different amounts of information or memory regarding potential next dICIs. This is akin to what happens with words (e.g., a word beginning with "re" can continue in more ways than one starting with "zy"). To account for this possibility while also retaining only the most compressed statistical representation of how dICIs are assembled in codas, we employ VLMCs.
VLMCs are generalizations of standard (fixed-memory) Markov chains that allow sub-sequences of dICIs of variable lengths, and keep longer sequences only when they are significantly more informative with respect to the previous sequence than random (see Methods for details on model fitting and selection). Furthermore, VLMCs naturally have a tree structure (see Fig. 1B), because they describe the structure of dependencies in transitions from shorter to longer sequences. In particular, each node represents a sub-sequence of dICIs, and is equipped with a probability distribution of transitions to the next dICIs. The origin node corresponds to the empty sequence, leaf nodes correspond to the longest sequences, and all nodes forming the branch in between correspond to the sub-sequences of that leaf node. Thus, we call VLMCs fitted to coda ICI data _subcoda trees_, and represent coda assembly as a walk from the origin node to the leaf nodes. Three more features of subcoda trees are noteworthy: (i) because the method's input is a set of codas, we can build subcoda trees for repertoires corresponding to different social scales, from individual sperm whales, to social units, all the way up to vocal clans; (ii) the difference between different subcoda trees can be measured using a probabilistic distance (see Methods), which we can use to compare subcoda trees across sperm whale clans; and (iii) subcoda trees can also be used as generative models, to create new synthetic codas in the form of dICI sequences to train downstream machine learning models.
### Vocal style recovers vocal clan structure
The information about vocal style contained in subcoda trees is sufficient to recover the social structure of sperm whales (social units and clans). We show this in two ways. First, we analyze a dataset from sperm whales in Dominica (Dominica dataset) [19]. This dataset has rich annotations (coda type annotations, identity of recorded whales, social relations of recorded whales) which makes it particularly useful for validation. Specifically, sperm whales in the Dominica dataset are divided into two vocal clans, each composed of several social units. For each social unit present in this dataset, we aggregate the individual whales' repertoires and build a subcoda tree. Computing the distance between these trees (see Methods), we find that the distances between social units are significantly smaller within clans than between clans (Fig. 2A). We also find that an agglomerative clustering (average linkage, see Methods for details) on the distance between the subcoda trees correctly clusters social units into their respective clans (Fig. 2B). Without a priori knowledge of the clan memberships, we used vocal style to recover the existing classification of social units into two clans, which was previously done based on similarity between vocal repertoires (i.e., coda types and usage) [19].
Second, we find that the subcoda structure of synthetic
Figure 2: **Vocal style recovers social structure of vocal clans in Dominica sperm whales.****A** We show the similarity of vocal style, measured as subcoda tree-distance, among social units within a vocal clan (_within_) and between two clans (_between_). We used the manual clan assignments from [19] as ground truth. Vocal style is more similar within clans than between clans. **B** We show the hierarchical clustering of social unit subcoda trees. Each leaf corresponds to a social unit, and the colors below show their known clan assignments. The clustering recovers the two-clan structure observed in past work [19].
codas, generated from subcoda trees fitted on real data, closely reproduces that of real codas. To do this, we first train a simple classifier to assign codas to one of the two vocal clans, based on coda type. Variations of the same classifier, trained on the same real data, have been shown to discriminate between individual whales, social units, and clans with high accuracy [21]. We train the classifier on real codas, and then test it on both real and synthetic ones. The synthetic codas were generated using the subcoda tree of each clan, and we made sure the number of codas was similar to that of the original dataset (see Methods for details). We find that synthetic codas are correctly classified into their clans with an accuracy close (\(\sim 85\%\)) to that obtained on the real data (\(\sim 90\%\), see _Supplementary Materials_ Section 4).
Motivated by these results, we extend our analysis to a much larger dataset from the Pacific Ocean (Pacific dataset) [10]. This dataset is more sparsely annotated because of the breadth of its spatial coverage. We restricted our analyses to a well-sampled subset (n = 57 repertoires) of the full Pacific dataset (see Methods for details). Coda repertoires are only labeled by the spatial position at which they were recorded, but no information is available about the identity of the vocalizing sperm whales (see Methods for details). In fact, each repertoire likely contains codas from multiple individuals of a single clan. It has recently been shown that these repertoires can be divided into seven vocal clans based on their coda usage [10]. We use those clans as a benchmark for the following analysis.
Since there is no social unit-level information for this dataset, we fit a subcoda tree for each repertoire (i.e., all of the codas recorded on a single day in a single region). Trees are significantly more similar for repertoires belonging to the same vocal clan than for those belonging to different vocal clans (Fig. 3A). We also find that clustering repertoires based on vocal style returns a dendrogram that closely matches the one obtained from coda usage in [10] (Fig. 3B). The major exception we find is the _Short_ clan (red), named because member whales produce short codas with very few clicks, for which anomalous results were previously reported as well [10]. In our case, this is due to the Short clan being less well localized in the space of trees, while the other clans have well-defined centroids (see _Supplementary Materials_ Fig. 9 for a low-dimensional representation of the subcoda tree metric space).
Therefore, we find that sperm whale vocal clans in the Atlantic Ocean (Caribbean Sea) and Pacific Ocean can be identified by a _vocal identity_ that encompasses both clan-specific _repertoire_[10; 18; 19; 22] and _vocal style_ as defined in this work.
### Clan sympatry impacts vocal style of non-ID codas only
While interesting, the fact that both vocal repertoires and vocal styles discriminate between clans might imply that considering both could be redundant for vocal identity. However, we find that this is not the case when we
Figure 3: **Vocal style recovers social structure of vocal clans in Pacific Ocean sperm whales.****A** We show the similarity of vocal style, measured as subcoda tree-distance, between repertoires within a vocal clan (_within_) and between a clan and all others (_between_). We used the vocal clans identified in [10]. Vocal style is more similar within a clan than between clans. **B** We show the hierarchical clustering of subcoda trees. Each leaf corresponds to a repertoire, and the colors below show their vocal clan assignments (based on vocal repertoires) from [10]. We find generally good overlap between the groups obtained from clustering vocal style and those from vocal repertoire, with the exception of the _Short_ clan (red) that is somewhat mixed with the _Palindrome_ (orange) and _Rapid Increasing_ (yellow) clans.
consider the functional role of ID versus non-ID codas.
More precisely, different clans can share significant portions of their total range, overlapping across large swaths of ocean. Such sympatric clans exhibit a decreasing similarity of their ID coda usage repertoires with increasing clan overlap [10]. This means that the more two clans overlap in space, the more dissimilar their vocal repertoires are in terms of ID coda types and usage. This is consistent with the idea that ID codas are used as symbolic markers to delineate cultural boundaries between social groups [8; 10]. In contrast, non-ID coda usage repertoires do not show any relationship to clan overlap.
We find the exact _opposite_ effect when considering vocal style. The similarity in vocal style for ID codas across clans does not depend on the level of clan overlap (Fig. 4a). In contrast, the similarity in vocal style for non-ID codas displays a clear and significant increase (i.e., decreasing subcoda tree-distance) as clan spatial overlap increases (Fig. 4b). In the _Supplementary Materials_ (see Section 2.4.2), we show that the same results hold at the single coda type level, in addition to the whole clan level. These results imply that the coda assembly process of non-ID codas is more similar for groups that likely spend more time in the same space, akin to accents aligning in human populations that share the same territory [23; 24].
## Discussion
We have presented a general method for modeling animal communication systems and their complexity based on VLMCs. In the context of sperm whales, this new method allows the extraction of _subcoda trees_, which succinctly describe the internal temporal structure of codas. Previous work on the structure of sperm whale communication has largely focused on supra-coda-level analyses: for example, by classifying codas into types, quantifying how often different types are used, and distinguishing between individual whales, social units, or clans based on those counts [25; 16]. Here, we adopted a more fine-scale approach by investigating potential structure _within_ codas. To do so, we used VLMCs to model the transition probability of observing a specific ICI given the previous ones. A VLMC encodes all those probabilities for ICI sequences that are informative and discards sequences that are not. As such, the VLMC is a statistically validated representation of the internal memory structure of codas at the level of sequences of clicks. What it describes is not the usage frequency of codas, but rather how these codas are internally structured: a vocal style.
Using such representations, we propose a novel concept of _vocal identity_ for sperm whales composed of _vocal repertoire_ (what they say) and _vocal style_ (how they say it), the latter being captured by our framework. We find that: (i) vocal styles vary between social units and clans, and can be used to distinguish them; (ii) the similarity of clan vocal styles for non-ID codas increases with increasing spatial overlap, while no change occurs for ID codas; and (iii) social learning across symbolic cultural boundaries most parsimoniously explains the observed trends.
### Vocal style recovers hierarchical social structure
Using the Dominica dataset, sperm whales had previously been divided into two vocal clans, based on their vocal repertoires and observed social interactions [19].
Figure 4: **Clan overlap influences non-ID coda vocal styles.** Comparing the similarities of different VLMC models fit for each Pacific Ocean clan for both ID and non-ID coda repertoires. The y-axis represents the measured distance between the subcoda trees, and the x-axis shows the geographical clan overlap (as calculated in [10]). Each point represents a pairwise comparison between two clans. Note how the effect of overlap on ID coda vocal style similarity is minimal and non-significant while the opposite is true for non-ID codas: overlapping clans produce non-ID codas with a more similar vocal style. The VLMC distances are also typically much greater for ID codas than for non-ID codas.
In our study, comparing the vocal styles of those same whales led to the same assignment of social units to two vocal clans. Similarly, for the Pacific dataset, clustering based on vocal styles yielded clans that were in good agreement with those previously defined based on vocal repertoires [10] (see _Supplementary Materials_ for an extended comparison). The difference between the two partitions was mainly due to the Short clan, which was more spread out in subcoda tree space than the other clans, causing overlap with other clans that showed less variability. This variability could be linked to the fact that Short clan whales typically make codas with very few (e.g., three or four) clicks, leading to subcoda trees with very few nodes. In [10] the authors observe a similar lack of uniformity in coda usage repertoires of the Short clan.
### Identity and non-identity codas show different trends
For ID codas, we show here that the similarity between clan vocal styles is not affected by spatial overlap, while it has been recently shown that the similarity between clan vocal repertoires decreases with overlap [10]. In contrast, for non-ID codas, we show here that the similarity between vocal styles increases with spatial overlap between two clans, while no change was observed for vocal repertoires [10]. Our study and that of Hersh et al. [10] combine to provide further support for ID codas being used by sperm whales as identity signals. While clan designations may be reconstructed based on vocal style alone, the contrasting results between these studies suggest that selection is acting to produce unambiguous, recognizable identity signals in the ID codas. Although we are able to discriminate among clans using vocal style, the shared nature of such vocal styles across clans and variation with spatial overlap demonstrated here suggests that these are likely vocal cues and not identity signals like the ID codas, which appear to be the product of selection favoring the ease of identification by conspecifics [26; 27]. However, ID codas only account for 35% of the total vocalizations; the remaining 65% of codas have traditionally been lumped into a catch-all category (i.e., non-ID codas) and their function remains enigmatic (these numbers are an average over the Pacific clans, and go up to 93% for non-ID codas when counting coda types instead of codas emitted, see SM 1.1 for details). Accordingly, vocal repertoire and vocal style capture different and complementary information on sperm whale communication.
### Evidence for social learning across cultural boundaries
There are several potential mechanisms driving the similarity in non-ID coda vocal styles across spatially overlapped clans: environmental variation, genetics, and/or social learning.
Local adaptation to specific ecological conditions can lead to geographic variation in acoustic signals [28]. If environmental pressures alone were responsible for the trends we observe in sperm whales, this would imply that (i) more spatially overlapped clans experience more similar environments, (ii) non-ID coda vocal style is impacted by or dependent on environmental parameters, and (iii) ID coda vocal style is not impacted by/dependent on environmental parameters. Although the first point is somewhat intuitive, to date there is no evidence that coda production systematically varies with environment. In fact, clans are recognizable across ocean basins, making local adaptation an unlikely driver of the observed trend in non-ID coda vocal style.
If genetic relatedness were responsible, this would imply that (i) non-ID coda vocal styles are genetically inherited, (ii) ID coda vocal styles are not genetically inherited, and (iii) more spatially overlapped clans are more genetically related. Existing research supports that coda repertoires are socially learned, not genetically inherited [20]. Furthermore, Rendell et al. [29] found little evidence to support genetics as an explanation of differences in vocal dialects among clans in the Pacific Ocean, and we find no reason why this would be different for vocal style.
The most parsimonious explanation for the observed similarity of non-ID coda vocal styles of clans with increasing spatial overlap is social learning across clan boundaries. Indeed, while sperm whales belonging to different clans have rarely been observed physically interacting at sea, this does not preclude the possibility that they are within acoustic range of each other [30] and that cross-cultural social learning opportunities arise. This explanation is compatible with (and bolsters) past work suggesting that ID and non-ID codas function differently in sperm whale communication, and further suggests that they experience different evolutionary pressures [20]. Whether social learning has facilitated stochastic (i.e., cultural drift) or deterministic (i.e., cultural selection) processes is more difficult to determine, and it is unclear whether the observed non-ID coda vocal style alignment has been neutral or adaptive [28; 31]. Importantly, these findings indicate that vocal learning in sperm whales is not limited to vertical transmission from mothers to offspring, but that horizontal and/or oblique social learning (from outside the natal social unit) are occurring as well. Furthermore, if non-ID coda vocal style is maintained over time, it would suggest that adult sperm whales learn as well.
Vocal identity in sperm whales is thus consistent with both cultural selection on identity codas to maintain discrete signals for vocal recognition in sympatry, and social learning between clans leading to a vocal style more similar to that of other whales with which they are in acoustic contact more frequently. This highlights a more complex system of transmission in which clan identity is maintained through selection, while gradual change over time may occur within and across clans for vocalizations
which do not function in social recognition and thus may create vocal styles.
### Future directions
Our results can be expanded in multiple ways in future work. The first, and the simplest conceptually, would be to conduct the present analysis on a larger dataset. More codas would improve the quality of the statistical analyses and ensure that all codas are represented in realistic proportions for each clan. Moreover, longitudinal datasets might provide direct evidence to discriminate between the social learning hypothesis and competing ones (e.g. drift in vocal style). Similarly, confirmations could emerge from large scale genetic datasets addressing the issues of phylogenetic relatedness (or lack thereof) in clans that are closer in vocal style distance. Such datasets do not exist at present, but efforts towards automated and semi-automated collection techniques are underway (e.g. Project CETI [32]). Second, from a methodological perspective, we could add spectral information (in terms of acoustic frequencies) to the temporal information currently used. Although sperm whale acoustic communication seems mostly based on rhythm, spectral features of individual clicks may convey additional information. This possibility could be incorporated into our method by labeling the dICIs according to the frequency content of the associated click (or by extending the available "alphabet" for the VLMC). Third, it would be interesting to investigate in more detail the function of non-ID codas. Indeed, even though ID codas were only recently formally named for the first time, they have been the primary focus of sperm whale coda research for decades. As previously mentioned, non-ID codas are a catch-all category for anything that is not an ID coda, but that does not mean that all non-ID codas function in the same way. To start to unveil their function, we need to consider the context (behavioral, environmental, etc.) in which different non-ID codas are produced [33]. The pattern we documented may or may not apply to all non-ID codas, but it is at least strong enough that we detect the relationship with clan spatial overlap when collectively considering all non-ID codas.
## Methods
### Acoustic data
In social situations, sperm whales acoustically communicate through short bursts of _clicks_ with recognizable patterns based on rhythm and tempo referred to as _codas_. Codas are generally represented as sequences of ICIs, equivalent to a time series of click onsets.
We analyzed two datasets in the present study. The Dominica dataset contains 8719 annotated codas recorded in the Atlantic Ocean off the island of Dominica between 2005 and 2019. The codas come from 12 social units grouped into two vocal clans (EC1 and EC2). The Pacific dataset was collected between 1978 and 2017 at 23 locations in the Pacific Ocean (the recording methods are available in the supplementary materials of [10]). The codas were divided into repertoires according to their recording day and each repertoire was assigned a single vocal clan inferred in [10]. When considering a clan-level analysis (Fig. 3) all repertoires were used to compute the subcoda trees (23555 codas). However, when analyzing at a repertoire level (Fig. 4), we discarded repertoires with fewer than 200 codas with statistical inference in mind, resulting in a final count of 57 repertoires (17046 codas) for the Pacific.
### Representation of sperm whale communication as discrete inter-click intervals
As a preliminary step, we discretized the (continuous) ICI values into bins of width \(\delta t\) seconds. In other words, we represented the continuum of ICI values by a finite set of discrete ICIs (dICIs) based on the duration of the ICI. The bin width \(\delta t\) controls the _temporal resolution_ of the representation: a higher value of \(\delta t\) implies a coarser representation with fewer dICIs. We also imposed an upper bound \(t_{\max}\): any ICI value greater than that was truncated to \(t_{\max}\). This ensured that the set of dICIs was finite. Note that although ICIs have units of time (seconds), dICIs are unit-less integers (they represent multiples of \(\delta t\)). The resulting representation of ICIs as dICIs is a discrete random variable defined as
\[X_{\delta t,t_{\max}}=\left\lfloor\frac{\min\left(\text{ICI},t_{\max}\right)}{ \delta t}\right\rfloor, \tag{1}\]
which takes values in the finite set \(\mathcal{X}=\{0,1,\ldots,\lfloor\frac{t_{\max}}{\delta t}\rfloor\}\). We represented the sequences of ICIs by sequences of dICIs from that finite set. Note that any ICI value above \(t_{\max}\) is mapped to the dICI \(\lfloor\frac{t_{\max}}{\delta t}\rfloor\) and therefore represents the end of a coda. We set \(t_{\max}=1\) (longer than any ICI) and \(\delta t=0.05\) throughout the analysis (see _Supplementary Materials_ section 3.3.2 for justification of this choice and section 3.4.3 for an analysis on the influence of this parameter).
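For concreteness, this discretization can be written in a few lines. The sketch below is ours (the function name and example values are illustrative, not from the datasets) and assumes NumPy:

```python
import numpy as np

def discretize_icis(icis, dt=0.05, t_max=1.0):
    """Eq. (1): map continuous inter-click intervals (seconds) to dICIs.

    Returns unit-less integers in {0, 1, ..., floor(t_max / dt)}; any ICI
    above t_max is truncated, so the last symbol marks the end of a coda.
    """
    icis = np.asarray(icis, dtype=float)
    return np.floor(np.minimum(icis, t_max) / dt).astype(int)

# A five-click coda has four ICIs; with dt = 0.05 s the alphabet is {0,...,20}.
print(discretize_icis([0.18, 0.21, 0.19, 0.42]))  # -> [3 4 3 8]
```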
### Variable length Markov chains
We then modeled these dICI sequences using variable length Markov chains (VLMCs). VLMCs provide the large memory advantage of higher-order Markov chains when needed, without the drawback of having too many unnecessary parameters in the model.
Fitting a VLMC is the process of deciding how much memory is necessary to model specific sequences. The criterion for making this decision is the following: longer sequences are discarded if their distribution of transition probabilities is similar to that of shorter subsequences.
This process is often called _context tree estimation_ and consists of two steps.
The first step is to consider \(\mathcal{W}_{D}\), the set of all sequences of maximum length \(D\) (which we set to 10), and to assign the following probability distribution \(q_{w}\) to each sequence:
\[q_{w}=P(X|w), \tag{2}\]
that is, the probability of observing a state \(x\in\mathcal{X}\) given the sequence \(w\).
The second step is to prune the sequences that do not add information. Take two sequences \(u,w\in\mathcal{W}_{D}\), one being the suffix of the other, \(w=\sigma u\). The information gained \(\Delta H_{w}\) by considering the longer sequence can be measured with a weighted Kullback-Leibler (KL) divergence \(D_{KL}\)[34]. The longer memory sequence \(w\) is kept only if the information gain is greater than some threshold \(K\)[35, 36]
\[\Delta H_{w}=N(w)D_{KL}(q_{w}||q_{u})>K \tag{3}\]
where \(N(w)\) denotes the number of occurrences of sequence \(w\) in the data. Sequences that satisfy this condition are called _contexts_ and sequences that do not are discarded. A VLMC can be defined as the set of these contexts \(w\) and their associated probability distributions \(q_{w}\) (see _Supplementary Materials_ section 3.1 for details).
A VLMC can be visualized as a tree by representing each context \(w\) by a node and setting the root node as the context of length zero. Contexts that are subsequences of each other are then part of the same branches, which end with the longest contexts.
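As an illustration, the estimation procedure above can be written as follows. This is our simplified sketch, not the authors' implementation: it reads \(N(w)\) as the occurrence count of \(w\), adds a small smoothing constant to avoid zero probabilities, and omits the bookkeeping that keeps the context set suffix-closed.

```python
import numpy as np
from collections import defaultdict

def fit_subcoda_tree(sequences, alphabet_size, D=10, K=2.0):
    """Minimal VLMC ('subcoda tree') estimation following Eqs. (2)-(3)."""
    counts = defaultdict(lambda: np.zeros(alphabet_size))
    for seq in sequences:                       # sequences of dICIs (ints)
        for i in range(len(seq)):
            for d in range(min(D, i) + 1):
                context = tuple(seq[i - d:i])   # the d symbols preceding seq[i]
                counts[context][seq[i]] += 1

    def dist(w):                                # smoothed q_w = P(X | w), Eq. (2)
        c = counts[w] + 1e-12
        return c / c.sum()

    contexts = {(): dist(())}                   # the root (empty context)
    for w, c in list(counts.items()):
        if len(w) == 0:
            continue
        u = w[1:]                               # w = sigma*u; u is the shorter suffix
        q_w, q_u = dist(w), dist(u)
        kl = float(np.sum(q_w * np.log(q_w / q_u)))
        if c.sum() * kl > K:                    # Eq. (3): keep informative contexts
            contexts[w] = q_w
    return contexts
```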
### Quantitative Comparison of VLMCs
If two VLMC models \(T_{1}\) and \(T_{2}\) are built over the same finite set of dICIs \(\mathcal{X}\), there exists a map \(\phi_{1}:\mathcal{W}_{D}\to T_{1}\) that maps any sequence of elements of \(\mathcal{X}\) to its longest suffix present in \(T_{1}\), and similarly \(\phi_{2}\) for \(T_{2}\). These maps also induce a correspondence between the probability distributions of \(T_{1}\) and \(T_{2}\). Given two distributions over the same set \(\mathcal{X}\), we can measure how different they are with the \(KL\) divergence. Therefore, it is possible to define a dissimilarity between \(T_{1}\) and \(T_{2}\) by considering the average \(KL\) divergence over all contexts \(w\) of \(T_{1}\), comparing \(q_{w}\) with the distribution of the corresponding context \(\phi_{2}(w)\) of \(T_{2}\)
\[d_{KL}(T_{1},T_{2})=\frac{1}{|T_{1}|}\sum_{w\in T_{1}}D_{KL}\left(q_{w}||p_{\phi_{2}(w)}\right) \tag{4}\]
Refer to the _Supplementary Materials_ section 3.4 for a more detailed explanation.
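Using the dictionary representation from the previous sketch, this dissimilarity can be computed as follows (our sketch; it assumes \(\phi\) returns the longest suffix of a context that is present in the other tree, with the empty context as fallback):

```python
import numpy as np

def longest_suffix_in(tree, w):
    """Map a context to its longest suffix that is a context of `tree`."""
    for start in range(len(w) + 1):
        if tuple(w[start:]) in tree:
            return tuple(w[start:])
    return ()                                   # the root is always present

def tree_distance(T1, T2):
    """Asymmetric subcoda-tree dissimilarity of Eq. (4)."""
    total = 0.0
    for w, q in T1.items():
        p = T2[longest_suffix_in(T2, w)]
        total += float(np.sum(q * np.log(q / p)))
    return total / len(T1)
```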
This results in a dissimilarity measure that captures not just the difference in emission distribution but also the structural differences of the associated context trees. When comparing the distributions of distances in Fig. 2A and Fig. 3A, we performed a _Kolmogorov-Smirnov_ test to assess whether the distances between social units/repertoires of the same clan and distances between social units/repertoires of different clans came from the same distribution. For every pair, we can reject the hypothesis of the distances coming from the same distribution with 95% confidence.
### Hierarchical Clustering of VLMCs
The dendrograms in Fig. 2B and Fig. 3B were obtained by hierarchical clustering using average linkage on the set of subcoda trees (VLMCs). Since the distance is not symmetric, for agglomerative clustering we considered the symmetric distance:
\[d_{\mathrm{sym}}(T_{1},T_{2})=\max\left\{d_{KL}(T_{1},T_{2}),d_{KL}(T_{2},T_{1})\right\}. \tag{5}\]
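A sketch of the symmetrization and clustering steps, assuming SciPy and reusing `tree_distance` from the sketch above:

```python
from scipy.cluster.hierarchy import linkage

def symmetric_distance(T1, T2):
    """Symmetrized tree distance of Eq. (5)."""
    return max(tree_distance(T1, T2), tree_distance(T2, T1))

def cluster_trees(trees):
    """Average-linkage agglomerative clustering on pairwise tree distances."""
    n = len(trees)
    condensed = [symmetric_distance(trees[i], trees[j])   # scipy's condensed form
                 for i in range(n) for j in range(i + 1, n)]
    return linkage(condensed, method="average")           # feeds a dendrogram
```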
### Measuring clan overlap
We used the clan spatial overlap values from [10]. Briefly, given two clans A and B, and the repertoires associated to them, the amount of geographical overlap of A in B was measured as the fraction of repertoires belonging to clan A that were recorded within 1000 kilometers of at least one repertoire of clan B.
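Our reading of this measure in code, assuming great-circle (haversine) distances between recording positions; the exact distance computation in [10] may differ:

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def overlap_of_a_in_b(coords_a, coords_b, radius_km=1000.0):
    """Fraction of clan-A repertoires within radius_km of any clan-B repertoire."""
    hits = sum(any(haversine_km(la, lo, lb, lob) <= radius_km
                   for lb, lob in coords_b)
               for la, lo in coords_a)
    return hits / len(coords_a)
```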
### Statistical Testing
On Fig. 2 and Fig. 3 we compare the distributions of distances between subcoda trees of repertoires/social units of the same clan (_within_) and of different clans (_between_). The purpose is to assess whether these distributions originate from the same underlying population. We employ both the _Kolmogorov-Smirnov_ test and the \(T\)-test. The observed \(p\)-values were well below 0.01 for all clans. This allows us to confidently reject the hypothesis that vocal styles do not differ between clans. For more information, see the _Supplementary Materials_, Section 3.4.2.
To assess the existence of a relationship between clan overlap and vocal style similarity, we applied an ordinary least squares linear regression model (OLS). We show the resulting \(p\) values of the OLS statistical test at the bottom left of each plot of Fig. 4 along with the observed \(r^{2}\) value.
###### Acknowledgements.
This study was funded by Project CETI via grants from Dalio Philanthropies and Ocean X; Sea Grape Foundation; Rosamund Zander/Hansjorg Wyss, Chris Anderson/Jacqueline Novogratz through The Audacious Project: a collaborative funding initiative housed at TED. TAH was supported by Independent Max Planck Research Group Leader funding to Andrea Ravignani of
the Max Planck Institute for Psycholinguistics. The Dominica coda dataset originates from The Dominica Sperm Whale Project which was supported by a FNU fellowship for the Danish Council for Independent Research supplemented by a Sapere Aude Research Talent Award, a Carlsberg Foundation expedition grant, a grant from Focused on Nature, two Explorer Grants from the National Geographic Society (all to SG), and supplementary grants from the Arizona Center for Nature Conservation, Quarters For Conservation, the Dansk Akustisks Selskab, Oticon Foundation, and the Dansk Tennis Fond. Further funding was provided by Discovery and Equipment grants from the Natural Sciences and Engineering Research Council of Canada to Hal Whitehead (Dalhousie University) and a FNU large frame grant and a Villum Foundation Grant to Peter Madsen (Aarhus University). The publicly accessible Pacific Ocean sperm whale coda dataset we used in this study emanates from the Global Coda Dialect Project, a consortium of scientists conducting sperm whale acoustics research worldwide. Members of the consortium who contributed to the Pacific Ocean dataset include: Luke Rendell, Mauricio Cantor, Lindy Weilgart, Masao Amano, Steve M. Dawson, Elisabeth Slooten, Christopher M. Johnson, Iain Kerr, Roger Payne, Andy Rogan, Ricardo Antunes, Olive Andrews, Elizabeth L. Ferguson, Cory Ann Homb-Weaver, Thomas F. Norris, Yvonne M. Barkley, Karlina P. Merkens, Erin M. Oleson, Thomas Doniol-Valcroze, James F. Pilkington, Jonathan Gordon, Manuel Fernandes, Marta Guerra, Leigh Hickmott and Hal Whitehead.
|
2310.11966 | Flexible Payload Configuration for Satellites using Machine Learning | Satellite communications, essential for modern connectivity, extend access to
maritime, aeronautical, and remote areas where terrestrial networks are
unfeasible. Current GEO systems distribute power and bandwidth uniformly across
beams using multi-beam footprints with fractional frequency reuse. However,
recent research reveals the limitations of this approach in heterogeneous
traffic scenarios, leading to inefficiencies. To address this, this paper
presents a machine learning (ML)-based approach to Radio Resource Management
(RRM).
We treat the RRM task as a regression ML problem, integrating RRM objectives
and constraints into the loss function that the ML algorithm aims at
minimizing. Moreover, we introduce a context-aware ML metric that evaluates the
ML model's performance but also considers the impact of its resource allocation
decisions on the overall performance of the communication system. | Marcele O. K. Mendonca, Flor G. Ortiz-Gomez, Jorge Querol, Eva Lagunas, Juan A. Vásquez Peralvo, Victor Monzon Baeza, Symeon Chatzinotas, Bjorn Ottersten | 2023-10-18T13:45:17Z | http://arxiv.org/abs/2310.11966v1 | # Flexible Payload Configuration for Satellites using Machine Learning
###### Abstract
Satellite communications, essential for modern connectivity, extend access to maritime, aeronautical, and remote areas where terrestrial networks are unfeasible. Current GEO systems distribute power and bandwidth uniformly across beams using multi-beam footprints with fractional frequency reuse. However, recent research reveals the limitations of this approach in heterogeneous traffic scenarios, leading to inefficiencies. To address this, this paper presents a machine learning (ML)-based approach to Radio Resource Management (RRM).
We treat the RRM task as a regression ML problem, integrating RRM objectives and constraints into the loss function that the ML algorithm aims at minimizing. Moreover, we introduce a context-aware ML metric that evaluates the ML model's performance but also considers the impact of its resource allocation decisions on the overall performance of the communication system.
Radio Resource Management, Satellite Communications, Machine Learning.
## I Introduction
Satellite networks offer an appealing solution for delivering ubiquitous connectivity across diverse domains such as the maritime and aeronautical markets and communication services to remote regions [1]. Current Geostationary (GEO) broadband satellite systems use a multibeam footprint strategy to enhance spectrum utilization. In these systems, both power and bandwidth resources are typically allocated uniformly across the various beams. While this uniform allocation simplifies resource management, it may lead to inefficiencies in scenarios with varying traffic demands. Some beams may experience high demand, exceeding their available capacity, while others may have underutilized resources. This challenge has prompted research into more adaptive and dynamic resource allocation methods. In this regard, flexible payloads have emerged as an enabling technology to manage limited satellite resources by dynamically adapting the frequency, bandwidth, and power of the payload transponders according to users' demand [2].
Existing approaches aim to minimize the difference between offered and required capacity while adding constraints in terms of power [3, 4], and co-channel interference [5]. The power allocation derived in [3] is solved using water-filling, whereas a sub-optimal complexity game-based dynamic power allocation (AG-DPA) solution is proposed in [4]. A modified simulated annealing algorithm, as presented in [5], outperforms conventional payload designs in matching requested capacity across beams, emphasizing its effectiveness. However, the intricate computational complexities associated with these algorithms can significantly limit their practical applicability within real-world systems. Moreover, these approaches do not adequately consider the dynamic nature of capacity requests that change over time. In this context, Machine learning (ML) algorithms emerge as a more favorable alternative, as they are able to learn from varying capacity request scenarios.
ML algorithms have gained popularity in satellite communications, particularly in resource allocation [6]. Some studies explored reinforcement learning (RL) techniques [7] to cope with the time-varying capacity; however, they introduced additional delays due to online payload controller training. Also, the RL exploration phase, aimed at discovering optimal strategies through action exploration, can occasionally result in system outages or disruptions when untested actions are selected. In contrast, [8] adopts a multi-objective optimization approach using supervised learning, offering an alternative perspective.
In this work, we extend the ML-based method in [8] which originally employed a convolutional neural network (CNN) for solving the RRM task as a classification problem. In this approach, the ML model's objective is to select the best payload configuration from a discrete set of power and bandwidth combinations, treated as distinct classes. This technique considers 8 beams with 12 configurations each, giving a total of \(4.3\times 10^{8}\) potential payload
configurations. We expand to 10 beams with 9 configurations each, totaling \(3.5\times 10^{9}\) configurations. Although the number of configurations decreases after applying the system constraints, incorporating more beams inevitably increases the number of classes. Having many classes complicates the ML model evaluation as traditional metrics like accuracy can be inadequate, and metrics like recall may not fully depict system performance. The situation worsens when dealing with imbalanced class distributions since the models may favor dominant classes, leading to bias. To address this, we reframe the RRM task as a regression problem, incorporating RRM objectives and constraints into the ML loss function. We also introduce a new metric to assess the ML model's performance, offering an alternative and insightful way to evaluate its effectiveness in the context of RRM.
This paper is organized as follows. Section II introduces the flexible payload architecture and outlines the RRM task. Section III compares regression and classification-based ML methods for flexible payloads. In Section IV, we present metrics for evaluating model performance, including a new ML metric for RRM. The methods are evaluated in Section V. Finally, some concluding remarks are included in Section VI.
## II System Model and Problem Formulation
We consider a GEO satellite system with a single multi-beam GEO satellite that covers a wide Earth region via \(B\) spot-beams. We focus on the forward link, considering \(U\) single-antenna user terminals (UTs) distributed across the satellite's coverage area. We assume that the considered payload can adaptably handle per-beam power and bandwidth resources.
### _Link-budget analysis_
The offered capacity \(C_{b}\) can be written as
\[C_{b}=\text{BW}_{b}\cdot\text{SE}_{b}, \tag{1}\]
where \(\text{SE}_{b}\) is the spectral efficiency (SE) for beam \(b\) in bps/Hz [9]. The SE is a function of the carrier to interference plus noise ratio (CINR) in the \(b\)-th beam CINR\({}_{b}\). The CINR in dB can be written as
\[10^{\frac{-\text{CINR}_{b}}{10}}=10^{\frac{-\text{CIR}_{b}}{10}}+10^{\frac{-\text{CNR}_{b}}{10}}, \tag{2}\]
where \(\text{CIR}_{b}\) is the carrier to interference ratio and \(\text{CNR}_{b}\) is the carrier to noise ratio for beam \(b\) in dB. The CIR represents the ratio of the power allocated at the \(b\)-th beam (\(P_{b}\), in dBW) to the interference power at the \(b\)-th beam (\(I_{b}\), in dBW). The CNR (in dB) can be obtained as
\[\text{CNR}_{b}=\text{EIRP3dB}_{b}+G/T-A-k-\text{BW}_{b}, \tag{3}\]
where EIRP3dB\({}_{b}=P_{b}+G_{b}\) is the effective isotropic radiated power in dBW, \(G_{b}\) is the beam gain that depends on the half power beamwidth \(\theta_{\rm 3dB}\) in dBi, \(G/T\) is the merit figure of the user terminal, \(A\) is the free space attenuation in clear sky conditions in dB, and \(k\) is the Boltzmann constant.
With a particular \(P_{b}\), we determine the CINR\({}_{b}\) and subsequently the SE\({}_{b}\). Then, we obtain the capacity as in equation (1) using SE\({}_{b}\) and BW\({}_{b}\).
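To make the chain of equations (1)-(3) concrete, a minimal sketch is given below. All names are ours; the dB value of the Boltzmann constant is used directly, and the Shannon-limit SE is only a stand-in — the paper maps CINR to SE via a modcod-style table (cf. Table I), not the Shannon bound:

```python
import numpy as np

BOLTZMANN_DB = 10 * np.log10(1.380649e-23)    # ~ -228.6 dBW/K/Hz

def cnr_db(p_dbw, g_dbi, g_over_t_db, atten_db, bw_hz):
    """Eq. (3): carrier-to-noise ratio in dB (bandwidth enters in dB-Hz)."""
    eirp3db = p_dbw + g_dbi
    return eirp3db + g_over_t_db - atten_db - BOLTZMANN_DB - 10 * np.log10(bw_hz)

def cinr_db(cir, cnr):
    """Eq. (2): combine interference and noise in the linear domain."""
    return -10 * np.log10(10 ** (-cir / 10) + 10 ** (-cnr / 10))

def capacity_bps(bw_hz, cinr):
    """Eq. (1), here with a Shannon-limit SE as a placeholder mapping."""
    se = np.log2(1 + 10 ** (cinr / 10))       # bps/Hz
    return bw_hz * se
```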
### _Traffic demand_
To generate instances of satellite traffic demand at specific time instances, we employ the SnT Traffic Emulator [10]. This emulator utilizes three distinct input datasets: population data, aeronautical data, and maritime data. The data are processed to create a matrix representing the traffic demand. In this matrix, each position \(i,j\) corresponds to a geographic location, and the value \(r_{i,j}\) denotes the traffic demand in that specific geographic location in bits per second (bps). From \(r_{i,j}\), the requested capacity \(R_{b}\) is calculated by aggregating all the \((i,j)\) points within the coverage region of beam \(b\).
### _RRM task_
The RRM aims to effectively allocate the available satellite resources such as power \(P_{b}\) and bandwidth BW\({}_{b_{c}}\) so that \(C_{b}\) matches \(R_{b}\) for each beam \(b=1,\cdots B\) over time \(t\), avoiding resource waste. The RRM task can be formulated as the following minimization problem [9]
\[\begin{split}\min_{P_{b}(t),\text{BW}_{b_{c}}(t)}& \frac{\beta_{1}}{B}\sum_{b=1}^{B}|C_{b}(t)-R_{b}(t)|+\\ +\frac{\beta_{2}}{B}\sum_{b=1}^{B}P_{b}(t)+\frac{\beta_{3}}{B} \sum_{c=1}^{N_{c}}\sum_{b_{c}=1}^{B_{c}}\text{BW}_{b_{c}}(t)\end{split} \tag{4}\]
s.t.: \[C_{b}(t)\geq R_{b}(t)\ \text{if}\ P_{b}(t)<P_{\text{max},b}\ \text{and}\ \text{BW}_{b_{c}}(t)<\text{BW}_{\text{max},b} \tag{5}\] \[C_{b}(t)=C_{\text{max}}(t)\ \text{if}\ P_{b}(t)=P_{\text{max},b}\ \text{and}\ \text{BW}_{b_{c}}(t)=\text{BW}_{\text{max},b} \tag{6}\] \[\sum_{b=1}^{B}P_{b}(t)\leq P_{\text{max},\text{T}} \tag{7}\] \[\sum_{b_{c}=1}^{B_{c}}\text{BW}_{b_{c}}(t)\leq\text{BW}_{\text{max},c}. \tag{8}\]
The cost function in equation (4) aims to simultaneously minimize three terms: the difference between \(C_{b}\) and \(R_{b}\), the power \(P_{b}\) (in W), and the bandwidth \(\text{BW}_{b_{c}}\) (in Hz) across all beams for each time instant \(t\). The weights \(\beta_{1}\), \(\beta_{2}\), and \(\beta_{3}\) are assigned to each term to indicate its relative importance.
The constraint in equation (5) ensures that the offered capacity either meets or surpasses the required capacity for each beam, under the condition that both the power and bandwidth allocations for the \(b\)-th beam at time \(t\) do not exceed their upper bounds. On the other hand, if the required capacity in the \(b\)-th beam is greater than the system can provide, the offered capacity within the \(b\)-th beam is inevitably capped at its maximum attainable value, as accounted for in equation (6).
To manage the overall power consumption, the total power \(\sum_{b=1}^{B}P_{b}(t)\) is constrained to not surpass the prescribed upper limit \(P_{\text{max},\text{T}}\) in equation (7). Moreover, the constraint in equation (8) imposes an upper limit on the total bandwidth allocated in each color of the frequency plan, ensuring that it does not exceed the available bandwidth per color, \(\text{BW}_{\text{max},c}\). The total bandwidth is allocated to the beams of each color \(c\) within the frequency plan comprising \(N_{c}\) colors where \(B_{c}\) is the number of beams with the same frequency and polarization defined by color \(c\).
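For a single time instant, the objective and the budget constraints translate directly into code; below is a sketch with our own argument names:

```python
import numpy as np

def rrm_cost(C, R, P, BW_per_color, betas=(1.0, 1.0, 1.0)):
    """Objective of Eq. (4): capacity mismatch plus power and bandwidth terms.

    C, R, P: per-beam offered capacity, requested capacity, and power;
    BW_per_color: one per-beam bandwidth array per frequency color.
    """
    b1, b2, b3 = betas
    B = len(C)
    mismatch = b1 / B * np.abs(np.asarray(C) - np.asarray(R)).sum()
    power = b2 / B * np.sum(P)
    bandwidth = b3 / B * sum(np.sum(bw) for bw in BW_per_color)
    return mismatch + power + bandwidth

def within_budgets(P, BW_per_color, P_max_T, BW_max_c):
    """Constraints (7)-(8): total power and per-color bandwidth budgets."""
    return (np.sum(P) <= P_max_T
            and all(np.sum(bw) <= BW_max_c for bw in BW_per_color))
```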
## III Proposed ML-based flexible payload methods
In this section, we propose a CNN model to solve the RRM task as a regression problem, which involves determining the optimal payload configuration for specific traffic demands. Subsection III-A introduces the dataset used to train and evaluate the model. In subsection III-B, we summarize the CNN model used for classification in [8] and in subsection III-C we detailed our proposed model.
### _Dataset and preprocessing_
The dataset consists of \(M\) labeled examples, where each example is a data point with associated features and a target label.
The data points are matrices representing the traffic demand at each geographic location in the service area. Preprocessing steps include data reduction via Max-Pooling filters, standardization, and principal component analysis (PCA) to extract relevant features and reduce training complexity.
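A minimal version of this preprocessing chain, assuming scikit-learn; the pooling size and the number of retained components below are placeholders rather than the paper's values:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

def preprocess(demand_maps, pool=4, n_components=50):
    """Max-pool each traffic-demand matrix, flatten, standardize, apply PCA."""
    M, H, W = demand_maps.shape
    Hp, Wp = H // pool, W // pool
    pooled = (demand_maps[:, :Hp * pool, :Wp * pool]
              .reshape(M, Hp, pool, Wp, pool)
              .max(axis=(2, 4)))                 # non-overlapping max-pooling
    X = pooled.reshape(M, -1)
    X = StandardScaler().fit_transform(X)        # zero mean, unit variance
    return PCA(n_components=n_components).fit_transform(X)
```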
The target label corresponds to the optimal payload configuration for a given traffic demand matrix. This configuration minimizes a cost function (equation 4) while satisfying the constraints (equations 5-8). The nature of the target label varies depending on whether it's a classification or regression problem. In classification, the label is categorical, denoting the class to which each data point belongs. In regression, it's continuous, representing numeric values. We outline the specific target labels for each ML problem as follows.
### _ML-based flexible payload via classification_
In [8], the RRM task is treated as a classification problem with \(L\) classes representing possible payload configurations. The softmax activation function is applied in the CNN's output layer to convert raw output scores (logits) into a probability distribution over these classes. The final class is obtained by selecting the class with the highest probability. The model is trained by minimizing cross-entropy error, a standard loss function for classification tasks that measures the dissimilarity between predicted class probabilities and actual labels.
In this classification approach, the original optimization problem (as represented in equation (4)) is only considered when generating the training dataset and defining the classes. That is, the optimization problem is not considered when formulating the loss function used to train the neural network. In essence, the loss function used during the neural network's training phase is designed to optimize the model's performance and is distinct from the original optimization problem associated with RRM.
### _ML-based flexible payload via regression_
When approaching the RRM task as a regression problem, the target labels are the desired offered capacity values for each beam. The linear activation
function is used so that the neurons produce continuous values as their outputs: the predicted offered capacity values. In this case, the payload configuration with offered capacity values more similar to the ones obtained by the ML-model is selected. The similarity here is measured in terms of minimum mean absolute error (MAE).
The model is trained by minimizing the mean square error (MSE) between the output and the training label while considering a penalty term to account for the constraint in equation (5). This penalty term ensures that the model's predictions satisfies the constraint, thus aligning this approach with the RRM problem.
It is important to note that the MSE between the output and the training label is related to the first term being minimized in equation (4). This means that our regression-based approach consistently maintains a connection to the core optimization objectives of the RRM task, not only during the generation of the training dataset but also throughout the CNN training process.
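One way to realize such a loss is sketched below; the squared-hinge form of the shortfall penalty and the weight \(\lambda\) are our assumptions, not the paper's exact formulation:

```python
import numpy as np

def penalized_mse(pred_capacity, target_capacity, requested, lam=1.0):
    """Regression loss: MSE plus a penalty for under-provisioned beams.

    The shortfall max(0, R - C_pred) is zero whenever the predicted offered
    capacity covers the demand, mirroring the constraint in Eq. (5).
    """
    mse = np.mean((pred_capacity - target_capacity) ** 2)
    shortfall = np.maximum(0.0, requested - pred_capacity)
    return mse + lam * np.mean(shortfall ** 2)
```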
## IV Performance assessment
In this section, we assess the model's performance and determine its ability to make accurate predictions when presented with unseen data.
### _Traditional ML metrics_
In this subsection, we evaluate the performance of the CNN models using ML metrics. These metrics offer quantitative measures of model effectiveness across various tasks, such as classification and regression. The choice of metric depends on the specific problem being addressed. For regression, we employ metrics like MSE, MAE, and R-squared (R2) to evaluate the model's predictive accuracy in estimating continuous numeric values. On the other hand, classification tasks rely on metrics such as accuracy, precision, recall, and F1-score to assess the model's ability to correctly categorize data into discrete classes.
The accuracy, for instance, is defined as
\[\eta=\frac{T_{N}+T_{P}}{T_{N}+F_{P}+T_{P}+F_{N}}, \tag{9}\]
for a binary classification problem with positive and negative classes, where \(T_{N}\) and \(T_{P}\) are the true negatives and true positives, and \(F_{N}\) and \(F_{P}\) are the false negatives and false positives.
When the dataset is imbalanced, recall and balanced accuracy are most suitable to evaluate the model. The recall is a type of accuracy per class defined as
\[\sigma=\frac{T_{P}}{T_{P}+F_{N}}, \tag{10}\]
By averaging the recall for each class, we obtain the balanced accuracy which is defined as
\[\phi=\frac{1}{2}\left(\frac{T_{P}}{T_{P}+F_{N}}+\frac{T_{N}}{T_{N}+F_{P}} \right). \tag{11}\]
### _Proposed ML metric for RRM task_
Our main goal is to fulfill the capacity demand for each beam, prioritizing capacity compliance over the perfect prediction of payload configurations. In this regard, we introduce the concept of flexible accuracy per payload configuration, a metric inspired by the recall equation (10). This flexible accuracy per payload configuration
\[\theta_{l}=\frac{1}{B}\sum_{b=1}^{B}\frac{S_{b}}{M_{l}}, \tag{12}\]
evaluates the performance of flexible payload models, where \(S_{b}\) is the number of instances in class \(l\) for which the offered capacity in beam \(b\) was sufficient, and \(M_{l}\) is the number of samples in class \(l\). As in equation (11), we obtain the average flexible accuracy or balanced flexible accuracy
\[\bar{\theta}=\frac{1}{L}\sum_{l=1}^{L}\theta_{l}. \tag{13}\]
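Equations (12)-(13) translate directly into code; a NumPy sketch with our own names:

```python
import numpy as np

def flexible_accuracy(offered, requested, labels):
    """Eqs. (12)-(13): per-class and balanced flexible accuracy.

    offered, requested: (M, B) capacity arrays; labels: (M,) class ids.
    A sample contributes for beam b whenever offered >= requested there.
    """
    per_class = []
    for l in np.unique(labels):
        mask = labels == l
        sufficient = offered[mask] >= requested[mask]   # (M_l, B) booleans
        per_class.append(sufficient.mean())             # averages over b and samples
    return np.array(per_class), float(np.mean(per_class))
```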
### _System performance metrics_
When evaluating the model in terms of the system performance, we are interested in finding a payload configuration that ensures that the offered capacity satisfies the requested capacity for each beam. The offered capacity \(\mathbf{c}_{m}=[C_{1,m},C_{2,m},\ldots,C_{B,m}]\) is obtained after acquiring the payload configuration using the ML model. We then use the normalized mean square error (NMSE)
\[\nu_{m}=\frac{\sum[(\mathbf{c}_{m}-\mathbf{r}_{m})^{2}]}{\sum[(\mathbf{r}_{m} )^{2}]},m=1\cdots M_{\mathrm{test}} \tag{14}\]
to measure the similarity between the offered \(\mathbf{c}_{m}\) and requested capacity \(\mathbf{r}_{m}=[R_{1,m},R_{2,m},\ldots,R_{B,m}]\) for each sample \(m=1,\cdots,M_{\mathrm{test}}\) in the test dataset. Then the average NMSE can be computed as
\[\nu_{\mathrm{avg}}=\frac{1}{M_{\mathrm{test}}}\sum_{m=1}^{M_{\mathrm{test}}} \nu_{m}. \tag{15}\]
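The corresponding computation, as a sketch:

```python
import numpy as np

def nmse(offered, requested):
    """Eqs. (14)-(15): per-sample NMSE and its average over the test set."""
    per_sample = (np.sum((offered - requested) ** 2, axis=1)
                  / np.sum(requested ** 2, axis=1))
    return per_sample, float(per_sample.mean())
```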
## V Simulation Results
### _Simulation Setup_
The center frequency is 19 GHz, the satellite is positioned at \(13^{\circ}\)E, and the satellite altitude is 35786 km. The merit figure is \(G/T=17\) dB/K. The number of beams in the system is \(B=10\) and the beam centers are at
\[\phi_{\text{lat}} =[39.3,42,44.7,47.4,51,53.7,56.4,39.5,42.2,49]\] \[\phi_{\text{long}} =[-5.3,0.5,3.1,10.6,-0.5,6,12.3,11.4,16.7,17.4]. \tag{16}\]
By varying bandwidth and power, as described in subsection II-A, we obtained the capacity per beam options shown in Table I.
The dataset comprises \(M=30{,}000\) samples, with \(70\%\) allocated for training and \(15\%\) each for validation and testing. The stochastic gradient descent (SGD) optimizer with learning rate \(\mu=0.01\) is used to minimize the loss function.
### _Model evaluation and system performance_
Due to the utilization of a realistic traffic model, certain payload configurations are more frequently generated within the system. Therefore, the dataset exhibits class imbalance, where some classes have a greater number of samples than others as shown in Figure 1. In cases of imbalanced datasets, recall is the preferred metric for assessing the classification model's performance, as discussed in subsection IV-A. Figure 2 presents the recall (green line) for each class when evaluating the ML-based flexible payload model using the classification approach (CNN_C).
Classes with a higher number of data samples, such as classes 5 and 25, are correctly classified by the model with an individual accuracy higher than \(95\%\), as indicated by the red line in Figure 2. However, many classes are misclassified when considering CNN_C. Figure 2 also includes the flexible accuracy per class for CNN_C (blue line). We can observe an improvement in terms of accuracy per class for CNN_C with this new metric, but some classes still have an individual accuracy below \(95\%\).
The MSE results for the ML-based flexible payload model using regression are shown in Figure 3. In contrast to CNN_C, CNN_R demonstrates satisfactory performance when evaluated with a traditional ML metric. Figure 2 presents the flexible accuracy per class (in magenta) for the ML-based flexible payload model via regression (CNN_R). In such a case, all payload configurations achieve an accuracy higher than \(95\%\). This emphasizes the effectiveness of combining an appropriate evaluation metric with a model that intricately captures the RRM task. The average offered capacity and requested capacity are compared in Figure 4. On average, both models are able to satisfactorily obtain a payload configuration that leads to an offered capacity that satisfies the demand. In Table II, we compare all the metrics used to evaluate the ML models. The traditional ML metrics used to evaluate the classification model CNN_C failed to effectively capture the model's ability to meet the desired system benchmarks.
Fig. 1: Number of samples per class in the validation set.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline Index & BW\({}_{b}\) & \(P_{b}\) & EIRP3dB\({}_{b}\) & CINR\({}_{b}\) & SE\({}_{b}\) & C\({}_{b}\) \\ & [MHz] & [dBW] & [dBW] & [dB] & [bps/Hz] & [Mbps] \\ \hline
1 & 150 & 10 & 49.93 & 6.6670 & 1.9246 & 288.6844 \\ \hline
2 & 250 & 10 & 49.93 & 4.4741 & 1.5187 & 379.6639 \\ \hline
3 & 500 & 10 & 49.93 & 1.4831 & 0.9650 & 482.5084 \\ \hline
4 & 150 & 12 & 51.94 & 8.6396 & 2.2897 & 343.4547 \\ \hline
5 & 250 & 12 & 51.94 & 6.4615 & 1.8865 & 471.6312 \\ \hline
6 & 500 & 12 & 51.94 & 3.4817 & 1.3350 & 667.4827 \\ \hline
7 & 150 & 14 & 55.94 & 12.4904 & 3.0025 & 450.3720 \\ \hline
8 & 250 & 14 & 55.94 & 10.3705 & 2.6101 & 652.5215 \\ \hline
9 & 500 & 14 & 55.94 & 7.4357 & 2.0668 & 1033.4 \\ \hline \end{tabular}
\end{table} TABLE I: Possible resource allocations in a beam
Fig. 2: Proposed CNNs.
On the other hand, when evaluating both the classification and regression models with the new ML metric, our expectations align more closely with the system's specific performance requirements.
## VI Conclusions
In this work, we introduced a CNN to solve the RRM task considering flexible bandwidth and power. The RRM objective function and constraints were included in the ML loss function. We also proposed a new metric designed to balance traditional machine learning evaluation metrics and system performance. The simulation results indicate that traditional ML metrics fail to capture the system's requirements, whereas the proposed metric reflects them and remains robust to class imbalance.
## Acknowledgment
This work was supported by the European Space Agency (ESA) funded under Contract No. 4000134522/21/NL/FGL named "Satellite Signal Processing Techniques using a Commercial Off-The-Shelf AI Chipset (SPAICE)". Please note that the views of the authors of this paper do not necessarily reflect the views of the ESA. Furthermore, this work was partially supported by the Luxembourg National Research Fund (FNR) under the project SmartSpace (C21/IS/16193290).
|
2307.12678 | Application of Power Flow problem to an open quantum neural hardware | Significant progress in the construction of physical hardware for quantum
computers has necessitated the development of new algorithms or protocols for
the application of real-world problems on quantum computers. One of these
problems is the power flow problem, which helps us understand the generation,
distribution, and consumption of electricity in a system. In this study, the
solution of a balanced 4-bus power system supported by the Newton-Raphson
method is investigated using a newly developed dissipative quantum neural
network hardware. This study presents the findings on how the proposed quantum
network can be applied to the relevant problem and how the solution performance
varies depending on the network parameters. | Ekin Erdem Aygül, Melih Can Topal, Ufuk Korkmaz, Deniz Türkpençe | 2023-07-24T10:33:18Z | http://arxiv.org/abs/2307.12678v1 | # Application of Power Flow problem to an open quantum neural hardware
###### Abstract
Significant progress in the construction of physical hardware for quantum computers has necessitated the development of new algorithms or protocols for the application of real-world problems on quantum computers. One of these problems is the power flow problem, which helps us understand the generation, distribution, and consumption of electricity in a system. In this study, the solution of a balanced 4-bus power system supported by the Newton-Raphson method is investigated using a newly developed dissipative quantum neural network hardware. This study presents the findings on how the proposed quantum network can be applied to the relevant problem and how the solution performance varies depending on the network parameters.
quantum neuron, information reservoir, collisional model, training and learning
## I Introduction
Various mathematical solution methods have been proposed for the power flow (PF) problem [1, 2, 3, 4, 5, 6, 7, 8], an engineering problem concerning the distribution of electric energy, which remains an indispensable resource in the modern world. The most conventional and fundamental techniques employed with the YBUS matrix encompass Gauss-Seidel power flow, Newton-Raphson (NR) power flow, and Decoupled methods [1]. YBUS methods offer a reliable and efficient solution for a wide range of power systems.
However, these methods have certain limitations. The Newton-Raphson (NR) method incurs high computational costs due to the need for calculating the Jacobian matrix. On the other hand, the Decoupled method, while computationally less expensive, exhibits a higher margin of error [2]. In addition, there exist alternative methods tailored to different power system configurations. For instance, the forward and backward sweep method is employed for radial networks [3]. In the case of loosely interconnected power systems, a compensation method has been developed [4]. Moreover, a novel approach utilizing ZBUS and LU triangularization has been proposed for both balanced and unbalanced power systems [5]. There are numerous variations of the aforementioned methods. For example, a study was conducted on the IEEE 14-bus system, comparing different optimizers that take load reactive and active powers as inputs, with voltage magnitudes being returned as outputs [6].
Artificial neural networks, which are learning systems, constitute another method among those mentioned above. For instance, InterPSS was utilized for generating the training dataset [7]. Pham and Li, on the other hand, compared their ANN studies using the ReLU activation function with the DC power flow method in terms of accuracy and speed [8]. Another study involved creating a dataset using 21 base cases from the Saudi national grid to train an ANN, and the results were then compared with the Newton-Raphson method [9].
In this study, we propose a solution to the power flow (PF) problem by developing a quantum neural network that operates on dissipative quantum hardware whose building block has recently been introduced [10, 11]. To accomplish this, we employ the standard theory of artificial neural learning [12] and adapt our current problem to the hardware parameters that have been introduced.
Although quantum neural networks do not possess proven explicit superiority over their classical counterparts, their widespread use stems from incorporating quantum subroutines and thereby increasing the potential for quantum advantage in various problem-solving applications. In previous work, it was demonstrated that dissipative quantum computation, which is emphasized in this study, is equivalent to the standard quantum circuit model [13].
## II Problem Definition
### _Newton Raphson Power Flow_
The Newton-Raphson method is an iterative technique that takes advantage of the Taylor series expansion and a first-order approximation [1]. The general problem can be expressed as
\[\begin{split} g_{1}\left(x_{1},x_{2},u\right)&=h_{1} \left(x_{1},x_{2},u\right)-b_{1}=0\\ g_{2}\left(x_{1},x_{2},u\right)&=h_{2}\left(x_{1},x_{ 2},u\right)-b_{2}=0\end{split} \tag{1}\]
and the Taylor series expansion of these multivariable functions can be written as
\[\begin{split} g_{1}\left(x_{1}^{*},x_{2}^{*},u\right)=& g_{1}\left(x_{1}^{(0)},x_{2}^{(0)},u\right)\\ &+\left.\Delta x_{1}^{(0)}\frac{\partial g_{1}}{\partial x_{1}} \right|^{(0)}+\left.\Delta x_{2}^{(0)}\frac{\partial g_{1}}{\partial x_{2}} \right|^{(0)}+\cdots\\ g_{2}\left(x_{1}^{*},x_{2}^{*},u\right)=& g_{2}\left(x_{1}^{(0)},x_{2}^{(0)},u\right)\\ &+\left.\Delta x_{1}^{(0)}\frac{\partial g_{2}}{\partial x_{1}} \right|^{(0)}+\left.\Delta x_{2}^{(0)}\frac{\partial g_{2}}{\partial x_{2}} \right|^{(0)}+\cdots\end{split} \tag{2}\]
Since the exact solution of the set of functions \(g_{1}\left(x_{1}^{*},x_{2}^{*},u\right)\) and \(g_{2}\left(x_{1}^{*},x_{2}^{*},u\right)\) is equal to zero, neglecting terms of second and higher order, the Taylor series expansion of this set can be written as
\[\left[\begin{array}{cc}\frac{\partial g_{1}}{\partial x_{1}}&\frac{\partial g _{1}}{\partial x_{2}}\\ \frac{\partial g_{2}}{\partial x_{1}}&\frac{\partial g_{2}}{\partial x_{2}} \end{array}\right]\left[\begin{array}{c}\Delta x_{1}^{(0)}\\ \Delta x_{2}^{(0)}\end{array}\right]=\left[\begin{array}{c}0-g_{1}\left(x_{ 1}^{(0)},x_{2}^{(0)},u\right)\\ 0-g_{2}\left(x_{1}^{(0)},x_{2}^{(0)},u\right)\end{array}\right] \tag{3}\]
We can rewrite the equation in the simpler form
\[J^{(0)}\left[\begin{array}{c}\Delta x_{1}^{(0)}\\ \Delta x_{2}^{(0)}\end{array}\right]=\left[\begin{array}{c}\Delta g_{1}^{(0 )}\\ \Delta g_{2}^{(0)}\end{array}\right] \tag{4}\]
Here, the matrix \(J\) is the Jacobian. The algorithm starts from initial values \(x^{(0)}\), from which the Jacobian is formed. Solving the linear system (i.e., applying the inverse of the Jacobian to the mismatches) yields the corrections \(\Delta x\). New values of \(x\) are then found using \(x_{i}^{(k+1)}=x_{i}^{(k)}+\Delta x_{i}^{(k)}\). This process is repeated until the mismatch is close to zero or below a specified error tolerance.
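The iteration can be made concrete with a short sketch. The following Python snippet is illustrative only: the two-variable toy system and all function names are our own, not taken from the original study.

```python
import numpy as np

def newton_raphson(g, jac, x0, tol=1e-8, max_iter=50):
    """Iterate Eq. (4): solve J dx = -g(x), then x <- x + dx."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        mismatch = -g(x)                       # right-hand side of Eq. (3)
        if np.max(np.abs(mismatch)) < tol:     # stop when mismatch ~ 0
            break
        x = x + np.linalg.solve(jac(x), mismatch)
    return x

# Toy system: g1 = x1^2 + x2 - 3, g2 = x1 + x2^2 - 5 (solution x = (1, 2))
g = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
jac = lambda x: np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])
print(newton_raphson(g, jac, [1.0, 1.0]))
```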
### _Power Flow_
Power networks are composed of buses, to which the elements of the network are connected, and lines that connect these buses. YBUS methods can be used to define the power flow problem [1].
\[\begin{split} P_{i}&=\left|V_{i}\right|^{2}G_{ii}+\sum_{ \begin{subarray}{c}n=1\\ n\neq i\end{subarray}}^{N}\left|V_{i}V_{n}Y_{in}\right|\cos\left(\Theta_{in}+ \delta_{n}-\delta_{i}\right)\\ Q_{i}&=-\left|V_{i}\right|^{2}B_{ii}-\sum_{\begin{subarray}{c}n=1\\ n\neq i\end{subarray}}^{N}\left|V_{i}V_{n}Y_{in}\right|\sin\left(\Theta_{in}+ \delta_{n}-\delta_{i}\right)\end{split} \tag{5}\]
where the power mismatches are defined as \(\Delta P_{i}=P_{i,sch}-P_{i,calc}\) and \(\Delta Q_{i}=Q_{i,sch}-Q_{i,calc}\). The generalization of the linear matrix system presented below
\[\left[J\right]\left[\begin{array}{c}\Delta\delta\\ \frac{\Delta\left|V\right|}{\left|V\right|}\end{array}\right]=\left[\begin{array} []{c}\Delta P\\ \Delta Q\end{array}\right]. \tag{6}\]
is derived from the application of the Taylor series. In the formation of this linear system of equations, the slack bus is excluded. Furthermore, for PV buses, voltage corrections are consistently set to zero, and their reactive power is left unspecified; consequently, the columns multiplied by zero and the rows with an indeterminate solution are also excluded. As for PQ buses, no omissions are made; however, since the voltage magnitude and phase are unknown for these buses, they are estimated. Considering Eq. (6), the elements of the Jacobian can be divided into four matrices \(J_{11},J_{12},J_{21},J_{22}\), whose elements are \(\frac{\partial P_{i}}{\partial\delta_{j}},\left|V_{j}\right|\frac{\partial P_{i}}{\partial\left|V_{j}\right|}\), \(\frac{\partial Q_{i}}{\partial\delta_{j}}\), and \(\left|V_{j}\right|\frac{\partial Q_{i}}{\partial\left|V_{j}\right|}\), respectively, where \(i\) and \(j\) represent the row and column numbers. To form the Jacobian, these elements must be calculated. Using Eq. (5), the formulas below are obtained.
\[\left|V_{j}\right|\frac{\partial Q_{i}}{\partial\left|V_{j}\right|}=-\left|V_{i }V_{j}Y_{ij}\right|\sin\left(\theta_{ij}+\delta_{j}-\delta_{i}\right)=M_{ij} \tag{7}\]
\[-\left|V_{j}\right|\frac{\partial P_{i}}{\partial\left|V_{j}\right|}=-\left|V_{i }V_{j}Y_{ij}\right|\cos\left(\theta_{ij}+\delta_{j}-\delta_{i}\right)=N_{ij} \tag{8}\]
And for _i=j_,
\[\begin{split}\frac{\partial P_{i}}{\partial\delta_{i}}=-\sum_{ \begin{subarray}{c}n=1\\ n\neq i\end{subarray}}^{N}M_{in}=M_{ii}\\ \left|V_{i}\right|\frac{\partial Q_{i}}{\partial\left|V_{i}\right|}=-M_{ii}-2 \left|V_{i}\right|^{2}B_{ii}\end{split} \tag{9}\]
\[\begin{split}\frac{\partial Q_{i}}{\partial\delta_{i}}=-\sum_{ \begin{subarray}{c}n=1\\ n\neq i\end{subarray}}^{N}N_{in}=N_{ii}\\ \left|V_{i}\right|\frac{\partial P_{i}}{\partial\left|V_{i}\right|}=N_{ii}+2 \left|V_{i}\right|^{2}G_{ii}\end{split} \tag{10}\]
Next, the inverse of the Jacobian is used to find the corrections to the voltage magnitudes and phases. The updated parameters are given as
\[\left|V_{i}\right|^{\text{new}}=\left|V_{i}\right|^{\text{old}}\,\left(1+ \frac{\Delta\left|V_{i}\right|^{\text{old}}}{\left|V_{i}\right|^{\text{old}}} \right), \tag{11}\]
\[\delta^{new}=\delta^{old}+\Delta\delta^{old}. \tag{12}\]
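As a sketch of how Eq. (5) and the mismatch vector feed the iteration, the snippet below computes bus injections and mismatches from a complex YBUS matrix. The index lists `pq` and `pv` (slack excluded) and all names are our own illustrative assumptions; the Jacobian of Eqs. (7)-(10) would be assembled analogously.

```python
import numpy as np

def injections(V, delta, Y):
    """Bus injections P_i, Q_i of Eq. (5) from the complex YBUS matrix Y."""
    Ymag, theta = np.abs(Y), np.angle(Y)
    P = V**2 * Y.real.diagonal()          # |V_i|^2 G_ii
    Q = -V**2 * Y.imag.diagonal()         # -|V_i|^2 B_ii
    n = len(V)
    for i in range(n):
        for k in range(n):
            if k == i:
                continue
            ang = theta[i, k] + delta[k] - delta[i]
            P[i] += V[i] * V[k] * Ymag[i, k] * np.cos(ang)
            Q[i] -= V[i] * V[k] * Ymag[i, k] * np.sin(ang)
    return P, Q

def mismatches(P_sch, Q_sch, V, delta, Y, pv, pq):
    """Stack Delta P (PV and PQ buses) over Delta Q (PQ buses only)."""
    P, Q = injections(V, delta, Y)
    return np.concatenate([P_sch[pv + pq] - P[pv + pq],
                           Q_sch[pq] - Q[pq]])
```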
### _Artificial Neural Network_
Hodgkin and Huxley conducted groundbreaking research on the electrical properties of biological neurons, providing a mathematical description of their electrical behavior [14]. The ability to describe biological material mathematically in this way led to the idea that brain nerve cells could be represented in a similar manner. Inspired by the firing behavior of brain cells, Rosenblatt introduced a mathematical binary classifier model called a perceptron [16], which is represented by the following mathematical expression
\[\begin{split} I=\sum_{i=1}^{n}x_{i}W_{i}\\ Y=g(I)\end{split} \tag{13}\]
where, \(x_{i}\) are inputs, \(W_{i}\) are weights, and \(g\) is the activation function.
The perceptron utilizes Hebb's learning rule during its learning procedure [17]. This rule dictates that the weights of the neuron are adjusted when an error occurs between the desired output and the actual output. Conversely, if there is no discrepancy, the weights remain unaltered. The iteration procedure is given as
\[\boldsymbol{W}_{i}^{\text{(current)}}=\boldsymbol{W}_{i}^{\text{(previous)}}+\eta\left(d^{(k)}-y^{(k)}\right)\boldsymbol{x}^{(k)} \tag{14}\]
where \(k\) determines which sample is used, \(\boldsymbol{W}_{i}\) is the vector containing the weights, \(d^{(k)}\) is the desired output for the \(k^{\text{th}}\) sample, \(\boldsymbol{x}^{(k)}\) is the input vector for the \(k^{\text{th}}\) sample, and \(\eta\) is the learning rate. When it comes to training a multilayer neural network, a new notation and a re-expression of the error function minimization in terms of the structure of the multilayer network are required.
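A minimal sketch of this update rule, assuming a sign (step) activation, labels in \(\{-1,+1\}\), and any bias folded into the input vector — all of which are our assumptions:

```python
import numpy as np

def train_perceptron(X, d, eta=0.1, epochs=20):
    """Hebb-style updates of Eq. (14); y = sign(W . x) per Eq. (13)."""
    W = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_k, d_k in zip(X, d):
            y_k = np.sign(W @ x_k)           # actual output
            W += eta * (d_k - y_k) * x_k     # no change when d_k == y_k
    return W
```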
In Fig. 1, \(Y_{i}^{(L)}\) is the output of the \(i^{\text{th}}\) neuron in layer \(L\), \(I_{i}^{(L)}\) is the input to the activation function of the \(i^{\text{th}}\) neuron in layer \(L\), \(W_{ji}^{(L)}\) is the weight of the connection between the \(j^{\text{th}}\) neuron in layer \(L\) and the \(i^{\text{th}}\) neuron in layer \((L-1)\), \(g(.)\) is an activation function, and \(n_{i}\) is the number of neurons in the \(i^{\text{th}}\) layer. In this architecture, each neuron is densely connected to every neuron in the preceding and subsequent layers. This connectivity pattern is referred to as dense connections. The formulas for the defined variables are provided below.
\[\begin{split}& I_{j}^{(L)}=\sum_{i=1}^{n_{L-1}}Y_{i}^{(L-1)}W_{ji}^{(L)}\\ & Y_{j}^{(L)}=g\left(I_{j}^{(L)}\right)\end{split} \tag{15}\]
#### Backpropagation Algorithm
The backpropagation algorithm uses the gradient descent method. The gradient of the error with respect to \(W_{ji}\) can be written by expanding the derivative using the chain rule.
\[\nabla E^{(L)}=\frac{\partial E}{\partial W_{ji}^{(L)}}=\frac{\partial E}{ \partial Y_{j}^{(L)}}\frac{\partial Y_{j}^{(L)}}{\partial I_{j}^{(L)}}\frac{ \partial I_{j}^{(L)}}{\partial W_{ji}^{(L)}} \tag{16}\]
Here for the output layer with mean square error loss function,
\[\begin{split}\frac{\partial I_{j}^{(L)}}{\partial W_{ji}^{(L)}}=Y_{i}^{(L-1)};&\qquad\frac{\partial Y_{j}^{(L)}}{\partial I_{j}^{(L)}}=g^{\prime}\left(I_{j}^{(L)}\right)\\ &\frac{\partial E}{\partial Y_{j}^{(L)}}=-\left(d_{j}-Y_{j}^{(L)}\right)\end{split} \tag{17}\]
We define the delta as
\[\delta_{j}^{(L)}=\left(d_{j}-Y_{j}^{(L)}\right)g^{\prime}\left(I_{j}^{(L)}\right)=-\frac{\partial E}{\partial Y_{j}^{(L)}}\frac{\partial Y_{j}^{(L)}}{\partial I_{j}^{(L)}} \tag{18}\]
For the hidden layers, however, the term \(-\frac{\partial E}{\partial Y_{j}^{(L-1)}}\) is harder to obtain.
\[\frac{\partial E}{\partial Y_{j}^{(L-1)}}=\sum_{k=1}^{n_{L}}\frac{\partial E}{\partial I_{k}^{(L)}}\frac{\partial I_{k}^{(L)}}{\partial Y_{j}^{(L-1)}}=\sum_{k=1}^{n_{L}}\frac{\partial E}{\partial I_{k}^{(L)}}W_{kj}^{(L)} \tag{19}\]
From Eqs. (17) and (18), it can be seen that \(\frac{\partial E}{\partial I_{k}^{(L)}}\) is actually \(-\delta_{k}^{(L)}\) of the layer ahead. As these \(\delta\) values are calculated, they are stored to compute the \(\delta\) of the previous layer.
\[\frac{\partial E}{\partial Y_{j}^{(L-1)}}=-\sum_{k=1}^{n_{L}}\delta_{k}^{(L)}W_{kj}^{(L)} \tag{20}\]
Now we can calculate the delta for the \((L-1)^{\text{th}}\) layer.
\[\delta_{j}^{(L-1)}=\left(\sum_{k=1}^{n_{L}}\delta_{k}^{(L)}W_{kj}^{(L)}\right)g^{\prime}\left(I_{j}^{(L-1)}\right) \tag{21}\]
A generalized version of updating the weight is given as
\[W_{ji}^{(L)}(t+1)=W_{ji}^{(L)}(t)+\eta\delta_{j}^{(L)}Y_{i}^{(L-1)} \tag{22}\]
Here \(\eta\) is the learning rate, which does not have to be fixed; an adaptive learning rate can be beneficial, as mentioned in the following. Since this is plain backpropagation, all of the weights are updated at the same time, after the error and the gradient of that error have been found for every sample.
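The chain of Eqs. (15)-(22) can be condensed into a few lines for a single hidden layer. The sketch below assumes the activation \(g(I)=\tanh(\beta I)\) used later in this paper (so \(g^{\prime}(I)=\beta(1-g(I)^{2})\)) and full-batch updates; it is illustrative, not the authors' implementation.

```python
import numpy as np

def backprop_step(X, D, W1, W2, eta=0.05, beta=2.22):
    """One full-batch update following Eqs. (15)-(22), one hidden layer."""
    Y1 = np.tanh(beta * (X @ W1))          # Eq. (15), hidden layer
    Y2 = np.tanh(beta * (Y1 @ W2))         # output layer
    d2 = (D - Y2) * beta * (1 - Y2**2)     # Eq. (18), output deltas
    d1 = (d2 @ W2.T) * beta * (1 - Y1**2)  # Eq. (21), hidden deltas
    W2 += eta * Y1.T @ d2                  # Eq. (22)
    W1 += eta * X.T @ d1
    return 0.5 * np.mean((D - Y2) ** 2)    # MSE to monitor training
```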
## III Quantum Neural Network
In the open quantum counterpart of the Artificial Neural Network (ANN) depicted in Figure 1, the nodes are substituted with quantum spins characterized by spin numbers \(J\geq 1/2\), while the input layer is substituted with reservoirs carrying quantum information. The operational mechanism of the introduced quantum neural network relies on a repeated interaction process and Completely Positive and Trace-Preserving (CPTP) maps.
### _Open Quantum Systems_
Assume that a system with density matrix \(\rho\) and an environment denoted by \(\rho_{env}\) are initially in a product state. Even though the evolution of the system alone is not unitary, that of the system plus environment is. Therefore, for an arbitrary unitary transformation \(U\), the evolution of the system+environment is given as
\[\rho_{sys}\otimes\rho_{env}\to U\rho_{sys}\otimes\rho_{env}U^{\dagger}. \tag{23}\]
Fig. 1: Multi Layer Feed Forward Neural Network Scheme with Notations [12]
Next, the system of interest is obtained by a partial trace operation which is a non-unitary evolution
\[\rho_{sys}=\text{Tr}_{\text{env}}\left(\rho_{sys+env}\right). \tag{24}\]
In order to describe the evolution of a state in an open quantum environment, we use a quantum dynamical map, which in general can be defined as
\[\rho_{sys}^{\prime}=\varepsilon(\rho)=\text{Tr}_{\text{env}}\left(U\rho_{sys+env }U^{\dagger}\right). \tag{25}\]
Several conditions are required for a dynamical map to define a physical process. First, a quantum map must preserve the unit trace, \(\mathrm{Tr}(\varepsilon(\rho))=\mathrm{Tr}(\rho)=1\). Second, a quantum map must be convex linear, \(\varepsilon\left(\sum_{i}p_{i}\rho_{i}\right)=\sum_{i}p_{i}\varepsilon\left(\rho_{i}\right)\). And lastly, the dynamical map must be completely positive [10]. On top of that, if a weak coupling condition is met, such that the additivity of the quantum dynamical map is valid, the evolution of a system can be written as a linear combination of dynamical maps [18].
\[\Lambda\left(\rho_{0}\right)=\sum_{i}P_{i}\Phi^{(i)}\left(\rho_{0}\right) \tag{26}\]
### _Multi-layer Quantum Neural Network_
Designing a dissipative quantum neural network composed of perceptrons operating based on the aforementioned principles is a straightforward task. It is important to highlight that the hardware implementation relies on the weak coupling condition, allowing for the dissipative transfer of quantum data from the pure information reservoirs to the network depicted below [19, 10]
\[\rho_{R_{i}}=\bigotimes_{k=1}^{n}\rho_{k}\left(\theta_{i},\phi_{i}\right). \tag{27}\]
As stated in Eq. (27), \(\theta\) and \(\phi\) define a pure quantum state on the Bloch sphere. Information reservoirs are composed of many copies of the same quantum state. The dynamical map that defines the interaction between the system and the \(i\)th information reservoir is given as
\[\begin{split}\Phi_{n\tau}^{(i)}=&\,\mathrm{tr}_{n}\Big{(}U_{0_{in}}\big{(}\ldots\mathrm{tr}_{1}\big{(}U_{0_{i1}}\left(\rho_{0}\otimes\rho_{R_{i1}}\right)U_{0_{i1}}^{\dagger}\big{)}\otimes\ldots\\ &\ldots\otimes\rho_{R_{in}}\big{)}U_{0_{in}}^{\dagger}\Big{)}\end{split} \tag{28}\]
where, \(U_{0_{ik}}=e^{-iH_{0i}^{k}\tau}\).
In this context, the variable \(n\tau\) represents the time required for \(n\) collisions. With a finite number of collisions, it is assumed that the probe qubit can attain a steady state [20]. The unitary propagator, obtained from a micromaser-like repeated-interactions approach [19, 10, 11, 21], is given below
\[\begin{split} U(\tau)=& 1-i\tau(\sigma_{0}^{+}J_{gi}^{-}+ \sigma_{0}^{-}J_{gi}^{+})\\ &-\frac{\tau^{2}}{2}\left(\sigma_{0}^{+}\sigma_{0}^{-}J_{gi}^{- }J_{gi}^{+}+\sigma_{0}^{-}\sigma_{0}^{+}J_{gi}^{+}J_{gi}^{-}\right).\end{split} \tag{29}\]
The steady-state density matrix of the probe qubit is obtained as [10]
\[\begin{split}\rho_{0}^{ss}=&\frac{1}{\sum_{i}^{N}{g_ {i}}^{2}}\sum_{i=1}^{N}{g_{i}}^{2}\left(\left\langle\sigma_{i}^{+}\sigma_{i}^{ -}\right\rangle\left|e\right\rangle\left\langle e\right|+\left\langle\sigma_{ i}^{-}\sigma_{i}^{+}\right\rangle\left|g\right\rangle\left\langle g\right|\\ &+i\gamma_{1}^{-}\left(\left\langle\sigma_{i}^{+}\sigma_{i}^{-} \right\rangle-\left\langle\sigma_{i}^{-}\sigma_{i}^{+}\right\rangle\right) \left|e\right\rangle\left\langle g\right|+\text{ H.c. }\right)\end{split} \tag{30}\]
In this context, \(\left\langle\sigma_{z}^{0}\right\rangle^{ss}\) corresponds to the outputs of the artificial neurons in an artificial neural network (ANN), while \(\left\langle\sigma_{z}\right\rangle_{i}\) can be interpreted as the outputs of the preceding neurons. Therefore, the formula mentioned above incorporates Eq. (15). Additionally, the weights in the ANN are represented by the coupling strength \(g\).
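The repeated-interaction dynamics of Eq. (28) can be simulated directly for a single probe qubit and single-qubit reservoir units. The sketch below is a minimal two-qubit version in which an exchange-coupling unitary stands in for Eq. (29); the parameter values \(g=0.01\) and \(\tau=3\) follow the dimensionless settings quoted in Fig. 2, and everything else is our assumption.

```python
import numpy as np
from scipy.linalg import expm

sp = np.array([[0, 1], [0, 0]], dtype=complex)    # sigma^+ (|e><g|)
sm = sp.conj().T                                  # sigma^-

def collide(rho0, rho_r, g=0.01, tau=3.0, n=5000):
    """Eq. (28) with single-qubit reservoir units: the probe state rho0 meets
    a fresh unit rho_r each step, evolves jointly, and the unit is traced out."""
    H = g * (np.kron(sp, sm) + np.kron(sm, sp))   # exchange coupling
    U = expm(-1j * tau * H)
    for _ in range(n):
        joint = U @ np.kron(rho0, rho_r) @ U.conj().T
        rho0 = joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # tr over unit
    return rho0

plus = 0.5 * np.ones((2, 2), dtype=complex)       # probe in |+><+|
ground = np.diag([0.0, 1.0]).astype(complex)      # reservoir units in |g><g|
rho_ss = collide(plus, ground)
print((rho_ss[0, 0] - rho_ss[1, 1]).real)         # steady-state <sigma_z>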
### _Activation Function_
Activation functions play a vital role in ANNs. However, the specific activation function used in the proposed QNN remains unspecified. Graphical methods are employed to determine the appropriate activation function for the system. Additionally, it has been shown that adjusting the spin number influences the steepness of the resulting hyperbolic tangent function.
The steady-state qubit magnetization is the merit quantifier of the quantum neurons, serving as their output data. The expectation value is obtained by measuring along the \(z\)-axis, giving (Fig. 2)
\[\left\langle\sigma_{z}^{0}\right\rangle^{ss}=\frac{1}{\sum_{i}^{N}g_{i}^{2}} \sum_{i}^{N}{g_{i}}^{2}\left\langle\sigma_{z}\right\rangle_{i}. \tag{31}\]
Using curve-fitting techniques, the activation function is found to depend on a new variable \(\beta\), which is set by the spin number (Fig. 3).
\[g(I)=\tanh(\beta I) \tag{32}\]
As we can see from the equation, different spin numbers correspond to different \(\beta\) values. For spin numbers \(1/2,1,3/2,5/2\), we get \(\beta\) values \(2.22,2.78,3.33,4.1\), respectively.
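A compact way to express the fitted neuron response combines the weighted steady-state input of Eq. (31) with the activation of Eq. (32). This pairing is our reading of the text; the spin-to-\(\beta\) table simply reproduces the values above.

```python
import numpy as np

BETA = {0.5: 2.22, 1.0: 2.78, 1.5: 3.33, 2.5: 4.1}   # spin number -> beta

def quantum_neuron_output(sz_inputs, g_weights, spin=2.5):
    """Weighted steady-state input of Eq. (31), squashed by Eq. (32)."""
    g2 = np.asarray(g_weights, float) ** 2
    I = np.sum(g2 * np.asarray(sz_inputs, float)) / np.sum(g2)
    return np.tanh(BETA[spin] * I)
```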
Fig. 2: Obtained activation function [11]. The probe qubit, initially in the \(\ket{+}=(\ket{e}+\ket{g})/\sqrt{2}\) state, made collisional contact with identical reservoir units, \(\ket{\Psi(\theta,\phi)}\), which had \(\theta=0\), \(\phi=0\) and \(\theta=\pi\), \(\phi=0\). \(\Gamma=2\times 10^{-5}\) is the decay rate of the probe qubit. The reservoir coupling strength is \(g=0.01\). The interaction time between the probe qubit and the reservoir is \(\tau=3\). These parameters are dimensionless and scaled by the superconducting resonator frequency \(\omega_{r}\)
## IV Simulation
### _Power System_
We chose the simple 4-bus network in Table 1 for the simulations [5]. The YBUS data and other specifications are given in Table 2.
### _Generating Dataset_
In the training process of the neural network, we must first obtain the dataset, whose construction is determined by the chosen features. In this study, the dataset is created by randomizing the given P and Q values of the PQ buses; the features used to train the ANN are selected based on their influence on, and ability to change, the power flow results. Python's random library is used to generate the random samples, with the load values of the PQ buses varied within the range [0.8-1.2]. The produced values are then fed into a power flow algorithm that solves the problem via the Newton-Raphson method, according to the algorithm given in Section II. Using Scikit-Learn, the data are split into training and test sets. The inputs chosen for training the neural network are P and Q for the PQ buses and V for the PV and slack buses, while the outputs are V for the PQ buses and \(\delta\) for the PQ and PV buses. A sketch of this pipeline is given below.
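In the sketch, the `nr_solver` callable stands in for the Newton-Raphson routine of Section II, and the sample count and 80/20 split are our assumptions rather than values stated in the text.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_dataset(P_base, Q_base, nr_solver, n_samples=5000):
    """Randomize PQ-bus loads in [0.8, 1.2] x nominal and label each sample
    with the Newton-Raphson solution (|V| of PQ buses, delta of PQ/PV buses)."""
    X, y = [], []
    for _ in range(n_samples):
        scale = rng.uniform(0.8, 1.2, size=len(P_base))
        P, Q = P_base * scale, Q_base * scale
        V, delta = nr_solver(P, Q)          # power flow routine of Section II
        X.append(np.concatenate([P, Q]))
        y.append(np.concatenate([V, delta]))
    return train_test_split(np.asarray(X), np.asarray(y), test_size=0.2)
```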
Scaling is one of the most important elements of training a neural network. To determine the scaling technique to be used, other hyperparameters of the neural network are kept constant.
## V Simulation and Results
As mentioned before, a new hyperparameter \(\beta\) is introduced. It is worth noting that variations of the hyperbolic tangent activation function already exist [22, 23]. With the constant specifications mentioned in Table 3, simulations were executed for different spin numbers and, therefore, for different \(\beta\) values. The training process was completed with multiple \(\beta\) values, and the most efficient result was obtained with \(\beta=4.1\).
Hyperparameter specifications of the system are given in Table V.
As can be clearly seen in Fig. 4, MSE curves during the learning process were obtained for the \(\beta\) values corresponding to the various spin numbers. The lowest MSE value was obtained for the spin number 5/2, which corresponds to \(\beta=4.1\).
### _Hyperparameter Tuning_
Through experimentation, it has been found that three optimizers prevail for the problem at hand: Adamax, Nadam, and Adam. It is also worth noting that weight regularization techniques are introduced to obtain better results; L1-L2 regularization is used with varying hyperparameters for the different optimizers. It has been observed that the mean squared error
\begin{table}
\begin{tabular}{|c|c c c|c c|} \cline{2-6} \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{Generation} & \multicolumn{2}{c|}{Load} & \\ \hline Bus & _P, MW_ & _Q, Mvar_ & P, MW & Q, Mvar & V, per unit \\ \hline
1 & - & - & 50 & 30.99 & 1.00 / 0\({}^{\circ}\) \\
2 & 0 & 0 & 170 & 105.35 & 1.00 / 0\({}^{\circ}\) \\
3 & 0 & 0 & 200 & 123.94 & 1.00 / 0\({}^{\circ}\) \\
4 & 318 & - & 80 & 49.58 & 1.02 / 0\({}^{\circ}\) \\ \end{tabular}
\end{table}
Table 1: Power Network Bus Data
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline Bus no. & (1) & (2) & (3) & (4) \\ \hline (1) & 8.985190 & -3.815629 & -5.169561 & 0 \\ & -j 44.835953 & +j 19.078144 & +j 25.847809 & \\ \hline (2) & -3.815629 & 8.985190 & 0 & -5.169561 \\ & +j 19.078144 & -j 44.835953 & & +j 25.847809 \\ \hline (3) & -5.169561 & 0 & 8.193267 & -3.023705 \\ & +j 25.847809 & & -j 40.863838 & +j 15.118528 \\ \hline (4) & 0 & -5.169561 & -3.023705 & 8.193267 \\ & & +j 25.847809 & +j 15.118528 & -j 40.863838 \\ \hline \end{tabular}
\end{table}
Table 2: Power Network YBUS Data
Figure 4: MSE graph in the training process with different spin numbers.
Figure 3: Change of steepness of tanh based on spin number.
doubles when the number of neurons in the hidden layer drops from 100 to 50, revealing a bottleneck for the Adam optimizer. In the end, the best solution is obtained with the Adamax optimizer, with the specifications given in Table IV.
It is also worth mentioning that, even though physical realizations do not yet exist, the ANN performed much better for higher \(\beta\) values.
## VI Conclusions
In conclusion, we explore the hypothesis that quantum reservoirs can serve as information sources. We propose an open quantum network model designed to address a specific engineering problem. The proposed hardware consists of open perceptrons operating at the steady state, with spins characterized by \(J\geq 1/2\). Following the introduction of the power flow problem, we generate relevant parameters and training sets for the quantum network. Through experimentation, we find that \(J=5/2\) is the optimal parameter for minimizing the mean squared error of the quantum network for the given problem.
## Acknowledgment
The authors gratefully acknowledge funding from TUBITAK (Grant No. 120F353). The authors would also like to thank the Cognitive Systems Lab in the Department of Electrical Engineering for creating a conducive environment for motivating and engaging talks.
|
2306.02742 | Enhanced Robust Motion Control based on Unknown System Dynamics
Estimator for Robot Manipulators | To achieve high-accuracy manipulation in the presence of unknown
disturbances, we propose two novel efficient and robust motion control schemes
for high-dimensional robot manipulators. Both controllers incorporate an
unknown system dynamics estimator (USDE) to estimate disturbances without
requiring acceleration signals and the inverse of inertia matrix. Then, based
on the USDE framework, an adaptive-gain controller and a super-twisting sliding
mode controller are designed to speed up the convergence of tracking errors and
strengthen anti-perturbation ability. The former aims to enhance feedback
portions through error-driven control gains, while the latter exploits
finite-time convergence of discontinuous switching terms. We analyze the
boundedness of control signals and the stability of the closed-loop system in
theory, and conduct real hardware experiments on a robot manipulator with seven
degrees of freedom (DoF). Experimental results verify the effectiveness and
improved performance of the proposed controllers, and also show the feasibility
of implementation on high-dimensional robots. | Xinyu Jia, Jun Yang, Kaixin Lu, Yongping Pan, Haoyong Yu | 2023-06-05T09:50:34Z | http://arxiv.org/abs/2306.02742v2 | # Motion Control based on Disturbance Estimation and Time-Varying Gain for Robotic Manipulators
###### Abstract
To achieve high-accuracy manipulation in the presence of unknown dynamics and external disturbance, we propose an efficient and robust motion controller (named TvUDE) for robotic manipulators. The controller incorporates a disturbance estimation mechanism that utilizes reformulated robot dynamics and filtering operations to obtain uncertainty and disturbance without requiring measurement of acceleration. Furthermore, we design a time-varying control input gain to enhance the control system's robustness. Finally, we analyze the boundedness of the control signal and the stability of the closed-loop system, and conduct a set of experiments on a six-DOF robotic manipulator. The experimental results verify the effectiveness of TvUDE in handling internal uncertainty and external static or transient disturbance.
## I Introduction
Robotic manipulators are gradually replacing humans in performing repetitive, monotonous, or hazardous tasks [1]. The quality of manipulation largely depends on the motion control performance. In general, dynamics-based motion tracking control allows the robot to behave more precisely than using kinematics alone, since the former can take into account more physical properties of the robot or the environment [3]. However, the robot dynamics in the real world are too complex to be modelled exactly, owing to friction and other non-smooth effects [2]. External forces (load, collision, or human-robot interaction) can also affect tracking accuracy. Therefore, the effects induced by such internal uncertainty and external disturbance should be addressed to achieve precise motion control.
Many control strategies have been developed to compensate for uncertainty and disturbance. Proportional-integral-derivative (PID) control with dynamics as feedforward compensation is popular in industrial manipulators [3]. The actual link inertia or joint friction is often obtained through system model identification [4]. However, the identified result is rough and easily mismatched from the true model. Learning methods can also construct the unknown dynamics, but they bring a heavy computational burden, and the parameter tuning is generally not trivial [5]. Recently, observer-based control schemes have shown huge potential in handling uncertainty and disturbance to improve control accuracy [6]. For example, in [7], the disturbance torque of a linear motor system is estimated by a disturbance observer (DOB). A nonlinear disturbance observer (NDO) is also developed and applied to a robotic manipulator in [8]. However, both types of observer require identified model information and the construction of extra filters. The extended state observer (ESO) presented in [9] is widely adopted in disturbance estimation, and some ESO-based methods have been proposed as well, e.g., adaptive ESO [10][11]. Nevertheless, implementing an ESO on high-dimensional robots is relatively difficult due to its complicated parameter tuning process.
The unknown system dynamics estimator (USDE) in [12, 13, 14] is formulated on the basis of low-pass filters with a simple structure, yet demonstrates impressive disturbance estimation performance. However, the method is validated on a planar robot with only two degrees of freedom (DOF), instead of a general manipulator with six or more DOFs, whose dynamics are more nonlinear and coupled. Besides, there is no discussion of different disturbance conditions, where a fixed control gain might make the robot fail to handle complex external disturbance in practice. In [15], the authors provide an idea of selecting the control input gain by a project gradient estimator (PGE) [16] and verify it with the ESO on series elastic actuators (SEAs).
In this paper, a motion controller based on uncertainty and disturbance estimation and time-varying control gain (TvUDE) is proposed for robotic manipulators. The estimator without measurement of acceleration can obtain unknown dynamics and disturbance in real time, while the time-varying gain allows the robot to exhibit more robustness when meeting various external disturbance. The contributions of this work are: 1) we present an efficient and robust motion control framework for general manipulators; 2) we combine the disturbance estimation and the time-varying gain to handle various disturbance in tracking tasks; 3) we verify the proposed approach through a set of experiments on a six-DOF manipulator.
Fig. 1: The proposed motion control framework for robotic manipulators to handle uncertainty and disturbance in tracking tasks.
## II System Model
Screw theory provides concise expressions and efficient calculations in robot modelling, especially for high-dimensional rigid robots [17]. It does not require all parameters of the link coordinates, as conventional methods do [3]. For a serial manipulator with \(n\) rotational DOFs, the forward kinematics (FK) is given by
\[\mathbf{T}=e^{[\mathcal{S}_{1}]\theta_{1}}e^{[\mathcal{S}_{2}]\theta_{2}}\ldots e^{ [\mathcal{S}_{n}]\theta_{n}}\mathbf{N} \tag{1}\]
where the transform matrix \(\mathbf{T}\in SE(3)\) denotes the end-effector pose, \([\mathcal{S}_{i}]\,\theta_{i}\in se(3)\) is the exponential coordinate of the joint \(i=1,\cdots,n\). \(\mathbf{N}\in SE(3)\) is the end-effector pose when all joint angles are defined to zero as initial.
For the inverse kinematics (IK) solver, all joint limitations are simultaneously considered by formulating a quadratic programming problem with equality and inequality constraints as in Eq. (2). The resulting trajectory is smoother than that of a common numerical IK, which simply clamps the result after obtaining the joint velocity [17].
\[\min_{\mathbf{\dot{q}}} \left\|\mathbf{J}\dot{\mathbf{q}}-\mathbf{V}^{des}\right\|^{2}+\lambda^{2} \|\dot{\mathbf{q}}\|^{2} \tag{2}\] \[s.t. \left\{\begin{array}{l}\dot{\mathbf{q}}_{min}\leq\dot{\mathbf{q}}\leq \dot{\mathbf{q}}_{max}\\ \mathbf{q}_{min}\leq\mathbf{q}\leq\mathbf{q}_{max}\\ \mathbf{q}=\mathbf{q}+\dot{\mathbf{q}}\Delta t\end{array}\right.\]
The first term in Eq. (2) minimizes the error between \(\mathbf{J}\dot{\mathbf{q}}\) and the designed twist \(\mathbf{V}^{des}=(\mathbf{\omega}^{des},\mathbf{v}^{des})\in\mathbb{R}^{6}\), where \(\mathbf{\omega}\in\mathbb{R}^{3},\mathbf{v}\in\mathbb{R}^{3}\) denote the angular velocity and linear velocity of the end-effector, respectively. The mapping between the transform matrix and the twist is \(\dot{\mathbf{T}}\mathbf{T}^{-1}=[\mathbf{V}]\in se(3)\). The second term avoids singularity through a damping coefficient \(\lambda\). \(\mathbf{q}\in\mathbb{R}^{n}\) is the vector of joint positions with lower bound \(\mathbf{q}_{min}\) and upper bound \(\mathbf{q}_{max}\). \(\Delta t\) is the time step of the control loop.
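Since Eq. (2) is a box-constrained least-squares problem (the damping term can be absorbed as extra rows \(\lambda I\) of the design matrix), it can be sketched with SciPy's bounded least-squares solver. The 500 Hz time step echoes Section V; all names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import lsq_linear

def ik_step(J, V_des, q, dq_min, dq_max, q_min, q_max, dt=0.002, lam=0.05):
    """One step of Eq. (2): min ||J dq - V||^2 + lam^2 ||dq||^2 with box bounds."""
    n = J.shape[1]
    A = np.vstack([J, lam * np.eye(n)])         # damping term as extra rows
    b = np.concatenate([V_des, np.zeros(n)])
    lb = np.maximum(dq_min, (q_min - q) / dt)   # position limits via q + dq*dt
    ub = np.minimum(dq_max, (q_max - q) / dt)
    dq = lsq_linear(A, b, bounds=(lb, ub)).x
    return q + dq * dt, dq
```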
Similarly, adopting screw concept in recursive Newton-Euler algorithm (RNEA) can increase efficiency in computing dynamics or equation of motion as Eq. (3), compared with the energy-based Lagrangian method.
\[\mathbf{M}(\mathbf{q})\ddot{\mathbf{q}}+\mathbf{C}(\mathbf{q},\dot{\mathbf{q}})\dot{\mathbf{q}}+\mathbf{g}(\bm {q})=\mathbf{\tau}+\mathbf{\tau}_{e} \tag{3}\]
where \(\mathbf{M}\in\mathbb{R}^{n\times n},\mathbf{C}\in\mathbb{R}^{n\times n},\mathbf{g}\in\mathbb{R}^{n}\) are the inertia matrix, the Coriolis and centrifugal matrix, and the gravity vector, respectively. \(\mathbf{\tau}\in\mathbb{R}^{n}\) is the vector of joint torques, and \(\mathbf{\tau}_{e}\in\mathbb{R}^{n}\) is the vector of torques that external forces generate on the robot, including friction, load, etc.
## III Estimation and Control
The proposed control framework is applicable to open-chain robotic manipulators with torque-controlled hinge joints, as illustrated in Fig. 1. We first reformulate the dynamic model to avoid directly using acceleration signals, and then estimate the uncertainty and external disturbance via observers. The estimated result is then used for the motion controller design, whose control gain is time-varying, allowing the manipulator to adapt to complex disturbance.
### _Dynamic Model Reformulation_
Before designing estimators and controllers, we will reformulate the dynamic model as follows. First, the model obtained from CAD data might be inaccurate in practice due to manufacture error, and the friction or other non-smooth dynamics involved are too complex to be exactly modelled. Considering these uncertainties, the actual dynamics is
\[(\mathbf{M}+\Delta\mathbf{M})\ddot{\mathbf{q}}+(\mathbf{C}+\Delta\mathbf{C})\dot{\mathbf{q}}+(\mathbf{g}+ \Delta\mathbf{g})=\mathbf{\tau}+\mathbf{\tau}_{e} \tag{4}\]
Hence, instead of identifying these uncertainties, we define a lumped disturbance \(\mathbf{d}\in\mathbb{R}^{n}\) that consists of all kinds of uncertainty and disturbance together as
\[\mathbf{d}=\mathbf{\tau}_{e}-(\Delta\mathbf{M}\ddot{\mathbf{q}}+\Delta\mathbf{C}\dot{\mathbf{q}}+\Delta \mathbf{g}) \tag{5}\]
Secondly, directly measuring joint acceleration is relatively difficult in practice. Though differentiating the joint velocity with respect to time can obtain \(\ddot{\mathbf{q}}\), it will also bring undesired noise. Hence, to avoid using \(\ddot{\mathbf{q}}\), the following auxiliary items are given
\[\mathbf{\mathcal{P}}(\mathbf{q},\dot{\mathbf{q}}) =\mathbf{M}\dot{\mathbf{q}} \tag{6}\] \[\mathbf{\mathcal{H}}(\mathbf{q},\dot{\mathbf{q}}) =-\mathbf{C}^{T}\dot{\mathbf{q}}+\mathbf{g} \tag{7}\]
where \(\mathbf{\mathcal{P}}\in\mathbb{R}^{n}\), \(\mathbf{\mathcal{H}}\in\mathbb{R}^{n}\).
As a result, Eq. (4) is redefined as
\[\ddot{\mathbf{\mathcal{P}}}(\mathbf{q},\dot{\mathbf{q}})+\mathbf{\mathcal{H}}(\mathbf{q},\dot{\mathbf{ q}})=\mathbf{\tau}+\mathbf{d} \tag{8}\]
where the property \(\dot{\mathbf{M}}=\mathbf{C}^{T}+\mathbf{C}\), following from the skew symmetry of \(\dot{\mathbf{M}}-2\mathbf{C}\)[17], is leveraged to avoid the differential of \(\mathbf{M}\).
The reformulated model of Eq. (8) presents two distinct advantages compared to Eq. (3). First, it is unnecessary to be overly concerned about the accuracy of the actual robot model, which might allow us to skip the parameter identification of joint friction or link inertia during robot development. Second, the acceleration of the joint angle is contained only in \(\dot{\mathbf{\mathcal{P}}}\); accordingly, we can avoid using the acceleration signal as long as the differential of \(\mathbf{\mathcal{P}}\) is not needed in the controller design.
### _UDE Design_
Except for \(\ddot{\mathbf{q}}\), the measurement of \(\dot{\mathbf{q}}\) and \(\mathbf{q}\) is generally available for a manipulator. With the known \(\mathbf{M}\), \(\mathbf{C}\) and \(\mathbf{g}\), the variables \(\mathbf{\mathcal{P}}\) and \(\mathbf{\mathcal{H}}\) can be calculated via Eq. (6) and Eq. (7). Hence, we formulate several filters as
\[\begin{cases}k\dot{\mathbf{\mathcal{P}}}_{f}+\mathbf{\mathcal{P}}_{f}=\mathbf{\mathcal{P}} \\ k\dot{\mathbf{\mathcal{H}}}_{f}+\mathbf{\mathcal{H}}_{f}=\mathbf{\mathcal{H}}\\ k\dot{\mathbf{\tau}}_{f}+\mathbf{\tau}_{f}=\mathbf{\tau}\end{cases} \tag{9}\]
where \(k>0\) is a filter coefficient, \(\mathbf{\tau}\) is the last torque command. \(\mathbf{\mathcal{P}}_{f},\mathbf{\mathcal{H}}_{f},\mathbf{\tau}_{f}\) are the filtered variables set to zero at initial.
Then, applying filtering operations for both sides of Eq. (8) and replacing the differential term with Eq. (9), we have
\[\frac{\mathbf{\mathcal{P}}-\mathbf{\mathcal{P}}_{f}}{k}+\mathbf{\mathcal{H}}_{f}=\mathbf{\tau}_{f }+\mathbf{d}_{f} \tag{10}\]
where \(\mathbf{d}_{f}\) is the filtered lumped disturbance. Finally, the UDE is given as
\[\hat{\mathbf{d}}=\mathbf{d}_{f}=\frac{\mathbf{\mathcal{P}}-\mathbf{\mathcal{P}}_{f}}{k}+\mathbf{\mathcal{H}}_{f}-\mathbf{\tau}_{f} \tag{11}\]
With the help of the filtering operations, the disturbance estimation effectively avoids differentiating the measured variables. Before proving this estimator's convergence, we first give the following lemma.
**Lemma 1**. _The lumped disturbance \(\mathbf{d}\) of a robot system as in Eq. (8) can be estimated via the UDE in Eq. (11), and the estimation error \(\tilde{\mathbf{d}}=\mathbf{d}-\hat{\mathbf{d}}\) has the following property_
\[\dot{\tilde{\mathbf{d}}}=-\frac{1}{k}\tilde{\mathbf{d}}+\dot{\mathbf{d}} \tag{12}\]
\(\mathbf{Proof:}\) Substituting the filter equations of Eq. (9) into the left hand side of Eq. (12), we have
\[\dot{\tilde{\mathbf{d}}} =\dot{\mathbf{d}}-\dot{\hat{\mathbf{d}}}=\dot{\mathbf{d}}-(\frac{\dot{\mathbf{ \mathcal{P}}}-\dot{\mathbf{\mathcal{P}}}_{f}}{k}+\dot{\mathbf{\mathcal{H}}}_{f}-\dot{ \mathbf{\tau}}_{f})\] \[=\dot{\mathbf{d}}-\frac{1}{k}(\dot{\mathbf{\mathcal{P}}}-\frac{\mathbf{ \mathcal{P}}-\mathbf{\mathcal{P}}_{f}}{k}+\mathbf{\mathcal{H}}-\mathbf{\mathcal{H}}_{f}- \mathbf{\tau}+\mathbf{\tau}_{f})\] \[=\dot{\mathbf{d}}-\frac{1}{k}\left[(\dot{\mathbf{\mathcal{P}}}+\mathbf{ \mathcal{H}}-\mathbf{\tau})-(\frac{\mathbf{\mathcal{P}}-\mathbf{\mathcal{P}}_{f}}{k}+ \mathbf{\mathcal{H}}_{f}-\mathbf{\tau}_{f})\right]\] \[=-\frac{1}{k}\tilde{\mathbf{d}}+\dot{\mathbf{d}} \tag{13}\]
The internal uncertainty or external disturbance to the robot is generally bounded. Hence, we assume that the differential of \(\mathbf{d}\) is bounded, i.e., \(sup_{t\geq 0}||\dot{\mathbf{d}}||\leq d_{0}\) for a constant \(d_{0}>0\).
**Theorem 1**. _The disturbance estimation error \(\tilde{\mathbf{d}}\) of a robot system as in Eq. (8) is bounded, \(||\tilde{\mathbf{d}}(t)||\leq\sqrt{||\tilde{\mathbf{d}}(0)||^{2}e^{-t/k}+k^{2}d_{0}^{2}}\), and hence the disturbance estimation \(\hat{\mathbf{d}}\rightarrow\mathbf{d}\) when \(k\to 0\) and/or \(d_{0}\to 0\)._
\(\mathbf{Proof:}\) A Lyapunov function is designed as
\[V_{1}=\frac{1}{2}\tilde{\mathbf{d}}^{T}\tilde{\mathbf{d}} \tag{14}\]
Then, we calculate its differential with respect to time and apply Young's inequality on \(\tilde{\mathbf{d}}^{T}\dot{\mathbf{d}}\).
\[\dot{V}_{1} =\tilde{\mathbf{d}}^{T}\dot{\tilde{\mathbf{d}}}=-\frac{1}{k}\tilde{\mathbf{d}}^{ T}\tilde{\mathbf{d}}+\tilde{\mathbf{d}}^{T}\dot{\mathbf{d}}\] \[\leq-\frac{1}{k}\tilde{\mathbf{d}}^{T}\tilde{\mathbf{d}}+\frac{1}{2k} \tilde{\mathbf{d}}^{T}\tilde{\mathbf{d}}+\frac{k}{2}d_{0}^{2}\] \[\leq-\frac{1}{k}V_{1}+\frac{k}{2}d_{0}^{2} \tag{15}\]
Thus, \(V_{1}\) is proven to be bounded, as is the estimation error \(||\tilde{\mathbf{d}}(t)||\), which will exponentially converge to a residual set, \(||\tilde{\mathbf{d}}(t)||\leq\sqrt{||\tilde{\mathbf{d}}(0)||^{2}e^{-t/k}+k^{2}d_{0}^{2}}\), where the bound depends on the constant \(k\) and the upper bound \(d_{0}\).
### _UDE-based Controller Design_
To design a controller to track trajectory with the estimation of lumped disturbance, we define a variable
\[\mathbf{S}=\dot{\mathbf{e}}+\mathbf{\eta}\mathbf{e} \tag{16}\]
where \(\mathbf{S}\in\mathbb{R}^{n}\), \(\mathbf{e}=\mathbf{q}-\mathbf{q}^{des}\) is the vector of joint position error, \(\mathbf{\eta}\in\mathbb{R}^{n\times n}\) is a positive diagonal matrix of control coefficient. Obviously, the tracking error \(\mathbf{e}\) will converge to zero once \(\mathbf{S}\) converges to zero.
Then, we formulate the UDE-based controller as
\[\mathbf{\tau}^{des}=-\mathbf{\mathcal{K}}\mathbf{S}-\hat{\mathbf{d}}+\mathbf{M}\dot{\mathbf{\zeta}}+\mathbf{C}\mathbf{\zeta}+\mathbf{g} \tag{17}\]
where \(\mathbf{\zeta}=\dot{\mathbf{q}}^{des}-\mathbf{\eta}\mathbf{e}\) is an intermediate variable, \(\mathbf{\mathcal{K}}\in\mathbb{R}^{n\times n}\) is a diagonal positive matrix of control gains, and \(\hat{\mathbf{d}}\) is obtained from Eq. (11). Here, \(\mathbf{\mathcal{K}}\) remains constant throughout the control period, while it will be designed to be time-varying in the next section. Note that in the controller of Eq. (17), the acceleration \(\ddot{\mathbf{q}}\) is not used, nor is the inverse of the inertia matrix as in [8], since \(\mathbf{M}^{-1}\) may not always be feasible in practice according to [12]. The proof of convergence and boundedness of the proposed controller will be given in Section IV.
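A discrete-time sketch of the estimator (Eqs. (6), (7), (9), (11)) and control law (Eqs. (16), (17)) may help. The forward-Euler filter discretization and the placeholder model inputs M, C, g are our assumptions, while the default gains echo the values reported in Section V.

```python
import numpy as np

class UDEController:
    """Discrete-time sketch of the filters (Eq. (9)), the estimator
    (Eq. (11)), and the control law (Eq. (17))."""

    def __init__(self, n, k=0.05, eta=10.0, K=0.6, dt=0.002):
        self.k, self.dt = k, dt
        self.eta = eta * np.eye(n)
        self.K = K * np.eye(n)
        self.Pf = np.zeros(n)   # filtered P, H, tau, zero at initial
        self.Hf = np.zeros(n)
        self.tf = np.zeros(n)

    def _filter(self, xf, x):
        # forward-Euler step of k * xf_dot + xf = x
        return xf + self.dt * (x - xf) / self.k

    def update(self, q, dq, q_des, dq_des, ddq_des, M, C, g, tau_prev):
        P = M @ dq                    # Eq. (6)
        H = -C.T @ dq + g             # Eq. (7)
        self.Pf = self._filter(self.Pf, P)
        self.Hf = self._filter(self.Hf, H)
        self.tf = self._filter(self.tf, tau_prev)
        d_hat = (P - self.Pf) / self.k + self.Hf - self.tf   # Eq. (11)
        e, de = q - q_des, dq - dq_des
        S = de + self.eta @ e         # Eq. (16)
        zeta = dq_des - self.eta @ e
        dzeta = ddq_des - self.eta @ de
        return -self.K @ S - d_hat + M @ dzeta + C @ zeta + g  # Eq. (17)
```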
By substituting Eq. (17) into Eq. (3), we have the tracking error equation as
\[\mathbf{M}\dot{\mathbf{S}}=-\mathbf{\mathcal{K}}\mathbf{S}-\mathbf{\mathcal{C}}\mathbf{S}+\tilde{\mathbf{d}} \tag{18}\]
### _Time-varying Control Gain_
According to the analysis in [18], the control gain should be selected appropriately: too low a gain leads to a slow response to disturbance, while too high a gain degrades the damping. In addition, the gain value may affect the bandwidth of the control system [15]. Hence, we adopt an adaptive law and design time-varying control gains as in Eq. (19), so that the controller can quickly adapt to complex disturbance.
\[\dot{\hat{\varkappa}}_{i}(t)=\begin{cases}\lambda_{i}(s_{i}-\sigma_{i}\hat{\varkappa}_{i}),&\text{if }\hat{\varkappa}_{i}\geq\underline{\varkappa}_{i}\\ 0,&\text{otherwise}\end{cases} \tag{19}\]
where \(\hat{\varkappa}_{i}\) denotes the elements on the diagonal of the estimated gain matrix \(\hat{\mathbf{\mathcal{K}}}\) in Eq. (20), equal to the lower bound \(\underline{\varkappa}_{i}>0\) initially. \(\lambda_{i}>0\) is the adaptive gain, \(s_{i}\) is the corresponding element of the vector \(\mathbf{S}\), and \(\sigma_{i}>0\) is a constant (i.e., \(\sigma\)-modification [16]). In Eq. (19), the gain \(\hat{\varkappa}_{i}\) adaptively changes as long as it stays above \(\underline{\varkappa}_{i}\). Besides, we assume the differential of \(\varkappa_{i}\) is bounded, i.e., \(\sup_{t\geq 0}|\dot{\varkappa}_{i}|\leq\varkappa_{i0}\) for a constant \(\varkappa_{i0}>0\).
\[\hat{\mathbf{\mathcal{K}}}=\begin{pmatrix}\hat{\varkappa}_{1}&0&\cdots&0\\ 0&\hat{\varkappa}_{2}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&\hat{\varkappa}_{n}\end{pmatrix} \tag{20}\]
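The adaptive law of Eq. (19) discretizes into a one-line update per joint. The sketch below follows the equation as written (the error term is \(s_{i}\), not \(|s_{i}|\)) and freezes gains at the lower bound; per-joint parameter arrays and the time step are our assumptions.

```python
import numpy as np

def update_gains(kappa_hat, S, lam, sigma, kappa_min, dt=0.002):
    """One forward-Euler step of Eq. (19); gains never fall below kappa_min."""
    kappa_hat = np.asarray(kappa_hat, float).copy()
    active = kappa_hat >= kappa_min               # adaptation condition
    dk = lam * (S - sigma * kappa_hat)            # lambda_i (s_i - sigma_i k_i)
    kappa_hat[active] += dt * dk[active]
    return np.maximum(kappa_hat, kappa_min)       # freeze at the lower bound
```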
Before proving the stability of Eq. (19), we give a lemma:
**Lemma 2**. _For the robot's \(i^{th}\) joint, the control gain \(\hat{\varkappa}_{i}\) designed by the adaptive law in Eq. (19) has the following property with the estimation error \(\tilde{\varkappa}_{i}=\varkappa_{i}-\hat{\varkappa}_{i}\)_
\[\tilde{\varkappa}_{i}\hat{\varkappa}_{i}\leq-\frac{2\gamma_{1}-1}{2\gamma_{1}}\tilde{\varkappa}_{i}^{2}+\frac{\gamma_{1}}{2}\varkappa_{i}^{2} \tag{21}\]
where \(\gamma_{1}>0\) is an arbitrary constant.
\(\mathbf{Proof:}\) By applying Young's inequality on \(\tilde{\varkappa}_{i}\varkappa_{i}\), we have
\[\tilde{\varkappa}_{i}\hat{\varkappa}_{i} =\tilde{\varkappa}_{i}(\varkappa_{i}-\tilde{\varkappa}_{i})=-\tilde {\varkappa}_{i}^{2}+\tilde{\varkappa}_{i}\varkappa_{i}\] \[\leq-\tilde{\varkappa}_{i}^{2}+\frac{1}{2\gamma_{1}}\tilde{ \varkappa}_{i}^{2}+\frac{\gamma_{1}}{2}\varkappa_{i}^{2}\] \[=-\frac{2\gamma_{1}-1}{2\gamma_{1}}\tilde{\varkappa}_{i}^{2}+ \frac{\gamma_{1}}{2}\varkappa_{i}^{2} \tag{22}\]
## IV Stability Analysis
In this section, the convergence and boundedness of the proposed control system is proven.
**Theorem 2**.: _For the robot system of Eq. (3), the closed-loop control system of Eq. (17) is stable with the UDE observer of Eq. (11) and the time-varying gain of Eq. (19). The tracking error \(\mathbf{e}\), the disturbance estimation error \(\tilde{\mathbf{d}}\), and the gain estimation error \(\tilde{\mathbf{\varkappa}}\) exponentially converge to a small residual set around zero, and are all uniformly ultimately bounded._
**Proof** : A Lyapunov function is designed as
\[V_{2}=\frac{1}{2}\mathbf{S}^{T}\mathbf{M}\mathbf{S}+\frac{1}{2}\tilde{\mathbf{d}}^{T}\tilde{ \mathbf{d}}+\frac{1}{2}\sum_{i=1}^{n}\frac{1}{\lambda_{i}}\tilde{\varkappa}_{i}^{2} \tag{23}\]
where \(\lambda_{i}\) is the adaptive coefficient defined in Eq. (19). Then, we calculate the differential of \(V_{2}\). For the convenience of proof, \(\dot{V}_{2}\) is divided into two portions \(\mathcal{B}_{1}\) and \(\mathcal{B}_{2}\) as below.
\[\dot{V}_{2}=\underbrace{\mathbf{S}^{T}\mathbf{M}\dot{\mathbf{S}}+\frac{1}{2}\mathbf{S}^{T} \dot{\mathbf{M}}\mathbf{S}+\tilde{\mathbf{d}}^{T}\dot{\tilde{\mathbf{d}}}}_{\mathcal{B}_{1}}+ \underbrace{\sum_{i=1}^{n}\frac{1}{\lambda_{i}}\tilde{\varkappa}_{i}\dot{ \tilde{\varkappa}}_{i}}_{\mathcal{B}_{2}} \tag{24}\]
For \(\mathcal{B}_{1}\), substitute Eq. (12) and Eq. (18), and apply Young's inequality on \(\mathbf{S}^{T}\tilde{\mathbf{d}}\) and \(\tilde{\mathbf{d}}^{T}\dot{\mathbf{d}}\) as
\[\mathcal{B}_{1} =\mathbf{S}^{T}(-\mathbf{\mathcal{K}}\mathbf{S}-\mathbf{C}\mathbf{S}+\tilde{\mathbf{d}})+ \frac{1}{2}\mathbf{S}^{T}\dot{\mathbf{M}}\mathbf{S}+\tilde{\mathbf{d}}^{T}(-\frac{1}{k}\tilde{ \mathbf{d}}+\dot{\mathbf{d}})\] \[=-\mathbf{S}^{T}\mathbf{\mathcal{K}}\mathbf{S}+\mathbf{S}^{T}\tilde{\mathbf{d}}-\frac{1}{k }\tilde{\mathbf{d}}^{T}\tilde{\mathbf{d}}+\tilde{\mathbf{d}}^{T}\dot{\mathbf{d}}\] \[\leq-\gamma_{2}\mathbf{S}^{T}\mathbf{S}+(\frac{\gamma_{2}}{2}\mathbf{S}^{T} \mathbf{S}+\frac{1}{2\gamma_{2}}\tilde{\mathbf{d}}^{T}\tilde{\mathbf{d}})-\frac{1}{k}\tilde{\mathbf{d}}^{T} \tilde{\mathbf{d}}\] \[\quad+(\frac{1}{2k}\tilde{\mathbf{d}}^{T}\tilde{\mathbf{d}}+\frac{k}{2}d_ {0}^{2})\] \[=-\frac{\gamma_{2}}{2}\mathbf{S}^{T}\mathbf{S}-(\frac{1}{2k}-\frac{1}{2 \gamma_{2}})\tilde{\mathbf{d}}^{T}\tilde{\mathbf{d}}+\frac{k}{2}d_{0}^{2} \tag{25}\]
where \(\gamma_{2}=\lambda_{min}(\mathcal{K})\) is the minimum eigenvalue of \(\mathcal{K}\), \(d_{0}\) is the upper bound of differential of disturbance \(||\dot{\mathbf{d}}||\).
For \(\mathcal{B}_{2}\), substitute Eq. (19), Eq. (21), and apply Young's inequality on \(\tilde{\varkappa}_{i}\dot{\varkappa}_{i}\) and \(-\tilde{\varkappa}_{i}s_{i}\) as
\[\mathcal{B}_{2} =\sum_{i=1}^{n}\frac{1}{\lambda_{i}}\tilde{\varkappa}_{i}(\dot{ \varkappa}_{i}-\dot{\hat{\varkappa}}_{i})=\sum_{i=1}^{n}\frac{1}{\lambda_{i} }\tilde{\varkappa}_{i}\dot{\varkappa}_{i}-\sum_{i=1}^{n}\tilde{\varkappa}_{i} (s_{i}-\sigma_{i}\hat{\varkappa}_{i})\] \[\leq\sum_{i=1}^{n}\frac{1}{\lambda_{i}}(\frac{1}{2\gamma_{3}} \tilde{\varkappa}_{i}^{2}+\frac{\gamma_{3}}{2}\dot{\varkappa}_{i}^{2})+\sum_{ i=1}^{n}(\frac{1}{2\gamma_{4}}\tilde{\varkappa}_{i}^{2}+\frac{\gamma_{4}}{2}s_{i}^{2})\] \[\quad+\sum_{i=1}^{n}\sigma_{i}(-\frac{2\gamma_{1}-1}{2\gamma_{1}} \tilde{\varkappa}_{i}^{2}+\frac{\gamma_{1}}{2}\varkappa_{i}^{2})\] \[\leq-\sum_{i=1}^{n}\frac{1}{2}\underbrace{\left[\frac{(2\gamma_{1} -1)\sigma_{i}}{\gamma_{1}}-\frac{1}{\lambda_{i}\gamma_{3}}-\frac{1}{\gamma_{ 4}}\right]}_{\mathcal{E}}\tilde{\varkappa}_{i}^{2}\] \[\quad+\underbrace{\sum_{i=1}^{n}(\frac{\gamma_{3}}{2\lambda_{i}} \varkappa_{i0}^{2}+\frac{\gamma_{1}\sigma_{i}}{2}\varkappa_{i}^{2})}_{\mathcal{ F}}+\sum_{i=1}^{n}\frac{\gamma_{4}}{2}s_{i}^{2}\] \[=\frac{\gamma_{4}}{2}\mathbf{S}^{T}\mathbf{S}-\sum_{i=1}^{n}\frac{\mathcal{ E}}{2}\tilde{\varkappa}_{i}^{2}+\mathcal{F} \tag{26}\]
where \(\gamma_{3},\gamma_{4}>0\) are arbitrary constants. \(\varkappa_{i0}\) is the upper bound of differential of control gain \(\dot{\varkappa}_{i}\). \(\mathcal{E}\) and \(\mathcal{F}\) are intermediate variables for the convenience of proof.
Hence, combining Eq. (25) and Eq. (26), we have
\[\dot{V}_{2} \leq-\frac{\gamma_{2}-\gamma_{4}}{2}\mathbf{S}^{T}\mathbf{S}-(\frac{1}{2k} -\frac{1}{2\gamma_{2}})\tilde{\mathbf{d}}^{T}\tilde{\mathbf{d}}-\sum_{i=1}^{n}\frac{ \mathcal{E}}{2}\tilde{\varkappa}_{i}^{2}\] \[\quad+(\frac{k}{2}d_{0}^{2}+\mathcal{F})\] \[\leq-\alpha V_{2}+\beta \tag{27}\]
with
\[\alpha=\min\left\{\frac{\gamma_{2}-\gamma_{4}}{\lambda_{max}(\mathbf{M})},\;( \frac{1}{k}-\frac{1}{\gamma_{2}}),\;\lambda_{i}\mathcal{E}\right\} \tag{28}\]
\[\beta=\frac{k}{2}d_{0}^{2}+\mathcal{F}=\frac{k}{2}d_{0}^{2}+\sum_{i=1}^{n}( \frac{\gamma_{3}}{2\lambda_{i}}\varkappa_{i0}^{2}+\frac{\gamma_{1}\sigma_{i}}{2} \varkappa_{i}^{2}) \tag{29}\]
where \(\lambda_{max}(\mathbf{M})\) is the maximum eigenvalue of \(\mathbf{M}\). \(\beta\) is a positive constant, while we set constraints as below to make the constant \(\alpha>0\) as well.
\[\begin{cases}\gamma_{2}\geq\gamma_{4}\\ \gamma_{2}\geq k\\ \sigma_{i}\geq\frac{(\lambda_{i}\gamma_{3}+\gamma_{4})\gamma_{1}}{\lambda_{i} \gamma_{3}\gamma_{4}(2\gamma_{1}-1)}\end{cases} \tag{30}\]
Finally, by solving Eq. (27), we have \(V_{2}(t)\leq e^{-\alpha t}V_{2}(0)+\beta/\alpha(1-e^{-\alpha t})\). Therefore, Theorem 2 is proven.
Fig. 2: Experimental setup. (a) shows a robotic manipulator and accompanying electronics. The bottom pictures are snapshots of three groups of experiments, which correspond to (b) free-motion tracking, (c) static-disturbance tracking, and (d) dynamic-disturbance test, respectively.
## V Verification
The experimental verification is conducted on a six-DOF robotic manipulator with a two-DOF gripper, depicted in Fig. 2(a), developed by the NUS Biorobotics Laboratory. The robot system weighs approximately 4 kg and is capable of carrying a maximum payload of 1 kg at the full extension of 600 mm. Each joint consists of a torque-controlled BLDC motor and a harmonic drive with a reduction ratio of 50. The motors are connected to a low-level controller (Teensy 4.1) via CAN at 2 kHz, and the latter communicates with a Mini PC (Intel Core i7-1165G7 CPU, Linux kernel 5.4.69-rt39) via UDP at 1 kHz. The C++ implementation of our control algorithm can run at a frequency of 500 Hz on the PC.
To demonstrate the effectiveness of the proposed TvUDE controller, we compare it with two other controllers: a UDE controller as in Eq. (17) but with constant control gains, and a PD feedback controller plus model-based feedforward [17]. Correspondingly, we carry out comparison experiments divided into three groups. First, the manipulator is required to freely track a straight trajectory from \(P_{0}\) to \(P_{t}\) in Cartesian space, as illustrated in Fig. 2(b). The second group uses a 1 kg dumbbell as an unmodeled static disturbance in the tracking stage, as shown in Fig. 2(c). For the third group, in Fig. 2(d), the 1 kg dumbbell is suddenly removed from the robot while it is holding its position, to simulate a transient disturbance.
The control parameters are kept consistent across the three test cases. Here we only discuss the results of joint 2, joint 3, and joint 5 (i.e., \(i=2,3,5\)) due to their obvious effect. For the PD controller, the proportional gain is \((5,5,5)\); the differential gain is \((0.5,0.5,0.5)\); the Coulomb friction compensation is roughly identified as \((1.3,0.9,0.9)\) N.m. Then, for the UDE controller, the filter gain is \(k=0.05\); the tracking error gain \(\eta_{i}\) is \((10,10,10)\), while the constant control gain \(\varkappa_{i}\) is \((0.6,0.6,0.4)\). Moreover, the adaptive gain \(\lambda_{i}\) in the TvUDE controller is \((2.5,2.5,0.1)\); the lower bound \(\underline{\varkappa_{i}}\) for activating adaptation is kept the same as \(\varkappa_{i}\) of the UDE.
Fig. 3 presents the recorded commands and measurements in the free-motion and static-disturbance tests. Fig. 3(a) shows the joint position \(q_{i}^{des}\) and velocity \(\dot{q}_{i}^{des}\) commands computed by the IK solver of Eq. (2), which makes the end-effector follow a straight line. In Fig. 3(b) and Fig. 3(c), from the top row to the bottom, the panels show the tracking error \(e_{i}\), the commanded joint torque \(\tau_{i}^{des}\), the estimate of the lumped disturbance \(\hat{d}_{i}\), and the control gain \(\varkappa_{i}\). The robot joints move during 0-3 seconds and then stop when the end-effector reaches the targeted location. Both tests show the excellent tracking performance of the controllers with estimation, compared with the pure PD. For example, in Fig. 3(b), the blue curve representing the PD shows a relatively large transient tracking error as well as a steady-state error, probably due to the inaccurate friction model. However, even without friction compensation, the UDE (green curve) and the TvUDE (red curve) lead to only small tracking errors. The disturbance \(\hat{d}_{i}\) estimated online also reflects that the friction in the real world is too complex to be modelled exactly. Furthermore, after adding an unknown payload to the robot, the PD behaves worse, while the two controllers with UDE can still handle model uncertainty as
Fig. 3: Plots showing the recorded command and measurement in free-motion and static-disturbance tests. (a) shows the joint position \(q_{i}^{des}\) and velocity \(\hat{q}_{i}^{des}\) command computed by the IK solver. In (b) and (c), from the top row to the bottom, they are the tracking error \(e_{i}\), the commanded joint torque \(\tau_{i}^{des}\), the estimated lumped disturbance \(\hat{d}_{i}\), and the control gain \(\varkappa_{i}\).
well as external disturbance with very small tracking errors, as illustrated in Figure 3(c). Note that the gain \(\varkappa_{i}\) continuously changes, indicating that the adaptive law functions all the time, though there appears to be no improvement in tracking. The time-varying gain does however help the controller continue to work well under transient disturbance.
The result in Fig. 4 corresponds to the third group of experiments, which compares the UDE and the TvUDE in dealing with transient disturbance. In this case, when the dumbbell is taken off at 0.8 seconds, all joints oscillate to some degree and then return to their previous positions. However, the TvUDE controller presents a shorter recovery time and smaller tracking error than the UDE. The faster convergence rate of the TvUDE can be explained by the control gain \(\varkappa_{i}\) plots: when the disturbance occurs, the control gain (red curve) rapidly increases to stabilize the robot as soon as possible. Therefore, the TvUDE shows strong robustness in rejecting transient disturbance. More experimental results can be found in our video submission.
## VI Conclusion and Future Work
In this paper, we propose an efficient and robust motion controller (TvUDE) with uncertainty and disturbance estimation and a time-varying control gain for robotic manipulators, in order to achieve precise trajectory tracking. Specifically, the estimator, which consists of reformulated robot dynamics and filtering operations, can estimate the unknown disturbance in real time without using the acceleration signal. The time-varying control gain based on the gradient method then further increases the robustness of the control system. Finally, the convergence and boundedness of TvUDE are analysed, and a set of real hardware experiments is conducted on a six-DOF robotic manipulator for verification. The experimental results show the effectiveness of the proposed controller in handling internal uncertainty and external static and transient disturbance. In the future, we plan to study the possibility of using this controller for collision detection in safe human-robot collaboration.
## Acknowledgment
The authors are grateful to Terry Cavan Chan for his help in the mechanical design of the robot.
|
2304.14759 | A runaway T-Tauri star leaving an extended trail | Aims. We address the problem of young stellar objects that are found too far
away from possible star formation sites. Different mechanisms have been
proposed before to explain this unexpected circumstance. The idea of
high-velocity protostars is one of these mechanisms, although observational
support is not always easy to obtain. We aim to shed light on this issue after
the serendipitous discovery of a related stellar system. Methods. Following the
inspection of archival infrared data, a peculiar anonymous star was found that
apparently heads a long tail that resembles a wake-like feature. We conducted a
multiwavelength analysis including photometry, astrometry, and spectroscopy.
Together with theoretical physical considerations, this approach provided a
reasonable knowledge of the stellar age and kinematic properties, together with
compelling indications that the extended feature is indeed the signature of a
high-velocity, or runaway, newborn star. Results. Our main result is the
discovery of a low-mass young stellar object that fits the concept of a runaway
T-Tauri star that was hypothesized several decades ago. In this peculiar star,
nicknamed UJT-1, the interaction of the stellar wind with the surrounding
medium becomes extreme. Under reasonable assumptions, this unusual degree of
interaction has the potential to encode the mass-loss history of the star on
timescales of several $10^5$ years | Josep Martí, Pedro L. Luque-Escamilla, Estrella Sánchez-Ayaso | 2023-04-28T11:04:55Z | http://arxiv.org/abs/2304.14759v1 | # A runaway T-Tauri star leaving an extended trail
###### Abstract
Aims:We address the problem of young stellar objects that are found too far away from possible star formation sites. Different mechanisms have been proposed before to explain this unexpected circumstance. The idea of high-velocity protostars is one of these mechanisms, although observational support is not always easy to obtain. We aim to shed light on this issue after the serendipitous discovery of a related stellar system.
Methods:Following the inspection of archival infrared data, a peculiar anonymous star was found that apparently heads a long tail that resembles a wake-like feature. We conducted a multiwavelength analysis including photometry, astrometry, and spectroscopy. Together with theoretical physical considerations, this approach provided a reasonable knowledge of the stellar age and kinematic properties, together with compelling indications that the extended feature is indeed the signature of a high-velocity, or runaway, newborn star.
Results:Our main result is the discovery of a low-mass young stellar object that fits the concept of a runaway T-Tauri star that was hypothesized several decades ago. In this peculiar star, nicknamed UJT-1, the interaction of the stellar wind with the surrounding medium becomes extreme. Under reasonable assumptions, this unusual degree of interaction has the potential to encode the mass-loss history of the star on timescales of several \(\sim 10^{5}\) years.
## 1 Introduction
Star formation usually occurs in the deep cores of molecular clouds (McKee & Ostriker, 2007). However, infant stars sometimes appear to be isolated, and their space velocity is too low (a few km s\({}^{-1}\)) for them to have reached their current position from any plausible cradle site within their young age (a few million years; Neuhauser (1997)). While dispersion of the parental molecular cloud might eventually overcome this issue (Hoff et al., 1998), the concept of runaway T-Tauri stars (TTS), also known as RATTS, was offered as a competing alternative scenario (Sterzik & Durisen, 1995), although only a few candidates have been reported in the decades since (Neuhauser et al., 1997; Neuhauser et al., 1996). Moreover, none of these candidates exhibits unambiguous signatures of a fast-moving object. In this paper, we present the discovery of a high-velocity TTS escaping from its natal molecular cloud and leaving behind an extremely long wake that we estimate to be at least \(\sim 10\) pc. This elongated feature rivals the length of similar features in the Galaxy that are associated with a single star (Martin et al., 2007).
This finding not only revives the RATTS paradigm, but might also enable us to recover the mass-loss history of a protostellar object back to several hundred thousand years. In addition, the long tail behind this TTS also provides a unique test bench for studying the interplay between turbulence and instabilities at very large Galactic scales. The present work, dealing with an obscured low-mass star, complements other recent studies on runaway stars using _Gaia_ data that were mostly focused on early-type luminous stars (Hattori et al., 2019; Neuhauser et al., 2020) or giant unobscured stars (Li et al., 2021). In these evolved contexts, dynamical ejection in multiple systems and supernova explosions in close binaries currently appear as the most likely scenarios.
The paper is organized as follows. After describing the discovery circumstances and early observational work, we present the evidence supporting the RATTS nature of the target star. The spectral energy distribution (SED) of the star itself and of its bow-shock tail is addressed in detail. Next, in the discussion section, we devote most of our attention to the mechanical scenario. We scale the properties of the stellar wake, determine the type of instabilities that eventually develop in its flow, and describe the stellar wind evolution. Anticipating our main conclusion, we appear to have found a RATTS that loses mass at a variable rate over time, owing to variable stellar winds and strong interaction with its surrounding interstellar medium (ISM). Finally, four appendices with the equation
formalisms of cooling timescales, hydrodynamical phenomena, and speculation about the origin of the star are also included.
## 2 Serendipitous discovery and initial observational follow-up
While inspecting the bow shock of BD+43\({}^{\circ}\)3654, a massive runaway star in Cygnus (Benaglia et al., 2010), in the All-Sky Data Release of the Wide-field Infrared Survey Explorer (WISE), we detected an unusual filamentary structure, nearly one degree long with a measured position angle of \(178^{\circ}\pm 2^{\circ}\), and a conspicuous star-like source at a distance of about 0.35 arcmin from its sharpest frontal edge (Fig. 1). This anonymous object was later realized to have been present, but unnoticed, in previously published images of the field (Toalá et al., 2016). It also corresponds to entry 2070567522539111168 in the third _Gaia_ data release (DR3), where it is flagged as variable. No proper motion or parallax information is available, however. Intrigued by these facts, we first conducted intensive astrometric and photometric observations with the 0.4 m University of Jaén Telescope (UJT; MPC code L83, Martí et al. (2017)), during which the nickname UJT-1 was assigned. To overcome the shortage of _Gaia_ parameters, astrometry was performed on different archival images spanning nearly seven decades. This was complemented with astrometry based on modern CCD images, including the most recent one taken in 2021 with the UJT. The different image sources are collected in the first column of Table 1.
Accurate astrometric solutions for all images, including the historical ones, were established based on numerous _Gaia_ reference stars in the field, corrected for proper motions. The IRAF package was used for bias, dark-current, and flat-field corrections when needed, and to determine the plate solutions. In particular, its tasks daofind, ccxymatch, ccmap, and cctran were key for this last purpose. The resulting sky coordinates in the International Celestial Reference System (ICRS) are presented in Table 1. They were computed using the full astrometric solutions (usually up to third-order terms) to account for plate distortions in the focal plane. This translates into astrometric residuals of reference stars ranging from about 0.1 arcsecond for historical photographic survey images to \(\sim 0.01\) arcsecond for more recent electronic frames selected among the best seeing conditions. The statistical errors given in Table 1 typically correspond to about 1/10 to 1/20 of a pixel, since the target star was always detected with a good signal-to-noise ratio. The final outcome is the pair of proper motions (\(\mu_{\alpha}\cos\delta=-1\pm 1\) mas yr\({}^{-1}\) and \(\mu_{\delta}=-6\pm 1\) mas yr\({}^{-1}\)) that resulted from a weighted linear least-squares fit to all the positions in Table 1 (see Fig. 2).
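As an illustration of this step, below is a minimal sketch of the weighted linear least-squares fit for one coordinate, using the epochs and declination offsets of Table 1; the implementation details are our own, not those of the original analysis.

```python
import numpy as np

# Epochs (yr) and declination offsets (arcsec) with 1-sigma errors from Table 1.
t = np.array([1953.449, 1992.468, 2003.613, 2011.772,
              2016.000, 2019.433, 2019.441, 2021.763])
x = np.array([42.317, 42.077, 42.012, 41.962, 41.937, 41.884, 41.936, 41.915])
s = np.array([0.105, 0.067, 0.014, 0.018, 0.001, 0.041, 0.059, 0.032])

# Weighted linear fit x(t) = x0 + mu*(t - t_ref); mu is the proper motion.
w = 1.0 / s**2
t_ref = np.average(t, weights=w)
A = np.vstack([np.ones_like(t), t - t_ref]).T
cov = np.linalg.inv(A.T @ (w[:, None] * A))        # parameter covariance
x0, mu = cov @ A.T @ (w * x)
print(f"mu_dec = {mu*1e3:+.1f} +/- {np.sqrt(cov[1, 1])*1e3:.1f} mas/yr")
```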
The UJT astrometric run in 2019 also included absolute photometry, yielding \(R=16.48\pm 0.06\) and \(I=14.70\pm 0.03\) using Landolt standard stars (Landolt, 1992). In a similar way, imaging with the 1.23 m telescope at the Centro Astronómico Hispano-Alemán in Calar Alto (CAHA, Spain) on 2019 December 10 provided \(V=19.2\pm 0.1\), but no simultaneous detection in the blue (\(B\geq 20.5\)).
## 3 Spectral type and distance determination
We also obtained optical spectroscopy of UJT-1 using the ALFOSC spectrograph of the Nordic Optical Telescope (NOT) at the Observatorio del Roque de los Muchachos (ORM) (Fig. 3). The spectra were taken on 2019 June 10 with a total exposure time of 1800 s, using the ALFOSC grism 4, which covers the 3200-9600 Å region at medium resolution. Data reduction, also with IRAF, included bias and flat-field correction, followed by spectrum extraction with removal of the sky background. Wavelength calibration was achieved using Th-Ar lamps. Flux calibration was tied to a nearly simultaneous observation of the standard star BD+17 4708 (a subdwarf star of spectral type F8) at roughly the same air mass. Unfortunately, the target lines in the blue region of the spectrum were not accessible due to high interstellar extinction. From the continuum flux level, we can still estimate the approximate but simultaneous magnitudes \(B=21.4\pm 0.2\) and \(V=18.8\pm 0.1\), indicating a color \(B-V=2.6\pm 0.2\). Beyond the highly reddened and absorbed continuum, the most distinctive target feature was a noticeable, blueshifted H\(\alpha\) emission component. The H\(\alpha\) energy flux and equivalent width amounted to \((4.1\pm 0.1)\times 10^{-15}\) erg s\({}^{-1}\) cm\({}^{-2}\) and \(-10.7\pm 0.2\) Å. Another easily recognizable spectral feature was the near-infrared Ca II triplet in absorption, which immediately indicates a late spectral type for our target. A second ALFOSC observation on 2019 October 18 using grism 19 confirmed the H\(\alpha\) emission and revealed no trace of the Li I 6707 Å line.
In order to improve the spectral classification, the first NOT spectrum was continuum-rectified and compared to a library of ionized-calcium stellar spectra (Cenarro et al., 2001) convolved to the same resolution. Under a least-squares criterion, the closest match corresponded to a G5 III type (Fig. 4). We point out here that this refers only to a location in the Hertzsprung-Russell (HR) diagram and does not necessarily reflect the young evolutionary status of the star. Based on the absolute magnitude and intrinsic \(B-V\) color at this HR position, the most plausible distance range to the source is roughly estimated as \(d=4.5\pm 1.0\) kpc, with an interstellar absorption as high as \(A_{V}=4.8\pm 0.5\) magnitudes. This is equivalent to a color excess of \(E(B-V)=1.6\pm 0.2\) magnitudes. A G-dwarf location in the HR diagram is ruled out because it would require a very nearby distance (less than 1 kpc), which is clearly inconsistent with the highly reddened spectrum in Fig. 3. Given the quality of the NOT medium-resolution spectrum, an F-dwarf classification could still be conceivable with similar \(A_{V}\) and \(E(B-V)\) values. This would imply a closer location, in the range 1.5-2.0 kpc, placing UJT-1 in the outskirts of the Cygnus X region. However, according to the tridimensional reddening maps by Capitanio et al. (2017)\({}^{1}\), a distance of 2 kpc or more is needed to reach the estimated color excess along the UJT-1 line of sight. Therefore, the farther distance of 4.5 kpc appears slightly more favored and is preferentially adopted in this work unless stated otherwise. Finally, a late-type supergiant classification does not apply because it would imply a distance far beyond the Milky Way limits.
Footnote 1: [https://stilism.obspm.fr](https://stilism.obspm.fr)
A subsequent observation of UJT-1 on 2019 September 23 with the Isaac Newton Telescope (INT), also at the ORM, and its intermediate dispersion spectrograph enabled a better measurement of the Ca triplet wavelengths. The R1200R grism with a 1200 s exposure time and similar data processing was used, yielding a heliocentric radial velocity estimate of \(-33\pm 4\) km s\({}^{-1}\) under the assumption that this is a single object.
## 4 Stellar runaway velocity
Based on the astrometric and distance parameters found above, it is possible to estimate the peculiar velocity of UJT-1 with respect to its regional standard of rest (RSR). For this purpose, the velocity components due to the rotation of the Milky
Way and the motion of the Sun with respect to the local standard of rest (LSR) have to be subtracted from the heliocentric velocity resulting from the astrometric and spectroscopic measurements above. The equation formalism is well documented in the literature (Moffat et al. 1998) and can be applied to both the tangential and radial velocity components. When the peculiar velocities and associated proper motions are available, it is also possible to compute the position angle of the true stellar motion relative to its RSR, its line-of-sight inclination angle, and the modulus of the peculiar velocity. In our case, the lack of accurate Gaia proper motions is a handicap that renders our analysis more difficult than usual. To overcome this limitation, we decided to explore the 90% confidence region around the (\(\mu_{\alpha}\cos\delta\), \(\mu_{\delta}\)) values found above, assuming that they follow uncorrelated Gaussian distributions. The low (\(\sim 0.1\)) absolute values of the corresponding covariance-matrix elements for Gaia stars in the field justify this last assumption. The result of this exploration is presented in Fig. 5, where the sampled parameter space overlaps well with the observed position angle of the UJT-1 wake feature (\(178^{\circ}\pm 2^{\circ}\)). A 4.5 kpc distance was used here. The corresponding modulus of the peculiar velocity relative to the RSR reaches values in the range 15 to 80 km s\({}^{-1}\) at the 90% confidence level while agreeing with the position angle of the wake. About 75% of this interval lies above the usual threshold (\(\sim 30\) km s\({}^{-1}\)) used to define runaway stars (Eldridge et al. 2011). Therefore, UJT-1 is a strong candidate for this category. An intermediate plausible value of 45 km s\({}^{-1}\) is adopted for discussion purposes, and the corresponding line-of-sight angle of 82\({}^{\circ}\) places the motion practically in the plane of the sky. Conversion from angular distance to travel time is achieved using the factor \(v_{\star}\sin i/d=2.3\) mas yr\({}^{-1}\).
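The confidence-region exploration described above can be sketched as a simple Monte Carlo over the proper-motion uncertainties. This is an illustrative outline only: the function `peculiar_correction` is a placeholder for the Galactic-rotation and solar-motion formalism (Moffat et al. 1998), with dummy offsets.

```python
import numpy as np

d_kpc, vr_hel = 4.5, -33.0     # adopted distance (kpc) and heliocentric RV (km/s)
K = 4.74                       # km/s per (mas/yr) per kpc

def peculiar_correction(mu_ra, mu_dec, vr):
    """Placeholder for the Galactic rotation + solar motion correction
    (Moffat et al. 1998); the constant offsets below are dummies."""
    return mu_ra + 0.5, mu_dec + 2.0, vr + 10.0

rng = np.random.default_rng(1)
mu_ra = rng.normal(-1.0, 1.0, 100_000)    # mas/yr, measured value +/- error
mu_dec = rng.normal(-6.0, 1.0, 100_000)
pra, pdec, pvr = peculiar_correction(mu_ra, mu_dec, vr_hel)
v_tan = K * d_kpc * np.hypot(pra, pdec)           # tangential peculiar speed
v_pec = np.hypot(v_tan, pvr)                      # peculiar velocity modulus
pa = np.degrees(np.arctan2(pra, pdec)) % 360.0    # position angle, N through E
print(np.percentile(v_pec, [5, 50, 95]))
print(np.percentile(pa, [5, 50, 95]))
```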
When a shorter distance (2 kpc) is assumed, a plot similar to Fig. 5 is obtained, but with a lower intermediate plausible velocity of 30 km s\({}^{-1}\) and a line-of-sight angle of 45\({}^{\circ}\). This is still compatible with the runaway classification. Therefore, UJT-1 remains consistent with being a high-velocity object despite the distance uncertainties.
Table 1: Log of astrometric observations. Coordinates are ICRS offsets, in arcseconds, from a right ascension of 308° 07′ and a declination of 43° 57′.

| Source of image | Epoch (year) | Right Ascension (″) | Declination (″) |
| --- | --- | --- | --- |
| 1st Digitized Sky Survey | 1953.449 | 36.443 ± 0.100 | 42.317 ± 0.105 |
| 2nd Digitized Sky Survey | 1992.468 | 36.453 ± 0.071 | 42.077 ± 0.067 |
| Isaac Newton Telescope | 2003.613 | 36.478 ± 0.013 | 42.012 ± 0.014 |
| Pan-STARRS1 | 2011.772 | 36.476 ± 0.015 | 41.962 ± 0.018 |
| Gaia DR3 | 2016.000 | 36.459 ± 0.001 | 41.937 ± 0.001 |
| UJA Telescope | 2019.433 | 36.443 ± 0.042 | 41.884 ± 0.041 |
| Nordic Optical Telescope | 2019.441 | 36.422 ± 0.061 | 41.936 ± 0.059 |
| UJA Telescope | 2021.763 | 36.431 ± 0.030 | 41.915 ± 0.032 |
Figure 1: View of UJT-1 and its environments. This image uses the WISE All-Sky data release at 3.4, 4.6, and 12 \(\mu\)m, coded as blue, green, and red layers, respectively. The dashed circle shows the position of the star, and the arrow vector represents the proper motion direction corrected for Galactic rotation. Axes are labeled in equatorial coordinates; north is rotated to the right and east is up. The stellar bow shock wake is clearly visible extending northward. The inset shows a zoomed view of the stand-off region of the bow shock.
## 5 Spectral energy distribution of the star
The SED of UJT-1 can be assembled from our own observations (UJT, NOT, and CAHA) together with cataloged data obtained from different infrared telescopes, such as the Two Micron All Sky Survey (2MASS), WISE, AKARI, Spitzer, the Midcourse Space Experiment (MSX), and Planck. The resulting SED (Fig. 6) is dominated by two clear maxima, usually interpreted in terms of a central protostar and disk or ambient dust emission, respectively. This figure also includes a tentative attempt to fit the data points using libraries of young stellar object (YSO) and thermal dust spectra (Draine & Li, 2007; Robitaille et al., 2006). Although this fitting exercise was hampered by high optical extinction, strong variability (Heinze et al., 2018), and poor angular resolution at long wavelengths, a plausible agreement with a YSO interpretation is supported.
Figure 4: Comparison of the spectrum of UJT-1 with different spectral templates. Templates are taken from a CaII triplet spectral library (Cenarro et al., 2001). All spectra are rest-frame corrected.
Figure 5: Range of peculiar velocities and position angles of peculiar motion. The gray and brown shaded areas cover the 90% and 50% confidence regions that are consistent with the estimated proper motions. The vertical lines illustrate the narrow interval of position angles that is consistent with the sky orientation of the UJT-1 trail.
Figure 3: Optical spectrum of UJT-1, obtained with the NOT telescope. The most prominent stellar spectral features, H\(\alpha\) in emission and the Ca II triplet in absorption, are marked together with \(H_{2}O\) and \(O_{2}\) telluric absorption features.
Figure 2: Least-squares fit to astrometric observations. Data from Table 1 are plotted both in right ascension (top) and declination (bottom) as a function of time. The two celestial coordinates are expressed in the ICRS system. The dashed lines represent linear least-squares fits yielding the estimated proper motions. Error bars correspond to one standard deviation.
In this context, it was difficult to distinguish among spectral templates, but acceptable fits were obtained with a central protostar temperature consistent with a late spectral type (below 7500 K).
At X-ray energies, a marginal detection of UJT-1 was obtained from standard processing of archival XMM-Newton data (Obs. Id. 0653690101) using the science analysis system (SAS) of this observatory. Assuming a simple power-law spectrum, with the hydrogen column density set according to \(A_{V}=4.5\) mag, we estimated the 0.5-8 keV luminosity to be \(3\times 10^{32}\) erg s\({}^{-1}\) (\(d\)/4.5 kpc)\({}^{2}\). Given our limited knowledge of the distance, this places UJT-1 either at the high end of, or well within, the range of typical TTS X-ray luminosities. Moreover, it indicates an object of a few solar masses, based on the known X-ray luminosity versus mass correlation for TTSs (Preibisch et al. 2005).
## 6 Spectral energy distribution of the bow-shock tail and dust properties
The infrared emission from the whole structure is mainly due to heated ISM dust swept up into the shocked layer. WISE and Spitzer observations probe wavelengths corresponding to small particles (0.001-0.01 \(\mu\)m) that are stochastically heated, while AKARI and Planck detect emission from colder, larger particles (\(\geq 0.1\)\(\mu\)m) in thermal equilibrium with the starlight. From these satellite images, we built the SED of the bow shock from 3.35 to 550 \(\mu\)m (see Fig. 7) for ten different regions along the limb-brightened bow shock, each covering \(\sim 2\%\) of the total solid angle with emission. The shape of the spectra was remarkably constant among regions, so mean values are shown. To obtain an order-of-magnitude estimate of the dust properties, we first fit the \(\lambda\geq 100\)\(\mu\)m part of the spectrum, attributed to the larger particles, because they account for most of the infrared emission and dust mass. The simplified model used here (Dwek & Werner 1981) is based on a modified blackbody with emission efficiency \(\sim\lambda^{-1}\) and a dust-grain density of 3 g cm\({}^{-3}\). For each of our ten regions, the fit gives a dust mass \(M_{d}\sim 0.03\)\(M_{\odot}\) and a radiative equilibrium temperature of \(T_{d}\sim 35\) K. Taking into account that the analyzed regions contain about 20% of the total shocked volume, the overall dust mass in the tail is estimated to be about \(M_{d}\sim 1.7\)\(M_{\odot}\). When the distance to the star is reduced to 2 kpc, the total dust mass becomes about \(M_{d}\sim 0.34M_{\odot}\).
Using the same model (Dwek & Werner 1981), we can estimate the infrared luminosity of each of the ten regions we analyzed. As a result, a total of \(L_{IR}\sim 5.4\times 10^{37}\) erg s\({}^{-1}\) [\(d\)/4.5 kpc]\({}^{2}\) is estimated for the whole tail emission. We also tried to fit the complete SED with a more sophisticated dust model (Draine & Li 2007) that considers the contribution from a size distribution of grains of different compositions (including emission lines from polycyclic aromatic hydrocarbons, or PAHs) exposed to a range of radiation intensities. We find the best agreement for a dust distribution with 1.49% in mass of PAH particles containing fewer than 1000 carbon atoms, and a range of intensities from 1 to \(10^{4}\) times the starlight intensity in the solar-neighborhood ISM. However, Fig. 7 displays an excess in the \(\sim\) 5-20 \(\mu\)m range that cannot be attributed to the dust model. This excess appears in all the analyzed regions of the bow-shock shell and might be tentatively attributed to a new light dust component that, in this wavelength interval, might crudely be approximated by a modified blackbody with a \(\lambda^{-2}\) emissivity law at \(T\sim 300\) K (not shown in the figure).
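As a sketch of the far-infrared fitting step, the following fits a modified blackbody with a \(\lambda^{-1}\) emissivity law to mock flux points; the wavelengths, flux values, and normalization are placeholders, not the measurements behind Fig. 7.

```python
import numpy as np
from scipy.optimize import curve_fit

h, c, k = 6.626e-27, 2.998e10, 1.381e-16          # cgs constants

def mod_bb(wl_um, T, A):
    """Modified blackbody: B_lambda(T) times a lambda^-1 emissivity law,
    with A an arbitrary normalization (optical depth x solid angle)."""
    wl = wl_um * 1e-4                              # micron -> cm
    B = 2*h*c**2 / wl**5 / (np.exp(h*c/(wl*k*T)) - 1.0)
    return A * B / wl_um                           # emissivity ~ lambda^-1

# Placeholder far-IR fluxes for lambda >= 100 um, with 5% synthetic noise.
wl = np.array([100., 160., 250., 350., 550.])
flux = mod_bb(wl, 35.0, 1e-12) * (1 + 0.05*np.random.default_rng(0).normal(size=5))

(T_fit, A_fit), _ = curve_fit(mod_bb, wl, flux, p0=(20.0, 1e-12))
print(f"T_d ~ {T_fit:.1f} K")
```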
## 7 Wake growth scaling determination and period search
The infrared wake behind the protostar widens with distance from UJT-1, as seen in the detailed view of Fig. 8. To determine the scaling law of this behavior, we used the WISE band 3 image, the best-suited one for this purpose. Angular widths in this section are expressed in arcseconds. The wake axis was aligned with the observed position angle. Its border pixels were selected by visual inspection, which gives values similar to those of an automated extraction, but more accurate ones. The estimated uncertainty is about 6 arcseconds (a few pixels). We determined the width \(2R\) of the wake at a given distance \(z\) from the star by measuring the span between the two border pixels. Because we searched for a self-similar scaling, only points far from the head of the bow shock (\(z>300\) arcseconds) were considered. The determination coefficient of the resulting power-law scaling \(2R=a(z/z_{0})^{b}\) is \(r^{2}=0.991\), with \(a=28\pm 3\) arcseconds and \(b=0.33\pm 0.02\) (see Fig. 9).
Figure 6: SED of the star UJT-1. It is based on CAHA + UJT photometry, NOT spectroscopy, and the different multiwavelength surveys listed in the figure inset. The thick line represents a tentative SED fit based on libraries of theoretical spectra for YSO (dashed line) and dust (dotted line) components (Draine & Li 2007; Robitaille et al. 2006). Error bars correspond to one standard deviation.
Figure 7: Averaged SED for the different regions along the limb-brightened bow-shock shell. The tentative fit from a dust-grain model is plotted as the continuous red line. The dotted line shows the fit of a single radiation field to the longest wavelengths. Error bars correspond to one standard deviation. See text for details.
Here the reference value \(z_{0}\) is one arcsecond. These data were also used for a spatial periodicity analysis with the phase dispersion minimization method (Stellingwerf 1978). As a result, we obtained a possible 4-5 arcminute wavelength periodicity, which for the projected \(v_{\star}\) and the proposed distance gives a shedding frequency \(f\sim 10^{-12}\) Hz.
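The width-scaling fit can be reproduced schematically as follows; the \((z,2R)\) pairs are synthetic placeholders, not the measured border points.

```python
import numpy as np

# Placeholder (z, 2R) points in arcseconds, with z > 300'' as in the text.
z = np.array([300., 600., 1000., 1500., 2200., 3000.])
width = 28.0 * z**0.33 * (1 + 0.03*np.random.default_rng(2).normal(size=z.size))

# Fit 2R = a * z**b by linear least squares in log-log space.
B, logA = np.polyfit(np.log(z), np.log(width), 1)
resid = np.log(width) - (logA + B*np.log(z))
r2 = 1 - resid.var()/np.log(width).var()
print(f"a = {np.exp(logA):.1f}'' , b = {B:.3f}, r^2 = {r2:.3f}")
```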
## 8 Discussion
Based on the H\(\alpha\) emission line with an equivalent width above 10 Å in absolute value, and on the SED of the source (Fig. 6) with a spectral index of \(-1.33\pm 0.03\) in the 4-22 \(\mu\)m range, UJT-1 fulfills all the common taxonomy criteria for a class II YSO, or classical TTS (Appenzeller & Mundt 1989; Lada 1987). Additional confirmation of the TTS nature comes from the location of UJT-1 in the color-color diagram of Fig. 10, built from the 2MASS photometry following the work by Meyer et al. (1997). In this plot, the red cross displays the observed UJT-1 infrared colors, and the blue cross the intrinsic colors after correcting for the estimated interstellar absorption (\(A_{V}=4.8\) mag). It is reassuring that UJT-1 then lies almost exactly on the so-called TTS locus. For further discussion, we can thus assume typical TTS values for the mass and radius of the star, \(M_{*}\sim 2~{}M_{\odot}\) and \(R_{*}\sim 2~{}R_{\odot}\).
### Bona fide RATTS
At this point, given the evidence obtained, we propose that UJT-1 is an excellent prototype of RATTS systems, which also supports the hypothesis that they constitute a genuine class. This is further reinforced by the remarkably long path traced by the star along its trajectory, shown at infrared wavelengths in Fig. 1. The path consists of a detached shell surrounding the star together with a wavy, asymmetric, limb-brightened tail that extends downstream by almost one degree. An abrupt width change is observed in the middle of the structure, and the tail is finally distorted and diffuses into the interstellar medium (ISM). The closer view in Fig. 8 also highlights some wiggles and disturbances. If located at 4.5 kpc, and considering a line-of-sight angle almost in the plane of the sky, the observed wake would extend over the considerable distance of 50 pc. This exceeds by more than a factor of 10 the length of the remarkable turbulent wake behind the famous star Mira (Martin et al. 2007). The length of the UJT-1 bow shock then translates into a crossing lifetime of \(\sim 1\) Myr, compatible with the age of a typical TTS (see also Appendix A about its original natal site). The absence of lithium in the NOT spectroscopy consistently suggests that UJT-1 is not extremely young. It is highly remarkable that the effect of a modest, individual star on the ISM appears to be traceable so many parsecs away; in the Milky Way, this is rivaled only by the more energetic relativistic jets of microquasar sources (Fabrika 2004).
The strength of these long-range effects would need to be revised if UJT-1 is closer to us. Even at 2 kpc, however, the observed wake would remain more than four times longer than its Mira counterpart.
### Mechanical scenario
This remarkably large wake feature resembles the familiar turbulent wake observed behind ordinary bodies placed in a free stream, characterized by a self-similar behavior beyond a certain downstream distance \(z\). In particular, this turbulent wake width
Figure 8: Detailed view of the bow shock and model fit of its geometry. The UJT-1 star is the brightest object on the left side of this IRAC Spitzer image at the 3.6 \(\mu\)m wavelength. The fitted Wilkin profile is overplotted in blue. The one-arcminute scale bar is equivalent to about 1.3 pc at the proposed distance of 4.5 kpc.
Figure 10: Infrared color-color diagram showing the location of UJT-1 before (red cross) and after (blue cross) correction for interstellar absorption. The dashed line corresponds to the TTS locus as defined by Meyer et al. (1997). Black squares and triangles indicate the location of ordinary dwarf and giant stars according to Bessell & Brett (1988). The three parallel lines indicate the direction of the reddening vectors in steps of \(\Delta A_{V}=1\) mag, following the Rieke & Lebofsky (1985) extinction law.
Figure 9: Scaling law for the growth of the width of the bow-shock wake. The width is defined as the distance between the two border points of the wake at the same distance from the star. The power-law fit with exponent \(0.33\pm 0.02\) is shown as the solid orange line. Error bars represent an estimated standard deviation of about 6 arcseconds.
scales as \(z^{1/3}\) according to Landau & Lifshitz (1987). Remarkably, this is the observed growth rate of the UJT-1 tail (with a power-law index of \(0.33\pm 0.02\); see Section 7). Thus, we might be tempted to assume that it is a clear example of a classical turbulent wake. We might even think of a possible TTS in a bright-rimmed cloud (Hosoya et al. 2019). However, the scenario is much more complex. The formation of a detached shell in front of the star indicates that it moves at supersonic velocity, but the star itself also emits a strong, supersonic wind. Therefore, from the point of view of the star, the incoming ISM and the wind collide at a contact discontinuity where gas and dust accumulate (van Marle et al. 2011). The heated ISM and wind material expand outward from either side of the discontinuity, the former up to a forward bow shock, the latter back into the stellar wind to form a reverse shock. This is a common scenario in the Universe and has been reported, among others, in massive runaway early-type stars (van Buren & McCray 1988), evolved supergiants (Jorissen et al. 2011), asymptotic giant branch (AGB) stars (Martin et al. 2007; Noriega-Crespo et al. 1997), and even pulsars (Kargaltsev et al. 2017). To this extent, however, it is unprecedented in pre-main-sequence stars like ours.
The mechanical scenario of a runaway star moving supersonically in a uniform ISM depends on its velocity in the ISM rest frame, \(v_{\star}\), the stellar wind terminal velocity, \(v_{w}\), the stellar wind mass-loss rate, \(\dot{M}_{w}\), and the density of the ambient ISM, \(\rho_{a}\). These variables define a characteristic length, the stand-off distance from the star to the bow-shock apex, \(R_{0}\sim[\dot{M}_{w}v_{w}/(4\pi\rho_{a}v_{\star}^{2})]^{1/2}\), and the governing parameter \(\eta=v_{w}/v_{\star}\). It is also possible to define a characteristic dynamical timescale \(t_{dyn}=R_{0}/v\), with \(v\) the relevant flow velocity (\(v_{\star}\) for the ISM, \(v_{w}\) for the wind). In addition, to determine the whole physical scenario, we must consider the cooling properties of the shocked ISM and wind layers (Comeron & Kaper 1998), so that a new characteristic velocity arises, the sound speed \(c_{s}\), with its corresponding governing parameter, the Mach number \(\mathcal{M}=v/c_{s}\). In the case of shocked gas, we must use the post-shock velocity, which for a strong shock is a quarter of the incoming one; the characteristic timescale is therefore better defined as \(t_{dyn,s}=4R_{0}/v\). Cooling timescales may also be defined for both the shocked ISM and the shocked wind (Appendix B). When any of these cooling timescales is shorter than the shocked dynamical timescale, we can assume that the corresponding layer is radiative; it is adiabatic in the opposite case.
In order to test these hypotheses, we need some numerical measurements and estimates. As stated in Section 4, a distance of 4.5 kpc and a peculiar velocity of \(v_{\star}=45\) km s\({}^{-1}\) are preferentially adopted. The stand-off distance in infrared images of UJT-1 (e.g., Fig. 1), subtending \(\sim 0.35\) arcminute, is then equivalent to \(R_{0}\sim 0.46\) pc, also taking the line-of-sight angle into account. On the other hand, we consider a neutral Galactic ISM with number density \(n_{H}=2.5\) cm\({}^{-3}\), slightly higher than the typical 1 cm\({}^{-3}\) because of the cloudy environment seen in the infrared maps. We then estimate \(\rho_{a}=\mu n_{H}=5.0\times 10^{-24}\) g cm\({}^{-3}\), with \(\mu=2.3\times 10^{-24}\) g being the adopted mean gas mass per hydrogen atom. Finally, as measuring \(v_{w}\) for a TTS is challenging, we estimate \(v_{w}\sim 0.5v_{esc}\sim 450\) km s\({}^{-1}\), where \(v_{esc}\) is the escape velocity, and the prefactor, which accounts for the uncertainties in the acceleration processes, is taken from a main-sequence star of the same spectral type as UJT-1 (Eldridge et al. 2006). This \(v_{w}\) value is of the same order as the winds of several hundred km s\({}^{-1}\) observed in TTSs (Kwan et al. 2007), and it gives \(\eta\sim 10\).
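These numbers can be cross-checked in a few lines; this minimal sketch evaluates the stand-off relation quoted above, using the current mass-loss rate derived below from Eq. (1) as input.

```python
import numpy as np

pc, Msun, yr = 3.086e18, 1.989e33, 3.156e7       # cgs conversions
v_star, v_w = 45e5, 450e5                         # cm/s
rho_a = 5.0e-24                                   # g/cm^3
Mdot = 1.1e-6 * Msun / yr                         # g/s, from Eq. (1) at 4.5 kpc

R0 = np.sqrt(Mdot * v_w / (4*np.pi * rho_a * v_star**2))
eta = v_w / v_star
t_dyn_s = 4 * R0 / v_star                         # shocked-ISM dynamical timescale
print(f"R0 = {R0/pc:.2f} pc, eta = {eta:.0f}, t_dyn,s = {t_dyn_s/yr:.2e} yr")
# -> R0 close to the 0.46 pc measured from the infrared images
```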
We thus obtain a cooling time for the shocked ISM, \(\tau_{c,\rm ISM}\), shorter than the corresponding shocked dynamical timescale, so the forward layer is radiative and collapses into a thin, dense shell in which instabilities can develop, whereas the hot (\(\sim 10^{7}\) K) shocked wind remains adiabatic.
However, these instabilities are on the order of the width of the shocked layer, so larger departures from the theoretical shape observed in the UJT-1 tail must be attributed to other causes. One possible mechanism is vortex shedding (see Appendix D), which has been proposed in some simulations of AGB-star wakes (Wareing et al., 2007). Although this cannot be strictly ruled out from the calculations, a detailed view of the images does not support the typical regular, periodic shedding of vortices, as was already pointed out by Wareing et al. (2007). Such irregular shedding might be attributed to a fluctuating bow-shock shape, but the whole picture more closely resembles turbulent behavior, especially considering the uncertainties in the viscosity estimate. Therefore, another mechanism for the long-wavelength distortions of the bow-shock shape is probably at work, especially at the strong, abrupt change in width at approximately the middle of the tail (Fig. 11). Here, the bow-shock radius changes so suddenly, by a factor of \(R_{1}/R_{2}\sim 2\), that the most natural explanation is UJT-1 passing through the boundary between two regions of different density. This rapid contraction is very similar to that obtained in simulations of AGB-star bow shocks passing through different ISM densities (Toropina et al., 2019; Yoon & Heinz, 2017). Assuming the bow shocks are in pressure equilibrium, and that the ISM density is higher at the current position of UJT-1, then \(c_{s}\) there is lower, \(\mathcal{M}\) is higher, and the corresponding Mach cone is narrower, as observed. In Fig. 11, two different \(z^{1/3}\) power-law Wilkin bow-shock profiles are fit to the infrared Spitzer image. As the radius jump occurs so abruptly, it must scale as \(\sim(n_{1}/n_{2})^{1/3}\), so the required ISM density increase is about one order of magnitude. Moreover, these anisotropic properties of the ISM around UJT-1 might explain (Yoon & Heinz, 2017) the observed east-west asymmetry in the infrared emission. The stand-off distance obtained from the Wilkin relation \(\bar{R}=(3\pi\bar{z})^{1/3}\) above, together with the fitting of the wake to \(\bar{R}\sim\bar{z}^{1/3}\), results in \(R_{0}\) values of about those reported in the previous section.
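For reference, the following is a minimal sketch of the Wilkin (1996) thin-shell profile and its far-tail \(z^{1/3}\) limit used above; lengths are in units of \(R_{0}\).

```python
import numpy as np

def wilkin_R(theta):
    """Wilkin (1996) thin-shell radius R(theta)/R0, with theta measured
    from the apex direction (0 < theta < pi)."""
    return np.sqrt(3.0 * (1.0 - theta / np.tan(theta))) / np.sin(theta)

theta = np.linspace(0.3, 3.0, 6)
R = wilkin_R(theta)
r = R * np.sin(theta)          # cylindrical radius, units of R0
z = -R * np.cos(theta)         # downstream distance (positive in the tail)

# Far-tail check: r -> (3*pi*z)**(1/3) for z >> R0, the z^(1/3) law above.
mask = z > 3
print(np.c_[r[mask], (3*np.pi*z[mask])**(1/3)])
```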
### Role of mass-loss variability
On the other hand, intermediate-scale disturbances in the bow-shock profile might also be attributed to actual changes in the stellar wind mass-loss rate. Its current value can be estimated from the measured \(R_{0}\) by balancing the ISM ram pressure against the stellar wind momentum flux, as
\[\dot{M}_{w}=\dot{M}_{\rm w0}\left[\frac{n_{H}}{2.5\ {\rm cm^{-3}}}\right] \left[\frac{R_{0}}{0.46\ {\rm pc}}\right]^{2}\left[\frac{v_{*}}{45\ {\rm km\ s^{-1}}}\right]\left[\frac{\eta}{10}\right]^{-1}, \tag{1}\]
where \(\dot{M}_{\rm w0}\) is the current value of the mass-loss rate (\(1.1\times 10^{-6}\ M_{\odot}\) yr\({}^{-1}\) for \(d=4.5\) kpc and \(1.8\times 10^{-7}\ M_{\odot}\) yr\({}^{-1}\) for \(d=2\) kpc). These values are consistent with the upper range of observed outflows and neutral winds in TTSs (Hartigan et al., 1995; Ruiz et al., 1992). Assuming that the ISM density remains constant up to the sudden radius jump discussed above, and that the star and wind velocities remain constant as well, we can estimate the mass-loss rate at any distance \(z\) far enough from the star for the Wilkin power-law equation (Wilkin, 1996) to apply. If \(\dot{M}_{w}\) is the current mass-loss rate and \(R^{\prime}\), \(R\) are the measured and theoretical bow-shock radii at a certain distance, the corresponding mass-loss rate is \(\dot{M}_{w}^{\prime}=\dot{M}_{w}(R^{\prime}/R)^{3}\). Therefore, in a variable-wind scenario, we can reconstruct the stellar wind mass-loss history over the huge time lapse encoded in the remarkably long wake of UJT-1. The result is shown in Fig. 12, where we go back \(\sim 5\times 10^{5}\) yr into the past, witnessing the mass-loss evolution of a pre-main-sequence star with unprecedented coverage and detail.
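Schematically, the reconstruction behind Fig. 12 amounts to the following; the measured radii are placeholders, and the angular-to-time conversion uses the factor \(v_{\star}\sin i/d=2.3\) mas yr\({}^{-1}\) from Section 4.

```python
import numpy as np

Mdot0 = 1.1e-6                   # current mass-loss rate, Msun/yr (d = 4.5 kpc)
R0 = 21.0                        # stand-off distance, 0.35 arcmin in arcseconds
mu_eff = 2.3e-3                  # v_star*sin(i)/d in arcsec/yr (Section 4)

z = np.array([200., 400., 600., 900., 1150.])       # distances along the tail ('')
R_meas = np.array([100., 110., 150., 140., 185.])   # placeholder measured radii ('')

R_wilkin = (3*np.pi * R0**2 * z)**(1/3)             # theoretical far-tail radius
Mdot_hist = Mdot0 * (R_meas / R_wilkin)**3          # Mdot at each past epoch
lookback = z / mu_eff                               # travel time into the past (yr)
for t, m in zip(lookback, Mdot_hist):
    print(f"{t:9.2e} yr ago: Mdot ~ {m:.2e} Msun/yr")
```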
### Toward a consistent large-scale view
The most intriguing characteristic of the wake structure of UJT-1 is its very extended length. This suggests that the star is passing through ISM regions devoid of strong winds that could destroy its long-lasting morphology. In addition, the observed infrared emission must remain persistent. In bow shocks, this radiation is dominated by thermal dust previously heated by the absorption of starlight photons and by collisions with ions and electrons (Draine, 2003). In Section 6, we obtained SEDs from 3.35 to 550 \(\mu\)m for different regions along the bow-shock tail using the available data. The shape of the spectrum is highly consistent among regions, revealing that the smaller-grain dust-gas collisions are maintained in time. Emission from large (\(\gtrsim 0.1\ \mu\)m) grains with \(T_{d}\sim 35\) K produces the far-infrared part of the SEDs. It is noteworthy that the fitted temperature is slightly higher than that expected at a distance \(\sim R_{0}\) from our TTS (\(R_{*}\sim 2\ R_{\odot}\) and effective temperature \(T_{\rm eff}\sim 5400\) K for the G5 spectral type), given by the Dwek & Werner (1981) relation \(T_{d}=T_{\rm eff}(R_{*}/R_{0})^{2/(4+s)}=8\)-\(25\) K, with \(s\) being the slope of the dust opacity in the infrared regime, which varies from 1 (silicates) to 2 (graphite). The dust mass in the wake is \(M_{d}\sim 1.7\ M_{\odot}\), similar to the snowplow estimate of \(M_{d}\sim 1.9\ M_{\odot}\) obtained by assuming that the wind sweeps up all the dust inside the bow-shock volume, as calculated from the Wilkin analytical model for a total shell length of \(\bar{L}\sim 100\) and a constant dust-to-gas ratio of 0.01, for which the gas density and stand-off distance are also constant. If the star were an F0 dwarf with \(T_{\rm eff}\sim 7400\) K located at 2 kpc, the dust temperature estimated with the Dwek & Werner (1981) relation above would be between 14 and 40 K, in agreement with the temperature obtained from fitting the SED. The total dust mass in the wake would then be \(M_{d}\sim 0.34\ M_{\odot}\), again in accordance with the snowplow approximation, \(M_{d}\sim 0.26\ M_{\odot}\). However, the obscured appearance of the UJT-1 wake might render the higher dust-mass values preferable.
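The quoted 8-25 K range follows directly from the Dwek & Werner (1981) radiative-equilibrium relation; a minimal numerical check with the values given in the text:

```python
import numpy as np

Rsun, pc = 6.96e10, 3.086e18
T_eff, R_star, R0 = 5400.0, 2*Rsun, 0.46*pc       # G5 case at d = 4.5 kpc

for s in (1, 2):                                  # opacity slope: silicate/graphite
    T_d = T_eff * (R_star / R0)**(2.0/(4.0 + s))
    print(f"s = {s}: T_d = {T_d:.0f} K")
# -> roughly 8 K and 25 K, below the fitted 35 K, pointing to extra heating
```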
The infrared luminosity of the tail derived in Section 6 is not consistent with the upper limit obtained by assuming a constant mass-loss rate and the local ISM thermalizing its whole kinetic energy when colliding with the stellar wind in the bow shock, \(L_{IR}<\frac{1}{2}\dot{M}_{w}(v_{\star}^{2}+v_{w}^{2})\sim 6.9\times 10^{34}\) erg s\({}^{-1}\), which changes to \(L_{IR}<1.2\times 10^{34}\) erg s\({}^{-1}\) when the distance to the star is only 2 kpc.
Figure 12: Reconstruction of the past wind mass-loss in the TTS UJT-1 as inferred from its trailing wake. The plot is scaled according to the currently estimated value of \(\dot{M}_{\rm w0}=1.8\times 10^{-7}\ M_{\odot}\) yr\({}^{-1}\). Error bars indicate one standard deviation. Mass-loss changes by a factor of \(\sim 5\) over \(\sim 0.5\) Myr are recovered.
In spite of the crudeness of these calculations, the UJT-1 tail appears to be heated by external sources. As no clear O/B stars are identified in the near vicinity of the region, we speculate that a close encounter with the shell of an unnoticed supernova remnant sustains the infrared emission of the dust in the wake and its estimated temperature, which lies slightly above the 8-25 K that our TTS could maintain at a distance \(R_{0}\). The supernova remnant may be the asymmetric shell detected in radio maps from the Canadian Galactic Plane Survey (CGPS), which remarkably matches a deep minimum in HI emission from the Effelsberg-Bonn HI Survey (EBHIS) at a kinematic distance similar to that of the proposed UJT-1 natal cloud (Fig. 13). Therefore, the interaction is plausible, although it must be very recent because the Wilkin \(R\sim z^{1/3}\) structure of the wake has not been erased. On the other hand, the proposed new small, light dust component at 5-20 \(\mu\)m found by fitting the Draine & Li (2007) model should be consistent with a metallic or carbon grain composition, rather than a dielectric or silicate one (Bohren & Huffman 1983), and should remain in the shocked layers of the bow shock for a long time at \(\sim 300\) K, the temperature expected for dust that is stochastically heated by hot gas (\(\sim 10^{7}\) K, as in the shocked wind layer) (Dwek & Werner 1981).
## 9 Conclusions
We summarize our conclusions grouped into two classes: solid findings and tentative interpretations. As our first solid finding, we have reported UJT-1 as an outstanding example of a RATTS that moves supersonically and leaves behind a conspicuous bow-shock shell with dimensions rarely found in stellar contexts. This remains true despite the uncertainty in distance, which ranges from 2 to 4.5 kpc. Second, this fast-moving TTS is seen through an optical extinction of about five magnitudes. After correcting for this effect, its location in the infrared color-color diagram shifts onto the well-known TTS locus, thus confirming its YSO nature. Third, the discovered system provides an excellent laboratory for studying large-scale interactions between a turbulent wind and the ISM. In particular, the bow-shock profile grows remarkably close to the predictions of the Wilkin model, thus justifying a connection between the star and the wake.
As more speculative, distance- and model-dependent conclusions, we find that a backward extrapolation of the UJT-1 motion apparently leads to a nearby molecular cloud at roughly the high end of the distance range. In addition, the possibility that intermediate-scale disturbances are due to different types of instabilities, vortex shedding, or ISM clump interactions appears conceivable, but these mechanisms are difficult to distinguish. Alternatively, a simpler scenario is to invoke a TTS with a time-variable stellar wind. If this is the case, the reported system has the potential to open a window of hundreds of thousands of years onto the mass-loss history of YSOs that would otherwise remain inaccessible. Finally, we also propose that the large-scale conditions of UJT-1 involve an external heating source that accounts for the hot-dust component required to better fit the SED of the bow-shock tail. This is tentatively attributed to a possible supernova remnant, suggested by a neutral-hydrogen cavity and extended radio features in the field, but it remains to be confirmed.
###### Acknowledgements.
We acknowledge support by the Agencia Estatal de Investigación of the Spanish Ministerio de Ciencia, Innovación y Universidades, grant PID2019-105510GB-C32/AEI/10.13039/501100011033 and FEDER "Una manera de hacer Europa", as well as Programa Operativo FEDER 2014-2020 and Consejería de Economía y Conocimiento de la Junta de Andalucía (Refs. A1123060E00010 and FQM-322). This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. This work is partly based on observations made in the Observatorios de Canarias del IAC with the Nordic Optical Telescope and the Isaac Newton Telescope, operated on the island of La Palma by NOTSA and the Isaac Newton Group of Telescopes, and on observations collected at the Centro Astronómico Hispano-Alemán (CAHA) at Calar Alto, operated jointly by the Junta de Andalucía and the Consejo Superior de Investigaciones Científicas (IAA-CSIC). The authors are also grateful to all other astronomical data repositories used here that space limits do not allow us to enumerate. We additionally express our gratitude to our colleagues José M. Torrelles (ICE-CSIC) and Luis F. Rodríguez (IRyA-UNAM) for valuable discussions during this work.
|
2301.06583 | Diamond-optic enhanced photon collection efficiency for sensing with
nitrogen-vacancy centers | We present a design to increase the amount of collected fluorescence emitted
by nitrogen-vacancy color centers in diamond used for quantum-sensing. An
improvement was measured in collected fluorescence when comparing oppositely
faced emitting surfaces by a factor of 3.8(1). This matches ray-tracing
simulation results. This design therefore improves on the shot noise limited
sensitivity in optical read-out based measurements of for instance magnetic and
electric fields, pressure, temperature and rotations. | Muhib Omar, Andreas Conta, Andreas Westerhoff, Raphael Hasse, Georgios Chatzidrosos, Dmitry Budker, Arne Wickenbrock | 2023-01-16T19:49:42Z | http://arxiv.org/abs/2301.06583v1 | # Diamond-optic enhanced photon collection efficiency for sensing with nitrogen-vacancy centers
###### Abstract
We present a design to increase the amount of collected fluorescence emitted by nitrogen-vacancy color centers in diamond used for quantum sensing. An improvement in collected fluorescence by a factor of 3.8(1) was measured when comparing the two oppositely facing emitting surfaces, matching ray-tracing simulation results. This design therefore improves the shot-noise-limited sensitivity of optical-readout-based measurements of, for instance, magnetic and electric fields, pressure, temperature, and rotation.
1Helmholtz-Institut, GSI Helmholtzzentrum fur Schwerionenforschung, Mainz, Germany
2Department of Physics, Mathematics and Computer Science, Johannes Gutenberg-Universitat Mainz, Mainz, Germany
3Department of Physics, University of California, Berkeley, Berkeley, CA, United States
*[email protected]
## 1 Introduction
Nitrogen-vacancy centers (referred to as NV centers in the following) are used in various applications ranging from high-precision temperature measurements [1], magnetic-field measurements in various modalities, with [2, 3] or without employing microwaves [4] or bias fields [5], and electric-field measurements [6], to quantum computing [7], gyroscopy [8], and bio-sensing [9].
An NV center is a point defect in diamond in which a pair of neighbouring carbon atoms is replaced by a nitrogen atom and a vacancy. It is an atom-like system that can exist in different charge states, called NV\({}^{+}\), NV\({}^{0}\), and NV\({}^{-}\), the latter being favoured for applications due to its optical spin readout [10]. The NV\({}^{-}\) center has a total electron spin of 1, the corresponding electrons being contributed by the NV-center nitrogen itself, the open bonds of the carbon atoms, and another substitutional nitrogen atom in the lattice. The electron spins can be optically pumped into one of the NV's Zeeman sublevels by illuminating the diamond with, for example, 532 nm laser light driving transitions into the phonon-broadened excited state. The spin state can be read out by detecting the amount of (infra)red fluorescence light, owing to spin-selective non-radiative transitions [10]. Driving microwave transitions between the various spin states leads to observable changes in fluorescence, which can be used to measure magnetic fields through the respective transition-frequency shifts via the Zeeman effect.
A fundamental noise limit of fluorescence detection arises from photon shot noise, which depends on the amount of collected light and usually dominates over spin-projection noise, the other fundamental limit. Therefore, increasing the amount of collected light improves the signal-to-noise ratio of such measurements. Several different techniques have been developed to improve the photon-collection efficiency, in both single-NV setups [11, 12, 13] and ensemble experiments [2, 14].
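To make the connection between collection efficiency and sensitivity concrete, the following minimal sketch evaluates a standard CW-ODMR shot-noise sensitivity expression; the contrast, linewidth, and photon-rate values are placeholder assumptions, not measurements from this work.

```python
import numpy as np

h, g, mu_B = 6.626e-34, 2.003, 9.274e-24    # SI constants

def shot_noise_eta(C, dnu, R):
    """Shot-noise-limited magnetic sensitivity (T/sqrt(Hz)) for CW ODMR:
    eta ~ (h/(g*mu_B)) * dnu / (C*sqrt(R)); order-unity prefactors omitted."""
    return h / (g * mu_B) * dnu / (C * np.sqrt(R))

C, dnu = 0.02, 1e6                           # placeholder contrast, linewidth (Hz)
R_front = 1e12                               # placeholder photon rate, front side
eta_front = shot_noise_eta(C, dnu, R_front)
eta_back = shot_noise_eta(C, dnu, 3.8 * R_front)
print(f"improvement: {eta_front/eta_back:.2f}x")   # -> sqrt(3.8) ~ 1.95
```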
In this work, the diamond containing the NV centers, referred to as the sensing diamond, was glued to a cone-shaped diamond piece, referred to as the diamond anvil, which increases the
amount of collected fluorescence light. The sensing diamond was cut to redirect side-emitted fluorescence from the sensing volume into the backward direction via total internal reflection, see Fig. 1. The curved back surface of the diamond anvil reduces losses due to total internal reflection at the diamond-air interface. Detection with a photodiode confirms the improvement factor of 3.8(1) expected from simulations, compared with the oppositely facing exit surface.
## 2 Sample preparation
The sensing diamond, a high-pressure high-temperature (HPHT) sample (Element Six DNV-B14), is specified to contain 13 ppm nitrogen, 3.7 ppm NV\({}^{-}\)-centers and 1.4 ppm NV\({}^{0}\)-centers. This specific sample is \({}^{13}\)C-depleted (99.999% \({}^{12}\)C). The sample was irradiated with 5 \(\mathrm{M}\mathrm{e}\mathrm{V}\) electrons at a dose of \(2\times 10^{19}\,\mathrm{c}\mathrm{m}^{-2}\) and then annealed at 700 \({}^{\circ}\)C for eight hours. Its measured minimal linewidth in a pulsed optically detected magnetic resonance (ODMR) experiment is around 300 kHz.
The shapes of the diamond anvil and the sensing diamond were optimised using the COMSOL Multiphysics software. The simulations were used to evaluate the improvement in fluorescence collection between the back and front sides.
The science diamond is a truncated pyramid whose back surface is a square 0.5 mm on a side, with a height of 0.18 mm and an upper square surface 0.15 mm on a side, see Fig. 1. The base angle of this shape is close to 45 degrees to match the single-crystal diamond anvil manufactured by Dutch Diamond Technologies. This limits the angular distribution of about 90% of the rays exiting the diamond assembly to below 45 degrees with respect to the symmetry axis, see Fig. 2. This means that 90% of the light can be picked up by a lens with a numerical aperture of 0.7, a very weak requirement. The two diamond pieces were joined with a thin layer of Norland Optical Adhesive 170 with a refractive index of 1.7 (the highest-index material that we could find), applied between the anvil and the back surface of the sensing diamond while pressing the pieces together. Effects such as etaloning due to a significant glue-layer thickness were not observed.
In the COMSOL ray-tracing simulation, the number of collected rays was
Figure 1: Diamond assembly images: (a) Sketch of the NV-bearing diamond pyramid (science diamond) and its dimensions on top of the diamond anvil. (b) Photo of the science diamond glued to the diamond anvil. (c) Image of the fluorescence light collected from the back of the diamond anvil, used for alignment purposes and taken with a CCD camera. The circle in the center of the cross is the apex of the fluorescence cone from the laser-beam focal spot, and the four side beams arise from reflections off the side surfaces of the anvil. All dimensions are in mm.
computed for a cylindrical distribution of ray sources inside the sensing diamond, chosen to mimic the shape of the volume excited by the laser light inside the diamond. Three point sources spaced 30 \(\upmu\)m apart along the symmetry axis of the sensing diamond each emitted 2000 rays isotropically. The simulated ratio of rays collected on a photodiode from the back versus the front side, using an 8 mm focal length, 12.7 mm diameter aspheric condenser lens, was 3.8.
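For context on these ray statistics, the fraction of isotropically emitted rays falling within a given numerical aperture can be estimated analytically or with a quick Monte Carlo; this illustrative sketch is not the COMSOL model.

```python
import numpy as np

def fraction_within(half_angle_deg, n_rays=1_000_000, seed=0):
    """Fraction of isotropic rays within half_angle of the +z axis."""
    rng = np.random.default_rng(seed)
    cos_t = rng.uniform(-1.0, 1.0, n_rays)       # isotropic in cos(theta)
    return np.mean(cos_t > np.cos(np.radians(half_angle_deg)))

alpha = np.degrees(np.arcsin(0.7))               # NA 0.7 -> ~44.4 deg half-angle
print(f"analytic: {(1 - np.cos(np.radians(alpha)))/2:.3f}")
print(f"Monte Carlo: {fraction_within(alpha):.3f}")
# Without the redirecting geometry only ~15% of rays per side fall inside
# this cone; the anvil-plus-cut design funnels ~90% of the exiting rays there.
```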
The diamond was mounted on a custom-made PEEK mount during the experiment. To test the thermal durability of the glue joint, we applied around 1.8 W of green laser light in a 0.9 mm diameter beam, focused with an 8 mm focal length lens, for around 10 s onto the sensing diamond from the back side. No degradation of the diamond optical assembly was observed, even at temperatures at which the PEEK material started to deform, estimated to be around 150 \({}^{\circ}\)C based on its glass-transition temperature [15].
## 3 Characterisation measurements
To verify the simulation results, we built a setup to measure the amount of fluorescence light collected from the front and back sides of the diamond simultaneously.
### Experimental setup
The setup is sketched in Fig. 3. A 532 nm laser beam was focused into the sensing diamond using a plano-convex lens with a focal length of \(f=8\) mm. Behind the diamond we initially placed another lens and a notch filter to separate the green light from the (infra)red fluorescence light emitted by the diamond, which was detected with a charge-coupled device (CCD) camera. That way we were able to verify that the diamond was illuminated well on center with the 532 nm light. The camera positioned on the back side recorded the characteristic cross shape shown in Fig. 1 (c). This shape originates from reflections off the side surfaces of the sensing diamond and allows for precise positioning of the diamond relative to the laser beam using an XYZ stage.
Figure 2: Simulated fractional cumulative angular ray distribution as a function of \(\alpha\), the ray angle with respect to the symmetry axis of the diamond anvil with the sensing diamond. The cumulative ray fraction for single-side collection is given as a fraction of the total number of rays emitted per side. The dashed red line indicates the numerical aperture of the collecting lens used in this note; the grey one, the anvil opening angle of 45 degrees.
After alignment, the optics on the back side were replaced with the same type of aspheric condenser lens with \(f=8\) mm focal length, a notch filter, and a photodiode. The fluorescence was then compared in the front and back directions simultaneously. Integrated over the expected fluorescence spectrum, the notch filter (Thorlabs NF533-17) transmits about 2% more than the dichroic mirror (Thorlabs DMLP567), a difference negligible within the measurement error.
### Measurements
Measuring both sides simultaneously for five laser powers equally spaced between 50 and 150 mW gave a mean increase of the collected fluorescence light by a factor of 3.8(1) between the back and front sides. Next, a magnetic field was applied with Helmholtz coils, and we used microwaves to obtain ODMR spectra of the NV centers to visualize the difference, see Fig. 4.
Figure 3: Experimental setup: the 532 nm laser light is focused onto an NV-doped sensing diamond attached to a diamond anvil. The emitted light is focused onto the same model of photodiode on each side. A dichroic mirror is used to block the reflected green light going towards the first photodiode, which records the intensity of the fluorescence light. An interference filter is employed to reject the laser light and transmit the fluorescence light. The right photodiode and lens were replaced with a CCD camera and a longer-focal-length lens, respectively, to capture images used to align the light beam with respect to the diamond.
## 4 Conclusion
We described a design to improve the amount of collected fluorescence light emitted by a nitrogen-vacancy center ensemble in diamond. We were able to experimentally measure an increase by a factor of 3.8(1) between the improved design (back) and the unimproved opposing facet (front). This increase is supported by ray tracing simulations. An additional feature of the design is the improved angular distribution of the fluorescence: it would allow over 90% of the emitted fluorescence to be collected by a lens with a numerical aperture bigger than 0.7. Such lenses are widely available. In sensing applications relying on the collected fluorescence, this improvement results in a shot noise limit lowered by a factor of nearly 2. Further improvements in overall light collection are possible, for example by deploying a reflective coating on the front surface and an anti-reflective coating on the back surface. Including the coatings and neglecting losses, this optic would then allow all emitted photons to be collected, which amounts to an additional increase of more than 40%.
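For a shot-noise-limited measurement the signal-to-noise ratio scales with the square root of the detected photon number, so the quoted factor of nearly 2 follows directly from the measured collection gain; a one-line check (sketch):

```python
import math

collection_gain = 3.8                                   # measured back/front ratio
print(f"SNR gain: {math.sqrt(collection_gain):.2f}")    # ~1.95, i.e. "nearly 2"
```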
## 5 Funding
This work was supported by the European Commission's Horizon Europe Framework Program under the Research and Innovation Action MUQUABIS GA no.10107054 and by the German Federal Ministry of Education and Research (BMBF) within the MILIQUANT project no. 13N15062 and DIAQNOS project no. 13N16455.
## 6 Acknowledgements
We thank Dr. Till Lenz, Joseph Shaji Rebeirro and Omkar Dhungel for many fruitful discussions concerning this project.
## 7 Disclosures
The authors declare no conflicts of interest.
Figure 4: Comparison of the signal improvement due to the higher photon collection efficiency, exemplified by optically detected magnetic resonance spectra, both normalised to the back-surface peak signal value.
## 8 Data availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
|
2303.10598 | StyleRF: Zero-shot 3D Style Transfer of Neural Radiance Fields | 3D style transfer aims to render stylized novel views of a 3D scene with
multi-view consistency. However, most existing work suffers from a three-way
dilemma over accurate geometry reconstruction, high-quality stylization, and
being generalizable to arbitrary new styles. We propose StyleRF (Style Radiance
Fields), an innovative 3D style transfer technique that resolves the three-way
dilemma by performing style transformation within the feature space of a
radiance field. StyleRF employs an explicit grid of high-level features to
represent 3D scenes, with which high-fidelity geometry can be reliably restored
via volume rendering. In addition, it transforms the grid features according to
the reference style which directly leads to high-quality zero-shot style
transfer. StyleRF consists of two innovative designs. The first is
sampling-invariant content transformation that makes the transformation
invariant to the holistic statistics of the sampled 3D points and accordingly
ensures multi-view consistency. The second is deferred style transformation of
2D feature maps which is equivalent to the transformation of 3D points but
greatly reduces memory footprint without degrading multi-view consistency.
Extensive experiments show that StyleRF achieves superior 3D stylization
quality with precise geometry reconstruction and it can generalize to various
new styles in a zero-shot manner. | Kunhao Liu, Fangneng Zhan, Yiwen Chen, Jiahui Zhang, Yingchen Yu, Abdulmotaleb El Saddik, Shijian Lu, Eric Xing | 2023-03-19T08:26:06Z | http://arxiv.org/abs/2303.10598v3 | # StyleRF: Zero-shot 3D Style Transfer of Neural Radiance Fields
###### Abstract
3D style transfer aims to render stylized novel views of a 3D scene with multi-view consistency. However, most existing work suffers from a three-way dilemma over accurate geometry reconstruction, high-quality stylization, and being generalizable to arbitrary new styles. We propose StyleRF (Style Radiance Fields), an innovative 3D style transfer technique that resolves the three-way dilemma by performing style transformation within the feature space of a radiance field. StyleRF employs an explicit grid of high-level features to represent 3D scenes, with which high-fidelity geometry can be reliably restored via volume rendering. In addition, it transforms the grid features according to the reference style which directly leads to high-quality zero-shot style transfer. StyleRF consists of two innovative designs. The first is sampling-invariant content transformation that makes the transformation invariant to the holistic statistics of the sampled 3D points and accordingly ensures multi-view consistency. The second is deferred style transformation of 2D feature maps which is equivalent to the transformation of 3D points but greatly reduces memory footprint without degrading multi-view consistency. Extensive experiments show that StyleRF achieves superior 3D stylization quality with precise geometry reconstruction and it can generalize to various new styles in a zero-shot manner. Project website: [https://kunhao-liu.github.io/StyleRF/](https://kunhao-liu.github.io/StyleRF/)
## 1 Introduction
Given a set of multi-view images of a 3D scene and an image capturing a target style, 3D style transfer aims to generate novel views of the 3D scene that have the target style consistently across the generated views (Fig. 1). Neural style transfer has been investigated extensively, and state-of-the-art methods allow transferring arbitrary styles in a
zero-shot manner. However, most existing work focuses on style transfer across 2D images [24, 15, 21] but cannot extend to a 3D scene that has arbitrary new views. Prior studies [39, 37, 22, 19] have shown that naively combining 3D novel view synthesis and 2D style transfer often leads to multi-view inconsistency or poor stylization quality, and 3D style transfer should optimize novel view synthesis and style transfer jointly.
However, the current 3D style transfer is facing a three-way dilemma over accurate geometry reconstruction, high-quality stylization, and being generalizable to new styles. Different approaches have been investigated to resolve the three-way dilemma. For example, multiple style transfer [11, 22] requires a set of pre-defined styles but cannot generalize to unseen new styles. Point-cloud-based style transfer [19, 37] requires a pre-trained depth estimation module that is prone to inaccurate geometry reconstruction. Zero-shot style transfer with neural radiance fields (NeRF) [8] cannot capture detailed style patterns and textures as it implicitly injects the style information into neural network parameters. Optimization-based style transfer [17, 39, 63] suffers from slow optimization and cannot scale with new styles.
In this work, we introduce **StyleRF** to resolve the three-way dilemma by performing style transformation in the feature space of a radiance field. A radiance field is a continuous volume that can restore more precise geometry than point clouds or meshes. In addition, transforming a radiance field in the feature space is more expressive with better stylization quality than implicit methods [8], and it can also generalize to arbitrary styles. We construct a 3D scene representation with a grid of deep features to enable feature transformation. In addition, multi-view consistent style transformation in the feature space could be achieved by either transforming the whole feature grid or transforming the sampled 3D points. We adopt the latter as the former incurs much more computational cost during training to stylize the whole feature grid in every iteration, whereas the latter can reduce computational cost through decreasing the size of training patch and the number of sampled points. However, applying off-the-shelf style transformations to a batch of sampled 3D points impairs the multi-view consistency as they are conditioned on the holistic statistics of the batch. Beyond that, transforming every sampled 3D point is memory-intensive since NeRF needs to query hundreds of sampled points along each ray for rendering a single pixel.
We decompose the style transformation into sampling-invariant content transformation (SICT) and deferred style transformation (DST), the former eliminating the dependency on holistic statistics of sampled point batch and the latter deferring style transformation to 2D feature maps for better efficiency. In SICT, we introduce volume-adaptive normalization that learns the mean and variance of the whole volume instead of computing them from a sampled batch. In addition, we apply channel-wise self-attention to transform each 3D point independently to make it conditioned on the feature of that point regardless of the holistic statistics of the sampled batch. In DST, we defer the style transformation to the volume-rendered 2D feature maps based on the observation that the style transformation of each point is the same. By formulating the style transformation by pure matrix multiplication and adaptive bias addition, transforming 2D feature maps is mathematically equivalent to transforming 3D point features but it saves computation and memory greatly. Thanks to the memory-efficient representation of 3D scenes and deferred style transformation, our network can train with \(256\times 256\) patches directly without requiring sub-sampling like previous NeRF-based 3D style transfer methods [8, 11, 22].
The contributions of this work can be summarized in three aspects. _First_, we introduce StyleRF, an innovative zero-shot 3D style transfer framework that can generate zero-shot high-quality 3D stylization via style transformation within the feature space of a radiance field. _Second_, we design sampling-invariant content transformation and deferred style transformation, the former achieving multi-view consistent transformation by eliminating dependency on holistic statistics of sampled point batch while the latter greatly improves stylization efficiency by deferring style transformation to 2D feature maps. _Third_, extensive experiments show that StyleRF achieves superior 3D style transfer with accurate geometry reconstruction, high-quality stylization, and great generalization to new styles.
## 2 Related Work
**Neural scene representations.** 3D scene representation has been extensively studied in recent years with different ways of representations such as volumes [23, 46, 49, 59, 27], point clouds [1, 45], meshes [25, 55], depth maps [20, 30], and implicit functions [7, 34, 41, 61]. These methods adopt differentiable rendering which enables model optimization by using 2D multi-view images. Among them, Neural Radiance Field (NeRF) [36] can render a complex 3D scene with high fidelity and accurate geometry. It represents scenes with an implicit coordinate function that maps each 3D coordinate to a density value and a color value, and employs volume rendering to generate images of novel views. However, the implicit coordinate function is represented by a large multilayer perceptron (MLP) that is often hard to optimize and slow to infer. Serval studies adopt a hybrid representation [33, 4, 10, 14, 31, 38, 44, 52, 62] to speed up the reconstruction and rendering. They employ explicit data structures such as discrete voxel grids [14, 52], decomposed tensors [3, 13, 4], hash maps [38], etc. to store features or spherical harmonics, enabling fast convergence and inference.
Although most existing work extracts features as middle-level representations of scenes, the extracted features are usually an intermediate output of neural networks which have little semantic meanings and are not suitable for the style transfer task. We introduce decomposed tensors [4] to store high-level features extracted by pre-trained CNNs, which enables transformations in feature space as well as efficient training and inference. Though [3, 16, 40] also render feature maps instead of RGB maps, they are computationally intensive and usually work with low-resolution feature maps. StyleRF can instead render full-resolution feature maps (the same as the output RGB images) efficiently and it uses high-level features largely for transformation only.
**Neural style transfer.** Neural style transfer aims at rendering a new image that contains the content structure of one image and the style patterns of another. The seminal work in [15] shows that multi-level feature statistics extracted from intermediate layers of pre-trained CNNs could be used as a representation of the style of an artistic image, but it treats style transfer as a slow and iterative optimization task. [9, 21, 24, 28, 29, 32, 50, 58] utilize feed-forward networks to approximate the optimization procedure to speed up rendering. Among them, [9, 21, 28, 29, 32, 43, 50, 58] can achieve zero-shot style transfer by applying transformations to the high-level features extracted by pre-trained CNNs, where the feature transformations can be achieved by matching second-order statistics [21, 29], linear transformation [28, 58], self-attention transformation [9, 32, 43], etc. Video style transfer extends style transfer to videos for injecting target styles consistently across adjacent video frames. Several studies leverage optical flow [5, 18, 47, 57] as temporal constraints to estimate the movement of video contents. They can produce smooth videos, but have little knowledge of the underlying 3D geometry and cannot render consistent frames in arbitrary views [19, 37].
Huang et al. first tackle stylizing complex 3D scenes [19]. They construct a 3D scene by back-projecting image features into the 3D space to form a point cloud and then perform style transformation on the features of 3D points. Their method can achieve zero-shot style transfer, but requires an error-prone pre-trained depth estimator to model scene geometry. [37] also constructs a point cloud for stylization but it mainly focuses on monocular images. Instead, [6, 8, 11, 22, 39, 63] use NeRF [36] as the 3D representation which can reconstruct scene geometry more faithfully. [6] is a photorealistic style transfer method that can only transfer the color tone of style images. [39, 63] achieve 3D style transfer via optimization and can produce visually high-quality stylization, but they require a time-consuming optimization procedure for every reference style. [11, 22] employ latent codes to represent a set of pre-defined styles, but cannot generalize to unseen styles. [8] can achieve arbitrary style transfer by implicitly instilling the style information into MLP parameters. However, it can only transfer the color tone of style images but cannot capture detailed style patterns. StyleRF can transfer arbitrary style in a zero-shot manner, and it can capture style details such as strokes and textures as well.
## 3 Method
The overview of StyleRF is shown in Fig. 2. For a batch of sampled points along a ray \(\mathbf{r}\), the corresponding features \(F_{i},i\in[1,2,...,N]\) are first extracted from the feature grid described in Sec. 3.1, each of which is transformed to \(\bar{F}_{i}\) independently via _Sampling-Invariant Content Transformation (SICT)_ described in Sec. 3.2.1, regardless of the holistic statistics of the point batch. \(\bar{F}_{i}\) is then transformed to a feature map \(\bar{F}_{c}\) via _Volume Rendering_. After that, the _Deferred Style Transformation (DST)_ described in Sec. 3.2.2 transforms \(\bar{F}_{c}\) to the feature map \(F_{cs}\) adaptively using the sum weight of the sampled points \(w_{\mathbf{r}}\) along the ray \(\mathbf{r}\) and the style information \(T,\mu(F_{s}),\) and \(\sigma(F_{s})\). Finally, a stylized novel view is generated via a CNN decoder.
Figure 2: **The framework of StyleRF.** For a batch of sampled points along a ray \(\mathbf{r}\), the corresponding features \(F_{i},i\in[1,2,...,N]\) are first extracted, each of which is transformed to \(\bar{F}_{i}\) independently via _Sampling-Invariant Content Transformation_, regardless of the holistic statistics of the point batch. \(\bar{F}_{i}\) is then transformed to a feature map \(\bar{F}_{c}\) via _Volume Rendering_. After that, the _Deferred Style Transformation_ transforms \(\bar{F}_{c}\) to the feature map \(F_{cs}\) adaptively using the sum weight of the sampled points \(w_{\mathbf{r}}\) along the ray \(\mathbf{r}\) and the style information \(T,\mu(F_{s}),\) and \(\sigma(F_{s})\). Finally, a stylized novel view is generated via a CNN decoder.
### Feature Grid 3D Representation
To model a 3D scene with deep features, we use a continuous volumetric field of density and radiance. Different from the original NeRF [36], for every queried 3D position \(x\in\mathbb{R}^{3}\), we get a volume density \(\sigma(x)\) and a multi-channel feature \(F(x)\in\mathbb{R}^{C}\) instead of an RGB color, where \(C\) is the number of the feature channels. Then we can get the feature of any rays \(\mathbf{r}\) passing through the volume by integrating sampled points along the ray via approximated volume rendering [36]:
\[F(\mathbf{r})=\sum_{i=1}^{N}w_{i}F_{i}, \tag{1}\]
\[\text{where}\quad w_{i}=\exp\left(-\sum_{j=1}^{i-1}\sigma_{j}\delta_{j}\right) \left(1-\exp\left(-\sigma_{i}\delta_{i}\right)\right), \tag{2}\]
where \(\sigma_{i},F_{i}\) denote the volume density and feature of sampled point \(i\), \(w_{i}\) denotes the weight of \(F_{i}\) in the ray \(\mathbf{r}\), and \(\delta_{i}\) is the distance between adjacent samples. We disable the view-dependency effect for better multi-view consistency.
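As an illustration of Eqs. (1)-(2), the following PyTorch-style sketch renders per-point features into a per-ray feature. It is a minimal re-implementation of the standard volume-rendering weights, not the authors' released code, and the tensor shapes are illustrative.

```python
import torch

def render_features(sigma, feats, deltas):
    """Volume-render per-sample features along rays (Eqs. 1-2).
    sigma: (R, N) densities; feats: (R, N, C) features; deltas: (R, N) spacings."""
    alpha = 1.0 - torch.exp(-sigma * deltas)            # per-sample opacity
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)  # transmittance through samples
    trans = torch.roll(trans, 1, dims=-1)
    trans[:, 0] = 1.0                                   # nothing absorbed before sample 1
    weights = trans * alpha                             # w_i of Eq. (2)
    return (weights.unsqueeze(-1) * feats).sum(dim=1), weights  # Eq. (1)

# toy usage
rendered, w = render_features(torch.rand(4, 64), torch.randn(4, 64, 256),
                              torch.full((4, 64), 1e-2))
```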
Then we can generate feature maps capturing high-level features and map them to RGB space using a 2D CNN decoder. However, unlike [3, 16, 40], we render full-resolution feature maps which have the same resolution as the final RGB images rather than down-sampled feature maps. Rendering full-resolution feature maps has two unique features: **1)** it discards up-sampling operations which cause multi-view inconsistency in general [16], **2)** it removes aliasing when rendering low-resolution feature maps [2] which causes severe flickering effects in stylized RGB videos.
Directly using 3D voxel grid to store features is memory-intensive. We thus adopt vector-matrix tensor decomposition [4] that relaxes the low-rank constraints for two modes of a 3D tensor and factorizes tensors into compact vector and matrix factors, which lowers the space complexity from \(\mathcal{O}(n^{3})\) to \(\mathcal{O}(n^{2})\), massively reducing the memory footprint. We employ a density grid to store volume density and a feature grid to store multi-channel features respectively.
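A rough entry count illustrates the \(\mathcal{O}(n^{3})\to\mathcal{O}(n^{2})\) saving (a sketch: the rank \(r\) is an assumed value, and the per-channel bookkeeping of the actual factorization is omitted).

```python
n, r = 300, 16                        # grid resolution and VM rank (assumed values)
dense = n ** 3                        # entries of a single-channel dense voxel grid
vm = 3 * r * (n + n * n)              # 3 modes, each with r vectors and r matrices
print(f"compression: {dense / vm:.1f}x")   # roughly n / (3 r)
```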
### Feature Transformation for Style Transfer
Once we have the feature grid representation of a scene, we can tackle the task of stylizing 3D scenes. Given a reference style image, our goal is to render stylized novel views of the 3D scene with multi-view consistency. To achieve this, we apply transformations to the features of the grid.
One plausible solution to this task is to apply style transfer to the feature grid directly. This solution is efficient in evaluations as it can render any stylized views with a single style transfer process only. However, it is impractical to train such transformation as it needs to stylize the whole feature grid in every iteration. Another solution is to apply an off-the-shelf zero-shot style transfer method to the features of the sampled 3D points. While this solution can reduce computational cost through decreasing the size of training patch and the number of sampled points, it has two problems: **1)** vanilla zero-shot style transformation is conditioned on holistic statistics of the sampled point batch [21, 32, 28], which violates multi-view consistency in volume rendering as the feature transformation of a specific 3D point will vary across different sampled points; **2)** volume rendering requires sampling hundreds of points along a single ray, which makes transformation on the point batch memory-intensive.
Motivated by the observation that style transformation is conditioned on both content information and style information, we decompose the style transformation into sampling-invariant content transformation (SICT) and deferred style transformation (DST). After the decomposition, SICT will be conditioned solely on the content information while DST conditioned solely on the style information, more details to be elaborated in the ensuing subsections.
#### 3.2.1 Sampling-invariant Content Transformation
Given a batch of sampled points, we can get their corresponding features \(F_{i}\in\mathbb{R}^{C},i\in[1,2,...,N]\) from the feature grid, where \(N\) is the number of the sampled points along a ray and \(C\) is the number of the feature channels. The goal of SICT is to transform the extracted features \(F_{i}\) so that they can be better stylized. We formulate SICT as a channel-wise self-attention operation to the features after instance normalization (IN) [54]. Specifically, we formulate \(Q\)(query), \(K\)(key), and \(V\)(value) as:
\[Q=q(Norm(F_{i})), \tag{3}\]
\[K=k(Norm(F_{i})), \tag{4}\]
\[V=v(Norm(F_{i})), \tag{5}\]
where \(q,k,v\) are \(1\times 1\) convolution layers which reduce the channel number from \(C\) to \(C^{\prime}\) for computational efficiency, and \(Norm\) denotes the IN. However, as shown in Fig. 3,
Figure 3: Comparison between vanilla instance normalization (IN) in (a) and volume-adaptive IN in (b). During evaluation, volume-adaptive IN uses learned mean and standard-deviation, discarding dependency over the sampled point batch’s holistic statistics (indicated by the red arrows in the left graph).
vanilla IN calculates per-dimension mean and standard-deviation of the batch of sampled points, which varies with different sampled points and incurs multi-view inconsistency accordingly. Thus we design volume-adaptive IN which, during training, keeps running estimates of the computed mean and standard-deviation, and uses them for normalization during evaluations (instead of computing from the sampled point batch). Through volume-adaptive IN, we can ensure that the content transformation is consistent regardless of the sampled point batch's holistic statistics.
Channel-wise self-attention can thus be implemented by:
\[\bar{F}_{i}=V\otimes\mathrm{Softmax}\left(\widetilde{cov}(Q,K)\right), \tag{6}\]
where \(\otimes\) denotes matrix multiplication and \(\widetilde{cov}(Q,K)\in\mathbb{R}^{N\times C^{\prime}\times C^{\prime}}\) denotes the covariance matrix in the channel dimension.
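A minimal PyTorch sketch of SICT under our reading of Eqs. (3)-(6) follows. The channel sizes are illustrative, the output lives in the reduced \(C^{\prime}\)-dimensional space, and volume-adaptive IN is emulated here with BatchNorm's running statistics (per-batch statistics during training, stored ones at eval time); this is an assumption about the implementation, not the authors' exact module.

```python
import torch
import torch.nn as nn

class SICT(nn.Module):
    def __init__(self, c_in=256, c_red=32):
        super().__init__()
        # Running mean/var stand in for volume-adaptive IN at evaluation time.
        self.norm = nn.BatchNorm1d(c_in, affine=False)
        self.q = nn.Linear(c_in, c_red, bias=False)  # 1x1 convs act per point,
        self.k = nn.Linear(c_in, c_red, bias=False)  # so Linear layers suffice here
        self.v = nn.Linear(c_in, c_red, bias=False)

    def forward(self, feats):                 # feats: (N, C) sampled-point features
        x = self.norm(feats)
        q, k, v = self.q(x), self.k(x), self.v(x)
        cov = q.unsqueeze(-1) * k.unsqueeze(-2)     # (N, C', C'), one per point
        attn = torch.softmax(cov, dim=-1)
        return torch.einsum('ncd,nd->nc', attn, v)  # Eq. (6), points independent

out = SICT()(torch.randn(128, 256))
```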
#### 3.2.2 Deferred Style Transformation
After applying SICT to the features of each 3D point, we apply DST to the volume-rendered 2D feature maps \(\bar{F}_{c}\) rather than 3D point features \(\bar{F}_{i}\). To ensure multi-view consistency, we formulate the transformation as matrix multiplication and adaptive bias addition as illustrated in Fig. 4.
Specifically, we first extract feature maps \(F_{s}\) of the reference style \(S\) using a pre-trained VGG [51], and then generate the style transformation matrix \(T\in\mathbb{R}^{C^{\prime}\times C^{\prime}}\) using feature covariance \(cov(F_{s})\) following [28]. Next, we apply matrix multiplication with \(T\) to the feature maps \(\bar{F}_{c}\) and use a \(1\times 1\) convolution layer \(conv\) without bias to restore the channel number from \(C^{\prime}\) to \(C\). Though these operations can partially instill style information, they are not expressive enough without bias addition containing style information [58]. Thus following [21], we multiply the feature maps with the standard-deviation value \(\sigma(F_{s})\) and add the mean value \(\mu(F_{s})\). To ensure it is equivalent when applying the transformation to either 3D point features or 2D feature maps, we adaptively modulate the mean value \(\mu(F_{s})\) with the sum weight of sampled points along each ray \(w_{\mathbf{r}}\). DST can be mathematically formulated by:
\[F_{cs}=conv\left(T\otimes\bar{F}_{c}\right)\times\sigma(F_{s})+w_{\mathbf{r} }\times\mu(F_{s}), \tag{7}\]
\[\text{where}\quad\bar{F}_{c}=\sum_{i=1}^{N}w_{i}\bar{F}_{i},w_{\mathbf{r}}= \sum_{i=1}^{N}w_{i},\mathbf{r}\in\mathcal{R} \tag{8}\]
where \(w_{i}\) denotes the weight of sampled point \(i\) (Eq. (2)), \(\bar{F}_{i}\) denotes the feature of sample \(i\) after SICT, and \(\mathcal{R}\) is the set of rays in each training batch.
Note \(conv\) is a \(1\times 1\) convolution layer without bias, so it is essentially a matrix multiplication operation, and \(\sigma(F_{s}),\mu(F_{s})\) are scalars. Together with the adaptive bias modulation \(w_{\mathbf{r}}\), Eq. (7) can be reformulated as:
\[F_{cs}=\sum_{i=1}^{N}w_{i}\left(\underbrace{conv\left(T\otimes\bar{F}_{i} \right)\times\sigma(F_{s})+\mu(F_{s})}_{\text{(i)}}\right), \tag{9}\]
where part (i) can be seen as applying style transformation on every 3D point feature independently before volume rendering. This proves that applying DST on 2D feature maps is equivalent to applying the transformation on 3D points' features, maintaining multi-view consistency. The full derivation of Eq. (9) is provided in the appendix.
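The equivalence between Eq. (7) and Eq. (9) is pure linearity and can be checked numerically; a small NumPy sketch (dimensions and the scalar style statistics are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, Cr, C = 8, 6, 10                    # samples per ray, reduced / full channels
w = rng.random(N)                      # per-sample weights w_i from Eq. (2)
Fbar = rng.standard_normal((N, Cr))    # per-point features after SICT
T = rng.standard_normal((Cr, Cr))      # style transformation matrix
W = rng.standard_normal((C, Cr))       # bias-free 1x1 conv as a plain matrix
sigma_s, mu_s = 1.7, 0.3               # style std / mean (scalars)

# Eq. (7): stylize the volume-rendered feature, modulating the bias by sum(w)
Fc = (w[:, None] * Fbar).sum(axis=0)
lhs = W @ (T @ Fc) * sigma_s + w.sum() * mu_s

# Eq. (9): stylize every 3D point feature first, then volume-render
rhs = sum(w_i * (W @ (T @ f_i) * sigma_s + mu_s) for w_i, f_i in zip(w, Fbar))
assert np.allclose(lhs, rhs)           # identical by linearity
```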
Finally, we adopt a 2D CNN decoder to project the stylized feature maps \(F_{cs}\) to RGB space to generate the final stylized novel view images.
### Two-stage Model Training
The training of our model is divided into the _feature grid training stage_ and the _stylization training stage_, the former is trained with the target of novel view synthesis, and the latter is trained with the target of style transfer.
**Feature grid training stage (First stage).** We first learn the feature grid 3D representation for the novel view synthesis task, in preparation for performing feature transformation for style transfer. We train the feature grid and the 2D CNN decoder simultaneously, with the supervision of both RGB images and their bilinearly up-sampled feature maps extracted from the ReLU3_1 layer of a pre-trained VGG [51]. By aligning the VGG features with the feature grid, the reconstructed features acquire semantic information. We use a density grid pre-trained solely on RGB images since the supervising feature maps are not strictly multi-view consistent. The training objective is the mean square error (MSE) between the predicted and ground truth feature maps and RGB images. Following [19, 37], we use perceptual
Figure 4: **Deferred style transformation. We apply the style transformation to the volume-rendered feature maps \(\bar{F}_{c}\) according to the style feature maps \(F_{s}\). To ensure multi-view consistency, we modulate the bias (e.g. the mean value of the style feature maps \(\mu(F_{s})\)) with the sum weight of sampled points along each ray \(w_{\mathbf{r}}\).**
loss [24] as additional supervision to increase reconstructed image quality. The overall loss function is:
\[\mathcal{L}_{grid}=\sum_{\mathbf{r}\in\mathcal{R}}\left\|\hat{F}(\mathbf{r})-F(\mathbf{r})\right\|_{2}^{2}+\left\|\hat{I}_{\mathcal{R}}-I_{\mathcal{R}}\right\|_{2}^{2}+\sum_{l\in l_{p}}\left\|\mathcal{F}^{l}(\hat{I}_{\mathcal{R}})-\mathcal{F}^{l}(I_{\mathcal{R}})\right\|_{2}^{2}, \tag{10}\]
where \(\mathcal{R}\) is the set of rays in each training batch, \(\hat{F}(\textbf{r}),F(\textbf{r})\) are the predicted and ground truth feature of ray **r**, \(\hat{I}_{\mathcal{R}},I_{\mathcal{R}}\) are the predicted and ground truth RGB image, \(l_{p}\) denotes the set of VGG layers that compute perceptual loss, \(\mathcal{F}^{l}\) denotes the feature maps of the \(l\)th layer of pre-trained VGG network.
**Stylization training stage (Second stage).** Our model learns to stylize novel views in the second stage. We freeze the feature grid, train the style transfer module, and fine-tune the CNN decoder. Thanks to the memory-efficient representation of 3D scenes and DST, unlike [8, 11, 48], our model can be trained directly on \(256\times 256\) patches, making the patch sub-sampling algorithms [8, 11, 22, 48] unnecessary. We use the same loss as [21], where the content loss \(\mathcal{L}_{c}\) is the MSE of the feature maps and the style loss \(\mathcal{L}_{s}\) is the MSE of the channel-wise feature means and standard-deviations:
\[\mathcal{L}_{stylization}=\mathcal{L}_{c}+\lambda\mathcal{L}_{s}, \tag{11}\]
where \(\lambda\) balances the content preservation and the stylization effect.
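A sketch of the second-stage objective in Eq. (11), following the AdaIN-style losses of [21] on a single VGG layer (the real setup may use several layers; \(\lambda\) and the tensor shapes are illustrative):

```python
import torch
import torch.nn.functional as F

def stylization_loss(feat_cs, feat_c, feat_s, lam=10.0):
    """feat_*: (B, C, H, W) VGG features of stylized output, content, and style."""
    loss_c = F.mse_loss(feat_cs, feat_c)                      # content term L_c
    mu = lambda f: f.mean(dim=(2, 3))                         # channel-wise mean
    sd = lambda f: f.std(dim=(2, 3))                          # channel-wise std
    loss_s = (F.mse_loss(mu(feat_cs), mu(feat_s))
              + F.mse_loss(sd(feat_cs), sd(feat_s)))          # style term L_s
    return loss_c + lam * loss_s
```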
## 4 Experiments
We evaluate StyleRF extensively with qualitative experiments in Sec. 4.1, quantitative experiments in Sec. 4.2 and ablation studies in Sec. 4.3. We demonstrate two applications of StyleRF in Sec. 4.4. The implementation details are provided in the appendix.
### Qualitative Experiments
We evaluate StyleRF over two public datasets including LLFF [35] that contains real scenes with complex geometry structures and Synthetic NeRF [36] that contains \(360^{\circ}\) views of objects. In addition, we benchmark StyleRF with two state-of-the-art zero-shot 3D style transfer methods LSNV [19] and Hyper [8] with their released codes. We perform comparisons on the LLFF dataset [35].
Fig. 5 shows qualitative comparisons. We can see that StyleRF achieves clearly better stylization with more precise geometry reconstruction. Specifically, StyleRF can generate high-definition stylization with realistic textures and patterns of style images. The superior stylization is largely attributed to our transformation design that allows working in the feature space with full-resolution feature maps. As illustrated in the highlight boxes, StyleRF can successfully restore the intricate geometry of complex scenes thanks to its radiance field representations. In addition, only StyleRF faithfully transfers the squareness texture in the second style image. Furthermore, StyleRF can robustly generalize to new styles in a zero-shot manner and can adapt well to \(360^{\circ}\) dataset as illustrated in Fig. 1. As a comparison, LSNV [19] fails to capture fine-level geometry like the bones of the T-Rex and the petals of the flower while Hyper [8] produces very blurry stylization.
### Quantitative Results
3D style transfer is a very new and under-explored task, and there are few metrics for quantitative evaluation of stylization quality. Hence, we evaluate the multi-view consistency only. In our experiments, we warp one view to the other according to the optical flow [53] using softmax splatting [42], and then compute the masked RMSE score and LPIPS score [64] to measure the stylization consistency. Following [8, 11, 19], we compute the short-range and long-range consistency scores which compare adjacent views and far-away views respectively. We compare StyleRF against two state-of-the-art zero-shot 3D style transfer methods Hyper [8] and LSNV [19], one SOTA single-frame-based video style transfer method CCPL [60], one SOTA multi-frame-based video style transfer method ReReVST [57], and one classical image style transfer method AdaIN [21].
It can be seen from Tab. 1 that StyleRF significantly outperforms image style transfer approach [21] and video style transfer approach [57, 60] which capture little information about the underlying 3D geometry. In addition, StyleRF achieves better consistency than point-cloud-based 3D style transfer [19] as well. Note Hyper [8] achieves slightly better LPIPS and RMSE scores than our method, largely because it produces over-smooth results and inadequate stylization as shown in Fig. 5.
### Ablation Studies
We design two innovative techniques to improve the stylization quality and maintain the multi-view consistency.
| Method | Short-range LPIPS (\(\downarrow\)) | Short-range RMSE (\(\downarrow\)) | Long-range LPIPS (\(\downarrow\)) | Long-range RMSE (\(\downarrow\)) |
| --- | --- | --- | --- | --- |
| AdaIN [21] | 0.152 | 0.123 | 0.220 | 0.186 |
| CCPL [60] | 0.110 | 0.106 | 0.191 | 0.174 |
| ReReVST [57] | 0.098 | 0.080 | 0.186 | 0.146 |
| LSNV [19] | 0.093 | 0.092 | 0.181 | 0.155 |
| Hyper [8] | 0.084 | 0.068 | 0.131 | 0.101 |
| **Ours** | 0.072 | 0.082 | 0.149 | 0.137 |

Table 1: **Results on consistency.** We compare StyleRF with the state-of-the-art on consistency using LPIPS (\(\downarrow\)) and RMSE (\(\downarrow\)).
The first is volume-adaptive instance normalization which uses the learned mean and variance of the whole volume during inference, eliminating the dependency on holistic statistics of the sampled point batch. The second is the adaptive bias addition in DST, which improves the stylization quality using bias capturing style information. We evaluate the two designs to examine how they contribute to the overall stylization of our method.
**Volume-adaptive instance normalization.** We compare our volume-adaptive instance normalization (IN) with vanilla IN and StyleRF without IN. As Fig. 6 (c) shows, vanilla IN produces severe block-shape artifacts as the transformation of each batch is conditioned on the holistic statistics of itself, thus each batch (i.e. block in the image) produces inconsistent stylization which leads to the artifacts. However, if we discard IN as shown in Fig. 6 (d), the multi-view consistency can maintain but the stylization quality compromises a lot, failing to capture the correct color tone of the reference style image. This is because IN removes the original style information of the content image which facilitates the transfer of the reference style [21].
**Adaptive bias addition.** As illustrated in Fig. 6 (b), the stylization quality degrades a lot if we eliminate the adaptive bias addition in DST (Sec. 3.2.2), producing unnatural
Figure 5: Comparison of StyleRF with two state-of-the-art zero-shot 3D style transfer methods LSNV [19] and Hyper [8]. For each of the two sample _Scenes_ and reference _Styles_, StyleRF produces clearly better 3D style transfer and depth estimation. Check zoom-in for details.
stylization compared to the stylization of our full pipeline in Fig. 6 (a). This is because bias usually contains crucial style information such as the overall color tone [58]. StyleRF employs bias addition that is adaptively modulated by the weight of each ray, improving the stylization quality and keeping multi-view consistency concurrently.
### Applications
StyleRF can be easily extended along different directions with different applications. We provide two possible extensions in the ensuing subsections.
**Multi-style interpolation.** StyleRF can smoothly interpolate different styles thanks to its high-level feature representation of a 3D scene. As illustrated in Fig. 7, we linearly interpolate the feature maps of a specific view by using four different styles at four corners. Unlike previous NeRF-based 3D style transfer that supports style interpolation by interpolating one-hot latent vectors [11], StyleRF can interpolate arbitrary numbers of unseen new styles by interpolating features of the scene, yielding more smooth and harmonious interpolation. Hence, StyleRF can not only transfer arbitrary styles in a zero-shot manner but also generate non-existent stylization via multi-style interpolation.
**Compositional 3D style transfer.** Thanks to its precise geometry reconstruction, StyleRF can be seamlessly integrated with NeRF-based object segmentation [12, 26, 65] for compositional 3D style transfer. As shown in Fig. 8, we apply 3D-consistent segmentation masks to the feature maps and apply different styles to stylize the contents inside and outside the masks separately. We can see that the edges of the masks can be blended more softly by applying the segmentation masks to the feature maps instead of RGB images. Due to its zero-shot nature, StyleRF can create infinite combinations of styles without additional training, producing numerous artistic creations and inspirations.
## 5 Conclusion
In this paper, we present StyleRF, a novel zero-shot 3D style transfer method that resolves the three-way dilemma over accurate geometry reconstruction, high-quality stylization, and being generalizable to arbitrary new styles. By representing the 3D scene with an explicit grid of high-level features, we can faithfully restore high-fidelity geometry through volume rendering. Then we perform style transfer on the feature space of the scene, leading to high-quality zero-shot stylization results. We innovatively design sampling-invariant content transformation to maintain multi-view consistency and deferred style transformation to increase efficiency. We demonstrate that StyleRF achieves superior 3D stylization quality than previous zero-shot 3D style transfer methods, and can be extended to various interesting applications for artistic 3D creations.
## Acknowledgement
This project is funded by the Ministry of Education Singapore, under the Tier-1 project scheme with project number RT18/22.
Figure 8: **Compositional 3D style transfer.** Given the 3D-consistent segmentation masks, StyleRF can create infinite combinations of styles by spatial composition.
Figure 6: **Ablation studies.** (a) shows the stylization of our full pipeline. (b) shows the stylization without the adaptive bias. (c) shows the stylization when replacing the volume-adaptive instance normalization (IN) with vanilla IN. (d) shows the stylization without any IN.
Figure 7: **Multi-style interpolation.** StyleRF can smoothly interpolate between arbitrary styles by interpolating features of the scene. |
2301.07689 | Mixed states for neutral current neutrino oscillation | The theory of neutrino oscillation predicts that if both neutrino and
antineutrino coming from $Z_0$ decay are detected, one can observe an
oscillation pattern between the corresponding detectors. This prediction is
based on two properties; the neutrino-antineutrino pairs are produced
coherently and they are detected with definite flavor in detectors. In this
paper, we reanalyze this problem by considering some massive neutrinos which
are mixed with light neutrinos but they either participate incoherently or are
decoupled in the production and detection processes. In fact, neutrinos whose
masses are larger than the upper bound on the mass uncertainty to be compatible
with the coherence conditions (we will see it is about 1 keV) must be treated
incoherently. Very heavy neutrinos whose masses are much larger than the
neutrino energy in the neutrino production process are decoupled. Under these
conditions, the created neutrino-antineutrino state as well as the states of
detected neutrino and antineutrino is mixed. We see that the oscillation
pattern cannot be observed for incoherent neutrinos and the standard
oscillation pattern is recovered if the light neutrino masses are ignored in
the production and detection processes. Moreover, since the $Z_0$ decay process
is performed blindly with respect to flavors, the oscillating contributions in
the event rates are independent of the $Z_0$ decay width. | M. M. Ettefaghi, Z. Askaripour Ravari | 2023-01-18T18:18:39Z | http://arxiv.org/abs/2301.07689v1 | # Mixed states for neutral current neutrino oscillation
###### Abstract
The theory of neutrino oscillation predicts that if both neutrino and antineutrino coming from \(Z_{0}\) decay are detected, one can observe an oscillation pattern between the corresponding detectors. This prediction is based on two properties; the neutrino-antineutrino pairs are produced coherently and they are detected with definite flavor in detectors. In this paper, we reanalyze this problem with considering some massive neutrinos which are mixed with the light neutrinos but they either participate incoherently or are decoupled in the production and detection processes. In fact, neutrinos whose masses are larger than the upper bound on the mass uncertainty to be compatible with the coherence conditions (we will see it is about 1 keV) must be treated incoherently. Very heavy neutrinos whose masses are much larger than the neutrino energy in the neutrino production process are decoupled. Under these conditions, the created neutrino-antineutrino state as well as the states of detected neutrino and antineutrino is mixed. We see that the oscillation pattern cannot be observed for incoherent neutrinos and the standard oscillation pattern is recovered if the light neutrino masses are ignored in the production and detection processes. Moreover, since the \(Z_{0}\) decay process is performed blindly with respect to flavors, the oscillating contributions in the event rates are independent of the \(Z_{0}\) decay width.
Neutral current neutrino-antineutrino production, Neutrino oscillation, Mixed and pure state, Coherent and incoherent process.
## I Introduction
Neutrino oscillation is one of the most interesting phenomena in quantum mechanics and has been experimentally established [1]. The quantum approaches to neutrino oscillation are based on the existence of nonzero and non-degenerate neutrino masses. However, the mass differences are so much smaller than the energy uncertainties in the creation and detection processes that the neutrino mass eigenstates cannot be distinguished. This point is used in the quantum mechanics approach; the states of the created and detected neutrinos (well known as flavor eigenstates) are written as coherent superpositions of mass eigenstates [2]. Of course, the states of neutrinos participating in weak interactions are not exactly identical to the flavor eigenstates [3; 4]. But if the masses of the neutrinos can be ignored, their weak interaction states can be considered as flavor eigenstates. Moreover, flavor neutrinos produced or detected in processes which involve more than one neutrino cannot be separately described by pure states, but require a density matrix description [5]. However, they can be approximated with the density matrix of a pure state only when the differences of the neutrino masses are neglected in the interaction process. In most studies, light active neutrinos (standard model neutrinos) have been considered. Therefore, their states are pure and the oscillation probabilities can be obtained in the framework of either quantum mechanics or quantum field theory [6]. Meanwhile, the effects of the mixing of the three standard light neutrinos with heavy neutrinos, which are either decoupled because their masses are much larger than the maximum neutrino energy in the production and detection processes or produced and detected incoherently because their mass differences are larger than the related energy uncertainties, have been investigated in Ref. [5]. In fact, the standard neutrino oscillation probability is recovered provided that the masses of the light neutrinos are ignored in the production and detection processes.
The neutrino and anti-neutrino state coming from a real or virtual \(Z_{0}\) decay is a coherent superposition of either the flavor eigenstates or mass eigenstates. In fact, every flavor eigenstate as well as every mass eigenstate is created with the same probability. Therefore, one can write, in general, the state of the created neutrino-antineutrino as follows:
\[|\nu_{Z}\rangle=\frac{1}{\sqrt{N_{l}}}\sum_{i=1}^{N_{l}}|\nu_{i}\rangle\,| \bar{\nu}_{i}\rangle=\frac{1}{\sqrt{N_{l}}}\sum_{\alpha=e,\mu,\tau,...}^{N_{l} }|\nu_{\alpha}\rangle|\bar{\nu}_{\alpha}\rangle, \tag{1}\]
"..." denotes all other flavor states until the \(N_{l}\)'th one which all of them are created coherently during the \(Z_{0}\) decay process. The second equality is satisfied provided that the mixing matrix is unitary. If we considered any other
neutrinos either being created incoherently or being decoupled of electroweak interactions, the mixing matrix including only light coherent neutrinos would not be unitary. Furthermore, in the usual condition that only either neutrino or antineutrino in \(|\nu_{Z}\rangle\) can be detected, the other one must be traced out. Therefore, the related density matrix is completely classical and it is impossible to observe usual neutrino oscillation in this condition. However, if both neutrino and anti-neutrino are detected in a coherent manner, a oscillation pattern can be observed between detectors [7]. This problem has been restudied by considering the localization properties in Refs. [8; 9]. Two proper conditions play fundamental roles in obtaining the oscillation pattern; \(|\nu_{Z}\rangle\) is a coherent mixture of neutrino-antineutrino pairs \(\nu_{i}\bar{\nu}_{i}\) and they are detected with definite flavor. Let us consider neutrinos-antineutrinos whose masses are smaller than their energy but larger than the coherence upper bound on neutrino mass uncertainty, which is about 1 keV as we will see in the next section. In this case, their state must be added incoherently to the density matrix given in Eq. (1). Moreover, we assume that there exist heavy neutrino-antineutrinos whose masses are much larger than the neutrino-antineutrino energies coming from \(Z_{0}\) decay. This situation occurs, for example, in see-saw models [10; 11; 12; 13]. These neutrinos and antineutrinos are decoupled but they might affect the oscillation pattern by mixing with active neutrinos. So, under these conditions, we face a new situation compared to the theoretical framework considered in Refs. [7; 8; 9] and it is the scope of this paper.
In the next section, we give an appropriate quantum state describing both coherently and incoherently created neutrino-antineutrino pairs due to the \(Z_{0}\) decay process and take into account the corresponding time evolution. In section III, we consider two mechanisms for neutrino and antineutrino detection via charged current interactions: in the first, both are detected by scattering off nucleons; in the second, the neutrino is scattered off an electron while the antineutrino is detected as before. Accordingly, an appropriate state is written for each case and we discuss the corresponding oscillation probability. Finally, in section IV, we summarize our results.
## II Neutrino-antineutrino states due to \(Z_{0}\) decay
According to Eq. (1), a pair of neutrino and anti-neutrino which are entangled and blind with respect to flavor might be produced during a \(Z_{0}\) decay process. In fact, we disregard the neutrino mass differences and use the unitarity of the mixing matrix in Eq. (1). Let us now consider that the masses of the neutrinos are so large that they cannot be ignored. In this case, the neutrino state must be given by:
\[|\nu_{Z}\rangle=\frac{1}{\sqrt{R_{l}^{P}}}\sum_{k\leq N_{l}}M_{kk}^{P}|\nu_{k }\rangle|\bar{\nu}_{k}\rangle, \tag{2}\]
where \(R_{l}^{P}=\sum_{i\leq N_{l}}|M_{ii}^{P}|^{2}\) and \(M_{ii}^{P}\) denote the amplitude of \(Z_{0}\) decay into a neutrino-antineutrino pair with mass \(m_{i}\). It is clear that if we ignore the mass difference of neutrinos, we reach Eq. (1) that is the standard expression.
Now, if we consider in addition to \(N_{l}\) neutrinos being produced coherently, there exist \(N_{h}\) heavy neutrinos which are produced incoherently, the initial state is mixed and must be described by the following density matrix:
\[\rho=\frac{1}{R_{l}^{P}}\sum_{k,k^{\prime}\leq N_{l}}M_{kk}^{P}M_{k^{\prime}k^{\prime}}^{P*}|\nu_{k},\bar{\nu}_{k}\rangle\langle\nu_{k^{\prime}},\bar{\nu}_{k^{\prime}}|+\frac{1}{R_{h}^{P}}\sum_{k=N_{l}+1}^{N_{l}+N_{h}}|M_{kk}^{P}|^{2}|\nu_{k},\bar{\nu}_{k}\rangle\langle\nu_{k},\bar{\nu}_{k}|, \tag{3}\]
in which
\[R_{h}^{P}=\sum_{i=N_{l}+1}^{N_{l}+N_{h}}|M_{ii}^{P}|^{2}. \tag{4}\]
The first term of Eq. (3) describes the state of the neutrino-antineutrino pairs produced coherently (it contains off-diagonal elements). The second term is related to heavy neutrino-antineutrinos whose mass differences are larger than the quantum-mechanical energy uncertainty and which are produced incoherently. Indeed, from the relativistic energy-momentum dispersion relation, the mass uncertainty of the neutrino (antineutrino) can be estimated by \(\sigma_{m^{2}}\simeq 2\sqrt{2}E\sigma_{E}\), where \(E\) and \(\sigma_{E}\) are the energy and the energy uncertainty, respectively [15]. Given that the \(Z_{0}\) interaction with the environment is neglected, \(\sigma_{E}\) is given by the \(Z_{0}\) decay width. For instance, in the \(Z_{0}\) rest frame we have \(\sigma_{m^{2}}\sim(7\rm GeV)^{2}\). Therefore, neutrino (antineutrino) mass eigenstates \(\nu_{i}\) (\(\bar{\nu}_{i}\)) and \(\nu_{j}\) (\(\bar{\nu}_{j}\)) are created incoherently provided that \(|m_{i}^{2}-m_{j}^{2}|>(7\rm GeV)^{2}\). However, to observe the oscillation phenomenon, the neutrino-antineutrino state must preserve its coherence until the detection processes. To explain the loss of coherence, we need to consider the localization properties of the neutrinos and antineutrinos. Accordingly, they must be described by localized wave packets of width \(\sigma_{x}\), which propagate with
group velocities \(v_{g}\) given by \(v_{g}=\frac{\partial E}{\partial p}=\frac{p}{E}\). The coherence loss takes place during the time \(t_{\rm coh}\) over which the overlap of the wave packets of the various mass eigenstates is diminished, i.e.,
\[t_{\rm coh}\simeq\frac{\sigma_{x}}{\Delta v_{g}}, \tag{5}\]
where \(\Delta v_{g}\) is the group velocity difference of two mass eigenstates with masses \(m_{i}\) and \(m_{j}\):
\[\Delta v_{g}=|\frac{p_{i}}{E_{i}}-\frac{p_{j}}{E_{j}}|\simeq 2\frac{|m_{i}^{2}-m _{j}^{2}|}{m_{Z_{0}}^{2}}. \tag{6}\]
Here, in the last step, we consider the \(Z_{0}\) rest frame and use \(\frac{m_{i(j)}^{2}}{m_{Z_{0}}^{2}}\ll 1\) which is reasonable according to the above discussion. Therefore, the coherence length in the \(Z_{0}\) rest frame is defined by
\[x_{\rm coh}\simeq v_{g}\frac{\sigma_{x}}{\Delta v_{g}}\simeq\frac{m_{Z_{0}}^{ 2}}{2\Gamma_{Z_{0}\to\nu\bar{\nu}}|m_{i}^{2}-m_{j}^{2}|}, \tag{7}\]
in which we use \(\sigma_{x}\simeq\sigma_{p}^{-1}\simeq(\frac{E}{p}\sigma_{E})^{-1}\simeq(\frac{E}{p}\Gamma_{Z_{0}\to\nu\bar{\nu}})^{-1}\). Now, let us suppose the distance between the two detectors to be about 1 m. The coherence of the neutrino-antineutrino state is then preserved up to this distance provided that \(|m_{i}^{2}-m_{j}^{2}|<(1{\rm keV})^{2}\). But in real conditions, the detector distance is much larger than this value, so the above bound is much more restrictive. However, if we consider the \(Z_{0}\) boson in flight, this restrictive bound is relaxed somewhat. In this case, the coherence length scales as \(x_{\rm coh}\to\gamma^{3}x_{\rm coh}\), where \(\gamma\) is the Lorentz factor [15]. For instance, if the \(Z_{0}\) energy is about 1 TeV, \(x_{\rm coh}\) becomes about 1 km.
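These order-of-magnitude statements can be checked numerically from Eq. (7); the sketch below uses an approximate per-flavor invisible \(Z_{0}\) width as an assumed input, and the results agree with the meter- and kilometer-scale estimates above to within an order of magnitude.

```python
import numpy as np

HBARC_M = 1.9733e-16        # meters per GeV^-1 (natural-unit conversion)
m_Z = 91.19                 # GeV
gamma = 0.167               # GeV, Z0 -> nu nubar width per flavor (approximate)
dm2 = (1e-6) ** 2           # |m_i^2 - m_j^2| = (1 keV)^2 expressed in GeV^2

x_coh = m_Z**2 / (2 * gamma * dm2) * HBARC_M
print(f"rest frame: x_coh ~ {x_coh:.1f} m")             # meter scale

boost = 1000.0 / m_Z                                    # Lorentz factor for a 1 TeV Z0
print(f"in flight:  x_coh ~ {x_coh * boost**3:.0f} m")  # kilometer scale
```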
If we ignore the mass differences for the light neutrinos, the density matrix in Eq. (3) is simplified as follows:
\[\rho=\frac{1}{N_{l}}\sum_{k,k^{\prime}\leq N_{l}}|\nu_{k},\bar{\nu}_{k}\rangle \langle\nu_{k^{\prime}},\bar{\nu}_{k^{\prime}}|+\frac{1}{R_{h}^{P}}\sum_{k=N_ {l}+1}^{N_{l}+N_{h}}|M_{kk}^{P}|^{2}|\nu_{k},\bar{\nu}_{k}\rangle\langle\nu_{k },\bar{\nu}_{k}|. \tag{8}\]
After propagation, in the plane-wave approximation, the density matrix given in Eq. (8) is transformed as follows:
\[\rho(t,L;\bar{t},\bar{L})=\frac{1}{N_{l}}\sum_{k,k^{\prime}\leq N_{l}}e^{-i( E_{k}-E_{k^{\prime}})(t+\bar{t})+i(p_{k}-p_{k^{\prime}})(L+\bar{L})}|\nu_{k}, \bar{\nu}_{k}\rangle\langle\nu_{k^{\prime}},\bar{\nu}_{k^{\prime}}|+\frac{1}{ R_{h}^{P}}\sum_{k=N_{l}+1}^{N_{l}+N_{h}}|M_{kk}^{P}|^{2}|\nu_{k},\bar{\nu}_{k} \rangle\langle\nu_{k},\bar{\nu}_{k}|, \tag{9}\]
where \(t\) and \(\bar{t}\) are the neutrino and anti-neutrino travel times from the source to the corresponding detectors at distances \(L\) and \(\bar{L}\), respectively. Here, we choose the \(Z_{0}\) rest frame, and \(E_{k}\) and \(p_{k}\) denote the energy and momentum of the \(k\)'th mass eigenstate. With a realistic assumption, one can suppose that the light neutrinos are extremely relativistic. Therefore, their mass-eigenstate energies and momenta are approximated by the following relations [14]:
\[E_{k}\approx E+\xi\frac{m_{k}^{2}}{2E}, \tag{10}\]
\[p_{k}\approx E-(1-\xi)\frac{m_{k}^{2}}{2E}, \tag{11}\]
where \(E\) is the neutrino energy in the limit of zero mass and \(\xi\) is a dimensionless quantity that can be estimated from energy-momentum conservation in the production processes. In the case of \(Z_{0}\) decay in the rest frame, \(E\) and \(\xi\) are \(m_{Z_{0}}/2\) and \(0\), respectively. Thus, without losing the generality of the problem, one can write Eq. (9) as follows:
\[\rho(L,\bar{L})=\frac{1}{N_{l}}\sum_{k,k^{\prime}\leq N_{l}}e^{-i\frac{\Delta m _{kk^{\prime}}^{2}}{2E}(L+\bar{L})}|\nu_{k},\bar{\nu}_{k}\rangle\langle\nu_{k ^{\prime}},\bar{\nu}_{k^{\prime}}|+\frac{1}{R_{h}^{P}}\sum_{k=N_{l}+1}^{N_{l}+ N_{h}}|M_{kk}^{P}|^{2}|\nu_{k},\bar{\nu}_{k}\rangle\langle\nu_{k},\bar{\nu}_{k}|. \tag{12}\]
Here, we see that only the state corresponding to the coherent production has evolved during the propagation.
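The coherent block of Eq. (12) is straightforward to evolve numerically; the following NumPy sketch builds the light-sector matrix of phases (the splittings and units are illustrative, and the incoherent block, being diagonal and \(L\)-independent, is omitted).

```python
import numpy as np

def rho_light(m2, E, L_sum):
    """Coherent light block of Eq. (12): rho_{kk'} = exp(-i dm2_{kk'}(L+Lbar)/2E)/N_l."""
    m2 = np.asarray(m2, dtype=float)
    dm2 = m2[:, None] - m2[None, :]
    return np.exp(-1j * dm2 * L_sum / (2.0 * E)) / len(m2)

rho = rho_light([0.0, 7.4e-5, 2.5e-3], E=1.0, L_sum=2.0e3)   # arbitrary units
print(np.trace(rho).real)   # 1.0: the coherent block stays normalised under evolution
```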
## III Detection processes
As was said, both the neutrino and the antineutrino coming from a \(Z_{0}\) decay process must be detected in order to observe an oscillation pattern between the detectors. Detection processes can usually be performed by an interaction involving one or two neutrinos. When the detection process is done through neutrino scattering off a nucleus via the charged current interaction, one neutrino is involved in the detection process:
\[\nu_{\alpha}+D_{I}\to D_{F}+l_{\alpha}^{-}, \tag{13}\]
\[\bar{\nu}_{\beta}+\bar{D}_{I}\rightarrow\bar{D}_{F}+l_{\beta}^{+}. \tag{14}\]
As an example of two neutrinos participating in the detection process, let us consider that neutrinos are detected via the following charged current process:
\[\nu_{\alpha}+e^{-}\rightarrow\nu_{e}+l_{\alpha}^{-}, \tag{15}\]
Of course, for non-electron neutrinos, this process is not used in practice for neutrino oscillation experiments, because the neutrino energy threshold is high (about 10.92 GeV) and the cross section is about one thousand times smaller than that of the corresponding charged current scattering on a neutron. In the case of the neutral current neutrino oscillation, where both neutrino and anti-neutrino are to be detected, the probability of oscillation, in the context of density matrix theory, is given by
\[P_{\alpha\beta}(L,\bar{L})=tr[\rho(L,\bar{L})\rho_{\alpha}^{D}\otimes\bar{\rho }_{\beta}^{\bar{D}}], \tag{16}\]
where \(\rho_{\alpha}^{D}\) and \(\bar{\rho}_{\beta}^{\bar{D}}\) are the density matrices of a detected neutrino with flavor \(\alpha\) and a detected antineutrino with flavor \(\beta\), respectively. Similar to the production process, we assume that in addition to the \(N_{l}\) light neutrinos (antineutrinos) coherently involved in the detection processes, \(N_{h}\) heavy neutrinos (antineutrinos) also participate incoherently in the detection processes. There may also be mixing between the light neutrinos and very heavy neutrinos, which are decoupled because their masses are much larger than the maximum energy in the corresponding process.
We consider two conceivable situations for combining two detection processes:
* We assume that both detection processes are done through interaction with the nucleus (see Eqs. (13) and (14)). Hence, the density matrices of the detected neutrino and antineutrino will be as follows: \[\rho_{\alpha}^{D}=\frac{|M_{\alpha}^{0}(E)|^{2}}{R_{l\alpha}^{D}}\sum_{j,j^{\prime}\leq N_{l}}U_{\alpha j}^{*}U_{\alpha j^{\prime}}|\nu_{j}\rangle\langle\nu_{j^{\prime}}|+\frac{1}{R_{h\alpha}^{D}}\sum_{j=N_{l}+1}^{N_{l}+N_{h}}|U_{\alpha j}|^{2}|M_{j}^{D}|^{2}|\nu_{j}\rangle\langle\nu_{j}|, \tag{17}\] and \[\bar{\rho}_{\alpha}^{\bar{D}}=\frac{|\bar{M}_{\alpha}^{0}(E)|^{2}}{\bar{R}_{l\alpha}^{\bar{D}}}\sum_{j,j^{\prime}\leq N_{l}}U_{\alpha j}U_{\alpha j^{\prime}}^{*}|\bar{\nu}_{j}\rangle\langle\bar{\nu}_{j^{\prime}}|+\frac{1}{\bar{R}_{h\alpha}^{\bar{D}}}\sum_{j=N_{l}+1}^{N_{l}+N_{h}}|U_{\alpha j}|^{2}|\bar{M}_{j}^{\bar{D}}|^{2}|\bar{\nu}_{j}\rangle\langle\bar{\nu}_{j}|, \tag{18}\] respectively. Here, we have ignored the mass differences for the light neutrinos, so \[R_{l\alpha}^{D}=|M_{\alpha}^{0}(E)|^{2}\sum_{j\leq N_{l}}|U_{\alpha j}|^{2}, \tag{19}\] \[\bar{R}_{l\alpha}^{\bar{D}}=|\bar{M}_{\alpha}^{0}(E)|^{2}\sum_{j\leq N_{l}}|U_{\alpha j}|^{2}, \tag{20}\] and for the incoherently participating heavy neutrinos and anti-neutrinos we have \[R_{h\alpha}^{D}=\sum_{j=N_{l}+1}^{N_{l}+N_{h}}|U_{\alpha j}|^{2}|M_{j}^{D}|^{2}, \tag{21}\] \[\bar{R}_{h\alpha}^{\bar{D}}=\sum_{j=N_{l}+1}^{N_{l}+N_{h}}|U_{\alpha j}|^{2}|\bar{M}_{j}^{\bar{D}}|^{2}. \tag{22}\]
In Eqs. (17) and (18), we see that, if the incoherent neutrinos are not considered, the detected states are pure even though we do not ignore the neutrino masses. Inserting the density matrix operators given in Eqs. (12), (17) and (18) into Eq. (16), one can obtain the oscillation probability as follows:
\[P_{\alpha\beta}(L,\bar{L}) = \frac{|M^{0}_{\alpha}(E)|^{2}|\bar{M}^{0}_{\beta}(E)|^{2}}{N_{l}R^ {D}_{l\alpha}\bar{R}^{D}_{l\beta}}\sum_{k,k^{\prime}\leq N_{l}}U^{*}_{\alpha k }U_{\beta k}U_{\alpha k^{\prime}}U^{*}_{\beta k^{\prime}}e^{-i\frac{\Delta m^{2 }_{kk^{\prime}}}{2E}(L+\bar{L})} \tag{23}\] \[+ \frac{1}{R^{P}_{h}R^{D}_{l\alpha}\bar{R}^{D}_{h\beta}}\sum_{k=N_{ l}+1}^{N_{l}+N_{h}}|U_{\alpha k}|^{2}|U_{\beta k}|^{2}|M^{P}_{kk}|^{2}|M^{D}_{k}|^{2 }|\bar{M}^{\bar{D}}_{k}|^{2}.\]
The first term in this expression does not reproduce the standard oscillation probability between flavors \(\alpha\) and \(\beta\), because the effective mixing matrix elements are \(U_{\alpha k}/\sqrt{\sum_{j\leq N_{l}}|U_{\alpha j}|^{2}}\), which do not constitute a unitary matrix. The second term is due to neutrinos that exhibit no oscillatory behavior, since they are produced and detected incoherently. Meanwhile, the rate of oscillation observation can be written as follows:
\[{\cal R}_{\alpha\beta}(L,\bar{L},E)\propto\int d{\rm PS}\ (R^{D}_{\alpha})( \bar{R}^{\bar{D}}_{\beta})\,P_{\alpha\beta}(L,\bar{L}), \tag{24}\]
where the integration over \(d{\rm PS}\) denotes schematically the integration over the phase space. \(R^{D}_{\alpha}\) and \(\bar{R}^{\bar{D}}_{\beta}\) are the probabilities of the detection processes for the neutrino and antineutrino, respectively. They are given by:
\[R^{D}_{\alpha}=\sum_{k}|U_{\alpha k}|^{2}|M^{D}_{k}|^{2}, \tag{25}\]
and
\[\bar{R}^{\bar{D}}_{\beta}=\sum_{k}|U_{\beta k}|^{2}|\bar{M}^{\bar{D}}_{k}|^{2}. \tag{26}\]
Ignoring the mass differences for light neutrinos, one can write:
\[R^{D}_{\alpha}=|M^{0}_{\alpha}(E)|^{2}\sum_{k\leq N_{l}}|U_{\alpha k}|^{2}+ \sum_{k=N_{l}+1}^{N_{l}+N_{h}}|U_{\alpha k}|^{2}|M^{D}_{k}|^{2}=R^{D}_{l\alpha }+R^{D}_{h\alpha}, \tag{27}\]
and
\[\bar{R}^{\bar{D}}_{\beta}=|\bar{M}^{0}_{\beta}(E)|^{2}\sum_{k\leq N_{l}}|U_{ \beta k}|^{2}+\sum_{k=N_{l}+1}^{N_{l}+N_{h}}|U_{\beta k}|^{2}|\bar{M}^{\bar{D} }_{k}|^{2}=\bar{R}^{\bar{D}}_{l\beta}+\bar{R}^{\bar{D}}_{h\beta}, \tag{28}\]
where \(M^{0}_{\alpha}(E)\) and \(\bar{M}^{0}_{\beta}(E)\) are the amplitudes of the detection processes for a massless neutrino and antineutrino. Therefore, using the transition probability given in Eq. (23) and the expressions above, one can write the event rate as follows:
\[{\cal R}_{\alpha\beta}(L,\bar{L},E) \propto \sigma^{0}_{\alpha}(E)P^{\rm eff}_{\alpha\beta}(L,\bar{L})\bar{\sigma}^{0}_{\beta}(E)+\frac{\sum_{k,k^{\prime}=N_{l}+1}^{N_{l}+N_{h}}|U_{\alpha k}|^{2}|U_{\beta k^{\prime}}|^{2}\sigma^{k}_{\alpha}\bar{\sigma}^{k^{\prime}}_{\beta}}{(\sum_{j\leq N_{l}}|U_{\alpha j}|^{2})(\sum_{j\leq N_{l}}|U_{\beta j}|^{2})}P^{\rm eff}_{\alpha\beta}(L,\bar{L}) \tag{29}\] \[+ \sum_{k=N_{l}+1}^{N_{l}+N_{h}}\left(\frac{|U_{\alpha k}|^{2}\sigma^{k}_{\alpha}\bar{\sigma}^{0}_{\beta}}{\sum_{j\leq N_{l}}|U_{\alpha j}|^{2}}+\frac{|U_{\beta k}|^{2}\sigma^{0}_{\alpha}\bar{\sigma}^{k}_{\beta}}{\sum_{j\leq N_{l}}|U_{\beta j}|^{2}}\right)P^{\rm eff}_{\alpha\beta}(L,\bar{L})+\ldots\]
where \(\sigma^{0}_{\alpha}(E)\) (\(\bar{\sigma}^{0}_{\beta}(E)\)) and \(\sigma^{k}_{\alpha}(E)\) (\(\bar{\sigma}^{k}_{\beta}(E)\)) are the detection cross sections for a neutrino (antineutrino) with mass zero and \(m_{k}\), respectively. Here, "\(\ldots\)" denotes all non-oscillating terms, which are related to the incoherent neutrinos. \(P^{\rm eff}_{\alpha\beta}\) is similar to the usual oscillation formula (without considering heavy neutrinos), which is given by
\[P^{\rm eff}_{\alpha\beta}=\frac{1}{N_{l}}\sum_{k,k^{\prime}\leq N_{l}}U^{*}_{ \alpha k}U_{\beta k}U_{\alpha k^{\prime}}U^{*}_{\beta k^{\prime}}e^{-i\frac{ \Delta m^{2}_{kk^{\prime}}}{2E}(L+\bar{L})}. \tag{30}\]
Eq. (29) reduces to the usual expected oscillation pattern if the incoherent neutrinos are not considered. Moreover, the scattering of incoherent neutrinos and antineutrinos in the detectors contributes to the observed rate of the oscillation pattern. Given the transition probability of Eq. (23), only the non-oscillating terms, which are not written explicitly in Eq. (29), depend on the incoherent neutrino-antineutrino pairs produced through the \(Z_{0}\) decay process. (A numerical sketch of \(P^{\rm eff}_{\alpha\beta}\) is given below.)
* As another possibility, we assume that neutrinos and antineutrinos are detected by processes (15) and (14), respectively. In the neutrino detection process, an electron neutrino is produced through the interaction of the incoming neutrino with an electron. Given that no coherent superposition of outgoing neutrino mass eigenstates is detected, the cross section of the process (15) is the incoherent sum of the cross sections with the different massive neutrinos in the final state \[\sigma(\nu_{\alpha}+e^{-}\rightarrow\nu_{e}+l_{\alpha}^{-})=\sum_{i}\sigma(\nu_{\alpha}+e^{-}\rightarrow\nu_{i}+l_{\alpha}^{-}).\] (31) Therefore, we consider the creation of a neutrino with mass \(m_{j}\) as a final state and construct the corresponding density matrix operator similar to Eq. (17), \[\rho_{j}^{D}=\frac{|M^{D}_{0,j}|^{2}}{R^{D}_{e_{l},\alpha j}}\sum_{k,k^{\prime}\leq N_{l}}U^{*}_{\alpha k}U_{\alpha k^{\prime}}|\nu_{k}\rangle\langle\nu_{k^{\prime}}|+\frac{1}{R^{D}_{e_{h},\alpha j}}\sum_{k=N_{l}+1}^{N_{l}+N_{h}}|U_{\alpha k}|^{2}|M^{D}_{kj}|^{2}|\nu_{k}\rangle\langle\nu_{k}|,\] (32) where \[R^{D}_{e_{l},\alpha j}=|M^{D}_{0,j}|^{2}\sum_{k\leq N_{l}}|U_{\alpha k}|^{2},\] (33) and \[R^{D}_{e_{h},\alpha j}=\sum_{k=N_{l}+1}^{N_{l}+N_{h}}|U_{\alpha k}M^{D}_{k,j}|^{2}.\] (34) Now we should sum over the mass eigenstates of the outgoing neutrino with coefficients \(|U_{ej}|^{2}\) in order to write the detection density matrix operator appropriate for the process of Eq. (15). So we have \[\rho^{D} = \sum_{j=1}^{N_{l}+N_{h}}|U_{ej}|^{2}\rho_{j}^{D} = \sum_{j=1}^{N_{l}+N_{h}}\frac{|U_{ej}|^{2}}{\sum_{i\leq N_{l}}|U_{\alpha i}|^{2}}\sum_{k,k^{\prime}\leq N_{l}}U^{*}_{\alpha k}U_{\alpha k^{\prime}}|\nu_{k}\rangle\langle\nu_{k^{\prime}}|+\sum_{j=1}^{N_{l}+N_{h}}\frac{|U_{ej}|^{2}}{R^{D}_{e_{h},\alpha j}}\sum_{k=N_{l}+1}^{N_{l}+N_{h}}|U_{\alpha k}|^{2}|M^{D}_{kj}|^{2}|\nu_{k}\rangle\langle\nu_{k}|,\] (35) where the mass differences are ignored for the light neutrinos. It should be noted here that even if we do not consider incoherent neutrinos, the neutrino state detected through the charged current leptonic process is mixed, provided that the mass differences of the coherent neutrinos are not ignored. Since we consider that antineutrinos are detected by the mechanism given by Eq. (14), the state of the detected antineutrinos is given by the density matrix operator of Eq. (18). Therefore, according to Eq. (16), the transition probability is obtained as follows: \[P_{\alpha\beta}(L,\bar{L})= \frac{1}{\sum_{j\leq N_{l}}|U_{\beta j}|^{2}}\sum_{j=1}^{N_{l}+N_{h}}\frac{|U_{ej}|^{2}}{\sum_{i\leq N_{l}}|U_{\alpha i}|^{2}}\sum_{k,k^{\prime}\leq N_{l}}U^{*}_{\alpha k}U_{\beta k}U_{\alpha k^{\prime}}U^{*}_{\beta k^{\prime}}e^{-i\frac{\Delta m^{2}_{kk^{\prime}}}{2E}(L+\bar{L})}\] (36) \[+\frac{1}{R^{P}_{h}\bar{R}^{\bar{D}}_{h\beta}}\sum_{j=1}^{N_{l}+N_{h}}\frac{|U_{ej}|^{2}}{R^{D}_{e_{h},\alpha j}}\sum_{k=N_{l}+1}^{N_{l}+N_{h}}|U_{\alpha k}|^{2}|U_{\beta k}|^{2}|M^{P}_{kk}|^{2}|M^{D}_{kj}|^{2}|\bar{M}^{\bar{D}}_{k}|^{2}.\] The probability of the process in the neutrino detector is given by \[R^{D}_{\alpha e}=\sum_{k,k^{\prime}}|U_{\alpha k}|^{2}|U_{ek^{\prime}}|^{2}|M^{D}_{k,k^{\prime}}|^{2}.\] (37)
Therefore, according to Eq. (24), the event transition rate can be written \[\mathcal{R}_{\alpha\beta}(L,\bar{L},E)\propto \sum_{j=1}^{N_{l}+N_{h}}|U_{ej}|^{2}\Bigg{(}\sum_{k\leq N_{l}}|U_{ek}|^{2}\sigma_{\alpha}^{0,0}+\sum_{k=N_{l}+1}^{N_{l}+N_{h}}|U_{ek}|^{2}\sigma_{\alpha}^{0,k}+\frac{\sum_{i\leq N_{l}}|U_{ei}|^{2}}{\sum_{i\leq N_{l}}|U_{\alpha i}|^{2}}\sum_{k=N_{l}+1}^{N_{l}+N_{h}}|U_{\alpha k}|^{2}\sigma_{\alpha}^{k,0}\] (38) \[+\frac{1}{\sum_{i\leq N_{l}}|U_{\alpha i}|^{2}}\sum_{k=N_{l}+1}^{N_{l}+N_{h}}\sum_{k^{\prime}=N_{l}+1}^{N_{l}+N_{h}}|U_{\alpha k}|^{2}|U_{ek^{\prime}}|^{2}\sigma_{\alpha}^{k,k^{\prime}}\Bigg{)}\] \[\times\Bigg{(}\bar{\sigma}_{\beta}^{0}+\sum_{i=N_{l}+1}^{N_{l}+N_{h}}\frac{|U_{\beta i}|^{2}\bar{\sigma}_{\beta}^{i}}{\sum_{i^{\prime}\leq N_{l}}|U_{\beta i^{\prime}}|^{2}}\Bigg{)}P_{\alpha\beta}^{\rm eff}+\ldots,\]
where "..." denotes all no oscillating terms which are related to the incoherent neutrinos. This relation is very similar to Eq. (29). The differences are related to the existence an outgoing neutrino in the neutrino detector. In fact, we have treated a mixed state as detected neutrino even though the incoherent neutrinos are not considered. The sum over the outgoing neutrino mass eigenstates has appeared for this reason.
## IV Summary and conclusion
Although the neutrinos created in neutral current interactions do not have a specific flavor, due to the entanglement between the neutrino and the antineutrino, the pattern of neutrino oscillation between two detectors is expected to occur if both of them are detected [7]. This expectation is based on the assumption that neutrino-antineutrino pairs are produced coherently in these processes and that they are detected in flavor states. In this paper, we challenge both of these assumptions by considering heavy neutrinos-antineutrinos that are either incoherently created and detected or, despite mixing with the active ones, have masses exceeding the available energy so that they do not participate in the creation and detection processes. The existence of such neutrinos is predicted in some models such as the seesaw model [10; 11; 12; 13]. It is clear that the incoherent part of the created neutrino-antineutrino state does not evolve with time. Using the plane wave approach, we reanalyzed the issue of neutral current neutrino oscillation and obtained the most general expression for the event rate for two combinations of detectors: both the neutrino and the antineutrino are detected by charged current nucleon scattering without a neutrino in the final state, or the neutrino is detected by the charged current leptonic process involving an outgoing neutrino while the antineutrino is detected as in the previous case (see Eqs. (29) and (38), respectively). For both cases, the detected states are mixed because we consider massive neutrinos and antineutrinos that are involved incoherently in the corresponding detection processes. In the latter case, however, since there is an outgoing neutrino which is not detected, the detected state is a mixed state even if the incoherent neutrinos are not considered; this state reduces to a pure state provided that we ignore the neutrino masses in the scattering process. In general, the standard oscillation pattern is recovered provided that the neutrino masses are ignored in the production and detection processes. As a main result, this study shows again that the oscillation pattern is predicted between neutrinos and antineutrinos with definite flavor states, so the \(Z_{0}\) decay width, which is blind with respect to flavor, does not appear in the oscillating terms in the event rates.
|
2305.12015 | Inventing art styles with no artistic training data | We propose two procedures to create painting styles using models trained only
on natural images, providing objective proof that the model is not plagiarizing
human art styles. In the first procedure we use the inductive bias from the
artistic medium to achieve creative expression. Abstraction is achieved by
using a reconstruction loss. The second procedure uses an additional natural
image as inspiration to create a new style. These two procedures make it
possible to invent new painting styles with no artistic training data. We
believe that our approach can help pave the way for the ethical employment of
generative AI in art, without infringing upon the originality of human
creators. | Nilin Abrahamsen, Jiahao Yao | 2023-05-19T21:59:23Z | http://arxiv.org/abs/2305.12015v2 | # Inventing painting styles through natural inspiration
###### Abstract
We propose two procedures to create painting styles using models trained only on natural images, providing objective proof that the model is not plagiarizing human art styles. In the first procedure we use the inductive bias from the artistic medium to achieve creative expression. Abstraction is achieved by using a reconstruction loss. The second procedure uses an additional natural image as inspiration to create a new style. These two procedures make it possible to invent new painting styles with no artistic training data. We believe that our approach can help pave the way for the ethical employment of generative AI in art, without infringing upon the originality of human creators.
## 1 Introduction
Recent advances in AI raise important questions about the essence of human creativity and the future trajectory of art work. In the field of visual arts, products such as Midjourney and Dall-E are generating images that arguably pass as human-made art with little effort from the user. These models have been trained on millions of images and artworks from the internet, and many are of the opinion that the models essentially plagiarize the art styles that they have consumed through their training process. Different approaches have been proposed in response to the concern of AI plagiarizing art, including:
1. **Cloaking with adversarial perturbations [19; 16].** Artists who wish to protect themselves from plagiarism by AI may attempt to perturb their artworks in a way that is imperceptible to the human viewer but is meant to foil the AI training.
2. **Combing through training data.** Since many artists do not consent to the use of their artworks to train AI, services have appeared which offer to search through datasets to expose use of an artist's work as training data [7].
3. **Through copyright law.** A recent class-action lawsuit [22; 5] sued Midjourney Inc, DeviantArt Inc, and Stability A.I. for using artists' work without their consent.
The cloaking method is likely to be brittle in the long term, as it relies on assumptions about how the AI processes its training data and how this differs from the intended human audience. Meanwhile, the pushback and lawsuits against training by AI illustrate a long-standing discussion about the internal workings of a neural network. The plaintiffs' side of the previously mentioned lawsuit claims that _"Stable Diffusion relies on a mathematical process called diffusion to store compressed copies of these
training images, which in turn are recombined to derive other images. It is, in short, a 21st-century collage tool."_[2] Although this notion that an AI model "recombines" the training data is generally not considered accurate by AI researchers and practitioners, it is however very difficult to rule out that this occurs at least for a small subset of the training data. This is especially true as modern models frequently have billions of parameters in which training data could hide. Indeed, [3] was able to extract training images from diffusion models such as Stable Diffusion using text prompts.
### Our contribution
We propose two procedures to create painting styles using models trained only on natural images. This provides objective proof that the model does not plagiarize art styles made by humans.
The first procedure achieves creative expression through the _inductive bias_ from a chosen _artistic medium_. We combine this with the flexibility of using a reconstruction loss to allow _abstraction_. This first procedure can be viewed as a variant of image-to-image translation for a setting where we have no samples from the target domain. That is, the style itself is trainable and is generated by the artist through experimenting with the artistic medium. The preferred styles will be those that can be decoded to reconstruct the input image under the constraints of the artistic medium. We call this the medium+perception-driven procedure.
Our second procedure allows the algorithm to make use of natural images as _inspiration_ to create new painting styles. The use of inspiration from the natural world means that the creation of art styles can be guided by the user even though the model is not exposed to human-made art. We call this the inspiration procedure.
Generative AI models are currently under attack for plagiarizing training data. Ironically, our proposal illustrates that they can in principle be used in a way to objectively avoid plagiarism by restricting their training data, something that would be infeasible for human creators. We include a discussion about possible implications at the end of the paper.
### Prior work
**Algorithmic painting.** The concept of computer-generated artwork emerged as early as 1990, when [12] implemented a brush engine and devised various ways of deciding the parameters (position, direction, color, etc.) of the brush strokes. This innovative approach included several methods: (1) interactively chosen brush strokes through user input, (2) randomly positioned brush strokes with color and direction based on the reference image. More advanced techniques included: (3) a painting with a 3D model as a reference, where the direction of the brush strokes was based on the orientation of the 3D surface as determined by ray tracing, and (4) iterative relaxation to approximate the subject image in L2-norm with rectangles or Dirichlet domains.

Figure 1: A tower painted with a combination of the two procedures proposed in this paper. Both inspiration and subject images are photographs by the authors.
Our medium+perception-driven procedure can be viewed as an analogue of method (4) described earlier. We employ a reconstruction loss to facilitate _abstraction_ in the artwork while learning a mapping from subject images to paintings. The latter requires us to create an encoding of the artist's actions which can be produced as the output of a convolutional neural network.
**Style Transfer.** The concept of style transfer was first advanced by Gatys et al. [8]. Their method leverages convolutional neural networks to transfer the stylistic features of one image, referred to as the style source, onto another, known as the subject image. This technique effectively amalgamates the style and content from different images to create novel visual outputs. In addition, CycleGAN [25] is an _unpaired_ image-to-image translation model which generates a _bijection_ between two domains (or styles) \(\mathcal{X}\) and \(\mathcal{Y}\). It was revolutionary for its success in achieving this task using unpaired data, thereby eliminating the need for a one-to-one mapping between source and target domain images in the training set. CycleGAN learns two maps \(G:\mathcal{X}\rightarrow\mathcal{Y}\) and \(F:\mathcal{Y}\rightarrow\mathcal{X}\) and leverages _cycle consistency losses_ \(d(F(G(x)),x)\) and \(d(G(F(y)),y)\) to ensure that the maps are the inverses of each other. The method employs adversarial discriminators [11] on each of \(\mathcal{X}\) and \(\mathcal{Y}\) to ensure that the distributions of data \(x\in\mathcal{X}\) and generated images \(y\in\mathcal{Y}\) are matched within their appropriate domains.
A number of works have employed style transfer with more direct geometric control over the rendering of pen and brush strokes [17; 6; 4]. The Stroke Control Multi-Artist Style Transfer framework [6] features an Anisotropic Stroke Module that allows for dynamic style-stroke adjustments. It also introduces a novel Multi-Scale Projection Discriminator for texture-level conditional generation. This enables the transformation of a photograph into oil paintings of various artistic styles, while preserving unique artistic style and anisotropic semantic information. Additionally, the work by Chan et al. [4] proposes an unpaired method for generating line drawings from photographs. This process incorporates a geometry loss to predict depth information and a semantic loss to match features between a line drawing and its corresponding photograph.
Our inspiration-driven method is related to these works but differs in that it does not require examples of existing art styles.
**Generative models.** Generative modeling is a machine learning approach that aims to either generate new samples that are similar to the training data or learn the underlying probability density from the data. It is often categorized as a form of unsupervised or self-supervised learning. Prominent examples of generative models include Variational Autoencoders (VAEs) [15], Generative Adversarial Networks (GANs) [11], Normalizing Flows [18], and Vector-quantized Image Modeling (VIM) approaches such as VQGAN and VQ-VAE [21, 24].

Figure 2: A dandelion and a pigeon painted using the medium+perception-driven procedure. Subject images are shown in the corner of each painting. All training images and subject images were photographs by the authors, ensuring that no artworks were present in the training data.
**Diffusion models.** Diffusion models are currently among the tools at the forefront of generative modelling. The pioneering work by Ho et al. introduced some of the first diffusion models [13]. Diffusion models alternate between injecting noise and projecting onto the space of valid samples. They are generally considered easy to train and yield high-quality samples. Variants of diffusion models include Denoising Diffusion Implicit Models (DDIM) [20] and cascaded diffusion models [14].
## 2 Medium+perception-driven procedure
In our first approach, the creation of artistic styles is guided by the _artistic medium_. We model the medium as a fixed function \(M:\mathcal{A}\to\mathcal{P}\) which maps a set of _actions_ \(a\in\mathcal{A}\) (say, the coordinates of brush strokes) to a finished product \(p\in\mathcal{P}\). Let \(\mathcal{S}\) be the domain of subjects (natural images). To illustrate our ideas we focus on representational art using a paintbrush as the chosen medium. The elements of our first procedure are:
1. **Deliberate use of artistic medium.** Staying with the paintbrush as an example, it would be possible to recreate a subject image to high accuracy by essentially _printing_ individual pixels of the image using small dabs of the paintbrush. However, this approach does not make efficient use of the brush and the shapes that it is able to make, resulting in a set of actions \(a\) of high complexity. We propose that the process of optimizing a loss \(\ell(p)=\ell(M(a))\) under the constraints imposed by the artistic medium is an element of creative expression which we can simulate by bounding the number of latent variables \(\dim(a)\).
2. **Interpretable abstraction.** In order to ensure that the painting \(p\) is an abstract representation of the subject \(s\) we employ a _reconstruction_ loss defined as \(\tilde{\ell}_{\theta}(p,s)=d(D_{\theta}(p),s)\) where \(d\) is a distance function on the space of images and \(D_{\theta}\) is a trainable decoder. We also add a tuneable \(l_{1}\)-loss directly between the painting and the subject, where the parameter \(\beta\) is tuned to adjust the realism of the painting.
The decoder in principle models a bijection between the set of paintings \(M\circ A_{\theta}(\mathcal{S})\) and the set of subjects \(\mathcal{S}\)1, and we think of this bijection as a simple proxy for the artist's _perception_[10]. Gombrich also argued for the appeal of simplicity in art [9], motivating item 1.
Footnote 1: That is, it is a bijection if the reconstruction loss is \(0\).
We train a parameterized _artist_\(A_{\theta}\) to generate the actions \(a\) given a subject image \(s\). The full loss function is thus:
\[\ell_{\theta}(s)=d_{\mathrm{rec}}(D_{\theta}(M\circ A_{\theta}(s)),s)+\beta d _{\mathrm{realism}}(M\circ A_{\theta}(s),s).\]
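As a minimal sketch of how this objective could be wired up (an assumption-laden illustration, not our exact implementation), the following PyTorch-style code takes user-defined modules `artist` (\(A_{\theta}\)) and `decoder` (\(D_{\theta}\)), a differentiable brush engine `medium` (\(M\)) as described in section 4, and a `dirdist` function; the mean absolute error stands in for the \(l_{1}\) norm.

```python
import torch

beta = 1.0  # realism weight

def packing_loss(subject, artist, medium, decoder, dirdist):
    painting = medium(artist(subject))                  # p = M(A_theta(s))
    recon = decoder(painting)                           # D_theta(p)
    d_rec = (torch.log((recon - subject).abs().mean())  # log ||.||_1 (mean form)
             + torch.log(dirdist(recon, subject)))      # + log Dirdist
    d_realism = torch.log((painting - subject).abs().mean())
    return d_rec + beta * d_realism
```

A standard optimizer step over the parameters of `artist` and `decoder` then minimizes this loss over batches of subject photographs.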
Figure 3: Creating artistic expression through abstract representation under the constraint of the artistic medium
For fig. 2 we used \(d_{\mathrm{rec}}(A,B)=\log\left\|A-B\right\|_{1}+\log\mathrm{Dirdist}(A,B)\) and \(d_{\mathrm{realism}}(A,B)=\log\left\|A-B\right\|_{1}\), with \(\beta=1\), where \(\mathrm{Dirdist}\) is a distance measure that compares the local geometry of \(A\) and \(B\) and which we describe in detail in section 4. Note that \(\mathrm{Dirdist}\) is only applied between the subject image and the reconstruction, which should both exist in the space of natural images, so we are not directly guiding the style of the painting by adding this loss term.
**Relation to autoencoders and CycleGAN**
Our setting can be viewed as a version of this image translation problem where we have no samples from domain \(\mathcal{Y}\). Instead we have a map \(M\) (the artistic medium) whose outputs are in \(\mathcal{Y}\). Put differently, the elements of \(\mathcal{Y}\) are generated by the artist through \(M\circ A_{\theta}\) and are trainable. Thus, our reconstruction loss is analogous to the cycle consistency loss \(d(F(G(x)),x)\) in CycleGAN. We do not need an analogue of the style discriminators because
* Our outputs belong to the domain of paintings by design, and
* The painting style is not fixed but is a product of the training dynamics.
Optionally, we could train a discriminator to learn the divergence distance between the reconstructed images and the original ones. The realism loss has no analogue in CycleGAN, and we add it to compensate for the large flexibility arising from the lack of a target style. We find that it helps preserve the coloring of the images; for example, without it the artist would invert the brightness values with 50 percent probability.
Our procedure can also be viewed as a version of an _auto-encoder_ where the latent variables are interpretable either as the actions of the artist or as a painting. In the former case the decoder factors through the artistic medium \(M\), and in the latter case the encoder factors through \(M\).
## 3 Inspiration-driven procedure
We propose a procedure to paint a _subject_ image using another natural image as _inspiration_. In this procedure we apply the style transfer of [8] from the inspiration image onto the subject image to create a new image which we call _imagination_. We then apply a _baseline technique_ to create an artwork based on the imagination image.
In fig. 1 we have taken the baseline technique to be the model trained through our medium+perception-driven procedure described above. This way the art style is created without being trained on human-made art.
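At a high level the procedure composes two steps; in the sketch below, `style_transfer` (any implementation of [8]) and `baseline_technique` are hypothetical placeholders.

```python
def paint_with_inspiration(subject, inspiration, style_transfer, baseline_technique):
    # imagination: the subject rendered with the inspiration's low-level statistics
    imagination = style_transfer(content=subject, style=inspiration)
    # the baseline technique (e.g. the trained artist of Section 2) produces the painting
    return baseline_technique(imagination)
```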
Figure 4: Different painting styles with the same baseline technique and different inspiration photographs (the bottom-right frame).

To illustrate the inspiration procedure in an isolated manner we use a simple hard-coded baseline technique in fig. 4 and fig. 5.
## 4 Technical details
### Convolutional brush engine
To implement our medium-driven procedure we build a paintbrush engine whose input, the _action_, is represented by a set of 3-tensors \(a\in\mathbb{R}^{n\times n\times d}\), with the first two dimensions being the spatial dimensions of the image. This allows us to represent the artist \(A_{\theta}\) as a convolutional neural network.
The main features in our representation of a brush stroke are:
1. A _direction field_ which associates a \(2\times 2\) projection matrix \(P_{\mathbf{x}}\) to each (discretized) planar coordinate \(\mathbf{x}=(x,y)\). The \(\lambda=1\) eigenspace of the projection \(P_{\mathbf{x}}\) represents the direction of a brush stroke through \(\mathbf{x}\), if one exists. We use this projection-valued direction field instead of a vector field because we wish to let the distinction between the forward/backward directions be decided at the start of the brush stroke. To generate \(P_{\mathbf{x}}\) in a way that respects this symmetry of the direction field we generate an \(n\times n\times 2\times 2\) tensor of symmetric matrices \((A_{\mathbf{x}})_{ij}\) and define \(P_{\mathbf{x}}\) by shifting and rescaling the spectrum of \(A_{\mathbf{x}}\): \(P_{\mathbf{x}}=(A_{\mathbf{x}}-\lambda_{0}(A_{\mathbf{x}})I)/(\lambda_{1}(A_{\mathbf{x}})-\lambda_{0}(A_{\mathbf{x}}))\).
2. A sequence of starting coordinates and starting directions.
To generate a starting position with a convolutional network we let the output of the artist \(A_{\theta}\) include a scalar field. We take the softmax of this field to obtain a probability distribution \(\pi\) and obtain the starting position by sampling from \(\pi\).
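The following sketch shows one way to realize both outputs in PyTorch; the tensor shapes, the explicit symmetrization, and the small stabilizing epsilon are our assumptions on top of the text.

```python
import torch

def direction_field(raw):                    # raw: (n, n, 2, 2) network output
    A = 0.5 * (raw + raw.transpose(-1, -2))  # symmetrize A_x
    lam = torch.linalg.eigvalsh(A)           # eigenvalues, ascending: (n, n, 2)
    lam0 = lam[..., 0, None, None]
    lam1 = lam[..., 1, None, None]
    eye = torch.eye(2, device=raw.device)
    # P_x = (A_x - lambda_0 I) / (lambda_1 - lambda_0), a rank-1 projection
    return (A - lam0 * eye) / (lam1 - lam0 + 1e-8)

def sample_start(score):                     # score: (n, n) scalar field
    pi = torch.softmax(score.flatten(), dim=0)
    idx = torch.multinomial(pi, num_samples=1)
    return idx, pi[idx]                      # sampled position and its probability
```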
### Differentiability
There are several points in the construction of the brush engine where a naive approach would render the medium non-differentiable with respect to the action \(a\). We now describe these points and how we circumvent them. The _straight-through_ operation [1, 23, 21] is defined by applying a function \(f\) in the forward pass but skipping it in the gradient computation. Define the _two-input_ straight-through operation as
\[\mathrm{straight\text{-}thru}(x,y)=x-\mathrm{stop\text{-}grad}(x)+\mathrm{ stop\text{-}grad}(y).\]
Figure 5: Example of the inspiration-imagination procedure: An imagination image is generated by applying style transfer from the inspiration image onto the subject image. Subsequently, the baseline technique is employed to create an artwork based on the imagination image. In this example the coloring from the original subject was mapped onto the imagination image by using the hue and saturation of the subject image in HSV space.
That is, the forward pass of \(z=\mathrm{straight}\mathrm{-}\mathrm{thru}(x,y)\) is computed as if \(z=y\) while the back-propagation is computed as if \(z=x\). The standard definition of the stop-gradient corresponds to letting \(y=f(x)\) for some function \(f\). Let \(h\) be a scalar field representing the pixel values of the brush stroke. We replace \(h\) with \(\mathrm{straight}\mathrm{-}\mathrm{thru}(f,h)\) where \(f\) is a corresponding brush stroke with soft edges. More interestingly, to make the probability distribution \(\pi\) trainable we replace the brush stroke \(h\) with
\[\mathrm{straight}\mathrm{-}\mathrm{thru}(\pi(\mathbf{x}_{0})*h,h),\]
where \(\pi(\mathbf{x}_{0})\) is the probability of the sampled starting point. This allows the loss gradient for the brush stroke to propagate back through the probability distribution for the starting point.
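In PyTorch-style pseudocode (a sketch; `f_soft`, `h_hard` and `prob_x0` are placeholder tensors for the soft-edged stroke, the hard-edged stroke and \(\pi(\mathbf{x}_{0})\)):

```python
import torch

def straight_thru(x, y):
    # forward pass evaluates to y; gradients flow as if the output were x
    return x - x.detach() + y.detach()

# soft edges for gradients, hard edges in the rendered image:
#   stroke = straight_thru(f_soft, h_hard)
# trainable start-point distribution:
#   stroke = straight_thru(prob_x0 * h_hard, h_hard)
```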
To trace out a brush stroke \(\mathbf{x}_{0},\dots,\mathbf{x}_{k}\) starting at \(\mathbf{x}_{0}\) we iteratively read the direction field \(P_{\mathbf{x}_{i}}\) at the current position \(\mathbf{x}_{i}\) to obtain the next direction \(v_{i+1}\). To do this we transform \(\mathbf{x}_{i}\) into a one-hot representation and take the overlap with the direction field \(P\) along the spatial dimension. It is important to compute \(\mathbf{x}_{i}\) as \(\mathbf{x}_{i}=\mathbf{x}_{0}+v_{1}+\dots+v_{i}\), and not from the one-hot representation of \(\mathbf{x}_{i-1}\), in order to let the gradient propagate through all the directions of the brush stroke.
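A sketch of the tracing loop; `soft_onehot` is a hypothetical differentiable (soft) one-hot of a continuous position, and the unnormalized step is an assumption.

```python
import torch

def trace_stroke(x0, v0, P, steps):
    # P: (n, n, 2, 2) direction field; x0, v0: 2-vectors
    pos, v, path = x0, v0, [x0]
    for _ in range(steps):
        w = soft_onehot(pos, P.shape[:2])           # hypothetical helper, (n, n)
        P_here = torch.einsum('nm,nmij->ij', w, P)  # read the field at pos
        v = P_here @ v                              # project the step onto the stroke direction
        pos = pos + v                               # x_i = x_0 + v_1 + ... + v_i
        path.append(pos)
    return torch.stack(path)
```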
#### Iterative artist
In order to facilitate training we model the artist as a parameterized map \(\tilde{A}_{\theta}(c,s)\) that takes two inputs: the subject and a canvas with the artist's own unfinished painting (fig. 6). \(M\circ\tilde{A}_{\theta}\) is iteratively applied, beginning from a blank canvas, in order to create the painting. The artist also chooses the color of the blank background. The fact that the background color is not fixed helps the artist learn to take the background into account when determining the next action. The trained iterative artist can be applied for any number of iterations and at different length scales, resulting in variations on the style (fig. 7).
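In pseudocode (a sketch under the assumption that `medium` composites the rendered actions onto the given canvas):

```python
def paint(artist, medium, subject, iterations, blank_canvas):
    canvas = blank_canvas  # background color is itself chosen by the artist
    for _ in range(iterations):
        canvas = medium(artist(canvas, subject), canvas)
    return canvas
```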
#### Directional loss
In our medium+perception-driven procedure we optionally apply a direction loss between the subject image and the reconstruction, which we define in this section.
Given a scalar-valued function \(f\) on the plane, let \(\nabla f\) be its gradient and let \(\Gamma f=(-\partial_{y}f,\partial_{x}f)\) be the rotation of \(\nabla f\) by 90 degrees. We view \(\Gamma f\) as a row vector. Given a function \(f\) with (color) channels \(f_{c}\), define a matrix-valued function \(\rho\) as the sum of outer products:
\[\rho(\mathbf{x})=\sum_{\text{channel}\,c}\Gamma f_{c}(\mathbf{x})^{T}\Gamma f_ {c}(\mathbf{x}).\]
Let \(S\) be a smoothing kernel. We then define the _direction field_ of the image described by \(f\) as the 2x2-matrix-valued function \(\tilde{\rho}=S*\rho\).
We use 2x2 matrices to represent the direction field in order to gain sign-symmetry of the directions. This is important in the case of ripple-like \(f\) (for example \(f(x,y)=\cos(Ax+By)\)) as nearby directions would otherwise cancel each other out.
We then define the direction loss between two functions using a scale-invariant loss:
\[\mathrm{Dirdist}(f_{1},f_{2})=\sqrt{1-\frac{\tilde{\rho}(f_{1})\cdot\tilde{ \rho}(f_{2})}{\|\tilde{\rho}(f_{1})\|\|\tilde{\rho}(f_{2})\|}},\]
where the dot product and norms are entrywise (also known as Hilbert-Schmidt or Frobenius) and averaged over \(\mathbf{x}\). In practice \(f\) is given as a finite \(n\times n\times 3\) tensor and we compute \(\nabla f\) and \(\Gamma f\) using finite differences.

Figure 6: We model the artist as a map which is applied iteratively. A single iteration is shown here.
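A numpy sketch of the \(\mathrm{Dirdist}\) computation just defined; the Gaussian smoothing kernel and reading the entrywise average as one global Frobenius inner product are our assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def direction_tensor(f, sigma=2.0):                    # f: (n, n, channels)
    dfdy, dfdx = np.gradient(f, axis=0), np.gradient(f, axis=1)
    gamma = np.stack([-dfdy, dfdx], axis=-1)           # Gamma f_c as row vectors
    rho = np.einsum('xyci,xycj->xyij', gamma, gamma)   # sum_c of outer products
    return gaussian_filter(rho, sigma=(sigma, sigma, 0, 0))  # rho_tilde = S * rho

def dirdist(f1, f2):
    r1, r2 = direction_tensor(f1), direction_tensor(f2)
    cos_sim = np.sum(r1 * r2) / (np.linalg.norm(r1) * np.linalg.norm(r2))
    return np.sqrt(max(0.0, 1.0 - cos_sim))
```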
**Further information.** Code will be available at [https://github.com/nilin/art_ab_initio](https://github.com/nilin/art_ab_initio).
## 5 Discussion
We have shown two ways in which an AI model can create art styles without using human art in the training data. It is thus possible to construct a generative model that has never seen a human artwork _or_ any outputs from other generative AI models. This is important because artworks generated with other AI models can and do leak human-made art styles from their training data [3].
In this paper we have given a proof of concept to show that painting styles can be created which are not present in the training data. We do this by testing a simplified proxy for "aesthetics" which uses the inductive bias from the artistic medium with a reconstruction loss to allow for abstraction. We further proposed a way to direct the evolution of painting styles through inspiration from natural images which allows the generated painting styles to evolve without using artistic inputs as training data. We do not claim to capture or compete with the perception and sense of aesthetics of a human artist, which are highly complex (see [10] for discussions which are beyond the scope of this paper). But we believe that our contribution is significant in that it objectively tests the widely held assumption that an AI is limited to interpolating human-made creations.
We now speculate about possible implications of this work. As a proof of concept we have excluded artistic works from the training data in this paper, but in practice we envision that human creativity can be re-introduced. For example, an artist could use a generative model for their own styles in place of the baseline technique. In this way our method could be used as a creative tool to allow an artist to experiment with variations of their own styles.
For users of generative art models our methods provide a way to ethically use such AI tools, ensuring in an objective way that they are not infringing on artists' copyright. We hope that such a development could also be beneficial to artists and their ability to publicly share their work. Specifically we hope that the concern of having one's personal style reproduced by AI lessens if generative AI models become less reliant on artistic training data.
## 6 Acknowledgements
N.A. thanks Alina Scotti for literature references about aesthetics. N.A. was supported by the Simons Foundation under grant no. 825053.
Figure 7: Two paintings made by combining our two methods, using the artist from the medium+perception-driven procedure as the baseline technique for the inspiration procedure. The painting on the right was made with several iterations of the iterative artist working on smaller patches of the image, resulting in the difference in style. |
2307.13572 | Circle packings and total geodesic curvatures in hyperbolic background
geometry | In this paper, we study a new type of circle packings in hyperbolic
background geometry. Horocycles and hypercycles are also considered in this
packing. We give the existence and rigidity of this type of circle packing with
conical singularities in terms of the total geodesic curvature. Moreover, we
introduce the combinatorial curvature flow on surfaces to find the desired
circle packing with the prescribed total geodesic curvature. | Te Ba, Guangming Hu, Yu Sun | 2023-07-25T15:25:58Z | http://arxiv.org/abs/2307.13572v2 | # Circle packings and total geodesic curvatures in hyperbolic background geometry
###### Abstract
In this paper, we study a new type of circle packings in hyperbolic background geometry. Horocycles and hypercycles are also considered in this packing. We give the existence and rigidity of this type of circle packing with conical singularities in terms of the total geodesic curvature. Moreover, we introduce the combinatorial curvature flow on surfaces to find the desired circle packing with the prescribed total geodesic curvature.
**Mathematics Subject Classification (2020)**: 52C26, 53A70, 53E20, 57Q15.
## 1 Introduction
### Background
Let \((S,T)\) be a connected closed surface \(S\) with triangulation \(T\). Let \(V\), \(E\), \(F\) be the sets of vertices, edges and triangles of \(T\), respectively. A circle packing metric on \((S,T)\) is a map \(r:V\to\mathbb{R}_{+}\) such that the associated polyhedral metric on \((S,T)\) is given by
\[l(uv)=r(u)+r(v),\]
where \(u\), \(v\) are the endpoints of the edge \(uv\). A circle packing metric is called Euclidean or hyperbolic if we calculate the geometry of triangles by Euclidean or hyperbolic trigonometric identities. A circle packing metric may have conical singularities at the centers of the circles. The classical discrete Gaussian curvature, defined as the angle deficit at a vertex, is introduced to describe the singularity at the center of each circle. The notion of circle patterns was proposed in the work of Thurston [33] as a significant tool to study the hyperbolic structure on 3-manifolds. Over the past decades, circle patterns have been bridging discrete conformal geometry [17, 3, 18, 19, 20], combinatorics [25], minimal surfaces [2, 23], combinatorial curvature flows [8, 26, 13] and others. Please refer to [32, 5] for more background. Below, we present Thurston's remarkable theorem regarding the existence and uniqueness of the hyperbolic circle packing metric.
**Theorem 1.1** (Thurston-Andreev).: _Let \((S,T)\) be a closed surface with triangulation \(T\). Let \(V\), \(F\) be the sets of vertices and faces of \(T\). Let \(F_{I}\) be the set of faces having at least one vertex in \(I\) for the subset \(I\subset V\). Then there exists a hyperbolic circle packing metric on \((S,T)\) with discrete Gaussian curvatures \(x_{i}\) on \(i\in V\) if and only if \((x_{1},\cdots,x_{|V|})\in\Omega\), where_
\[\Omega=\left\{(x_{1},\cdots,x_{|V|})\in\mathbb{R}^{|V|}\,|\,x_{i}<2\pi,\sum_{ i\in I}x_{i}>2\pi|I|-\pi|F_{I}|\text{ for each }I\subset V\right\}.\]
_Moreover, the hyperbolic circle packing metric is unique if it exists._
There are many results related to Theorem 1.1 and its proof. See, for example, the works of Andreev [1], Colin de Verdiere [9], Marden-Rodin [28], Bowers-Stephenson [6], Chow-Luo [8], Bobenko-Springborn [4], Guo-Luo [21], Xu [35], Connelly-Gortler [10] and Ge-Hua-Zhou [14].
### Set up
For simplicity of notation, we use one index to denote a vertex (\(i\in V\)), two indices to denote an edge (\(ij\) is the arc on \(S\) joining \(i\), \(j\)) and three indices to denote a face (\(ijk\) is the region on \(S\) bounded by \(ij\), \(jk\), \(ik\)). For each \(i\in V\), denote by \(U(i)\) a small open regular neighborhood of \(i\). We define
\[N(I):=\cup_{i\in I}U(i)\]
for \(I\subset V\). Suppose that \(I_{1},I_{2}\subset V\) satisfy \(I_{1}\cap I_{2}=\emptyset\). Set
\[S_{I_{1},I_{2}}:=S\setminus(N(I_{1})\cup I_{2}).\]
The intersection
\[T_{I_{1},I_{2}}:=T\cap S_{I_{1},I_{2}}\]
is called the **pseudo ideal triangulation** of \(S_{I_{1},I_{2}}\). The intersections
\[E_{I_{1},I_{2}}:=\{ij\cap S_{I_{1},I_{2}}|ij\in E\},\quad F_{I_{1},I_{2}}:=\{ ijk\cap S_{I_{1},I_{2}}|ijk\in F\}\]
are called the edge and face set of \(T_{I_{1},I_{2}}\), respectively. The intersection of a face of \(S_{I_{1},I_{2}}\) and \(\partial S_{I_{1},I_{2}}\) is called a \(B\)-arc. It is easy to see that \(S_{I_{1},I_{2}}=S\) and \(T_{I_{1},I_{2}}=T\) if \(I_{1}=\emptyset\) and \(I_{2}=\emptyset\). The pseudo ideal triangulation is a generalization of the ideal triangulation of surfaces. Please refer to [30, 27] for the definition of ideal triangulation.
Given \(I_{1},I_{2}\subset V\) satisfying \(I_{1}\cap I_{2}=\emptyset\), we use \((S_{I_{1},I_{2}},T_{I_{1},I_{2}})\) to denote the surface \(S_{I_{1},I_{2}}\) with pseudo ideal triangulation \(T_{I_{1},I_{2}}\). Set \(I_{3}=V\setminus(I_{1}\cup I_{2})\). A **generalized hyperbolic circle packing metric** on \((S_{I_{1},I_{2}},T_{I_{1},I_{2}})\) is a map \(k:V\to\mathbb{R}_{+}\) satisfying
* \(k(i)<1\) if \(i\in I_{1}\),
* \(k(i)=1\) if \(i\in I_{2}\),
* \(k(i)>1\) if \(i\in I_{3}\).
The geometry of \((S_{I_{1},I_{2}},T_{I_{1},I_{2}})\) is determined as follows (a short illustrative sketch of rule \((i)\) is given after the list):
1. The polyhedral metric on \(E_{I_{1},I_{2}}\) is defined by \(d:E_{I_{1},I_{2}}\rightarrow\mathbb{R}_{+}\), where \[d(ij)=\begin{cases}\operatorname{arctanh}k(i)+\operatorname{arctanh}k(j),&i,j \in I_{1},\\ \operatorname{arccoth}k(i)+\operatorname{arccoth}k(j),&i,j\in I_{3},\\ \operatorname{arctanh}k(i)+\operatorname{arccoth}k(j),&i\in I_{1},j\in I_{3}, \\ +\infty,&i\text{ or }j\in I_{2}.\end{cases}\]
2. Let \(\alpha\) be an inner angle of faces of \(F_{I_{1},I_{2}}\). If \(\alpha\) is at the endpoint of a \(B\)-arc, then \(\alpha\) is defined to be \(\pi/2\).
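Every case of rule \((i)\) is the sum of two per-vertex half-lengths. As a purely illustrative sketch (not from the paper), using \(\operatorname{arccoth}x=\operatorname{arctanh}(1/x)\) for \(x>1\):

```python
import math

def half_length(k):
    if k < 1:                   # hypercycle vertex (I_1)
        return math.atanh(k)    # arctanh k
    if k == 1:                  # horocycle vertex (I_2)
        return math.inf
    return math.atanh(1.0 / k)  # circle vertex (I_3): arccoth k

def edge_length(k_i, k_j):
    return half_length(k_i) + half_length(k_j)
```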
Given a generalized hyperbolic circle packing metric \(k\), it is obvious that the side lengths of the edges of \(E_{I_{1},I_{2}}\) are determined by \((i)\). We assert that the side lengths of \(B\)-arcs and the inner angles of each face of \(T_{I_{1},I_{2}}\) are determined by \((i)\), \((ii)\); please refer to Lemma 2.1-Lemma 2.4 for proofs. See Figure 1 for the three types of faces of \(T_{I_{1},I_{2}}\) with \(B\)-arcs. Then there exists a polyhedral metric on \((S_{I_{1},I_{2}},T_{I_{1},I_{2}})\) determined by \(k\) and \((i)\), \((ii)\). See Figure 2 for an example of a generalized hyperbolic circle packing metric.

Figure 1: Faces of \(F_{I_{1},I_{2}}\) with \(B\)-arc
Next, we provide a brief introduction to the geometric meaning of the generalized hyperbolic circle packing metric on \((S_{I_{1},I_{2}},T_{I_{1},I_{2}})\). The relationship between the radii and geodesic curvatures of circles, horocycles and hypercycles is shown in Table 1. Here \(\theta\) is a generalized angle, which can be an angle or a geodesic segment.
1. If \(i,j\in I_{1}\), the geometric meaning of \(d(ij)\) is the distance between the axis of two hypercycles with curvature \(k(i)\), \(k(j)\).
2. If \(i,j\in I_{3}\), the geometric meaning of \(d(ij)\) is the distance between the centers of two circles with curvature \(k(i)\), \(k(j)\).
3. If \(i\in I_{1}\) and \(j\in I_{3}\), the geometric meaning of \(d(ij)\) is the distance between the axis of the hypercycle with curvature \(k(i)\) and the center of the circle with curvature \(k(j)\).
4. If \(i\in I_{2}\), the geometric meaning of \(d(ij)\) is the distance from the center of the circle with curvature \(k(i)=1\) (a horocycle) to the center or axis of a circle, or a horocycle, or a hypercycle with curvature \(k(j)\), which is \(+\infty\).

\begin{table}
\begin{tabular}{l|l|l|l} & circle & horocycle & hypercycle \\ \hline radius & \(0<r<+\infty\) & \(r=+\infty\) & \(0<r<+\infty\) \\ geodesic curvature & \(k=\coth r\) & \(k=1\) & \(k=\tanh r\) \\ arc length & \(l=\theta\sinh r\) & — & \(l=\theta\cosh r\) \\ \hline \end{tabular}
\end{table}
Table 1: The relationship of radii, geodesic curvatures and arc lengths

Figure 2: A generalized hyperbolic circle packing metric on some \((S_{I_{1},I_{2}},T_{I_{1},I_{2}})\) where \(|I_{1}|=2\), \(|I_{2}|=1\) and \(|I_{3}|=1\).
The above analysis indicates that each \(ijk\in F_{I_{1},I_{2}}\) can be embedded into a configuration of three mutually tangent circles (with possibly horocycles or hypercycles), as shown in Figure 3. Then for any generalized hyperbolic circle packing metric \(k:V\rightarrow\mathbb{R}_{+}\) on \((S_{I_{1},I_{2}},T_{I_{1},I_{2}})\), there exists a hyperbolic circle packing (with possibly horocycles or hypercycles) on \(S_{I_{1},I_{2}}\) induced by \(k\). The total geodesic curvature of \(k\) at \(v\in V\) is defined as the total geodesic curvature of the circle, or horocycle, or hypercycle at \(v\in V\) in the hyperbolic circle packing induced by \(k\). It can be calculated by
\[L(v)=l(v)k(v),\]
where \(l(v)\) is the length of the circle (or horocycle, or hypercycle) at \(v\). The total geodesic curvature was first introduced in the work of Nie [29] as an important tool to study the existence and rigidity of circle patterns in spherical background geometry.
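For concreteness, combining \(L(v)=l(v)k(v)\) with Table 1 (where \(\theta\) denotes the generalized angle) gives the short worked computation

\[L_{\mathrm{circle}}=\theta\sinh r\cdot\coth r=\theta\cosh r,\qquad L_{\mathrm{hypercycle}}=\theta\cosh r\cdot\tanh r=\theta\sinh r,\qquad L_{\mathrm{horocycle}}=l.\]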
### Main results
Motivated by [29], the first result of the paper is to provide the following existence and rigidity result for generalized hyperbolic circle packing metrics where the discrete Gaussian curvature at each vertex is replaced by the total geodesic curvature of each vertex.
**Theorem 1.2**.: _Let \((S,T)\) be a connected closed surface with vertex set \(V\) and face set \(F\). Let \(F_{I}\) be the set of faces having at least one vertex in \(I\) for a subset \(I\subset V\). Then there exist \(I_{1},I_{2}\subset V\) such that a generalized hyperbolic circle packing metric on \((S_{I_{1},I_{2}},T_{I_{1},I_{2}})\) has the total geodesic curvatures \(L_{1},\cdots,L_{|V|}\) at the vertices if and only if \((L_{1},\cdots,L_{|V|})\in\Omega\), where_
\[\Omega=\left\{(L_{1},\cdots,L_{|V|})\in\mathbb{R}_{+}^{|V|}\,|\sum_{i=1}^{|V|} L_{i}<\pi|F_{I}|\;\;\text{for each}\;I\subset V\right\}. \tag{1.1}\]
_Moreover, the generalized hyperbolic circle packing metric is unique if it exists._
Figure 3: Three-circle configurations

**Remark 1.3**.: The Euclidean version of Theorem 1.2 reduces to Thurston-Andreev's theorem in Euclidean background geometry, because the total geodesic curvature at \(v\in V\) can be calculated by
\[L(v)=l(v)k(v)=\Theta(v)r(v)\frac{1}{r(v)}=\Theta(v),\]
where \(\Theta(v)\) is the angle deficit at \(v\in V\).
One may ask the following question: can we determine the topology of \(S_{I_{1},I_{2}}\) from the total geodesic curvature at each vertex? For this purpose, we follow the work of Chow-Luo [8] on combinatorial Ricci flows. There are many results on combinatorial Ricci flows; see, for example, [24, 15, 12].
Let \(k_{i}\) be the geodesic curvature of the disk centered at \(i\). We consider the flow
\[\frac{dk_{i}}{dt}=-k_{i}(L_{i}-\hat{L}_{i}), \tag{1.2}\]
for \(i=1,\cdots,|V|\) with an initial geodesic curvature vector \(k(0)\in\mathbb{R}_{+}^{|V|}\). Below is the second result of the paper.
**Theorem 1.4**.: _The solution \(k(t)\) of the flow (1.2) exists for all time. The following two statements are equivalent:_
1. \(k(t)\) _converges as_ \(t\to\infty\)_._
2. _The prescribed total geodesic curvature_ \(\{\hat{L}_{i}\}_{i\in V}\) _satisfies_ \[\hat{L}_{i}>0,\quad\sum_{i\in I}\hat{L}_{i}<\pi|F_{I}|\] _for each subset_ \(I\subset V\)_._
_If one of the above statements holds, then the flow (1.2) converges exponentially fast to a unique generalized hyperbolic circle packing metric with the total geodesic curvature \(\hat{L}_{i}\) at \(i\in V\)._
**Remark 1.5**.: The topology of \(S_{I_{1},I_{2}}\) can be deduced from the limit \(\{\hat{k}_{i}\}_{i\in V}\) of the solution of (1.2). In addition, we can also calculate the discrete Gaussian curvature at each conical singularity and the length of each geodesic boundary component.
The paper is organized as follows: In Section 2, we study the structure of three mutually tangent circles (with possibly horocycles and hypercycles). In Section 3, applying the variational principle, we derive Theorem 1.2. In Section 4, we introduce some properties of the flow (1.2) and prove Theorem 1.4.
## 2 Three-circle configurations
### Admissible space of generalized hyperbolic circle packing metrics
This subsection is devoted to characterizing the space of generalized hyperbolic circle packing metrics on \((S_{I_{1},I_{2}},T_{I_{1},I_{2}})\). The hyperbolic cosine law indicates that a hyperbolic triangle is determined by its three side lengths. The following three lemmas show that some special quadrilaterals, pentagons and hexagons are determined by three side lengths as well. Please refer to [7, Chapter 2] for some hyperbolic trigonometric identities.
**Lemma 2.1**.: _For any \(r_{1},r_{2},r_{3}>0\), there exists a unique hyperbolic quadrilateral such that the following statements hold:_
1. _There exist two adjacent right angles._
2. _Except for the one with right angles at both ends, the other three sides have lengths_ \(l_{1}\)_,_ \(l_{2}\)_,_ \(l_{3}\)_, where_ \(l_{i}=r_{j}+r_{k}\) _for_ \(\{i,j,k\}=\{1,2,3\}\)_._
Proof.: Without loss of generality, we assume that the side facing the side with right angles at both ends has length \(l_{2}\). Suppose \(l_{1}\geq l_{3}\). It suffices to prove that there exists \(x\in(0,l_{1})\) such that there exist a hyperbolic quadrilateral with three right angles with side lengths \(x,y,l_{3}\) and a hyperbolic right-angled triangle with side lengths \(l_{1}-x,y,l_{2}\), as shown in Figure 4(a). This is equivalent to showing that there exists \(x\in(0,l_{1})\) such that
\[\sinh l_{3}=\sinh x\cosh y\text{ and }\cosh l_{2}=\cosh(l_{1}-x)\cosh y,\]
which is equivalent to
\[\cosh y=\frac{\sinh l_{3}}{\sinh x}=\frac{\cosh l_{2}}{\cosh(l_{1}-x)}>1. \tag{2.1}\]
Set
\[f(x)=\frac{\sinh x}{\cosh(l_{1}-x)}.\]
Note that \(f(x)\) is strictly increasing for \(x\in(0,l_{1})\). It is easy to see that \(f(x)\to 0\) as \(x\to 0\) and \(f(x)\to\sinh l_{1}\) as \(x\to l_{1}\). We need to demonstrate that there exists \(x_{0}\in(0,l_{1})\) such that
\[f(x_{0})=\frac{\sinh l_{3}}{\cosh l_{2}} \tag{2.2}\]
and
\[l_{3}>x_{0},\quad l_{2}>l_{1}-x_{0}. \tag{2.3}\]
Notice that (2.2) can be proved by showing
\[\frac{\sinh l_{3}}{\cosh l_{2}}<\sinh l_{1}.\]
This is a direct result of \(l_{1}\geq l_{3}\). Assume that \(l_{3}\leq x_{0}\). Then \(l_{2}\leq l_{1}-x_{0}\), which follows from (2.1). We then obtain \(l_{2}+l_{3}\leq l_{1}\), which contradicts \(r_{1}>0\). Now suppose \(l_{1}<l_{3}\). We can prove the lemma by finding \(x\in(0,l_{3})\) that satisfies similar properties as above; we omit the proof here.
**Lemma 2.2**.: _For any \(r_{1},r_{2},r_{3}>0\), there exists a unique hyperbolic pentagon with four right angles such that the following statements hold:_
1. _The two sides adjacent to the non-right angle have lengths_ \(r_{2}+r_{3}\)_,_ \(r_{1}+r_{3}\)_._
2. _The middle one of the three sides with right angles at both ends has length_ \(r_{1}+r_{2}\)_._
Proof.: It suffices to prove that there exists \(x\in(0,l_{3})\) such that there exist two hyperbolic quadrilaterals with three right angles with side lengths \(x,y,l_{1}\) and \(l_{3}-x,y,l_{2}\), as shown in Figure 4(b). This is equivalent to showing that there exists \(x\in(0,l_{3})\) such that
\[\sinh l_{1}=\sinh x\cosh y\text{ and }\sinh l_{2}=\sinh(l_{3}-x)\cosh y,\]
which is equivalent to
\[\cosh y=\frac{\sinh l_{1}}{\sinh x}=\frac{\sinh l_{2}}{\sinh(l_{3}-x)}>1. \tag{2.4}\]
Set
\[g(x)=\frac{\sinh x}{\sinh(l_{3}-x)}.\]
It is easy to see \(g(x)\) is strictly increasing for \(x\in(0,l_{3})\). Furthermore, we know that \(g(x)\to 0\) as \(x\to 0\) and \(g(x)\to+\infty\) as \(x\to l_{3}\). Then for any \(l_{1},l_{2}>0\), there exists \(x_{0}\in(0,l_{3})\) satisfying
\[\frac{\sinh x_{0}}{\sinh(l_{3}-x_{0})}=\frac{\sinh l_{1}}{\sinh l_{2}}.\]
Observe that \(x_{0}<l_{1}\) and \(l_{3}-x_{0}<l_{2}\). Assume this is not true; then (2.4) indicates that \(x_{0}\geq l_{1}\) and \(l_{3}-x_{0}\geq l_{2}\). It follows that \(l_{3}\geq l_{1}+l_{2}\), which leads to \(r_{3}\leq 0\). Then we find \(x_{0}\in(0,l_{3})\) satisfying (2.4), which completes the proof.
The following is a corollary of the classical result on the existence of hyperbolic right-angled hexagons.
**Lemma 2.3**.: _For any \(r_{1},r_{2},r_{3}>0\), there exists a unique right-angled hyperbolic hexagon whose three pairwise non-adjacent sides have lengths \(r_{1}+r_{2}\), \(r_{2}+r_{3}\), \(r_{3}+r_{1}\)._
If some \(r_{i}=+\infty\) in the above lemmas, the polygon has some vertices at infinity. The existence and uniqueness still hold for polygons satisfying the conditions of the above lemmas.
**Lemma 2.4**.: _For any \(k_{1},k_{2},k_{3}>0\), there exist unique (up to isometry) mutually externally tangent hyperbolic circles (with possibly horocycles and hypercycles) and having \(k_{1}\), \(k_{2}\), \(k_{3}\) as their geodesic curvatures._
Proof.: Let us turn the problem into the proof of the existence of polygons that satisfy the following conditions.
Figure 4: Construction of hyperbolic quadrilaterals and pentagons
(a) Suppose \(k_{i}>1\) for \(i=1,2,3\). Set \(r_{i}=\operatorname{arccoth}k_{i}\). There exists a unique triangle with side lengths \(d_{1}\), \(d_{2}\), \(d_{3}\), where \(d_{i}=r_{j}+r_{k}\). Then we draw a circle of radius \(r_{k}\) centered at the vertex of the triangle if the side facing the vertex has length \(r_{i}+r_{j}\). These three circles are mutually tangent to each other on the three sides of the triangle, and their curvatures are exactly \(k_{1}\), \(k_{2}\), \(k_{3}\).

(b) Suppose \(k_{1},k_{2}>1\) and \(k_{3}<1\). Set \(r_{i}=\operatorname{arccoth}k_{i}\) for \(i=1,2\) and \(r_{3}=\operatorname{arctanh}k_{3}\). It suffices to prove the existence of the quadrilaterals described in Lemma 2.1, because we can construct two circles with radii \(r_{1}\), \(r_{2}\) centered at the two non-right-angle vertices of the quadrilateral and construct a hypercycle of radius \(r_{3}\), taking the side of the quadrilateral with right angles at both ends as the axis, as shown in Figure 5(a). These three curves are mutually tangent to each other with curvatures \(k_{1}\), \(k_{2}\), \(k_{3}\).

(c) Suppose \(k_{1},k_{3}<1\) and \(k_{2}>1\). Set \(r_{i}=\operatorname{arctanh}k_{i}\) for \(i=1,3\) and \(r_{2}=\operatorname{arccoth}k_{2}\). A similar analysis as in \((a)\) and \((b)\) shows that this case is equivalent to Lemma 2.2. See Figure 5(b) for an example.

(d) Suppose \(k_{i}<1\) for \(i=1,2,3\). Set \(r_{i}=\operatorname{arctanh}k_{i}\) for \(i=1,2,3\). This situation corresponds to Lemma 2.3. See Figure 5(c) for an example.

(e) Suppose \(k_{i}=1\) for some \(i\in\{1,2,3\}\). Then \(r_{i}\) tends to \(+\infty\), and it suffices to prove the existence of some ideal polygons. This can be seen as a corollary of the above results.
The uniqueness is derived from the fact that isometries of the hyperbolic plane map circles (resp. horocycles, hypercycles) to circles (resp. horocycles, hypercycles). Thus we complete the proof.
A direct result of Lemma 2.4 is the following lemma, which implies that the space of generalized hyperbolic circle packing metrics on \((S_{I_{1},I_{2}},T_{I_{1},I_{2}})\) can be identified with \(\mathbb{R}_{+}^{|V|}\).
**Lemma 2.5**.: _Let \((S,T)\) be a closed surface with triangulation \(T\). Let \(V\) be the vertex set of \(T\). For any \(k:V\to\mathbb{R}_{+}\), there exist a unique choice of \(I_{1},I_{2}\subset V\) and a unique polyhedral metric induced by \((i)\), \((ii)\) on \((S_{I_{1},I_{2}},T_{I_{1},I_{2}})\)._
Figure 5: Hyperbolic polygons with right angles
### Construction of convex functionals
The main target of this subsection is to define convex functionals on the admissible space of generalized hyperbolic circle packing metrics on \((S_{I_{1},I_{2}},T_{I_{1},I_{2}})\). They can be constructed by taking the sum of the convex functionals on the admissible spaces of generalized hyperbolic circle packing metrics on the individual faces of \((S_{I_{1},I_{2}},T_{I_{1},I_{2}})\).
Let \(C_{1}\), \(C_{2}\), \(C_{3}\) be three mutually tangent circles (with possibly horocycles or hypercycles). Let \(l_{i}\) be the length of the arc of \(C_{i}\) between its two points of tangency. Lemma 2.4 indicates that \(l_{i}(k_{1},k_{2},k_{3})\) is well-defined on \(\mathbb{R}_{+}^{3}\). We begin by introducing the following two preliminary lemmas.
**Lemma 2.6**.: _Let \(H\) be a horocycle and \(G\) be a geodesic intersecting \(H\) at two points. Suppose that the intersection angle of \(H\) and \(G\) is \(\alpha\), which is not equal to \(\frac{\pi}{2}\). Then the length of the horocycle segment between two intersection points is \(2\tan\alpha\)._
Proof.: Without loss of generality, we assume the horocycle is \(y=1\) on the upper half-plane model. Then the horocycle segment can be written as
\[\begin{cases}x(t)=\tan\alpha(2t-1),\\ y(t)=1.\end{cases}\]
for \(0\leq t\leq 1\). Denote by \(l\) the length of the horocycle segment. By the formula for the hyperbolic length of curves, we obtain
\[l=\int_{0}^{1}\frac{\sqrt{x^{\prime}(t)^{2}+y^{\prime}(t)^{2}}}{y(t)}dt=2\tan\alpha.\]
**Lemma 2.7**.: _Let \(\gamma_{1}\), \(\gamma_{2}\) be two circle arcs intersecting each other at a right angle. Let \(k_{1}\), \(k_{2}\) and \(l_{1}\), \(l_{2}\) be their geodesic curvatures and lengths. If \(k_{2}>1\), then the differential form_
\[\eta=l_{1}dk_{1}+l_{2}dk_{2}\]
_is closed._
Proof.: We divide the proof into three cases.
1. Suppose \(k_{1}>1\), as depicted in Figure 6. Let \(\theta_{1}\), \(\theta_{2}\) be the angles of \(\gamma_{1}\), \(\gamma_{2}\). Set \(r_{i}=\operatorname{arccoth}k_{i}\). We can find a right triangle with acute angles \(\theta_{1}/2\), \(\theta_{2}/2\) by connecting the two centers of \(\gamma_{1}\), \(\gamma_{2}\) and one of the intersection points of \(\gamma_{1}\), \(\gamma_{2}\). By hyperbolic trigonometric identities, we obtain \[\cot\frac{\theta_{1}}{2}=\coth r_{2}\sinh r_{1}=\frac{k_{2}}{\sqrt{k_{1}^{2}-1}}.\] It follows that \[\theta_{1}=2\operatorname{arccot}\frac{k_{2}}{\sqrt{k_{1}^{2}-1}}.\] Then \[\frac{\partial l_{1}}{\partial k_{2}}=\frac{\partial\theta_{1}}{\partial k_{2}}\sinh r_{1}=\frac{2}{1-k_{1}^{2}-k_{2}^{2}}=\frac{\partial l_{2}}{\partial k_{1}}.\]
2. Suppose \(k_{1}<1\), as depicted in Figure 6. Let \(\theta_{1}\), \(\theta_{2}\) be the angles of \(\gamma_{1}\), \(\gamma_{2}\). Set \(r_{1}=\operatorname{arctanh}k_{1}\) and \(r_{2}=\operatorname{arccoth}k_{2}\). Then we can find a quadrilateral with three right angles satisfying the edge angle relation. By hyperbolic trigonometric identities, we obtain \[\coth\frac{\theta_{1}}{2}=\coth r_{2}\cosh r_{1}=\frac{k_{2}}{\sqrt{1-k_{1}^{2}}},\quad\cot\frac{\theta_{2}}{2}=\tanh r_{1}\sinh r_{2}=\frac{k_{1}}{\sqrt{k_{2}^{2}-1}}.\] It follows that \[\theta_{1}=2\operatorname{arccoth}\frac{k_{2}}{\sqrt{1-k_{1}^{2}}},\quad\theta_{2}=2\operatorname{arccot}\frac{k_{1}}{\sqrt{k_{2}^{2}-1}}.\] Then we derive \[\frac{\partial l_{1}}{\partial k_{2}}=\frac{\partial\theta_{1}}{\partial k_{2}}\cosh r_{1}=\frac{2}{1-k_{1}^{2}-k_{2}^{2}}.\] A similar deduction gives that \[\frac{\partial l_{2}}{\partial k_{1}}=\frac{\partial\theta_{2}}{\partial k_{1}}\sinh r_{2}=\frac{2}{1-k_{1}^{2}-k_{2}^{2}}.\]
3. Suppose \(k_{1}=1\), as depicted in Figure 6. Set \(r_{2}=\operatorname{arccoth}k_{2}\) and let \(\theta_{2}\) be the angle of \(\gamma_{2}\). We can find an ideal right triangle with an acute angle \(\theta_{2}/2\) by connecting the two centers of \(\gamma_{1}\), \(\gamma_{2}\) and one of the intersection points of \(\gamma_{1}\), \(\gamma_{2}\). By hyperbolic trigonometric identities, we obtain \[\sin\frac{\theta_{2}}{2}\cosh r_{2}=1. \tag{2.5}\] Let \(C_{1}\), \(C_{2}\) be the centers of \(\gamma_{1}\), \(\gamma_{2}\) and let \(P\), \(Q\) be the intersection points of \(\gamma_{1}\), \(\gamma_{2}\). We connect \(P\), \(Q\) by a geodesic and let \(D\) be the intersection point of \(PQ\) and \(C_{1}C_{2}\). It is easy to see that \(\triangle PDC_{2}\) is a right triangle. By hyperbolic trigonometric identities, we obtain \[\cosh r_{2}=\cot\angle DPC_{2}\cot\frac{\theta_{2}}{2}. \tag{2.6}\] Combining (2.5), (2.6), we obtain \[\tan\angle DPC_{2}=\frac{\cot\frac{\theta_{2}}{2}}{\cosh r_{2}}=\frac{\sqrt{\cosh^{2}r_{2}-1}}{\cosh r_{2}}=\tanh r_{2}=\frac{1}{k_{2}}.\] Lemma 2.6 shows that \[l_{1}=\frac{2}{k_{2}},\] which yields that \[\frac{\partial l_{1}}{\partial k_{2}}=-\frac{2}{k_{2}^{2}}.\] By cases 1 and 2, we obtain \[\lim_{k_{1}\to 1}\frac{\partial l_{2}}{\partial k_{1}}=-\frac{2}{k_{2}^{2}}.\]
By the mean value theorem, it is easy to prove that \(l_{2}(k_{1},k_{2})\) is \(C^{1}\) smooth with respect to \(k_{1}\) when \(k_{1}=1\) and
\[\frac{\partial l_{2}}{\partial k_{1}}=-\frac{2}{k_{2}^{2}}\]
when \(k_{1}=1\).
Combining the analysis above, we obtain
\[\frac{\partial l_{1}}{\partial k_{2}}=\frac{\partial l_{2}}{\partial k_{1}}.\]
Thus we complete the proof.
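As an illustrative numerical check of case 1 (not part of the proof), the closed forms \(l_{1}=\theta_{1}\sinh r_{1}\) with \(\sinh r_{1}=1/\sqrt{k_{1}^{2}-1}\) can be differentiated by finite differences; the script below is our own sketch and assumes \(k_{1},k_{2}>1\).

```python
import math

def arc_length(k_self, k_other):
    # l = theta * sinh(r), with theta = 2*arccot(k_other / sqrt(k_self^2 - 1))
    # and sinh(arccoth(k_self)) = 1 / sqrt(k_self^2 - 1), as in case 1 above.
    s = math.sqrt(k_self**2 - 1)
    return 2 * math.atan(s / k_other) / s

k1, k2, h = 1.7, 2.3, 1e-6
dl1_dk2 = (arc_length(k1, k2 + h) - arc_length(k1, k2 - h)) / (2 * h)
dl2_dk1 = (arc_length(k2, k1 + h) - arc_length(k2, k1 - h)) / (2 * h)
print(dl1_dk2, dl2_dk1, 2 / (1 - k1**2 - k2**2))  # all three values agree
```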
The proof of the following lemma is similar to the work of Colin de Verdiere [9].
**Lemma 2.8**.: _The differential \(1\)-form \(\omega=l_{1}dk_{1}+l_{2}dk_{2}+l_{3}dk_{3}\) is closed._
Proof.: For any \(k_{1}\), \(k_{2}\), \(k_{3}>0\), Lemma 2.4 shows that there exist three mutually tangent circles with curvatures \(k_{1}\), \(k_{2}\), \(k_{3}\). Furthermore, we can draw a unique circle \(C\) passing through three tangency points as shown in Figure 7. It is easy to prove that \(C\) is a hyperbolic circle. Let \(k\) be the curvature of this circle. Then we obtain a continuous map \(f:\mathbb{R}_{+}^{3}\rightarrow\mathbb{R}_{+}\) from the curvature of three mutually tangent circles to the curvature of \(C\). Let \(l\) be the perimeter of \(C\). Then we define
\[\eta=l_{1}dk_{1}+l_{2}dk_{2}+l_{3}dk_{3}+ldk\]
on \(\Lambda=\{(x,f(x))\,|\,x\in\mathbb{R}_{+}^{3}\}\). Note that \(\Lambda\) is diffeomorphic to \(\mathbb{R}_{+}^{3}\). It suffices to show that \(\eta\) is closed on \(\Lambda\). Let \(C_{i}\) be the circle with curvature \(k_{i}\). Set \(l_{i}^{\prime}\) as the length of the arc of \(C\) which lies inside \(C_{i}\), as depicted in Figure 7. Note that
\[\eta=\sum_{i=1}^{3}(l_{i}dk_{i}+l_{i}^{\prime}dk).\]
It suffices to prove that
\[\eta_{i}=l_{i}dk_{i}+l_{i}^{\prime}dk\]
is closed, which is the direct result of Lemma 2.7.
Let \(L_{i}=l_{i}k_{i}\) be the total geodesic curvature of the arc between the two points of tangency of \(C_{i}\). It is easy to see that \(L_{i}\) is smooth with respect to \(k_{1}\), \(k_{2}\), \(k_{3}\). Set \(S_{i}=\ln k_{i}\). Then
\[\omega=l_{1}dk_{1}+l_{2}dk_{2}+l_{3}dk_{3}=L_{1}dS_{1}+L_{2}dS_{2}+L_{3}dS_{3}, \tag{2.7}\]

where \(L_{i}=l_{i}k_{i}\) is the total geodesic curvature of the arc between two points of tangency of \(C_{i}\). Note that \(l_{i}(k_{1},k_{2},k_{3})\) is \(C^{1}\)-smooth. Let us define the \(C^{2}\)-smooth function

\[F(x)=\int_{x_{0}}^{x}L_{1}dS_{1}+L_{2}dS_{2}+L_{3}dS_{3}\]

on \(\mathbb{R}^{3}\).

Figure 6: Two circle arcs intersecting each other at a right angle
**Remark 2.9**.: Let \(k_{i}=\coth r_{i}>1\). Then there exists a triangle formed by connecting the centers of three hyperbolic circles with radii \(r_{i}\), \(r_{j}\), \(r_{k}\) and tangent to each other. Let \(\theta_{i}\) be the inner angle opposite to side with length \(r_{j}+r_{k}\). Then
\[l_{i}dk_{i}=\frac{-l_{i}dr_{i}}{\sinh^{2}r_{i}}=\frac{-\theta_{i}dr_{i}}{\sinh r _{i}}=-\theta_{i}du_{i}, \tag{2.8}\]
where \(u_{i}=\ln\tanh\frac{r_{i}}{2}\). Set \(\omega=\sum_{i=1}^{3}\theta_{i}du_{i}\) and define \(G(x)=\int^{x}\omega\). Colin de Verdiere [9] first introduced \(G(x)\) to prove the uniqueness part of Theorem 1.1 by a variational principle. Then (2.7) and (2.8) indicate that \(F(x)\) can be viewed as an extension of \(G(x)\) with a different geometric meaning.
The following lemma is directly obtained by the Gauss-Bonnet theorem.
**Lemma 2.10**.: _Let \(C_{1}\), \(C_{2}\), \(C_{3}\) be three mutually tangent circles. Let \(\Omega\) be the region enclosed by three arcs between tangency points. Then_
\[\mathrm{Area}(\Omega)=\pi-L_{1}-L_{2}-L_{3},\]
_where \(L_{i}\) is the total curvature of the arc between two points of tangency of \(C_{i}\)._
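For the reader's convenience, we sketch how the formula follows from the Gauss-Bonnet theorem; the sign conventions below (interior angle zero at each tangency point, boundary arcs bending away from \(\Omega\)) are our reading of the configuration:

\[-\operatorname{Area}(\Omega)+\oint_{\partial\Omega}\kappa_{g}\,ds+\sum_{j=1}^{3}(\pi-0)=2\pi,\qquad\oint_{\partial\Omega}\kappa_{g}\,ds=-(L_{1}+L_{2}+L_{3}),\]

so that \(\operatorname{Area}(\Omega)=\pi-L_{1}-L_{2}-L_{3}\).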
**Lemma 2.11**.: _The integral function \(F(x)\) is strictly convex._
Proof.: It is easy to see
\[\frac{\partial F(x)}{\partial x_{i}}=L_{i}.\]
The Hessian matrix of \(F(x)\) is
\[\mathrm{M}=\left[\frac{\partial L_{i}}{\partial S_{j}}\right]_{3\times 3}.\]
Figure 7: A three-circle configuration

Recall that \(C_{1}\), \(C_{2}\), \(C_{3}\) are three mutually tangent circles (possibly horocycles or hypercycles) in the hyperbolic plane. A key observation is that the Euclidean radius of \(C_{1}\) decreases as \(k_{1}\) increases with \(k_{2}\), \(k_{3}\) unchanged, if \(C_{1}\), \(C_{2}\), \(C_{3}\) are required to remain tangent to each other all the time. Then \(C_{1}\) will shrink to the tangency point of \(C_{2}\), \(C_{3}\) as \(k_{1}\) increases. A direct corollary of the above observation is that
\[\frac{\partial L_{i}}{\partial S_{j}}<0,\quad\frac{\partial\operatorname{Area} (\Omega)}{\partial S_{i}}<0.\]
Lemma 2.10 shows that
\[\operatorname{Area}(\Omega)=\pi-L_{1}-L_{2}-L_{3}.\]
As a result, we have
\[\frac{\partial L_{i}}{\partial S_{i}}=-\frac{\partial(\operatorname{Area}( \Omega)+L_{j}+L_{k})}{\partial S_{i}}>0.\]
Then
\[\left|\frac{\partial L_{i}}{\partial S_{i}}\right|-\left|\frac{\partial L_{j} }{\partial S_{i}}\right|-\left|\frac{\partial L_{k}}{\partial S_{i}}\right|= \frac{\partial(L_{1}+L_{2}+L_{3})}{\partial S_{i}}=-\frac{\partial\operatorname {Area}(\Omega)}{\partial S_{i}}>0.\]
Therefore, the Hessian matrix of \(F(x)\) is a symmetric strictly diagonally dominant matrix with positive diagonal entries, which yields that M is positive definite.
### Limit behaviors of total geodesic curvatures
To characterize the space of total geodesic curvatures of generalized hyperbolic circle packing metrics on \((S_{I_{1},I_{2}},T_{I_{1},I_{2}})\), we need to study the limit behavior of the total geodesic curvatures as the generalized hyperbolic circle packing metric varies.
**Lemma 2.12**.: _Let \(C_{1}\), \(C_{2}\), \(C_{3}\) be three mutually tangent circles with curvatures \(k_{1}\), \(k_{2}\), \(k_{3}\). Let \(L_{i}\) be the total curvature of the arc between two points of tangency of \(C_{i}\). Let \(a,b,c\in[0,+\infty)\) and \(\{r,s,t\}=\{1,2,3\}\). Then the following statements hold:_
\[\lim_{k_{r}\to 0}L_{r}=0, \tag{2.9a}\]
\[\lim_{(k_{r},k_{s},k_{t})\to(+\infty,a,b)}L_{r}=\pi, \tag{2.9b}\]
\[\lim_{(k_{r},k_{s},k_{t})\to(+\infty,+\infty,c)}L_{r}+L_{s}=\pi, \tag{2.9c}\]
\[\lim_{(k_{r},k_{s},k_{t})\to(+\infty,+\infty,+\infty)}L_{r}+L_{s}+L_{t}=\pi. \tag{2.9d}\]
Proof.: We divide the proof of (2.9a) into three cases.
1. Suppose \(k_{s}\to 0\), \(k_{t}\to 0\). Denote the region enclosed by \(C_{r}\), \(C_{s}\), \(C_{t}\) as \(\Lambda_{r,s,t}\). Then \(\Lambda_{r,s,t}\) approaches an ideal triangle \(T\). By Lemma 2.10, we have \[\operatorname{Area}(T)-\operatorname{Area}(\Lambda_{r,s,t})=L_{r}+L_{s}+L_{t}\to 0.\] Thus \(L_{r},L_{s},L_{t}\to 0\).
2. Suppose \(k_{s}\to 0\), \(k_{t}\to c\) \((c\neq 0)\). Note that \(L_{r}\), \(L_{s}\) are strictly less than the corresponding total geodesic curvatures in case 1; see Figure 8 for an explanation. Thus \(L_{r},L_{s}\to 0\).
3. Suppose \(k_{s}\to c\) and \(k_{t}\to c^{\prime}\) with \(c,c^{\prime}\neq 0\). Then \(\Lambda_{r,s,t}\) is bounded, which yields that \(l_{r}\) is bounded. Thus \(L_{r}=l_{r}k_{r}\to 0\).
Next, we prove (2.9b)-(2.9d). Note that the area of \(\Lambda_{r,s,t}\) approaches zero as one of the \(k_{i}\to+\infty\), where \(i=r,s,t\). By Lemma 2.10, it can be derived that
\[L_{r}+L_{s}+L_{t}\to\pi\]
as one of \(k_{i}\to+\infty\).
1. Suppose \((k_{r},k_{s},k_{t})\to(+\infty,a,b)\). Then \(l_{s},l_{t}\to 0\), which implies \(L_{s},L_{t}\to 0\). Thus \(L_{r}\to\pi\).
2. Suppose \((k_{r},k_{s},k_{t})\to(+\infty,+\infty,c)\). Then \(l_{t}\to 0\), which implies \(L_{t}\to 0\). Thus \(L_{r}+L_{s}\to\pi\).
3. Suppose \((k_{r},k_{s},k_{t})\to(+\infty,+\infty,+\infty)\). Let \(\tau_{r,s,t}\) be the triangle formed by connecting the centers of \(C_{r}\), \(C_{s}\), \(C_{t}\). Let \(\theta_{i}\) be the inner angle of \(\tau_{r,s,t}\) at the center of \(C_{i}\), where \(i=r,s,t\). Since \(r_{r},r_{s},r_{t}\to 0\), we have \[L_{i}=\theta_{i}\cosh r_{i}\to\theta_{i}.\] The area formula for hyperbolic triangles demonstrates \[\operatorname{Area}(\tau_{r,s,t})=\pi-\theta_{r}-\theta_{s}-\theta_{t}\to 0,\] which implies that \[L_{r}+L_{s}+L_{t}\to\pi.\]
## 3 Proof of Theorem 1.2
The main target of this section is to prove Theorem 1.2 by the variational principle. Recall that \(T\) is the triangulation of \(S\) and \(V\) is the vertex set of \(T\). We define the \(C^{2}\)-smooth function \(W:\mathbb{R}^{|V|}\to\mathbb{R}\) by
\[W(x)=\sum_{ijk\in F}F(x_{i},x_{j},x_{k}), \tag{3.1}\]
where \(x_{p}\) represents the value of the generalized circle packing metric at \(p\in V\). Lemma 2.11 shows that \(F(x_{i},x_{j},x_{k})\) is strictly convex, which yields that \(W(x)\) is strictly convex. Note that
\[\frac{\partial W(x)}{\partial x_{i}}=\frac{\partial}{\partial x_{i}}\sum_{pqr\in F}F(x_{p},x_{q},x_{r})=\sum_{ijk\in F}L_{i}^{jk},\]
where \(L_{i}^{jk}\) is the total curvature of the arc of \(C_{i}\) between tangency points of \(C_{i}\), \(C_{j}\) and tangency points of \(C_{i}\), \(C_{k}\).
The following property of convex functions is important in the proof of Theorem 1.2. Please refer to [11, 27] for a proof.
**Lemma 3.1**.: _If \(G:\mathbb{R}^{n}\to\mathbb{R}\) is a \(C^{2}\)-smooth strictly convex function, then its gradient \(\nabla G:\mathbb{R}^{n}\to\mathbb{R}^{n}\) is a smooth embedding._
Combining Lemma 3.1 with the strict convexity of \(W\), we deduce that
\[\nabla W:\ \mathbb{R}^{|V|}\to\mathbb{R}^{|V|},\qquad(K_{1},\cdots,K_{|V|})\mapsto(L_{1},\cdots,L_{|V|})\]
realizes a smooth embedding from the space of generalized circle packing metrics to the total geodesic curvatures at the vertices. Next, we determine the image of the mapping \(\nabla W\).
**Lemma 3.2**.: _Let \(W\) be defined as (3.1). Then \(\nabla W:\mathbb{R}^{|V|}\to\Omega\) is a homeomorphism, where_
\[\Omega=\left\{(L_{1},\cdots,L_{|V|})\in\mathbb{R}^{|V|}_{+}\,\big{|}\,\sum_{i \in I}L_{i}<\pi|F_{I}|\text{ for any subset }I\subset V\right\}.\]
Proof.: It suffices to prove that \(\nabla W(\mathbb{R}^{|V|})=\Omega\). To this end, we need to analyze the boundary of \(\nabla W(\mathbb{R}^{|V|})\) in \(\mathbb{R}^{|V|}_{+}\). Take a sequence \(K^{(m)}\in\mathbb{R}^{|V|}\) so that
\[\lim_{m\to\infty}K^{(m)}=a\in[-\infty,+\infty]^{|V|},\]
where \(a(i)=-\infty\) or \(+\infty\) for some \(i\in V\). We need to prove that \(\nabla W(K^{(m)})\) converges to the boundary of \(\Omega\).
Let \(I\) (resp. \(I^{\prime}\)) be a subset of \(V\) such that \(a(i)=+\infty\) (resp. \(a(i)=-\infty\)) for \(i\in I\) (resp. \(i\in I^{\prime}\)). Let \(F_{I}\) be the set of faces having at least one vertex in \(I\). Suppose \(ijk\in F_{I}\). Let us divide it into three situations.
* There exists exactly one vertex in \(I\), say \(i\). By (2.9b), we have \(L_{i}^{jk}\to\pi\) as \(K^{(m)}\to a\).
* There exist exactly two vertices in \(I\), say \(i,j\). By (2.9c), we have \(L_{i}^{jk}+L_{j}^{ik}\to\pi\) as \(K^{(m)}\to a\).
* All vertices \(i\), \(j\), \(k\in I\). By (2.9d), we have \(L_{i}^{jk}+L_{j}^{ik}+L_{k}^{ij}\to\pi\) as \(K^{(m)}\to a\).
Let us compute the sum of all total curvatures of disks in \(I\). Let \(F_{I_{s}}\) be the set of faces having exactly \(s\) vertices in \(I\). We group the total curvatures of disks in \(I\) according to the faces of \(F_{I}\) in which they lie. Then
\[\sum_{i\in I}L_{i}=\sum_{ijk\in F_{I}}L_{i}^{jk}\to\pi(|F_{I_{1}}|+|F_{I_{2}}| +|F_{I_{3}}|)=\pi|F_{I}|.\]
Suppose \(i\in I^{\prime}\). By (2.9a), we have

\[L_{i}=\sum_{ijk\in F}L_{i}^{jk}\to 0.\]
Lemma 2.10 indicates that
\[L_{i}^{jk}+L_{j}^{ik}+L_{k}^{ij}<\pi\]
for each \(ijk\in F\), which yields that
\[\sum_{i\in I}L_{i}<\pi|F_{I}|.\]
In addition, it is obvious that \(L_{i}>0\). Since the choice of \(I\) is arbitrary, we derive that \(\nabla W(K^{(m)})\in\Omega\) and that \(\nabla W(K^{(m)})\) converges to the boundary of \(\Omega\) as \(m\to+\infty\). Hence, it can be inferred that \(\nabla W(\mathbb{R}^{|V|})=\Omega\). Thus we complete the proof.
Proof of Theorem 1.2.: Set \(L=(L_{1},\cdots,L_{|V|})\in\Omega\). Lemma 3.2 indicates that there exists
\[K=(K_{1},\cdots,K_{|V|})\in\mathbb{R}^{|V|}\]
such that \(\nabla W(K)=L\). Set \(k_{i}=e^{K_{i}}\). Let us define \(k_{L}:V\to\mathbb{R}_{+}\) where \(k_{L}(i)=k_{i}\). It is obvious that \(k_{L}\) is the generalized circle packing metric on \((S_{I_{1},I_{2}},T_{I_{1},I_{2}})\) with the total geodesic curvature \(L_{1},\cdots,L_{|V|}\) on each vertex. This proves the 'if' part. The 'only if' is the direct result of Lemma 3.2. Thus the theorem is proved.
## 4 Combinatorial Ricci flows
By the change of variables \(K_{i}=\ln k_{i}\), we rewrite the flow (1.2) as
\[\frac{dK_{i}}{dt}=-(L_{i}-\hat{L}_{i}). \tag{4.1}\]
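For intuition, a minimal forward-Euler discretisation of the flow (4.1) can be sketched as follows; the routine `total_geodesic_curvatures`, which evaluates \(L_{i}\) from \(K\) for the given triangulated surface, is a hypothetical placeholder for the geometric computation of Section 2.

```python
import numpy as np

def combinatorial_ricci_flow(K0, L_hat, total_geodesic_curvatures,
                             dt=1e-2, steps=100000, tol=1e-10):
    # Forward-Euler discretisation of dK_i/dt = -(L_i - L_hat_i), eq. (4.1).
    K = np.asarray(K0, dtype=float).copy()
    L_hat = np.asarray(L_hat, dtype=float)
    for _ in range(steps):
        L = total_geodesic_curvatures(K)  # hypothetical geometric routine
        grad = L - L_hat
        if np.max(np.abs(grad)) < tol:    # numerically at the critical point
            break
        K -= dt * grad
    return np.exp(K)                      # recover the metric k_i = e^{K_i}
```

Theorem 1.4 below guarantees that, whenever \(\hat{L}\in\Omega\), the continuous-time flow underlying this iteration converges exponentially fast.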
**Lemma 4.1**.: _The flow (4.1) admits a unique solution for all time._
Proof.: Note that \(L_{i}\) is a \(C^{1}\)-smooth function of \(K\). By the Cauchy-Lipschitz theorem, the flow (4.1) admits a unique solution \(K(t)\) on \((0,\varepsilon)\) for some \(\varepsilon>0\). Now we prove that we may take \(\varepsilon=+\infty\). Assume on the contrary that the maximal \(\varepsilon\) is finite. Then there exists a sequence \(t_{m}\to\varepsilon\) such that \(K_{i}\left(t_{m}\right)\to+\infty\) or \(-\infty\) for some \(i\). Lemma 3.2 indicates that
\[|L_{i}|<c_{i}\pi\]
for some \(c_{i}>0\). Thus we have
\[\left|\frac{dK_{i}}{dt}\right|=\left|L_{i}-\hat{L}_{i}\right|\leq|L_{i}|+ \left|\hat{L}_{i}\right|\leq c_{i}\pi+\left|\hat{L}_{i}\right|\leq N_{i}\]
for some \(N_{i}>0\). This yields that
\[|K_{i}(t_{m})|\leq|K_{i}(0)|+N_{i}t_{m}.\]
Then we obtain
\[\lim_{m\to\infty}|K_{i}(t_{m})|\leq|K_{i}(0)|+N_{i}\varepsilon,\]
which yields a contradiction.
Let us define
\[\Phi(K)=W(K)-\sum_{i=1}^{|V|}\hat{L}_{i}K_{i}.\]
Note that the flow (4.1) is the negative gradient flow of \(\Phi(K)\). It is obvious that \(\Phi(K)\) is smooth and strictly convex. Set \(\hat{L}=(\hat{L}_{1},\cdots,\hat{L}_{|V|})\). It can be derived that \(\Phi(K)\) has a unique critical point if and only if there exists \(\hat{K}\in\mathbb{R}^{|V|}\) such that \(\nabla W(\hat{K})=\hat{L}\). This is equivalent to \(\hat{L}\in\Omega\), by Lemma 3.2. Then we obtain the following lemma.
**Lemma 4.2**.: \(\Phi(K)\) _has a unique critical point if and only if \(\hat{L}\in\Omega\)._
The following property of convex functions plays an important role in proving Theorem 1.4, a proof of which could be found in [16, Lemma 4.6].
**Lemma 4.3**.: _Suppose \(f(x)\) is a \(C^{1}\) smooth convex function on \(\mathbb{R}^{n}\) with \(\nabla f(x_{0})=0\) for some \(x_{0}\in\mathbb{R}^{n}\). Suppose \(f(x)\) is \(C^{2}\) smooth and strictly convex in a neighborhood of \(x_{0}\). Then the following statements hold:_
* \(\nabla f(x)\neq 0\) _for any_ \(x\in\mathbb{R}^{n}\setminus\{x_{0}\}\)_._
* \(\lim_{\|x\|\to+\infty}f(x)=+\infty\)_._
Before we prove Theorem 1.4, we give a brief introduction to Lyapunov stability theory. Please refer to [31, 34, 22] for more information. Let us consider the real autonomous system
\[\frac{dx}{dt}=f(x), \tag{4.2}\]
where \(f\) is continuous on the open set \(D\subset\mathbb{R}^{n}\), which contains the origin. The following result is the classic Lyapunov's Stability Theorem.
**Theorem 4.4**.: _[_22_, Theorem 3.3]_ _Let \(f(x)\) be a continuous function defined on an open set \(D\subset\mathbb{R}^{n}\), which contains the origin, and \(f(0)=0\). Let \(V(x)\) be a continuously differentiable function defined over \(D\) such that_
* (a) \(V(0)=0\) _and_ \(V(x)>0\) _for all_ \(x\in D\setminus\{0\}\)_,_
* (b) \(\dot{V}(x)\leq 0\) _for all_ \(x\in D\)_,_
* (c) \(\dot{V}(x)<0\) _for all_ \(x\in D\setminus\{0\}\)_._
_Then \(x(t)=0\) is an asymptotically stable equilibrium point of (4.2). Suppose \(D=\mathbb{R}^{n}\) and (a)-(c) hold. If \(V(x)\to+\infty\) as \(\|x\|\to+\infty\), then \(x(t)=0\) is globally asymptotically stable._
Proof of Theorem 1.4.: Since the flow (1.2) is equivalent to the flow (4.1), we can prove this theorem for the flow (4.1). By Lemma 4.1, we may denote by \(K(t)\) the solution of the flow (4.1).
First we prove \((a)\Rightarrow(b)\). Suppose \(K(t)\to\hat{K}\) as \(t\to\infty\). By the mean value theorem, there exists \(\xi_{n}\in(n,n+1)\) such that
\[\Phi(K(n+1))-\Phi(K(n))=\frac{d}{dt}\Phi(K(\xi_{n}))\to 0\]
as \(n\to+\infty\). Note that
\[\frac{d}{dt}\Phi(K(t))=\sum\nolimits_{i=1}^{|V|}\frac{\partial\Phi(K)}{\partial K _{i}}\frac{dK_{i}}{dt}=-\sum\nolimits_{i=1}^{|V|}\left(\frac{\partial\Phi(K)}{ \partial K_{i}}\right)^{2}.\]
Consequently, we have
\[|\nabla\Phi(K(\xi_{n}))|\to 0\]
as \(n\to+\infty\). Note that \(K(\xi_{n})\to\hat{K}\). It can be concluded that \(\nabla\Phi(\hat{K})=0\), which implies that \(\nabla W(\hat{K})=\hat{L}\). Then Lemma 3.2 demonstrates \(\hat{L}\in\Omega\).
Next, we prove \((b)\Rightarrow(a)\). Assume that \(\hat{L}\in\Omega\). Lemma 3.2 shows that there exists a unique \(\hat{K}\in\mathbb{R}^{|V|}\) such that \(\nabla W(\hat{K})=\hat{L}\). It follows that \(\nabla\Phi(\hat{K})=0\). Let us define
\[V(K)=\Phi(K)-\Phi(\hat{K}).\]
Combining Lemma 4.2 and Lemma 4.3, it is easy to verify that
* \(V(K)\geq 0\) and \(V(K)>0\) for \(K\neq\hat{K}\).
* \(\frac{d}{dt}V(K(t))\leq 0\).
* \(\frac{d}{dt}V(K(t))=0\) if and only if \(K=\hat{K}\).
* \(V(K)\to+\infty\) as \(\|K\|\to+\infty\).
Then \(K=\hat{K}\) is an asymptotically stable equilibrium point of (4.1), and the convergence of \(K(t)\) to \(\hat{K}\) follows from Theorem 4.4.
The next step is to prove that \(K(t)\) converges exponentially fast to \(\hat{K}\). Define
\[C\left(K\right)=\sum\nolimits_{i=1}^{|V|}(L_{i}-\hat{L}_{i})^{2}.\]
Note that \(W\) is strictly convex. Then there exists \(\lambda_{0}>0\) such that
\[\frac{dC\left(K(t)\right)}{dt}=-2\left(L_{1}-\hat{L}_{1},\cdots,L_{|V|}-\hat{L}_{|V|}\right)\mathrm{M}\left(L_{1}-\hat{L}_{1},\cdots,L_{|V|}-\hat{L}_{|V|}\right)^{\mathrm{T}}\leq-2\lambda_{0}\sum\nolimits_{i=1}^{|V|}\left(L_{i}-\hat{L}_{i}\right)^{2}=-2\lambda_{0}C(K),\]
where \(\mathrm{M}\) is the Hessian matrix of \(W(K)\). It follows that
\[\sum\nolimits_{i=1}^{|V|}(L_{i}-\hat{L}_{i})^{2}=C\left(K(t)\right)\leq C(K(0))e^{-2\lambda_{0}t},\]
therefore, we have
\[|L_{i}-\hat{L}_{i}|\leq\sqrt{C(K(0))}e^{-\lambda_{0}t}.\]
Then we obtain
\[|K_{i}(t)-\hat{K}_{i}|\leq\int_{t}^{\infty}\left|L_{i}-\hat{L}_{i}\right|d\tau\leq\int_{t}^{\infty}\sqrt{C(K(0))}e^{-\lambda_{0}\tau}d\tau=\frac{\sqrt{C(K(0))}}{\lambda_{0}}e^{-\lambda_{0}t},\]
for any \(t>0\). This gives the exponential convergence of the flow (4.1).
## 5 Open questions
This paper leaves behind several topics that merit investigation.
1. Explore further rigidity results for circle patterns with respect to the total geodesic curvature. There are two generalized structures for circle packings, allowing for the intersection or separation of circles. These two types of generalized structures are determined by discrete Gaussian curvatures, which are established in the works of Thurston [33], Ge-Hua-Zhou [14], and Xu [35]. We want to know whether Theorem 1.2 can be extended to these two generalized structures.
2. Let \((S,T)\) be a closed triangulated surface and let \((r_{1},\cdots,r_{|V|})\in\mathbb{R}_{+}^{|V|}\) be a hyperbolic circle packing metric. Let \(L_{i}\), \(\Theta_{i}\) be the total geodesic curvature and angle deficit at \(i\in V\). Set \(A_{i}=\Theta_{i}r_{i}\). The Gauss-Bonnet theorem indicates that \[L_{i}-A_{i}-\Theta_{i}=0.\] Theorem 1.1 indicates that hyperbolic circle packing metrics are determined by the angle deficit at each vertex. Theorem 1.2 shows that hyperbolic circle packing metrics are determined by the total geodesic curvature at each vertex as well. The viewpoints of Theorem 1.1 and Theorem 1.2 can thus be seen as dual viewpoints on the Gauss-Bonnet formula. We are interested in studying whether \(A_{i}\) can be used to characterize the rigidity of hyperbolic circle packing metrics.
3. Since the circle packing metric induces a special polyhedral surface, it is natural to ask whether total geodesic curvatures can be generalized to polyhedral surfaces. The relationship between discrete conformal structures on polyhedral surfaces and 3-dimensional hyperbolic geometry was first discovered by Bobenko-Pinkall-Springborn [3] in the case of vertex scaling, and further studied by Zhang-Guo-Zeng-Luo-Yau-Gu [36]. We want to know whether there exists a relationship between the generalized circle packings induced by total geodesic curvatures and 3-dimensional hyperbolic geometry.
## 6 Acknowledgments
Te Ba is supported by NSF of China (No. 11631010). Guangming Hu is supported by NSF of China (No. 12101275). The authors would like to thank Xin Nie, Xu Xu and Ze Zhou for helpful discussions.
|
2306.12545 | Neural Multigrid Memory For Computational Fluid Dynamics | Turbulent flow simulation plays a crucial role in various applications,
including aircraft and ship design, industrial process optimization, and
weather prediction. In this paper, we propose an advanced data-driven method
for simulating turbulent flow, representing a significant improvement over
existing approaches. Our methodology combines the strengths of Video Prediction
Transformer (VPTR) (Ye & Bilodeau, 2022) and Multigrid Architecture (MgConv,
MgResnet) (Ke et al., 2017). VPTR excels in capturing complex spatiotemporal
dependencies and handling large input data, making it a promising choice for
turbulent flow prediction. Meanwhile, Multigrid Architecture utilizes multiple
grids with different resolutions to capture the multiscale nature of turbulent
flows, resulting in more accurate and efficient simulations. Through our
experiments, we demonstrate the effectiveness of our proposed approach, named
MGxTransformer, in accurately predicting velocity, temperature, and turbulence
intensity for incompressible turbulent flows across various geometries and flow
conditions. Our results exhibit superior accuracy compared to other baselines,
while maintaining computational efficiency. Our implementation in PyTorch is
available publicly at https://github.com/Combi2k2/MG-Turbulent-Flow | Duc Minh Nguyen, Minh Chau Vu, Tuan Anh Nguyen, Tri Huynh, Nguyen Tri Nguyen, Truong Son Hy | 2023-06-21T20:19:57Z | http://arxiv.org/abs/2306.12545v2 | # Neural Multigrid Memory For Computational Fluid Dynamics
###### Abstract
Turbulent flow simulation plays a crucial role in various applications, including aircraft and ship design, industrial process optimization, and weather prediction. In this paper, we propose an advanced data-driven method for simulating turbulent flow, representing a significant improvement over existing approaches. Our methodology combines the strengths of Video Prediction Transformer (VPTR) (Ye and Bilodeau, 2022) and Multigrid Architecture (MgConv, MgResnet) (Ke et al., 2017). VPTR excels in capturing complex spatiotemporal dependencies and handling large input data, making it a promising choice for turbulent flow prediction. Meanwhile, Multigrid Architecture utilizes multiple grids with different resolutions to capture the multiscale nature of turbulent flows, resulting in more accurate and efficient simulations. Through our experiments, we demonstrate the effectiveness of our proposed approach, named MGxTransformer, in accurately predicting velocity, temperature, and turbulence intensity for incompressible turbulent flows across various geometries and flow conditions. Our results exhibit superior accuracy compared to other baselines, while maintaining computational efficiency. Our implementation in PyTorch is available publicly at [https://github.com/Combi2k2/MG-Turbulent-Flow](https://github.com/Combi2k2/MG-Turbulent-Flow).
Machine Learning, ICML
## 1 Introduction
Fluid dynamics simulation is crucial in various fields, including aerospace engineering, automotive engineering, chemical engineering, and environmental engineering. However, computational fluid dynamics faces challenges due to high computational costs and the complexity of simulating the Navier-Stokes equations accurately.
Inspired by the success of many fully data-driven deep learning models in computer vision (Yao et al., 2018), (Xue et al., 2016) and by the appearance of a new attention and memory mechanism, Multigrid Neural Memory (Huynh et al., 2020), we have developed another data-driven approach for solving computational fluid dynamics problems. Since this memory structure retains the multiscale structure inherent in computational fluid dynamics (CFD), combining both approaches proves effective.
To verify the strength of our proposal, we first implement MgNet (Ke et al., 2017), which is used by the Multigrid Neural Memory (Huynh et al., 2020). Additionally, owing to the rise of transformer approaches in computer vision (Yuan et al., 2021), (Ye and Bilodeau, 2022), we integrate Transformer layers into the architecture to enable parallel computation. These models are evaluated on predicting flow velocity up to \(50\) time steps into the future.
In summary, our contributions include proposing a fully data-driven model for solving computational fluid dynamics problems and improving the efficiency of this approach by incorporating Transformer and Multiscale architectures.
## 2 Related work
Spatiotemporal forecasting.Predicting spatiotemporal dynamics is crucial in various fields, such as physics, economics, and neurology. Conventional physics-based differential equations (Izhikevich, 2007) have been used to model system dynamics but are hard to solve due to sensitivity to initial conditions. Recently, data-driven models using deep learning techniques have been applied to spatiotemporal forecasting, although incorporating physical knowledge into these models can be difficult and modeling turbulence remains a significant challenge for predicting turbulent flow. Despite these challenges, deep learning models have great potential for improving spatiotemporal forecasting.
Video prediction.Our work is also related to future video prediction (Zhai et al., 2022), (SHI et al., 2015), (Huynh et al., 2020). Conditioned on the observed frames, video prediction models are trained to predict future frames. There are two main problems with previous approaches. The first problem is that many of these models are trained on natural videos containing complex, noisy data from unknown physical processes, which makes it difficult for the model to explicitly learn physical principles. The second problem is slow inference, owing to the inherent nature of recurrent methods. Hence, some of these techniques are under-suited for our application.
Multiscale modeling.Various multiscale modeling techniques have been proposed to simulate turbulent flow, including large eddy simulation (LES), hybrid RANS-LES (Hamba, 2003), and particle-based methods. These methods differ in how they address the small-scale features of turbulence. While they have demonstrated positive outcomes, they pose challenges such as computational expenses and modeling the interactions between different scales. Researchers are still working on developing more effective and precise multiscale modeling approaches for simulating turbulent flow.
Physics-informed Deep Learning.Physics-informed deep learning (PIDL) has recently gained attention for its ability to combine physics-based models and data-driven approaches, by incorporating prior physical knowledge into the loss function of a deep neural network or designing the network to preserve physical properties (Wang et al., 2020). PIDL has been applied to various fields, including fluid mechanics, to simulate the dynamics of turbulent flows and estimate physical parameters from observational data. Although PIDL is promising for improving simulations in various fields, it has some limitations, such as the need for careful tuning of hyperparameters and the challenge of including complex physics in the loss function. Nonetheless, PIDL shows potential for significant advancements in the field of fluid mechanics and beyond.
## 3 Method
We propose a data-driven method that replicates Multigrid Neural Memory (Huynh et al., 2020). In our approach, we notice an opportunity to parallelize the memory encoding process by replacing the ConvLSTM layers with a single Transformer layer, a mechanism that has recently succeeded in Natural Language Processing. Following that, we choose the Video Prediction Transformer, which has recently been shown to have a more efficient computational cost than the traditional Vision Transformer. We observe that the resulting model can perform on par with TF-Net (Wang et al., 2020), which is the current best model for this particular purpose.
### Video Prediction Transformers (VPTR)
VPTR is an efficient alternative to traditional Vision Transformers, offering comparable performance with improved computational efficiency. It utilizes self-attention mechanisms to capture global dependencies, enabling effective modeling of long-range relationships. However, applying VPTR directly to turbulent flow may compromise interpretability since it operates on a compressed latent space that lacks the ability to effectively capture the inherent multiscale characteristics.

Figure 1: **Architecture of MGxTransformers.** The model is composed of Video Prediction Transformer (VPTR) and a feed-forward network of Multigrid Convolutional Network (MgNet).
The architecture illustrated in Figure 1 showcases the upper section of VPTR, featuring an additional residual connection. The objective is to leverage the learned representations from preceding layers and merge them with the extracted features in subsequent layers. This fusion enhances the autoencoder's capability for feature extraction, resulting in more expressive and adaptable representations.
The detailed implementation of this layer follows the instructions of (Ye and Bilodeau, 2022). The input is the representation of the turbulent flows with shape \((N,T,C,H,W)\) and the output has shape \((N,T,d_{model},H,W)\).
### Multigrid Convolutional Network (MgNet)
In our design, we integrate a multigrid variant of CNNs and ResNets, as proposed by (Ke et al., 2017), which inherently exhibits attentional behavior. This architecture consists of multiple convolutional layers operating on different grid sizes. The outputs are concatenated and fed into a final layer for classification or segmentation. MgNet, in our approach, can perform equivalently to a U-Net or a convolutional feed-forward network, and is designed to capture features at multiple scales.
The layer's input is a pyramid \(\mathcal{X}=\{z_{j}^{x}\}\), with \(j\) representing the pyramid level, obtained by reshaping the output of previous layers (VPTR) into a shape of \((N,T\cdot d_{model},H,W)\). It is then downsampled and upsampled to create a structured input at multiple scales, as depicted by the orange section in Figure [1]. We modify the resolution by a factor of two in each spatial dimension when transitioning between pyramid levels.
The resulting output of this layer remains in the form of a pyramid but now consists of a single level. This level has a specific size of \(N\times(T^{\prime}\cdot C)\times H\times W\), where \(T^{\prime}\) represents the number of frames from the future that we aim to predict. By reshaping this level, we obtain the ultimate prediction of the model.
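A small PyTorch sketch of this pyramid construction (halving the resolution per level and merging the levels back on the finest grid) is given below; the function names and the choice of pooling/interpolation are our own illustrative assumptions, not the exact implementation.

```python
import torch
import torch.nn.functional as F

def build_pyramid(x, levels=3):
    # x: (N, T * d_model, H, W); each level halves both spatial dimensions.
    pyramid = [x]
    for _ in range(levels - 1):
        pyramid.append(F.avg_pool2d(pyramid[-1], kernel_size=2))
    return pyramid

def merge_pyramid(pyramid):
    # Upsample every level back to the finest grid and sum the levels.
    target = pyramid[0].shape[-2:]
    ups = [F.interpolate(z, size=target, mode='bilinear', align_corners=False)
           for z in pyramid]
    return torch.stack(ups).sum(dim=0)
```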
## 4 Experiments
We investigate the performance of our model by testing its learning ability on two datasets that are governed by the Navier-Stokes equations.
We develop a framework to train our model and the baselines given the shape of one frame in the sequential data. Here, we employ the PyTorch deep learning library (Paszke et al., 2019) and set up the same hyperparameters for each model (a condensed sketch follows below):
* Adam optimizer (Kingma and Ba, 2014),
* Initial learning rate \(=10^{-4}\),
* Number of training epochs \(=100\).
Our source code is available at [https://github.com/Combi2k2/MG-Turbulent-Flow](https://github.com/Combi2k2/MG-Turbulent-Flow).
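The shared training setup can be sketched as follows; `model` and `train_loader` stand in for any of the compared architectures and for the dataloaders described in the following subsections.

```python
import torch

def train(model, train_loader, epochs=100, lr=1e-4):
    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # Adam, lr = 1e-4
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):                                  # 100 epochs
        for inputs, targets in train_loader:   # (N, T, C, H, W) sequences
            inputs, targets = inputs.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            optimizer.step()
    return model
```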
### Evaluation Metrics
To evaluate the effectiveness of our models, we measured the accuracy of their predictions with the following two metric functions:
* **Mean Square Error**: We calculate the MSE of all predicted values from the ground truth for each pixel.
* **Divergence Loss**: Since we investigate incompressible turbulent flows in this work, the divergence \(\nabla\cdot w\) at each pixel should be zero; we therefore use the average absolute divergence over all pixels at each prediction step as an additional evaluation metric (see the sketch after this list).
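A sketch of the divergence metric is given below, assuming unit grid spacing and central finite differences; the exact discretisation used in our experiments may differ.

```python
import torch

def divergence_loss(w):
    # w: (N, 2, H, W); channel 0 = velocity along x, channel 1 = along y.
    u, v = w[:, 0], w[:, 1]
    du_dx = (u[:, :, 2:] - u[:, :, :-2]) / 2     # central difference along x
    dv_dy = (v[:, 2:, :] - v[:, :-2, :]) / 2     # central difference along y
    div = du_dx[:, 1:-1, :] + dv_dy[:, :, 1:-1]  # restrict to common interior
    return div.abs().mean()                      # average absolute divergence
```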
### Rayleigh-Benard convection (RBC)
This is a model for turbulent convection, with a horizontal layer of fluid heated from below so that the lower surface is at a higher temperature than the upper surface, governed by the Navier-Stokes equations. The dataset comes from a two-dimensional turbulent flow simulated using the Lattice Boltzmann Method. We use only the velocity vector fields, where the spatial resolution of each frame is \(1,792\times 256\) (see Figure 2). Each image has two channels: one is the turbulent flow velocity along the \(x\) direction and the other is the velocity along the \(y\) direction. The physics parameters relevant to this numerical simulation are: Prandtl number = \(0.71\), Rayleigh number = \(2.5\times 10^{8}\) and the maximum Mach number = \(0.1\). We use \(1,000\) images for our experiments. The task is to predict the spatiotemporal velocity fields up to \(50\) steps ahead given \(16\) initial frames.
We divided each \(1,792\times 256\) image into \(7\) square subregions of size \(256\times 256\), then downsampled them into \(64\times 64\)-pixel images. We use a sliding window approach to generate \(9,870\) samples of sequences of velocity fields. Here we use only \(3,000\) training samples, \(1,000\) validation samples, and \(1,000\) test samples. The DL model is trained using back-propagation through prediction errors accumulated over multiple steps.

Figure 2: A snapshot of the Rayleigh-Bénard convection flow, the velocity fields along \(x\) direction (top) and \(y\) direction (bottom). The spatial resolution is 1,792 x 256 pixels.
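The preprocessing pipeline just described can be sketched as follows; the interpolation mode and the sliding-window stride are assumptions on our part.

```python
import torch
import torch.nn.functional as F

def make_samples(frames, in_len=16, out_len=50, stride=1):
    # frames: (T, 2, 256, 1792) velocity snapshots of the full RBC domain.
    crops = torch.stack(frames.split(256, dim=-1))       # (7, T, 2, 256, 256)
    crops = F.interpolate(crops.flatten(0, 1), size=(64, 64),
                          mode='bilinear', align_corners=False
                          ).reshape(7, -1, 2, 64, 64)    # downsample to 64x64
    samples = []
    for region in crops:                                 # region: (T, 2, 64, 64)
        for t in range(0, region.shape[0] - in_len - out_len + 1, stride):
            samples.append((region[t:t + in_len],        # 16 input frames
                            region[t + in_len:t + in_len + out_len]))
    return samples
```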
### 2D Random Flow
The 2D Random Flow dataset utilized in this experiment is derived from the PDEBench benchmark (Takamoto et al., 2022), a comprehensive resource for machine learning experiments in physics. For this particular experiment, we focused on the random flow dataset within the 2D Computational Fluid Dynamics (CFD) section. The dataset was generated through simulations of the compressible Navier-Stokes equations, considering a shear viscosity of \(\eta=0.01\) and a bulk viscosity of \(\zeta=0.01\). These parameters enable the modeling of fluid behavior in a computationally efficient manner. The dataset comprises a total of 10,000 samples, each of which contains \(21\) frames. Each frame consists of two channels representing the \(x\)-velocity and \(y\)-velocity components of the fluid flow. The resolution of each frame is set to \(128\times 128\), providing a sufficiently detailed representation of the flow characteristics. To prepare the dataset, we resized the samples to dimensions of \((21,2,64,64)\), where the values indicate the number of timesteps, channels, image height, and image width, respectively. Subsequently, we divided these downscaled samples into three sets: 3,000 for training, 1,000 for validation, and 1,000 for testing. For our model evaluations, we performed tests on 5,000 of these test samples across all models.
### Result and Observation
Table 1 presents the evaluation metrics, namely _Mean Squared Error_ (MSE) and _Divergence_, for a single prediction step. In terms of one-step prediction performance, MGxTransformers demonstrates strong potential by outperforming most of the baselines. However, when employing autoregressive methods, our model falls short of the TF-Net and FNO2d baselines. Furthermore, regarding divergence loss, our models exhibit a slight performance deficit. This can be attributed to the absence of physical constraints within the model's structure: no physical law is reflected in the architecture, which is the underlying cause of this limitation. Figure 3 visually represents the evolution of the evaluation metrics over a span of \(100\) prediction steps, employing an autoregressive method. Notably, our model exhibits diminishing performance over time, primarily attributed to its inability to preserve essential physical properties and image details within a single prediction step. Nonetheless, these observations underscore the potential efficacy of incorporating physical regularizers as a promising avenue for enhancing our model in future iterations.
## 5 Conclusion
We introduce a fully data-driven approach for learning the behavior of turbulent flow. Our methodology incorporates a Transformer layer that effectively captures the spatio-temporal interactions inherent in turbulent flow, akin to a variant of Conv-LSTM. Additionally, the inclusion of MgNet layer facilitates the simulation of the multiscale structure characteristics of fluid dynamics. The end-to-end training pipeline aims to achieve the optimal combination of these components. To comprehensively evaluate the performance of our proposed model, MGxTransformers, we conduct extensive comparisons against various baselines, employing different metrics to assess their respective strengths and limitations. A significant contribution of this research lies in the novel concept of modeling the multiscale structure of turbulent flow through purely deep learning methods. Moving forward, our future work entails the incorporation of physical regularizers such as divergence and turbulence kinetic energy. Moreover, we aim to explore methods that directly integrate the modeling of multiscale structures within the Transformer layers. These endeavors collectively aim to enhance the accuracy and fidelity of deep learning models in the context of turbulent flow analysis.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{**Rayleigh–Bénard Convection**} & \multicolumn{2}{c}{**2D Random Flow**} \\ \cline{2-5} & **MSE**\(\downarrow\) & **Divergence**\(\downarrow\) & **MSE**\(\downarrow\) & **Divergence**\(\downarrow\) \\ \hline ConvFFN & 0.12393 & 0.00115 & 6.163 \(\times 10^{-3}\) & 0.207 \(\times 10^{-3}\) \\ U-Net (Ronneberger et al., 2015) & 0.00473 & 0.00762 & 0.764 \(\times 10^{-3}\) & 0.271 \(\times 10^{-3}\) \\ C-LSTM (SHI et al., 2015) & 0.01279 & 0.00689 & 0.176 \(\times 10^{-3}\) & 0.002 \(\times 10^{-3}\) \\ TF-Net (Wang et al., 2020) & 0.05341 & 0.00248 & 0.040 \(\times 10^{-3}\) & 0.006 \(\times 10^{-3}\) \\ FNO2d (Li et al., 2021) & 0.00627 & 0.00680 & 0.281 \(\times 10^{-3}\) & 0.097 \(\times 10^{-3}\) \\ \hline
**MNM**(Huynh et al., 2020) & 0.01051 & 0.00815 & 12.79 \(\times 10^{-3}\) & 6.483 \(\times 10^{-3}\) \\
**MGxTransformer** & 0.00704 & 0.00791 & 0.255 \(\times 10^{-3}\) & 0.145 \(\times 10^{-3}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results on 2 datasets, measured by **MSE** and **Divergence** metrics. |
2306.05039 | Powers of Karpelevic arcs and their Sparsest Realising matrices | The region in the complex plane containing the eigenvalues of all stochastic
matrices of order n was described by Karpelevic in 1988, and it is since then
known as the Karpelevic region. The boundary of the Karpelevic region is the
union of disjoint arcs called the Karpelevic arcs. We provide a complete
characterization of the Karpelevic arcs that are powers of some other
Karpelevic arc. Furthermore, we find the necessary and sufficient conditions
for a sparsest stochastic matrix associated with the Karpelevic arc of order n
to be a power of another stochastic matrix. | Priyanka Joshi, Stephen Kirkland, Helena Smigoc | 2023-06-08T08:44:46Z | http://arxiv.org/abs/2306.05039v1 | # Powers of Karpelevic arcs and their Sparsest Realising matrices
###### Abstract
The region in the complex plane containing the eigenvalues of all \(n\times n\) stochastic matrices was described by Karpelevic in 1988, and it is since then known as the Karpelevic region. The boundary of the Karpelevic region is the union of disjoint arcs called the Karpelevic arcs. We provide a complete characterization of the Karpelevic arcs that are powers of some other Karpelevic arc. Furthermore, we find the necessary and sufficient conditions for a sparsest stochastic matrix associated with the Karpelevic arc of order \(n\) to be a power of another stochastic matrix.
## 1 Introduction
A square entrywise nonnegative matrix is called stochastic if each row sum equals \(1\). Stochastic matrices and their properties are central to the study of Markov chains; in particular, the eigenvalues of a stochastic matrix govern the long-term behavior of the iterates of the corresponding Markov chain. Consequently, there is a long-standing interest in localising the eigenvalues of stochastic matrices, and a classic problem of Kolmogorov [8] asks for a description of the region in the complex plane containing all the eigenvalues of all \(n\times n\) stochastic matrices.
That region, denoted by \(\Theta_{n}\), was characterised by Karpelevic [4]. He showed that the boundary of the Karpelevic region, denoted by \(\partial\Theta_{n}\), is a union of disjoint arcs called the Karpelevic arcs. Kirkland, Laffey and Smigoc [6] expanded on this result by identifying the point on the boundary of the Karpelevic region with a given argument \(\theta\in[0,2\pi)\).
The problem of determining a stochastic \(p\)-th root of a stochastic matrix finds its motivation in the theory of Markov Chains, where it corresponds to the problem of
finding a transition matrix over a shorter time interval from a given transition matrix. This problem was considered for example in [1].
Johnson and Paparella [3] posed a conjecture that selected Karpelevic arcs are powers of some other Karpelevic arcs. Kim and Kim [5] proved their conjecture. However, results in [3] and [5] only partially answer the question of characterising the powers of the Karpelevic arcs. After establishing notation and recalling the necessary background results on the Karpelevic region in Section 2, we give a complete characterization of the Karpelevic arcs that can be written as a power of another Karpelevic arc in Section 3.
Johnson and Paparella [3] also considered the question of constructing stochastic matrices realising the boundary of the Karpelevic region. For each Karpelevic arc, they provided a single parametric family of stochastic matrices that realises eigenvalues on that arc. Kirkland and Smigoc in [7] described all sparsest \(n\times n\) stochastic matrices realising eigenvalues on the border of the Karpelevic region. In Section 4 we recall the results from [7], and establish notation for digraphs and the associated stochastic matrices. This background is needed in Section 5, where we characterise the sparsest realising matrices that can be written as a power of another stochastic matrix.
This paper is organised in two parts that can be read independently. Section 2 provides the background to Section 3, where the complete characterisation of the Karpelevic arcs that can be written as a power of another Karpelevic arc is given in Theorem 3.10 and Corollary 3.12. A reader who is more interested in the powers of matrices and the results developed in Section 5 could view Theorem 3.10 and Corollary 3.12 as part of the background results, and learn about the additional background in Section 4.
## 2 Background and Notation on the Karpelevic Region
Given \(n\in\mathbb{N}\), the set
\[\mathcal{F}_{n}=\{\nicefrac{{p}}{{q}}\mid 0\leq p<q\leq n,\gcd(p,q)=1\}\]
is called _the set of Farey fractions of order \(n\)_. The pair \((\nicefrac{{p}}{{q}},\nicefrac{{r}}{{s}})\) is called _a Farey pair (of order \(n\)_), if \(\nicefrac{{p}}{{q}},\nicefrac{{r}}{{s}}\in\mathcal{F}_{n}\), \(\nicefrac{{p}}{{q}}<\nicefrac{{r}}{{s}}\) and \(\nicefrac{{p}}{{q}}<x<\nicefrac{{r}}{{s}}\) implies \(x\not\in\mathcal{F}_{n}\). The Farey fractions \(\nicefrac{{p}}{{q}}\) and \(\nicefrac{{r}}{{s}}\) are called _Farey neighbours_ if one of \((\nicefrac{{p}}{{q}},\nicefrac{{r}}{{s}})\) and \((\nicefrac{{r}}{{s}},\nicefrac{{p}}{{q}})\) is a Farey pair. It is well known that the Farey fractions \(\nicefrac{{p}}{{q}}\) and \(\nicefrac{{r}}{{s}}\) form a Farey pair iff \(q+s>n\) and \(|qr-ps|=1\).
For \(n\), \(q\) and \(s\) satisfying \(q<s\leq n\), \(\gcd(q,s)=1\), and \(q+s>n\), there exist precisely two Farey pairs in \(\mathcal{F}_{n}\) with denominators \(q\) and \(s\):
\[\mathcal{F}_{n}(q,s):=\{\big{(}\nicefrac{{p}}{{q}},\nicefrac{{r}}{{s}}\big{)},\big{(}\nicefrac{{(s-r)}}{{s}},\big{(}\nicefrac{{q-p}}{{q}}\big{)}\big{)}\}. \tag{1}\]
Hence, there exist unique \(p\) and \(r\) so that \((\nicefrac{{p}}{{q}},\nicefrac{{r}}{{s}})\in\mathcal{F}_{n}(q,s)\). We will denote those as \(p(q,s)\) and \(r(q,s)\), and when clear from the context just as \(p\) and \(r\).
Further parameters associated with \(\mathcal{F}_{n}(q,s)\) that were first defined in [6] and will
also be needed in this work, are given below:
\[d(q,s) =\left\lfloor\frac{n}{q}\right\rfloor,\] \[\delta(q,s) =\gcd(d(q,s),s),\] \[s =s_{1}(q,s)\delta(q,s),\] \[d(q,s) =d_{1}(q,s)\delta(q,s).\]
Note that \(d(q,s)\), \(\delta(q,s)\), \(d_{1}(q,s)\), \(s_{1}(q,s)\) depend on \(n\) as well as on \(q\) and \(s\). Our notation does not capture this dependence, as we will always fix \(n\). Furthermore, once \(q\) and \(s\) are established, we will abbreviate notation to \(d\), \(\delta\), \(s_{1}\) and \(d_{1}\).
**Example 2.1**.: Let \(q=3\) and \(s=14\). Then \(\mathcal{F}_{n}(3,14)=\{(\sfrac{1}{3},\sfrac{5}{14}),(\sfrac{9}{14},\sfrac{2}{3})\}\) for \(n\in\{14,15,16\}\). For \(n=14\) we get \(d(3,14)=\left\lfloor\frac{14}{3}\right\rfloor=4\), \(\delta(3,14)=2\), \(d_{1}(3,14)=2\) and \(s_{1}(3,14)=7\). On the other hand, for \(n=15\) and \(n=16\) we get \(d(3,14)=\left\lfloor\frac{15}{3}\right\rfloor=\left\lfloor\frac{16}{3}\right\rfloor=5\), \(\delta(3,14)=1\), \(d_{1}(3,14)=5\) and \(s_{1}(3,14)=14\).
The Karpelevic region was first described in [4]. We recall a version of this result from [2] below.
**Theorem 2.2**.: _([4],[2]) The region \(\Theta_{n}\) is symmetric with respect to the real axis, is included in the unit disc \(\{z\in\mathbb{C}\mid|z|\leq 1\}\), and intersects the unit circle \(\{z\in\mathbb{C}\mid|z|=1\}\) at the points \(\{e^{\frac{2\pi ip}{q}}\mid p/q\in\mathcal{F}_{n}\}\). The boundary of \(\Theta_{n}\) consists of these points and curvilinear arcs connecting them in circular order._
_The arc with endpoints \(e^{\frac{2\pi ip}{q}}\) and \(e^{\frac{2\pi ir}{s}}\), \(q<s\), is given by the following parametric equation:_
\[t^{s}(t^{q}-1+\alpha)^{\left\lfloor\frac{n}{q}\right\rfloor}=\alpha^{\lfloor \frac{n}{q}\rfloor}t^{q\lfloor\frac{n}{q}\rfloor},\alpha\in[0,1]. \tag{2}\]
For a Farey pair associated with the endpoints of an arc, the above theorem assumes \(q\) to be the smallest denominator of the two fractions appearing in the pair. As equation (2) depends on \(q\) and \(s\) only, the same equation describes both arcs associated with the Farey pairs in \(\mathcal{F}_{n}(q,s)\). These two arcs are conjugates of each other, and we will (when convenient) only concentrate on one of them, say the arc associated with the Farey pair, \((\sfrac{p}{q},\sfrac{r}{s})\) with \(q<s\).
For \(q<s\), \(\gcd(q,s)=1\) and \(q+s>n\), we set \(p:=p(q,s)\) and \(r:=r(q,s)\), and introduce the following notation:
1. \(\arg(q,s)\equiv(\sfrac{2\pi p}{q},\sfrac{2\pi r}{s})\cup(\sfrac{2\pi(s-r)}{s}, \sfrac{2\pi(q-p)}{q})\),
2. \(\mathcal{K}_{n}(q,s)\equiv\partial\Theta_{n}\cap\{z\in\mathbb{C}\ |\ \arg(z)\in\arg(q,s)\}\),
3. \(\mathcal{K}_{n}\) the set of all \(\mathcal{K}_{n}(q,s)\) for a fixed \(n\in\mathbb{N}\).
Theorem 2.2 tells us that \(\mathcal{K}_{n}(q,s)\) contains a pair of arcs that are complex conjugates of each other, and whose points satisfy the parametric equation (2). For the Farey pair \((\sfrac{p}{q},\sfrac{r}{s})\in\mathcal{F}_{n}(q,s)\) with \(q<s\), we can substitute \(d=\lfloor\frac{n}{q}\rfloor\) in equation (2). The polynomials \(f_{\alpha}(t)=t^{s}(t^{q}-\beta)^{d}-\alpha^{d}t^{qd}\), \(\alpha\in[0,1]\), where \(\beta:=1-\alpha\), are called the _Ito polynomials_ for the Farey pair \((\sfrac{p}{q},\sfrac{r}{s})\), \(q<s\). The _reduced Ito polynomials_ are the polynomials obtained from the Ito polynomials by removing the zero roots.
**Example 2.3**.: In Example 2.1 we have seen that \(\mathcal{F}_{n}(3,14)=\{\left(\nicefrac{{1}}{{3}},\nicefrac{{5}}{{14}}\right),\left( \nicefrac{{9}}{{14}},\nicefrac{{2}}{{3}}\right)\}\) for \(n\in\{14,15,16\}\). In all three cases \(\arg(3,14)=\left(\nicefrac{{2\pi}}{{3}},\nicefrac{{5\pi}}{{7}}\right)\cup \left(\nicefrac{{9\pi}}{{7}},\nicefrac{{4\pi}}{{3}}\right)\), and \(\mathcal{K}_{n}(3,14)=\partial\Theta_{n}\cap\{z\in\mathbb{C}\mid\arg(z)\in \arg(3,14)\}\).
Theorem 2.2 describes \(\partial\Theta_{n}\). However, from the theorem, it is not immediately clear, given \(\theta\in[0,2\pi]\), how to find \(\rho\) satisfying \(\rho e^{i\theta}\in\partial\Theta_{n}\). This issue was resolved in Theorem 1.2 from [6] which is restated below.
**Definition 2.4**.: For \(n\geq 2\), we define \(\rho_{n}:[0,2\pi]\rightarrow\mathbb{R}_{+}\) to be the positive number satisfying \(\rho_{n}(\theta)e^{i\theta}\in\partial\Theta_{n}\).
**Theorem 2.5** (Theorem 1.2, [6]).: _Let \((\frac{p}{q},\frac{r}{s})\in\mathcal{F}_{n}(q,s)\), \(q<s\), and \(\theta\in[\nicefrac{{2\pi p}}{{q}},\nicefrac{{2\pi r}}{{s}}]\). Then \(\rho_{n}(\theta)=\mu^{d_{1}}\), where \(\mu\) is the unique positive solution to_
\[\mu^{s_{1}}\sin(q\theta)-\mu^{qd_{1}}\sin\left(\frac{s_{1}}{d_{1}}\theta-\frac {2\pi r}{d}\right)-\sin\left((q-\frac{s_{1}}{d_{1}})\theta+\frac{2\pi r}{d} \right)=0. \tag{3}\]
_Furthermore, \(\rho_{n}(\theta)e^{\mathrm{i}\theta}\) is a root of the rational function \(\phi_{\alpha}(t)=(t^{q}-\beta)^{d}-\alpha^{d}t^{qd-s}\), where \(\alpha\) is given by_
\[\alpha\sin\left((q-\frac{s_{1}}{d_{1}})\theta+\frac{2\pi r}{d}\right)=\mu^{s_{ 1}}\sin(q\theta).\]
**Remark 2.6**.: To apply Theorem 2.5 directly to \((\frac{p}{q},\frac{r}{s})\in\mathcal{F}_{n}\), the assumption \(q<s\) is necessary. To use the theorem for \(\theta^{\prime}\in(\frac{2\pi(s-r)}{s},\frac{2\pi(q-p)}{q})\), which corresponds to the second Farey pair in \(\mathcal{F}_{n}(q,s)\), we note that \(\rho_{n}e^{i\theta^{\prime}}\in\partial\Theta_{n}\) if and only if \(\rho_{n}e^{i\theta}\in\partial\Theta_{n}\) for \(\theta=2\pi-\theta^{\prime}\). So for \(\theta^{\prime}\in(\frac{2\pi(s-r)}{s},\frac{2\pi(q-p)}{q})\), we have \(\mu^{d_{1}}e^{i\theta^{\prime}}\in\mathcal{K}_{n}(q,s)\) if and only if \(\mu\) satisfies:
\[\mu^{s_{1}}\sin(q\theta^{\prime})-\mu^{qd_{1}}\sin\left(\frac{s_{1}}{d_{1}} \theta^{\prime}+\frac{2\pi(r-s)}{d}\right)-\sin\left((q-\frac{s_{1}}{d_{1}}) \theta^{\prime}-\frac{2\pi(r-s)}{d}\right)=0. \tag{4}\]
This equation can be obtained from (3) by inserting \(\theta=2\pi-\theta^{\prime}\).
**Example 2.7**.: To determine \(\rho_{14}(\nicefrac{{29\pi}}{{42}})\) we first note that \(\nicefrac{{29\pi}}{{42}}\in[\nicefrac{{2\pi}}{{3}},\nicefrac{{5\pi}}{{7}}]\) and \((\nicefrac{{1}}{{3}},\nicefrac{{5}}{{14}})\in\mathcal{F}_{14}(3,14)\). Substituting \(d=4,\delta=2,d_{1}=2,s_{1}=7\) and \(\theta=\nicefrac{{29\pi}}{{42}}\) in (3), we get:
\[\mu^{7}\sin(\nicefrac{{29\pi}}{{14}})+\mu^{6}\sin(\nicefrac{{\pi}}{{12}})-\sin\left(\nicefrac{{181\pi}}{{84}}\right)=0,\]
which has a unique positive solution \(\mu_{0}\) (approximately equal to \(0.99542\)). Thus, \(\rho_{14}(\nicefrac{{29\pi}}{{42}})=\mu_{0}^{2}\).
Now, for \((\nicefrac{{9}}{{14}},\nicefrac{{2}}{{3}})\in\mathcal{F}_{14}(3,14)\) let \(\theta^{\prime}=2\pi-\theta=\nicefrac{{55\pi}}{{42}}\). We see that \(\nicefrac{{55\pi}}{{42}}\in[\nicefrac{{9\pi}}{{7}},\nicefrac{{4\pi}}{{3}}]\). Substituting \(\theta^{\prime}=55\pi/42\) in equation (4), we get:
\[\mu^{7}\sin(\nicefrac{{55\pi}}{{14}})-\mu^{6}\sin(\nicefrac{{\pi}}{{12}})-\sin\left(\nicefrac{{323\pi}}{{84}}\right)=0,\]
which again has the unique positive solution \(\mu_{0}\). Hence, \(\rho_{14}(55\pi/42)=\mu_{0}^{2}\).
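The value of \(\mu_{0}\) in Example 2.7 is easy to reproduce numerically; the bisection sketch below (our own, purely illustrative) solves equation (3) with \(q=3\), \(s_{1}=7\), \(d_{1}=2\), \(r=5\), \(d=4\) and \(\theta=\nicefrac{{29\pi}}{{42}}\).

```python
import math

def f(mu, theta=29 * math.pi / 42):
    # Left-hand side of equation (3) for q=3, s1=7, d1=2, r=5, d=4.
    return (mu**7 * math.sin(3 * theta)
            - mu**6 * math.sin(7 / 2 * theta - 5 * math.pi / 2)
            - math.sin((3 - 7 / 2) * theta + 5 * math.pi / 2))

lo, hi = 0.0, 1.5       # f(0) < 0 < f(1.5), and f is increasing on (0, +inf)
for _ in range(60):     # bisection to machine precision
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
print(lo, lo**2)        # mu_0 ~ 0.9954 and rho_14(29*pi/42) = mu_0**2
```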
## 3 Powers of Karpelevic Arcs
For \(\mathcal{S}\subset\mathbb{C}\) and \(c\in\mathbb{N}\), we define the \(c\)-th power of \(\mathcal{S}\) to be:
\[\mathcal{S}^{c}:=\{\lambda^{c}|\lambda\in\mathcal{S}\}.\]
In this section, we aim to understand, when one Karpelevic arc is a power of another Karpelevic arc. In other words, we want to identify \(q,s,\hat{q},\hat{s},n\) and \(c\) so that \(\mathcal{K}_{n}(q,s)=\mathcal{K}_{n}(\hat{q},\hat{s})^{c}\).
We start by considering the necessary conditions on the associated Farey pairs that assure that the endpoints of an arc are mapped to the endpoints of another arc. Let \(\mathcal{F}_{n}(q,s)=\{(\nicefrac{{p}}{{q}},\nicefrac{{r}}{{s}}),(\nicefrac{{ \left(s-r\right)}}{{s}},(\nicefrac{{q-p}}{{q}})\})\) be the set of two Farey pairs associated with \(\mathcal{K}_{n}(q,s)\), and \(\mathcal{F}_{n}(\hat{q},\hat{s})=\{(\nicefrac{{\left(\hat{p}/\hat{q},\hat{r} /\hat{s}\right)}}{{s}},(\nicefrac{{\left(\hat{s}-\hat{r}\right)}}{{s}},( \nicefrac{{\left(\hat{q}-\hat{r}\right)}}{{s}},(\nicefrac{{\left(\hat{q}- \hat{p}\right)}}{{\hat{q}}})\}\) the set of two Farey pairs associated with \(\mathcal{K}_{n}(\hat{q},\hat{s})\). If \(\mathcal{K}_{n}(q,s)=\mathcal{K}_{n}(\hat{q},\hat{s})^{c}\), then the endpoints of the arcs in \(\mathcal{K}_{n}(\hat{q},\hat{s})^{c}\) have to map to the endpoints of the arcs in \(\mathcal{K}_{n}(q,s)\). In terms of the associated Farey pairs:
\[\{\left(\nicefrac{{\left(\hat{p}c/\hat{q}-\left\lfloor\nicefrac{{\left.{\hat {p}c/\hat{q}\right\rfloor}}}{{s}}\right\rfloor}}{{}^{\hat{r}c/\hat{s}-\left\lfloor \nicefrac{{\left.{\hat{r}c/\hat{s}\right\rfloor}}}{{s}}\right\rfloor}},\nicefrac{ {\left(\left.{\left(\hat{s}-\hat{r}\right)c/\hat{s}\right\rfloor}}{{s}},( \nicefrac{{\left(\hat{q}-\hat{p}\right)c/\hat{q}-\left\lfloor\nicefrac{{ \left.{\left(\hat{q}-\hat{p}\right)c/\hat{q}}}{{q}}\right\rfloor}}{{}^{\hat{r}c /\hat{q}}})\right)\}=\mathcal{F}_{n}(q,s). \tag{5}\]
Since arcs in \(\mathcal{K}_{n}(q,s)\) span only a fraction of the unit circle, we also need:
\[\left\lfloor\frac{\hat{p}c}{\hat{q}}\right\rfloor=\left\lfloor\frac{\hat{r}c}{\hat{s}}\right\rfloor\quad\text{ and }\quad\left\lfloor\frac{(\hat{q}-\hat{p})c}{\hat{q}}\right\rfloor=\left\lfloor\frac{(\hat{s}-\hat{r})c}{\hat{s}}\right\rfloor. \tag{6}\]
The endpoints of \(\mathcal{K}_{n}(q,s)\) are the \(c\)-th powers of the endpoints of \(\mathcal{K}_{n}(\hat{q},\hat{s})\) precisely when (5) and (6) hold. Since these will be standing assumptions from now on, we gather them in the definition below.
**Definition 3.1**.: If \(\mathcal{F}_{n}(q,s)\) and \(\mathcal{F}_{n}(\hat{q},\hat{s})\) satisfy equations (5) and (6), then we define \(\mathcal{F}_{n}(\hat{q},\hat{s})\star c:=\mathcal{F}_{n}(q,s)\), otherwise we say that \(\mathcal{F}_{n}(\hat{q},\hat{s})\star c\) is not defined.
The example below illustrates that it is possible for (5) to hold while (6) does not.
**Example 3.2**.: Let \(\hat{q}=5\) and \(\hat{s}=6\) and \(6\leq n\leq 10\). Then \(\hat{p}=4\), \(\hat{r}=5\) and \(\mathcal{F}_{n}(\hat{q},\hat{s})=\mathcal{F}_{n}(5,6)=\{(\nicefrac{{4}}{{5}}, \nicefrac{{5}}{{6}}),(\nicefrac{{1}}{{6}},\nicefrac{{1}}{{5}})\}\). Taking \(c=31\), we verify (5):
\[\{\left(\nicefrac{{(4\times 31)}}{{5}}-24,\nicefrac{{(5\times 31)}}{{6}}-25\right),\left(\nicefrac{{(1\times 31)}}{{6}}-5,\nicefrac{{(1\times 31)}}{{5}}-6\right)\}=\mathcal{F}_{n}(5,6)=\mathcal{F}_{n}(q,s).\]
Clearly, (6) does not hold and \(\mathcal{F}_{n}(5,6)\star 31\) is not defined.
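Conditions (5) and (6) are mechanical to check with exact rational arithmetic. The sketch below (plain Python; the function name is ours) reproduces this example and also the case \(c=3\) that appears in Examples 3.4 and 3.6.

```python
from fractions import Fraction as Fr
from math import floor

def star(p, q, r, s, c):
    """Candidate Farey pairs for F_n(q,s) * c, or None when condition (6) fails."""
    frac = lambda x: x - floor(x)               # fractional part of a rational
    pairs = ((Fr(p, q), Fr(r, s)), (Fr(s - r, s), Fr(q - p, q)))
    out = []
    for a, b in pairs:
        if floor(a * c) != floor(b * c):        # condition (6)
            return None
        out.append((frac(a * c), frac(b * c)))  # left-hand side of condition (5)
    return out

print(star(4, 5, 5, 6, 31))  # None: condition (6) fails, as in Example 3.2
print(star(4, 5, 5, 6, 3))   # [(2/5, 1/2), (1/2, 3/5)]
```

Whether the returned pairs actually form \(\mathcal{F}_{n}(q,s)\) for a given \(n\) still has to be checked separately; this is exactly the point of Example 3.4 below.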
**Lemma 3.3**.: _If \(\mathcal{F}_{n}(\hat{q},\hat{s})\star c=\mathcal{F}_{n}(q,s)\), then either \(c\) divides \(\hat{q}\) or \(c\) divides \(\hat{s}\)._
Proof.: Let us define \(c_{\hat{q}}:=\gcd(c,\hat{q})\), \(c_{\hat{s}}:=\gcd(c,\hat{s})\) so that \(c=c_{\hat{q}}c_{\hat{s}}c_{0}\). If \(\mathcal{F}_{n}(\hat{q},\hat{s})\star c=\mathcal{F}_{n}(q,s)\), we notice that \(\{q,s\}=\{\hat{q}/c_{\hat{q}},\hat{s}/c_{\hat{s}}\}\). Now, if \(c_{\hat{s}}=c_{\hat{q}}=1\) then \(\{q,s\}=\{\hat{q},\hat{s}\}\), i.e. \(\mathcal{F}_{n}(\hat{q},\hat{s})=\mathcal{F}_{n}(q,s)\), since \(q\), \(s\) and \(n\) uniquely define \(\mathcal{F}_{n}(q,s)\). As this implies \(c=1\), we can assume \(c_{\hat{s}}c_{\hat{q}}>1\).
From \(\hat{q}<\hat{s}\leq n\) and \(q+s=\hat{q}/c_{\hat{q}}+\hat{s}/c_{\hat{s}}>n\), we conclude that \(c_{\hat{q}}\geq 2\) and \(c_{\hat{s}}\geq 2\) cannot both occur. Assume \(c_{\hat{s}}=1\), \(c_{\hat{q}}\geq 2\), and \(\hat{q}=c_{\hat{q}}\hat{q}_{0}\). (The case \(c_{\hat{s}}\geq 2\) and \(c_{\hat{q}}=1\) can be argued similarly.) By assumption
\[\left(\frac{\hat{p}c}{\hat{q}}-a,\frac{\hat{r}c}{\hat{s}}-a\right)=\left(\frac{\hat{p}c_{0}-a\hat{q}_{0}}{\hat{q}_{0}},\frac{\hat{r}c_{\hat{q}}c_{0}-a\hat{s}}{\hat{s}}\right)\]
is a Farey pair for \(a=\left\lfloor\nicefrac{{\hat{p}c}}{{\hat{q}}}\right\rfloor=\left\lfloor\nicefrac{{\hat{r}c}}{{\hat{s}}}\right\rfloor\). Hence:
\[1 =\hat{q}_{0}(\hat{r}c_{\hat{q}}c_{0}-a\hat{s})-\hat{s}(\hat{p}c_{0}-a\hat{q}_{0})=(\hat{q}\hat{r}-\hat{s}\hat{p})c_{0}=c_{0}.\]
We conclude that \(c_{0}=1\), proving that \(c\) divides \(\hat{q}\).
**Example 3.4**.: Let \(6\leq n\leq 10\), \(\mathcal{F}_{n}(\hat{q},\hat{s})=\mathcal{F}_{n}(5,6)=\{(\nicefrac{{4}}{{5}},\nicefrac{{5}}{{6}}),(\nicefrac{{1}}{{6}},\nicefrac{{1}}{{5}})\}\), and \(c=3\). Although \(c\) divides \(\hat{s}\), \(\mathcal{F}_{n}(\hat{q},\hat{s})\star c\) is not defined for every \(6\leq n\leq 10\). Indeed,
\[\{(\nicefrac{{(4\times 3)}}{{5}}-2,\nicefrac{{(5\times 3)}}{{6}}-2),( \nicefrac{{(1\times 3)}}{{6}}-0,\nicefrac{{(1\times 3)}}{{5}}-0)\}=\{( \nicefrac{{2}}{{5}},\nicefrac{{1}}{{2}}),(\nicefrac{{1}}{{2}},\nicefrac{{3}}{ {5}})\},\]
and \(\{(\nicefrac{{2}}{{5}},\nicefrac{{1}}{{2}}),(\nicefrac{{1}}{{2}},\nicefrac{{3 }}{{5}})\}=\mathcal{F}_{n}(2,5)\) for \(n\in\{5,6\}\) but not for \(n\in\{7,8,9,10\}\).
**Corollary 3.5**.: _Let \(\mathcal{F}_{n}(\hat{q},\hat{s})\star c=\mathcal{F}_{n}(q,s)\), \(\hat{\zeta}=(\frac{\hat{p}}{\hat{q}},\frac{\hat{r}}{\hat{s}})\in\mathcal{F}_{n }(\hat{q},\hat{s})\) and \(a:=\left\lfloor\nicefrac{{\hat{p}c}}{{\hat{q}}}\right\rfloor=\left\lfloor \nicefrac{{\hat{r}c}}{{\hat{s}}}\right\rfloor\)._
1. _If_ \(c\) _divides_ \(\hat{q}\)_, then:_ * \(\zeta=(\frac{p}{q},\frac{r}{s})\in\mathcal{F}_{n}(q,s)\) _where_ \(\hat{s}=s\)_,_ \(\hat{q}=cq\)_,_ \(\hat{p}=p+aq\) _and_ \(c\hat{r}=r+as\)_,_ * \(\hat{\theta}\in(\frac{2\pi\hat{p}}{\hat{q}},\frac{2\pi\hat{r}}{\hat{s}})\) _if and only if_ \(\hat{\theta}=\frac{1}{c}(\theta+2\pi a)\) _for_ \(\theta\in(\frac{2\pi p}{q},\frac{2\pi r}{s})\)_._
2. _If_ \(c\) _divides_ \(\hat{s}\)_, then:_ * \(\zeta=(\frac{s-r}{s},\frac{q-p}{q})\in\mathcal{F}_{n}(q,s)\) _where_ \(\hat{q}=s\)_,_ \(\hat{s}=cq\)_,_ \(\hat{p}c=s-r+as\) _and_ \(\hat{r}=q-p+aq\)_,_ * \(\hat{\theta}\in(\frac{2\pi\hat{p}}{\hat{q}},\frac{2\pi\hat{r}}{\hat{s}})\) _if and only if_ \(\hat{\theta}=\frac{1}{c}(\theta^{\prime}+2\pi a)\) _for_ \(\theta^{\prime}\in(\frac{2\pi(s-r)}{s},\frac{2\pi(q-p)}{q})\)_._
Proof.:
1. Assuming \(\hat{\zeta}=(\frac{\hat{p}}{\hat{q}},\frac{\hat{r}}{\hat{s}})\in\mathcal{F}_{n }(\hat{q},\hat{s})\), \(c|\hat{q}\) and \(\hat{q}=c\hat{q}_{0}\), we have: \[\left(\frac{\hat{p}c}{\hat{q}}-a,\frac{\hat{r}c}{\hat{s}}-a\right)=\left(\frac {\hat{p}-a\hat{q}_{0}}{\hat{q}_{0}},\frac{\hat{r}c-a\hat{s}}{\hat{s}}\right) \in\mathcal{F}_{n}(q,s)\] by (5). Since \(\hat{q}_{0}<\hat{q}<\hat{s}\), this implies: \(\hat{q}_{0}=q\), \(\hat{s}=s\), \(\hat{p}-a\hat{q}_{0}=p\), and \(c\hat{r}-a\hat{s}=r\), as desired. Now: \[\left(\frac{2\pi\hat{p}c}{\hat{q}},\frac{2\pi\hat{r}c}{\hat{s}}\right)=\left( \frac{2\pi(p+aq)}{q},\frac{2\pi(r+as)}{s}\right)=\left(\frac{2\pi p}{q}+2\pi a,\frac{2\pi r}{s}+2\pi a\right).\] The second part of the claim follows.
2. Let \(c|\hat{s}\), then \(\hat{s}=c\hat{s}_{0}\) which gives: \[\left(\frac{\hat{p}c}{\hat{q}}-a,\frac{\hat{r}c}{\hat{s}}-a\right)=\left(\frac{\hat{p}c-a\hat{q}}{\hat{q}},\frac{\hat{r}-a\hat{s}_{0}}{\hat{s}_{0}}\right)\in\mathcal{F}_{n}(q,s).\] Since \(\hat{q}<\hat{s}\), we have either \(\hat{s}_{0}<\hat{q}\) or \(\hat{q}<\hat{s}_{0}<\hat{s}\). The latter implies \(n<2\hat{s}_{0}\), i.e. \(n<2\hat{s}/c\), which is not possible as \(c\geq 2\). Thus, taking \(\hat{q}>\hat{s}_{0}\) we must have \(\hat{q}=s\), \(\hat{s}=cq\), \(\hat{p}c=s-r+as\) and \(\hat{r}=q-p+aq\).
Further,
\[\left(\frac{2\pi\hat{p}c}{\hat{q}},\frac{2\pi\hat{r}c}{\hat{s}}\right)=\left(\frac {2\pi(s-r)}{s}+2\pi a,\frac{2\pi(q-p)}{q}+2\pi a\right).\]
Thus, \(\hat{\theta}c=\theta^{\prime}+2\pi a\), where \(\theta^{\prime}\in(\frac{2\pi(s-r)}{s},\frac{2\pi(q-p)}{q})\).
**Example 3.6**.: Let \(\mathcal{F}_{n}(\hat{q},\hat{s})=\mathcal{F}_{6}(5,6)=\{(\nicefrac{{4}}{{5}},\nicefrac{{5}}{{6}}),(\nicefrac{{1}}{{6}},\nicefrac{{1}}{{5}})\}\), \(c=3\), and \(a:=\lfloor\frac{\hat{p}c}{\hat{q}}\rfloor=\lfloor\frac{\hat{r}c}{\hat{s}}\rfloor=2\). Then \(\mathcal{F}_{6}(5,6)\star 3=\mathcal{F}_{6}(2,5)\). In particular,
\[(e^{\frac{4\cdot 2\pi i}{5}})^{3}=e^{\frac{2\cdot 2\pi i}{5}},(e^{\frac{5\cdot 2 \pi i}{6}})^{3}=e^{\frac{2\pi i}{2}},\]
and \(\hat{\theta}\in(\frac{4\cdot 2\pi}{5},\frac{5\cdot 2\pi}{6})\) if and only if \(\theta^{\prime}=3\hat{\theta}-2\cdot 2\pi\in(\frac{2\cdot 2\pi}{5},\frac{2\pi}{ 2})\).
Now that we understand when the endpoints of a Karpelevic arc are powers of the endpoints of another Karpelevic arc, we want to know when the whole arc is mapped to another arc. To this end, we compute the derivative of the modulus \(\rho_{n}\) with respect to the argument \(\theta\) of a Karpelevic arc at the endpoints.
**Lemma 3.7**.: _Suppose that \((\frac{p}{q},\frac{r}{s})\), \(q<s\), is a Farey pair and for \(\theta\in[\nicefrac{{2\pi p}}{{q}},\nicefrac{{2\pi r}}{{s}}]\) let the point on the boundary of \(\Theta_{n}\) with the argument \(\theta\) be given by \(\rho_{n}(\theta)e^{\mathrm{i}\theta}\). Then_
\[\frac{\partial\rho_{n}(\theta)}{\partial\theta}\Big{|}_{\theta=\frac{2\pi p} {q}}\sin\left(\frac{2\pi}{qd}\right)=\cos\left(\frac{2\pi}{qd}\right)-1 \tag{7}\]
_and_
\[\frac{\partial\rho_{n}(\theta)}{\partial\theta}\Big{|}_{\theta=\frac{2\pi r }{s}}\sin\left(\frac{2\pi}{s}\right)=1-\cos\left(\frac{2\pi}{s}\right). \tag{8}\]
Proof.: Implicit differentiation of equation (3) gives us:
\[\mu^{s_{1}}q\cos(q\theta)+s_{1}\mu^{s_{1}-1}\frac{\partial\mu( \theta)}{\partial\theta}\Big{|}_{\theta}\sin(q\theta)-\] \[\Big{[}\mu^{qd_{1}}\frac{s_{1}}{d_{1}}\cos\left(\frac{s_{1}}{d_{1 }}\theta-\frac{2\pi r}{d}\right)+qd_{1}\mu^{qd_{1}-1}\frac{\partial\mu(\theta )}{\partial\theta}\Big{|}_{\theta}\sin\left(\frac{s_{1}}{d_{1}}\theta-\frac{2 \pi r}{d}\right)\Big{]}-\] \[\Big{[}(q-\frac{s_{1}}{d_{1}})\cos\left((q-\frac{s_{1}}{d_{1}}) \theta+\frac{2\pi r}{d}\right)\Big{]}=0.\]
Taking \(\mu=1\) we have:
\[q\cos(q\theta)+s_{1}\frac{\partial\mu(\theta)}{\partial\theta}\Big{|}_{\theta}\sin(q\theta)-\Big{[}\frac{s_{1}}{d_{1}}\cos\left(\frac{s_{1}}{d_{1}}\theta-\frac{2\pi r}{d}\right)+qd_{1}\frac{\partial\mu(\theta)}{\partial\theta}\Big{|}_{\theta}\sin\left(\frac{s_{1}}{d_{1}}\theta-\frac{2\pi r}{d}\right)\Big{]}-\Big{[}(q-\frac{s_{1}}{d_{1}})\cos\left((q-\frac{s_{1}}{d_{1}})\theta+\frac{2\pi r}{d}\right)\Big{]}=0.\]
Inserting \(\theta=\frac{2\pi p}{q}\) and \(\theta=\frac{2\pi r}{s}\) into the above, we get:
\[\frac{\partial\mu(\theta)}{\partial\theta}\Big{|}_{\theta=2\pi p/q}\sin\left( \frac{2\pi}{qd}\right)=\frac{1}{d_{1}}\left(\cos\left(\frac{2\pi}{qd}\right)-1\right)\]
and
\[\frac{\partial\mu(\theta)}{\partial\theta}\Big{|}_{\theta=2\pi r/s}\sin\left( \frac{2\pi}{s}\right)=\frac{1}{d_{1}}\left(1-\cos\left(\frac{2\pi}{s}\right) \right).\]
Noting that \(\rho_{n}(\theta)=\mu(\theta)^{d_{1}}\), which implies \(\frac{\partial\rho_{n}(\theta)}{\partial\theta}=d_{1}\mu(\theta)^{d_{1}-1}\frac{\partial\mu}{\partial\theta}\), equations (7) and (8) follow.
**Corollary 3.8**.: _Let \(\mathcal{K}_{n}(q,s)=\mathcal{K}_{n}(\hat{q},\hat{s})^{c}\), \(d:=\lfloor\nicefrac{{n}}{{q}}\rfloor\) and \(\hat{d}:=\lfloor\nicefrac{{n}}{{\hat{q}}}\rfloor\). Then:_
1. _If_ \(c\) _divides_ \(\hat{q}\)_, then_ \(qd=\hat{q}\hat{d}\) _and_ \(s=\hat{s}\)_._
2. _If_ \(c\) _divides_ \(\hat{s}\)_, then_ \(qd=\hat{s}\) _and_ \(s=\hat{q}\hat{d}\)_._
Proof.: Let \(F(x):=\frac{1-\cos(x)}{\sin(x)}\). Note that \(F(x)\) is injective on \([0,2\pi]\).
If \(\mathcal{K}_{n}(q,s)=\mathcal{K}_{n}(\hat{q},\hat{s})^{c}\), then for all \(\theta\in(\frac{2\pi p}{q},\frac{2\pi r}{s})\) we have \(\rho_{n}(\theta)=(\rho_{n}(\hat{\theta}))^{c}\) for \(\hat{\theta}\in\arg(\hat{q},\hat{s})\), where by Corollary 3.5:
\[\hat{\theta}=\frac{1}{c}\left(\theta+2\pi a\right)\text{ if }c\text{ divides }\hat{q}, \tag{9}\]
\[\hat{\theta}=\frac{1}{c}\left(2\pi-\theta+2\pi a\right)\text{ if }c\text{ divides }\hat{s}. \tag{10}\]
In particular:
\[\frac{\partial(\rho_{n}(\hat{\theta}))^{c}}{\partial\theta}\Big{|}_{\theta= \frac{2\pi p}{q}}=\frac{\partial\rho_{n}(\theta)}{\partial\theta}\Big{|}_{ \theta=\frac{2\pi p}{q}}, \tag{11}\]
\[\frac{\partial(\rho_{n}(\hat{\theta}))^{c}}{\partial\theta}\Big{|}_{\theta= \frac{2\pi r}{s}}=\frac{\partial\rho_{n}(\theta)}{\partial\theta}\Big{|}_{ \theta=\frac{2\pi r}{s}}, \tag{12}\]
and
\[\frac{\partial(\rho_{n}(\hat{\theta}))^{c}}{\partial\theta}=c\rho_{n}(\hat{ \theta})^{c-1}\frac{\partial\rho_{n}(\hat{\theta})}{\partial\hat{\theta}} \frac{\partial\hat{\theta}}{\partial\theta}.\]
1. If \(c\) divides \(\hat{q}\), then: \[\frac{\partial\hat{\theta}}{\partial\theta}=\frac{1}{c},\] by equation (9). Now, by Lemma 3.7: \[\frac{\partial\rho_{n}(\theta)}{\partial\theta}\Big{|}_{\theta=\frac{2\pi p}{q}}=\frac{\cos\left(\frac{2\pi}{qd}\right)-1}{\sin\left(\frac{2\pi}{qd}\right)}=-F\left(\frac{2\pi}{qd}\right),\] \[\frac{\partial(\rho_{n}(\hat{\theta}))^{c}}{\partial\theta}\Big{|}_{\theta=\frac{2\pi p}{q}}=\frac{\partial\rho_{n}(\hat{\theta})}{\partial\hat{\theta}}\Big{|}_{\hat{\theta}=\frac{2\pi\hat{p}}{\hat{q}}}=\frac{\cos\left(\frac{2\pi}{\hat{q}\hat{d}}\right)-1}{\sin\left(\frac{2\pi}{\hat{q}\hat{d}}\right)}=-F\left(\frac{2\pi}{\hat{q}\hat{d}}\right),\]
since \(\rho_{n}(\hat{\theta})=1\) at the endpoints, and \(\hat{\theta}=\nicefrac{{2\pi\hat{p}}}{{\hat{q}}}\) when \(\theta=\nicefrac{{2\pi p}}{{q}}\). Hence, \(F\left(\frac{2\pi}{qd}\right)=F\left(\frac{2\pi}{\hat{q}\hat{d}}\right)\) by equation (11) and by injectivity of \(F\) we have \(\hat{q}\hat{d}=qd\). Similarly, \(\frac{\partial\rho_{n}(\theta)}{\partial\theta}\Big{|}_{\theta=\frac{2\pi r}{s}}=F\left(\frac{2\pi}{s}\right)\) and \(\frac{\partial(\rho_{n}(\hat{\theta}))^{c}}{\partial\theta}\Big{|}_{\theta=\frac{2\pi r}{s}}=F\left(\frac{2\pi}{\hat{s}}\right)\), which by equation (12) implies \(\hat{s}=s\).
2. If \(c\) divides \(\hat{s}\), then: \[\frac{\partial\hat{\theta}}{\partial\theta}=-\frac{1}{c},\] by equation (10). Now, by Lemma 3.7: \[\frac{\partial\rho_{n}(\theta)}{\partial\theta}\Big{|}_{\theta=\frac{2\pi p}{q}} =\frac{\cos\left(\frac{2\pi}{qd}\right)-1}{\sin\left(\frac{2\pi}{qd}\right)}=-F\left(\frac{2\pi}{qd}\right),\] \[\frac{\partial(\rho_{n}(\hat{\theta}))^{c}}{\partial\theta}\Big{|}_{\theta=\frac{2\pi p}{q}} =-\frac{\partial\rho_{n}(\hat{\theta})}{\partial\hat{\theta}}\Big{|}_{\hat{\theta}=\frac{2\pi\hat{r}}{\hat{s}}}=\frac{\cos\left(\frac{2\pi}{\hat{s}}\right)-1}{\sin\left(\frac{2\pi}{\hat{s}}\right)}=-F\left(\frac{2\pi}{\hat{s}}\right),\] since \(\rho_{n}(\hat{\theta})=1\) at the endpoints, and \(\hat{\theta}=\nicefrac{{2\pi\hat{r}}}{{\hat{s}}}\) when \(\theta=\nicefrac{{2\pi p}}{{q}}\). Hence, \(F\left(\frac{2\pi}{qd}\right)=F\left(\frac{2\pi}{\hat{s}}\right)\) by equation (11) and by injectivity of \(F\) we have \(\hat{s}=qd\). Similarly, the identities \(\frac{\partial\rho_{n}(\theta)}{\partial\theta}\Big{|}_{\theta=\frac{2\pi r}{s}}=F\left(\frac{2\pi}{s}\right)\) and \(\frac{\partial(\rho_{n}(\hat{\theta}))^{c}}{\partial\theta}\Big{|}_{\theta=\frac{2\pi r}{s}}=F\left(\frac{2\pi}{\hat{q}\hat{d}}\right)\) imply \(\hat{q}\hat{d}=s\).
The following example shows that \(\mathcal{F}_{n}(\hat{q},\hat{s})\star c=\mathcal{F}_{n}(q,s)\) does not imply \(\mathcal{K}_{n}(\hat{q},\hat{s})^{c}=\mathcal{K}_{n}(q,s)\).
**Example 3.9**.: Let \(n=27\), \(\mathcal{F}_{n}(\hat{q},\hat{s})=\mathcal{F}_{27}(4,27)\), \(\mathcal{F}_{n}(q,s)=\mathcal{F}_{27}(2,27)\) and \(c=2\). Then:
\[\mathcal{F}_{27}(4,27)\star 2=\mathcal{F}_{27}(2,27).\]
We have \(q=2\), \(\hat{q}=4\), \(d=13\) and \(\hat{d}=6\). Clearly \(c\) divides \(\hat{q}\) and \(qd=26\neq\hat{q}\hat{d}=24\). Therefore \(\mathcal{K}_{n}(\hat{q},\hat{s})^{c}=\mathcal{K}_{27}(4,27)^{2}\neq\mathcal{K }_{27}(2,27)=\mathcal{K}_{n}(q,s)\) by Corollary 3.8.
**Theorem 3.10**.: _Let \(\mathcal{K}_{n}(q,s)\in\mathcal{K}_{n}\) and \(d:=\lfloor\nicefrac{{n}}{{q}}\rfloor\). Then \(\mathcal{K}_{n}(q,s)=\mathcal{K}_{n}(\hat{q},\hat{s})^{c}\) if, and only if, one of the following situations occurs:_
1. \(\mathcal{K}_{n}(q,s)=\mathcal{K}_{n}(cq,s)^{c}\)_, for any_ \(c\) _that divides_ \(d\)_, satisfies_ \(\gcd(c,s)=1\) _and_ \(cq<s\)_._
2. \(\mathcal{K}_{n}(q,s)=\mathcal{K}_{n}(s,qd)^{d}\) _if_ \(\delta:=\gcd(d,s)=1\)_._
Proof.: Let \(\mathcal{K}_{n}(q,s)\in\mathcal{K}_{n}\) and \(\mathcal{K}_{n}(q,s)=\mathcal{K}_{n}(\hat{q},\hat{s})^{c}\). Then \(\mathcal{F}_{n}(\hat{q},\hat{s})\star c=\mathcal{F}_{n}(q,s)\), and either \(c\) divides \(\hat{q}\) or \(c\) divides \(\hat{s}\) by Lemma 3.3. We consider each case separately.
Assume first that \(c\) divides \(\hat{q}\) (and \(\gcd(c,\hat{s})=1\)). Hence, \(\hat{q}=qc\) and \(\hat{s}=s\) by Corollary 3.5. Clearly, we also have \(\gcd(c,s)=1\). By Corollary 3.8 we now have
\(qd=qc\hat{d}\) which implies \(\hat{d}c=d\). Since \(\hat{q}<\hat{s}\), \(qc<s\) has to hold. With this, we have shown that if \(\mathcal{K}_{n}(q,s)=\mathcal{K}_{n}(\hat{q},\hat{s})^{c}\) and \(c\) divides \(\hat{q}\), then the conditions listed in item 1 have to hold.
Now, assuming \(c\) divides \(d\), \(\gcd(c,s)=1\), and \(cq<s\), we want to prove \(\mathcal{K}_{n}(q,s)=\mathcal{K}_{n}(cq,s)^{c}\). Let \(\theta\in(\nicefrac{{2\pi p}}{{q}},\nicefrac{{2\pi r}}{{s}})\), where \((\nicefrac{{p}}{{q}},\nicefrac{{r}}{{s}})\in\mathcal{F}_{n}(q,s)\). Theorem 2.5 tells us that \(\rho_{n}(\theta)e^{i\theta}=\mu^{d_{1}}e^{i\theta}\in\mathcal{K}_{n}(q,s)\) if and only if \(\mu\) is the unique solution to (3).
To study \(\hat{\lambda}\in\mathcal{K}_{n}(\hat{q},\hat{s})^{c}\) with the argument \(\theta\), we first use Corollary 3.5 to identify parameters associated with Farey pairs in \(\mathcal{K}_{n}(\hat{q},\hat{s})\):
\[\hat{s}=s,\,\hat{q}=cq,\,\hat{p}=p+aq,\,\,\,\text{and}\,c\hat{r}=r+as, \tag{13}\]
and note that \(\hat{\theta}:=\frac{1}{c}(\theta+2\pi a)\in(\nicefrac{{2\pi\hat{p}}}{{\hat{q}}},\nicefrac{{2\pi\hat{r}}}{{\hat{s}}})\subset\arg(\hat{q},\hat{s})\). Now, \(\gcd(c,s)=1\) implies \(\gcd(c\hat{d},s)=\gcd(\hat{d},s)\), which gives us \(\hat{\delta}=\gcd(\hat{d},\hat{s})=\gcd(c\hat{d},s)=\gcd(d,s)=\delta\). Hence:
\[\hat{\delta}=\delta,\,\hat{s}_{1}=s_{1},\,\hat{d}_{1}c=d_{1}, \tag{14}\]
where \(\hat{s}_{1}:=s_{1}(\hat{q},\hat{s})\) and \(\hat{d}_{1}:=d_{1}(\hat{q},\hat{s})\).
By Theorem 2.5, \(\hat{\mu}^{\hat{d}_{1}}e^{i\hat{\theta}}\in\mathcal{K}_{n}(cq,s)\) if and only if \(\hat{\mu}\) satisfies equation (3) for \(\hat{\theta}\) and the parameters associated with \((\nicefrac{{\hat{p}}}{{\hat{q}}},\nicefrac{{\hat{r}}}{{\hat{s}}})\):
\[\hat{\mu}^{\hat{s}_{1}}\sin(\hat{q}\hat{\theta})-\hat{\mu}^{\hat{q}\hat{d}_{1} }\sin\left(\frac{\hat{s}_{1}}{\hat{d}_{1}}\hat{\theta}-\frac{2\pi\hat{r}}{\hat {d}}\right)-\sin\left((\hat{q}-\frac{\hat{s}_{1}}{\hat{d}_{1}})\hat{\theta}+ \frac{2\pi\hat{r}}{\hat{d}}\right)=0. \tag{15}\]
Using (13) and (14) we get:
\[\hat{\mu}^{\hat{s}_{1}}\sin(\hat{q}\hat{\theta}) =\hat{\mu}^{s_{1}}\sin\left(q(\theta+2\pi a)\right)=\hat{\mu}^{s_{1}}\sin(q\theta),\] \[\hat{\mu}^{\hat{q}\hat{d}_{1}}\sin\left(\frac{\hat{s}_{1}}{\hat{d}_{1}}\hat{\theta}-\frac{2\pi\hat{r}}{\hat{d}}\right) =\hat{\mu}^{qd_{1}}\sin\left(\frac{s_{1}}{d_{1}}(\theta+2\pi a)-\frac{2\pi(r+as)}{d}\right)\] \[=\hat{\mu}^{qd_{1}}\sin\left(\frac{s_{1}}{d_{1}}\theta-\frac{2\pi r}{d}\right),\] \[\sin\left((\hat{q}-\frac{\hat{s}_{1}}{\hat{d}_{1}})\hat{\theta}+\frac{2\pi\hat{r}}{\hat{d}}\right) =\sin\left((cq-\frac{cs_{1}}{d_{1}})\frac{(\theta+2\pi a)}{c}+\frac{2\pi(r+as)}{d}\right)\] \[=\sin\left((q-\frac{s_{1}}{d_{1}})\theta+\frac{2\pi r}{d}\right),\]
which proves that the coefficients of \(\hat{\mu}\) in (15) and the coefficients of \(\mu\) in (3) agree. Since (3) defines \(\mu\) uniquely, we conclude \(\hat{\mu}=\mu\). Finally,
\[\hat{\lambda}^{c}=(\hat{\mu}^{\hat{d}_{1}}e^{i\hat{\theta}})^{c}=\mu^{d_{1}}e^ {i\theta}=\rho_{n}(\theta)e^{i\theta}\]
completes the proof that \(\mathcal{K}_{n}(cq,s)^{c}=\mathcal{K}_{n}(q,s)\).
Next, assume that \(c\) divides \(\hat{s}\) (and \(\gcd(c,\hat{q})=1\)). Hence, \(\hat{s}=cq\) and \(\hat{q}=s\) by Corollary 3.5. Now, by Corollary 3.8 we have \(qd=cq\) and \(s=s\hat{d}\) which gives \(c=d\) and \(\hat{d}=1\). Therefore, \(\delta=\gcd(d,s)=\gcd(c,\hat{q})=1\), as required. This shows that if \(\mathcal{K}_{n}(q,s)=\mathcal{K}_{n}(\hat{q},\hat{s})^{c}\) and \(c\) divides \(\hat{s}\), then the condition in item 2 has to hold.
Now, assuming \(\gcd(d,s)=1\), we want to prove \(\mathcal{K}_{n}(q,s)=\mathcal{K}_{n}(s,qd)^{d}\). Since \(c\) divides \(\hat{s}\), let \(\theta^{\prime}\in(\nicefrac{{2\pi(s-r)}}{{s}},\nicefrac{{2\pi(q-p)}}{{q}})\), where \(\zeta=(\nicefrac{{(s-r)}}{{s}},\nicefrac{{(q-p)}}{{q}})\in\mathcal{F}_{n}(q,s)\).
By Remark 2.6, \(\rho_{n}e^{i\theta^{\prime}}=\mu^{d_{1}}e^{i\theta^{\prime}}\in\mathcal{K}_{n}(q,s)\) if and only if \(\mu\) satisfies (3) for parameters associated with \(\zeta\) and \(\theta=2\pi-\theta^{\prime}\). That is:
\[\mu^{s_{1}}\sin(q(2\pi-\theta^{\prime}))-\mu^{qd_{1}}\sin\left(\frac{s_{1}}{d_ {1}}(2\pi-\theta^{\prime})-\frac{2\pi r}{d}\right)-\sin\left((q-\frac{s_{1}}{d _{1}})(2\pi-\theta^{\prime})+\frac{2\pi r}{d}\right)=0,\]
or equivalently:
\[-\mu^{s_{1}}\sin(q\theta^{\prime})+\mu^{qd_{1}}\sin\left(\frac{s_{1}}{d_{1}} \theta^{\prime}+\frac{2\pi(r-s)}{d}\right)+\sin\left((q-\frac{s_{1}}{d_{1}}) \theta^{\prime}+\frac{2\pi(s-r)}{d}\right)=0. \tag{16}\]
To understand \(\hat{\lambda}\in\mathcal{K}_{n}(\hat{q},\hat{s})^{c}\) with the argument \(\theta^{\prime}\), we first use Corollary 3.5 to identify parameters associated with Farey pairs in \(\mathcal{K}_{n}(\hat{q},\hat{s})\):
\[\hat{q}=s,\,\hat{s}=qc,\,\hat{p}c=s-r+as,\,\,\,\text{and}\,\,\hat{r}=q-p+aq, \tag{17}\]
and \(\hat{\theta}=\frac{1}{c}(\theta^{\prime}+2\pi a)\in(\nicefrac{{2\pi\hat{p}}}{ {\hat{q}}},\nicefrac{{2\pi\hat{r}}}{{s}})\subset\arg(\hat{q},\hat{s})\). In addition: \(c=d\), \(\hat{d}=1\), and \(\hat{d}_{1}=1\).
Theorem 2.5 tells us that \(\hat{\lambda}=\hat{\mu}^{\hat{d}_{1}}e^{i\hat{\theta}}\in\mathcal{K}_{n}(s,qd)\) if and only if \(\hat{\mu}\) satisfies (15) for the parameters associated with \((\nicefrac{{\hat{p}}}{{\hat{q}}},\nicefrac{{\hat{r}}}{{\hat{s}}})\) and \(\hat{\theta}\). Using the parameters defined above, we get:
\[\hat{\mu}^{\hat{s}_{1}}\sin(\hat{q}\hat{\theta}) =\hat{\mu}^{qd}\sin\left(\frac{s}{d}\theta^{\prime}+2\pi\frac{as} {d}\right),\] \[\hat{\mu}^{\hat{q}\hat{d}_{1}}\sin\left(\frac{\hat{s}_{1}}{\hat{ d}_{1}}\hat{\theta}-\frac{2\pi\hat{r}}{\hat{d}}\right) =\hat{\mu}^{s}\sin\left(q\theta^{\prime}\right),\] \[\sin\left((\hat{q}-\frac{\hat{s}_{1}}{\hat{d}_{1}})\hat{\theta}+ \frac{2\pi\hat{r}}{\hat{d}}\right) =\sin\left((\frac{s}{d}-q)(\theta^{\prime}+2\pi a)\right)\] \[=\sin\left((\frac{s}{d}-q)\theta^{\prime}+\frac{2\pi as}{d}\right).\]
Since \(as=\hat{p}d-s+r\), the coefficients of \(\hat{\mu}\) in (15) and the coefficients of \(\mu\) in (16) are the same, and we can conclude that \(\hat{\mu}=\mu\). Hence, \(\hat{\lambda}^{d}=(\hat{\mu}^{\hat{d}_{1}}e^{i\hat{\theta}})^{d}=\mu^{d_{1}}e^{i\theta^{\prime}}=\rho_{n}(\theta^{\prime})e^{i\theta^{\prime}}\), i.e. \(\mathcal{K}_{n}(q,s)=\mathcal{K}_{n}(s,qd)^{d}\), as required.
**Example 3.11**.: Theorem 3.10 allows us to identify all Karpelevic arcs that are powers of another arc. In particular, for \(n=8\) we list all such cases below:
1. \(\mathcal{K}_{8}(4,7)=\mathcal{K}_{8}(7,8)^{2}\)
2. \(\mathcal{K}_{8}(2,7)=\mathcal{K}_{8}(7,8)^{4}=\mathcal{K}_{8}(4,7)^{2}\)
3. \(\mathcal{K}_{8}(3,7)=\mathcal{K}_{8}(7,6)^{2}\)
4. \(\mathcal{K}_{8}(4,5)=\mathcal{K}_{8}(5,8)^{2}\).
As an example let us consider \(\mathcal{K}_{8}(4,7)\), with \(q=4\), \(s=7\), \(d=2\) and \(\delta=1\). Hence, \(\mathcal{K}_{8}(4,7)=\mathcal{K}_{8}(7,8)^{2}\), by item 2 in the theorem. Considering \(\mathcal{K}_{8}(2,7)\) with \(q=2\), \(s=7\), \(d=4\), \(\delta=1\), we have \(\mathcal{K}_{8}(2,7)=\mathcal{K}_{8}(7,8)^{4}\) by item 2. Also, \(c=2\) divides \(d=4\) which implies \(\mathcal{K}_{8}(2,7)=\mathcal{K}_{8}(4,7)^{2}\) by item 1.
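The relation \(\mathcal{K}_{8}(4,7)=\mathcal{K}_{8}(7,8)^{2}\) can be spot-checked numerically. In the sketch below (plain Python; the specialised equations are our derivation from Theorem 2.5 for the Farey pairs \((\nicefrac{6}{7},\nicefrac{7}{8})\) and \((\nicefrac{1}{4},\nicefrac{2}{7})\), not quoted from the text), the boundary equation for \(\mathcal{K}_{8}(7,8)\) is solved at an interior angle \(\hat{\theta}\), and the same \(\mu\) is shown to satisfy the equation for \(\mathcal{K}_{8}(4,7)\) at the argument prescribed by the proof of Theorem 3.10.

```python
from math import sin, pi

def bisect(f, lo, hi, tol=1e-12):
    flo = f(lo)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if flo * f(mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return (lo + hi) / 2

# K_8(7,8): Farey pair (6/7, 7/8), d = d1 = 1, s1 = 8, so (3) reduces to
#   mu^8 sin(7t) - mu^7 sin(8t) + sin(t) = 0 on (12*pi/7, 7*pi/4)
t_hat = 1.74 * pi
mu = bisect(lambda m: m**8*sin(7*t_hat) - m**7*sin(8*t_hat) + sin(t_hat), 0.9, 1.0)

# the squared point has argument t' = 2*t_hat - 2*pi, which lies on the mirrored
# piece of K_8(4,7); by Remark 2.6 we check (3) at t = 2*pi - t'
t = 2*pi - (2*t_hat - 2*pi)
# K_8(4,7): Farey pair (1/4, 2/7), d = 2, d1 = 2, s1 = 7, r = 2
residual = mu**7*sin(4*t) - mu**8*sin(3.5*t - 2*pi) - sin(0.5*t + 2*pi)
print(mu, abs(residual))   # residual of order 1e-12: the same mu works for both arcs
```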
Theorem 3.10 answers the question: given \(\mathcal{K}_{n}(q,s)\), what are all possible arcs \(\mathcal{K}_{n}(\hat{q},\hat{s})\) whose power is \(\mathcal{K}_{n}(q,s)\)? Since the theorem gives the complete characterisation, it also answers the dual question: given \(\mathcal{K}_{n}(\hat{q},\hat{s})\), what are all possible arcs \(\mathcal{K}_{n}(q,s)\) that are powers of \(\mathcal{K}_{n}(\hat{q},\hat{s})\)? This point of view is given in the result below.
**Corollary 3.12**.: _Let \(\mathcal{K}_{n}(\hat{q},\hat{s})\in\mathcal{K}_{n}\) and \(\hat{d}:=\lfloor\nicefrac{{n}}{{\hat{q}}}\rfloor\). Then \(\mathcal{K}_{n}(\hat{q},\hat{s})^{c}=\mathcal{K}_{n}(q,s)\in\mathcal{K}_{n}\) if and only if one of the following situations occurs:_
1. \(\mathcal{K}_{n}(\hat{q},\hat{s})^{c}=\mathcal{K}_{n}(\nicefrac{{\hat{q}}}{{c}},\hat{s})\)_, for all_ \(c\) _that divide_ \(\hat{q}\) _and satisfy_ \(\lvert\hat{s}-\hat{q}\hat{d}\rvert<\nicefrac{{\hat{q}}}{{c}}\)_._
2. \(\mathcal{K}_{n}(\hat{q},\hat{s})^{c}=\mathcal{K}_{n}(\nicefrac{{\hat{s}}}{{c} },\hat{q})\)_, for all_ \(c\) _that divide_ \(\hat{s}\) _and satisfy_ \(n-\hat{q}<\nicefrac{{\hat{s}}}{{c}}\)_._
## 4 Stochastic matrices and associated digraphs, notation and background
In the second part of the paper, we consider the question of when a stochastic matrix that realises an eigenvalue on the boundary of the Karpelevic region can be written as a power of another stochastic matrix. This section is dedicated to the necessary background and notation.
### Notation
When taking powers of matrices and the associated digraphs, we will repeatedly encounter modular arithmetic. Since we want to use the standard numbering of the rows and columns of \(n\times n\) matrices from \(1\) to \(n\), it is convenient to define: \(\langle k\rangle_{n}:=1+((k-1)\mod n)\). In this notation \(\langle k\rangle_{n}\in\{1,\ldots,n\}\).
Given \(n\in\mathbb{N}\) we define the following vectors:
\[\mathbf{a}(n) :=\begin{pmatrix}1&2&\ldots&n\end{pmatrix},\] \[\mathbf{a_{0}}(n) :=\begin{pmatrix}0&1&\ldots&n-1\end{pmatrix},\] \[\mathbf{e} :=\begin{pmatrix}1&1&\ldots&1\end{pmatrix},\]
where the size of \(\mathbf{e}\) will be clear from the context. In addition, we will depend on standard operations to build new vectors. For example, \(i\cdot\mathbf{a}(n)=\begin{pmatrix}i&2i&\ldots&ni\end{pmatrix}\), \(i\cdot\mathbf{e}+\mathbf{a}(n)=\begin{pmatrix}i+1&i+2&\ldots&i+n\end{pmatrix}\), etc. Furthermore, for \(\mathbf{v}=\begin{pmatrix}v_{1}&\ldots&v_{k}\end{pmatrix}\in\mathbb{N}_{0}^{k}\) we denote:
\[\mathscr{T}(\mathbf{v}):=\{\begin{pmatrix}v_{i}&v_{\langle i+1\rangle_{k}}& \ldots&v_{\langle i+k-1\rangle_{k}}\end{pmatrix};i=1,\ldots,k\}\]
to be the set of vectors obtained from \(\mathbf{v}\) by cyclic permutations of its elements.
A digraph \(G\) is defined by its vertex set \(V(G)=\{1,\ldots,n\}\) and edge set \(E(G)\subseteq V(G)\times V(G)\). For \(\mathbf{v}=\begin{pmatrix}v_{1}&\ldots&v_{k}\end{pmatrix}\in\mathbb{N}^{k}\) we denote by \(C(\mathbf{v})\) the \(k\)-cycle with
\(V(C(\mathbf{v}))=\{v_{i};i=1,\ldots,k\}\) and \(E(C(\mathbf{v}))=\{(v_{i},v_{\langle 1+i\rangle_{k}});i=1,\ldots,k\}\). Clearly, \(C(\mathbf{v})=C(\mathbf{u})\) if and only if \(\mathscr{T}(\mathbf{v})=\mathscr{T}(\mathbf{u})\). The _weight of a cycle_ is defined to be the product of the weights on the edges of that cycle. Furthermore, \(P(\mathbf{v})\) will denote the path with \(V(P(\mathbf{v}))=\{v_{i};i=1,\ldots,k\}\) and \(E(P(\mathbf{v}))=\{(v_{i},v_{i+1});i=1,\ldots,k-1\}\). Let \(\mathcal{E}\subset V(G)\times V(G)\), then \(G+\mathcal{E}\) is defined to be the digraph with \(V(G+\mathcal{E})=V(G)\) and \(E(G+\mathcal{E})=E(G)\cup\mathcal{E}\). With \(G^{(b)}\) we denote the \(b\)-th strong power of \(G\), i.e. the digraph on vertex set \(V(G)\), where \((v_{1},v_{2})\in E(G^{(b)})\) if and only if \(v_{1}\) and \(v_{2}\) are at distance \(b\) in \(G\). Given a nonnegative \(n\times n\) matrix \(A\), we define \(\Gamma(A)\) to be _the digraph associated with \(A\)_ defined by \(V(\Gamma(A))=\{1,2,\ldots,n\}\) and \(E(\Gamma(A))=\{(i,j);(A)_{i,j}\neq 0\}\).
For integers \(q,s,n\in\mathbb{N}\) satisfying \(q<s\), \(\gcd(q,s)=1\) and \(q+s>n\), we denote by \(\mathcal{M}_{n}(q,s)\) the set of all \(n\times n\) stochastic matrices with an eigenvalue from \(\mathcal{K}_{n}(q,s)\). The set of sparsest matrices in \(\mathcal{M}_{n}(q,s)\) is denoted by \(\mathcal{M}_{n}^{0}(q,s)\). More precisely, \(A\in\mathcal{M}_{n}^{0}(q,s)\) if and only if \(A=(a_{ij})\in\mathcal{M}_{n}(q,s)\) and there does not exist \(A^{\prime}=(a^{\prime}_{ij})\in\mathcal{M}_{n}(q,s)\) satisfying \(\{(i,j);a^{\prime}_{ij}\neq 0\}\subset\{(i,j);a_{ij}\neq 0\}\). (It turns out that, for fixed \(n\), \(q\) and \(s\), all matrices in \(\mathcal{M}_{n}^{0}(q,s)\) have the same number of nonzero elements, hence \(\mathcal{M}_{n}^{0}(q,s)\) can also be defined as the set of matrices in \(\mathcal{M}_{n}(q,s)\) with the least number of non-zero entries.)
Let \(\mathcal{S}\) be a set of \(n\times n\) matrices and \(c\in\mathbb{N}\), then \(\mathcal{S}^{c}:=\{A^{c};A\in\mathcal{S}\}\). For \(n\times m\) matrix \(M\), let \(\operatorname{vec}(M)\) denote the \(m\cdot n\) vector obtained by stacking the columns of the matrix \(M\).
### Sparsest realisations of Ito polynomials
In [7] the sparsest realizing matrices for Ito polynomials of degree \(n\) were characterised. Given a reduced Ito polynomial \(f_{\alpha}(t)\) of degree \(n\) associated with \(\mathcal{K}_{n}(q,s)\), the sparsest realisation of \(f_{\alpha}(t)\) is (up to permutation similarity) uniquely defined by the associated digraph. Moreover, associated digraphs are precisely digraphs on \(n\) vertices that contain one \(s\)-cycle and \(d\) disjoint \(q\)-cycles.
Let \(q,s,\hat{q},\hat{s},n\) and \(c\) be such that \(\mathcal{K}_{n}(q,s)=\mathcal{K}_{n}(\hat{q},\hat{s})^{c}\). Then:
\[\mathcal{M}_{n}(\hat{q},\hat{s})^{c} \subset\mathcal{M}_{n}(q,s),\] \[\mathcal{M}_{n}^{0}(\hat{q},\hat{s})^{c} \subset\mathcal{M}_{n}^{0}(q,s),\]
but \(\mathcal{M}_{n}(\hat{q},\hat{s})^{c}\neq\mathcal{M}_{n}(q,s)\) and \(\mathcal{M}_{n}^{0}(\hat{q},\hat{s})^{c}\neq\mathcal{M}_{n}^{0}(q,s)\). In this section, we aim to characterise matrices in \(\mathcal{M}_{n}^{0}(\hat{q},\hat{s})^{c}\) in terms of the associated digraphs. For this, we first recall the characterisation of \(\mathcal{M}_{n}^{0}(q,s)\) given in [7] that depends on separating the Karpelevic arcs into four categories: Type 0, I, II, and III. This separation was introduced by Johnson and Paparella [3].
While it is possible for the degree of the reduced Ito polynomial associated with a Farey pair \((\nicefrac{{p}}{{q}},\nicefrac{{r}}{{s}})\) of order \(n\) to be less than \(n\), we will consider only the situations when \(n\) and the degree of the reduced Ito polynomial agree. For example, the reduced Ito polynomials associated with \(\mathcal{K}_{n}(3,14)\) for \(n=15\) and \(n=16\) are both equal to \(f_{\alpha}(t)=(t^{3}-(1-\alpha))^{5}-\alpha^{5}t\), \(\alpha\in[0,1]\). But since for \(n=16\), this polynomial has a degree less than \(n\), our investigation will not cover this case.
Below we list the different types of reduced Ito polynomials of order \(n\), using the notation introduced in [7]:
* Type 0: If \(n=s,d=n,q=1\), then \(f_{\alpha}(t)=(t+\alpha-1)^{n}-\alpha^{n}\) for \(\alpha\in[0,1]\).
* Type I: If \(n=s,d=1,q>n/2\), then \(f_{\alpha}(t)=t^{n}-(1-\alpha)t^{n-q}-\alpha\) for \(\alpha\in[0,1]\).
* Type II: If \(n=qd,d>1\), then \(f_{\alpha}(t)=(t^{q}+\alpha-1)^{d}-\alpha^{d}t^{z}\), where \(z=qd-s\) and \(z\in\{1,\ldots,q-1\}\) for \(\alpha\in[0,1]\).
* Type III: If \(n=s,d>1\), then \(f_{\alpha}(t)=t^{y}(t^{q}+\alpha-1)^{d}-\alpha^{d}\), where \(y=s-qd\) and \(y\in\{1,\ldots,q-1\}\) for \(\alpha\in[0,1]\).
Next, we recall results from [7] that characterise the sparsest realising matrices for Type I, II, and III reduced Ito polynomials, through their associated digraphs.
**Theorem 4.1** ([7], Type I).: _Let \(q,n\in\mathbb{N}\), \(\gcd(q,n)=1\) and \(2q>n>q\). Then for a stochastic matrix \(A\), the following statements are equivalent:_
1. \(A\in\mathcal{M}_{n}^{0}(q,s)\)_._
2. \(\Gamma(A)\) _is up to isomorphism equal to_ \(\Gamma=C(\mathbf{a}(n))+\{(q,1)\}\)_._
3. \(\Gamma(A)\) _has one_ \(n\)_-cycle, one_ \(q\)_-cycle and no other cycles._
_In addition, if the weight on the edge \((q,1)\) in \(\Gamma(A)\) is equal to \((1-\alpha)\), then the characteristic polynomial of \(A\) is \(f_{\alpha}(t)=t^{n}-(1-\alpha)t^{n-q}-\alpha\)._
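A quick numerical sanity check of Theorem 4.1 (assuming numpy; the choice \(n=5\), \(q=3\), \(\alpha=0.3\) is ours):

```python
import numpy as np

n, q, alpha = 5, 3, 0.3          # 2q > n > q and gcd(q, n) = 1
A = np.zeros((n, n))
for i in range(n - 1):           # the n-cycle 1 -> 2 -> ... -> n -> 1
    A[i, i + 1] = 1.0
A[n - 1, 0] = 1.0
A[q - 1, q] = alpha              # row q splits: weight alpha continues the n-cycle,
A[q - 1, 0] = 1 - alpha          # weight 1 - alpha on the chord (q, 1) closing the q-cycle

print(np.poly(A))   # approx [1, 0, 0, -(1-alpha), 0, -alpha] = t^5 - 0.7 t^2 - 0.3
```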
**Theorem 4.2** ([7], Type II).: _Let \(n=qd\), \(s=qd-z\) where \(z\in\{1,\ldots,q-1\}\), \(\gcd(q,s)=1\), \(d\geq 2\). For a stochastic matrix \(A\), the following statements are equivalent:_
1. \(A\in\mathcal{M}_{n}^{0}(q,s)\)_._
2. \(\Gamma(A)\) _is up to isomorphism equal to_ \[\Gamma=\cup_{i=1}^{d}\left(C\left(q(i-1)\cdot\mathbf{e}+\mathbf{a}(q)\right)+ \left\{(iq-z_{i},\langle 1+iq\rangle_{n})\right\}\right),\] (18) _where_ \(z_{i}\) _correspond to some (ordered) partition of_ \(z\) _into_ \(d\) _parts:_ \(z=z_{1}+\ldots+z_{d}\)_,_ \(z_{i}\geq 0\)_. Furthermore, all the edges_ \((iq-z_{i},\langle 1+iq\rangle_{n}),i=1,\ldots,d\)_, have equal weight_ \(\alpha\)_, for some_ \(\alpha\in(0,1)\)_._
3. \(\Gamma(A)\) _has_ \(d\)__\(q\)_-cycles, one_ \(s\)_-cycle, and no other cycles. All_ \(q\)_-cycles have equal weight._
_For \(A\) to have the characteristic polynomial \(f_{\alpha}(t)=(t^{q}+\alpha-1)^{d}-\alpha^{d}t^{z}\), the weights on all the edges \((iq-z_{i},\langle 1+iq\rangle_{n})\), \(i=1,\ldots,d\), have to be equal to \(\alpha\)._
In the second item of the above theorem, a detailed description of the digraph \(\Gamma(A)\) is given, where \(z_{i}\) count the number of vertices on the \(i\)-th \(q\)-cycle \(C(q(i-1)\cdot\mathbf{e}+\mathbf{a}(q))\) that are not included in the (unique) \(s\)-cycle in \(\Gamma(A)\). Different partitions of \(z\) will result in different graphs \(\Gamma(A)\). To identify the partitions that produce non-isomorphic graphs we offer the following definition.
**Definition 4.3**.: Let \(A\in\mathcal{M}_{n}^{0}(q,s)\), \(n=qd\), \(s=qd-z\), where \(\Gamma(A)\) is isomorphic to a directed graph of the form (18) for the partition \(z=z_{1}+\ldots+z_{d}\). Let \(\mathbf{z}=\begin{pmatrix}z_{1}&\ldots&z_{d}\end{pmatrix}\). We say that \(\mathscr{T}(\mathbf{z})\) is _the partition class_ of \(A\), denoted by \(\mathscr{P}_{u}(A)\).
Note that the partition class \(\mathscr{T}(\mathbf{z})\) determines \(A\in\mathcal{M}_{n}^{0}(q,s)\) up to permutation similarity, and the associated directed graph up to isomorphism.
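An analogous check for Theorem 4.2 (assuming numpy; the choice \(q=3\), \(d=2\), \(z=1\) with the partition \(z_{1}=1\), \(z_{2}=0\) is ours):

```python
import numpy as np

q, d, z, alpha = 3, 2, 1, 0.3    # n = qd = 6, s = qd - z = 5, gcd(q, s) = 1
n = q * d
A = np.zeros((n, n))
for i in range(d):               # two disjoint 3-cycles on {1,2,3} and {4,5,6}
    for j in range(q):
        A[i*q + j, i*q + (j + 1) % q] = 1.0
# connecting edges (iq - z_i, <1+iq>_n) of weight alpha, partition (z_1, z_2) = (1, 0)
A[1, 3] = alpha; A[1, 2] = 1 - alpha   # edge (2, 4), since 1*q - z_1 = 2
A[5, 0] = alpha; A[5, 3] = 1 - alpha   # edge (6, 1), since 2*q - z_2 = 6

print(np.poly(A))   # approx [1, 0, 0, -1.4, 0, -0.09, 0.49] = (t^3 - 0.7)^2 - 0.09 t
```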
**Theorem 4.4** ([7], Type III).: _Let \(n=qd+y\), where \(\gcd(q,n)=1\), \(d\geq 2\), and \(y\in\{1,\ldots,q-1\}\). For a stochastic matrix \(A\), the following statements are equivalent:_
1. \(A\in\mathcal{M}_{n}^{0}(q,s)\)_._
2. \(\Gamma(A)\) _is up to isomorphism equal to_ \[\Gamma=C\left(\mathbf{a}\left(n\right)\right)+\left\{\left(iq+\sum_{k=1}^{i}y _{k},1+(i-1)q+\sum_{k=1}^{i}y_{k}\right);i=1,\ldots,d\right\},\] (19) _where_ \(y_{i}\) _correspond to an (ordered) partition of_ \(y\) _into_ \(d\) _parts:_ \(y=y_{1}+\ldots+y_{d}\)_,_ \(y_{i}\geq 0\)_. In addition, the edges_ \((iq+\sum_{k=1}^{i}y_{k},1+(i-1)q+\sum_{k=1}^{i}y_{k}),i=1,\ldots,d\)_, all have the same weight._
3. \(\Gamma(A)\) _has_ \(d\)__\(q\)_-cycles, one_ \(n\)_-cycle, and no other cycles, where the weights on each of the_ \(q\)_-cycles are equal._
_For \(A\) to have the characteristic polynomial \(f_{\alpha}(t)=t^{y}(t^{q}+\alpha-1)^{d}-\alpha^{d}\), the weights on the edges \((iq+\sum_{k=1}^{i}y_{k},1+(i-1)q+\sum_{k=1}^{i}y_{k})\), \(i=1,\ldots,d\), have to be equal to \(1-\alpha\)._
In the theorem above, a digraph \(\Gamma\) consists of \(d\)\(q\)-cycles and an \(s\)-cycle that contains all the vertices of the \(q\)-cycles together with paths that connect the \(q\)-cycles. The partition of \(y\) determines the number of vertices on those connecting paths. To recognize partitions that produce non-isomorphic graphs described in item 2 of the theorem above, we introduce the following definition.
**Definition 4.5**.: Let \(A\in\mathcal{M}_{n}^{0}(q,s)\), \(n=qd+y\), \(\gcd(q,n)=1\), \(d\geq 2\), where \(\Gamma(A)\) is isomorphic to a directed graph of the form (19) for the partition \(y=y_{1}+\ldots+y_{d}\). Let \(\mathbf{y}=\begin{pmatrix}y_{1}&\ldots&y_{d}\end{pmatrix}\). We say that \(\mathscr{T}(\mathbf{y})\) is _the partition class_ of \(A\), denoted by \(\mathscr{P}_{u}(A)\).
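A similar check for Theorem 4.4 (assuming numpy; the choice \(q=2\), \(d=2\), \(y=1\) with the partition \(y_{1}=1\), \(y_{2}=0\) is ours):

```python
import numpy as np

q, d, y, alpha = 2, 2, 1, 0.3    # n = qd + y = 5, gcd(q, n) = 1
n = q * d + y
A = np.zeros((n, n))
for i in range(n):               # the n-cycle 1 -> 2 -> ... -> 5 -> 1
    A[i, (i + 1) % n] = 1.0
# chords (3, 2) and (5, 4) of weight 1 - alpha close the two q-cycles
A[2, 1] = 1 - alpha; A[2, 3] = alpha
A[4, 3] = 1 - alpha; A[4, 0] = alpha

print(np.poly(A))   # approx [1, 0, -1.4, 0, 0.49, -0.09] = t (t^2 - 0.7)^2 - 0.09
```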
To summarise, in all cases (Type I, II, and III) the sparsest realising matrices for the arc \(\mathcal{K}_{n}(q,s)\) are completely described by their digraphs and the weights on \(q\)-cycles in the digraphs. Moreover, the digraph for Type I is unique, for Type II and III the digraphs are associated with the partitions of \(z\) and \(y\), respectively.
### Power of a single cycle
The following well-known lemma on the powers of a cycle will be needed to study the powers of digraphs associated with stochastic matrices. A short proof is given for completeness.
**Lemma 4.6**.: _Let \(C(\mathbf{a}(k))\) be a cycle with \(k\) vertices, \(c,h\in\mathbb{N}\) so that \(\gcd(c,k)=h\). Furthermore, let \(k=k_{1}h\), \(c=c_{1}h\), \(c_{1},k_{1}\in\mathbb{N}\). Then \(C(\mathbf{a}(k))^{(c)}\) is a digraph with \(h\) cycles of order \(k_{1}\), i.e. \(C(\mathbf{a}(k))^{(c)}=\cup_{i=1}^{h}C_{i}\), where_
\[C_{i}=C\left(\langle i\cdot\mathbf{e}+c\cdot\mathbf{a}_{0}(k_{1})\rangle_{k}\right). \tag{20}\]
Proof.: Since any vertex \(i\in V(C(\mathbf{a}(k)))\) has a unique vertex at a distance \(c\) in \(C(\mathbf{a}(k))\), there is a unique edge outgoing from \(i\): \((i,\langle i+c\rangle_{k})\in E(C(\mathbf{a}(k))^{(c)})\). Furthermore, for any \(i\in\{1,2,\ldots,k\}\), the vertices \(\langle i+\ell c\rangle_{k}\), \(\ell\in\{0,\ldots,k_{1}-1\}\), form a cycle of order \(k_{1}\) in \(C(\mathbf{a}(k))^{(c)}\). This follows from \(\langle i+k_{1}c\rangle_{k}=\langle i\rangle_{k}\) and \(\langle i+\ell c\rangle_{k}\neq\langle i\rangle_{k}\) for any \(\ell<k_{1}\). Finally, \(k=k_{1}h\) implies that there are \(h\) cycles of order \(k_{1}\) in \(C(\mathbf{a}(k))^{(c)}\).
The next remark considers two special cases of the above lemma, that we will encounter in the upcoming sections.
**Remark 4.7**.: We consider the \(c\)-th strong power of \(C(\mathbf{a}(k))\) in the case when \(c\) divides \(k\) and in the case when \(\gcd(c,k)=1\).
1. If \(k=k_{1}c\) then \(C(\mathbf{a}(k))^{(c)}=\cup_{i=1}^{c}C_{i}\), where \(C_{i}=C(\langle i\cdot\mathbf{e}+c\cdot\mathbf{a}_{0}(k_{1})\rangle_{k})\).
2. If \(\gcd(k,c)=1\), then \(C(\mathbf{a}(k))^{(c)}=C(\langle c\cdot\mathbf{a}(k)\rangle_{k})\).
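Both cases of the remark are easy to visualise with a small script (plain Python; the function names are ours):

```python
from math import gcd

def bracket(k, n):
    """<k>_n := 1 + ((k - 1) mod n), with values in {1, ..., n}."""
    return 1 + (k - 1) % n

def cycle_power(k, c):
    """The cycles of C(a(k))^(c), as in Lemma 4.6."""
    h, k1 = gcd(c, k), k // gcd(c, k)
    return [[bracket(i + ell * c, k) for ell in range(k1)] for i in range(1, h + 1)]

print(cycle_power(6, 4))   # gcd = 2: two 3-cycles, [[1, 5, 3], [2, 6, 4]]
print(cycle_power(5, 2))   # gcd = 1: one 5-cycle, [[1, 3, 5, 2, 4]]
```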
## 5 Powers of Sparsest Realising Matrices
### Type II arc is a power of a Type I arc
In this subsection we assume \(n=dq\), \(\gcd(qd,z)=1\), \(z\in\{1,\ldots,q-1\}\). From Theorem 3.10 and Corollary 3.12 we have:
\[\mathcal{K}_{dq}(dq-z,qd)^{d}=\mathcal{K}_{dq}(q,qd-z).\]
The theorem below determines the partition class of \(B^{d}\) for \(B\in\mathcal{M}_{n}^{0}(dq-z,qd)\).
**Theorem 5.1**.: _Let \(B\in\mathcal{M}_{n}^{0}(dq-z,qd)\), where \(d\geq 2\), \(n=qd\), \(\gcd(qd,z)=1\) and \(z\in\{1,\ldots,q-1\}\). We define \(\beta:=d-\langle z\rangle_{d}\), \(w:=\frac{z-\langle z\rangle_{d}}{d}\), and for \(j=1,\ldots,d\):_
\[z(j):=\begin{cases}w&\text{for $j=1,\ldots,\beta$}\\ w+1&\text{for $j=\beta+1,\ldots,d$.}\end{cases}\]
_The elements of the partition class \(\mathscr{P}_{\mathfrak{u}}(B^{d})\) consist of the parts \(z(j)\), \(j=1,\ldots,d\), where the part \(z(j)\) is followed by the part \(z(\langle j-\beta\rangle_{d})\) in the partition._
Proof.: Let \(B\in\mathcal{M}_{n}^{0}(\hat{q},\hat{s})\), where \(\hat{q}=s=qd-z\), \(\hat{s}=qd\), \(\gcd(\hat{q},n)=1\) and \(2\hat{q}>n>\hat{q}\).
**Digraph.** By Theorem 4.1, \(\Gamma(B)\) is isomorphic to \(\widehat{\Gamma}=C(\mathbf{a}(n))+\{(s,1)\}\). First, we want to find a digraph \(\Gamma\) that is isomorphic to \(\widehat{\Gamma}^{(d)}\).
Taking \(k=n\), \(c=d\) and \(k_{1}=q\) in the first part of Remark 4.7, we get \(C(\mathbf{a}(n))^{(d)}=C(\mathbf{a}(qd))^{(d)}=\cup_{j=1}^{d}C_{j}\), where
\[C_{j}:=C(j\cdot\mathbf{e}+d\cdot\mathbf{a}_{0}(q)) \tag{21}\]
is a \(q\)-cycle in \(\widehat{\Gamma}^{(d)}\). Furthermore, the edge \(\hat{e}=(s,1)\) in \(\widehat{\Gamma}\) contributes the following edges in \(\widehat{\Gamma}^{(d)}\):
\[e_{t}:=(s-t+1,1+d-t),t=1,\ldots,d, \tag{22}\]
where we define \(e_{t}:=(v_{O}(t),v_{I}(t))\) for later use. With this, we have determined \(\widehat{\Gamma}^{(d)}\) to be:
\[\widehat{\Gamma}^{(d)}=\cup_{j=1}^{d}C_{j}+\{e_{t}:t=1,\ldots,d\}.\]
In addition, if the edge \(\hat{e}\) has weight \(1-\alpha\) in \(\widehat{\Gamma}\), then all the \(q\)-cycles, \(C_{j},j=1,\ldots,d\), have equal weight \(\alpha\) in \(\widehat{\Gamma}^{(d)}\).
Finally, note that the edges \(e_{t},t=1,\ldots,d\), connect the \(q\)-cycles \(C_{j},j=1,\ldots,d\), to form an \(s\)-cycle in \(\widehat{\Gamma}^{(d)}\). The \(s\)-cycle in \(\widehat{\Gamma}^{(d)}\) consists of all the edges \(e_{t}\) and certain paths that are subgraphs of \(C_{j}\)'s. The lengths of those paths will help us to determine the partition class of \(B^{d}\).
**The ordering of parts in the partition.** Let \(z(j)\) be the number of vertices in \(V(C_{j})\) that do not belong to the \(s\)-cycle in \(\widehat{\Gamma}^{(d)}\). Note that the elements of \(\mathscr{P}_{u}(B^{d})\) consist of \(z(j)\) in some order.
To determine the order of \(z(j)\) in \(\mathscr{P}_{u}(B^{d})\) we fix \(j\in\{1,\ldots,d\}\) and note that there exists a unique \(t^{\prime}\in\{1,\ldots,d\}\) such that \(v_{I}(t^{\prime})\in V(C_{j})\). We say that \(e_{t^{\prime}}\) is the incoming edge for \(C_{j}\). From \(v_{I}(t^{\prime})=1+d-t^{\prime}\) and \(V(C_{j})=\{j+\ell d:\ell=0,\ldots,q-1\}\), we conclude that if \(e_{t^{\prime}}\) is the incoming edge for our fixed \(C_{j}\), then \(j+t^{\prime}\) is congruent to \(1\) modulo \(d\). Thus, \(e_{\langle 1-j\rangle_{d}}\) is the incoming edge for \(C_{j}\). Similarly, there exists a unique \(t\) such that \(v_{O}(t)\in V(C_{j})\) and we say that \(e_{t}\) is the outgoing edge from \(C_{j}\). For this to be true, we must have \(j+t\) congruent to \(1-z\) modulo \(d\). This relation, and the fact that \(t\in\{1,\ldots,d\}\), uniquely define \(t\) to be:
\[t:=\begin{cases}\beta+1-j,&\text{for }j=1,\ldots,\beta,\\ \beta+1+d-j,&\text{for }j=\beta+1,\ldots,d,\end{cases} \tag{23}\]
where \(\beta=d-\langle z\rangle_{d}\). For this \(t\) we get:
\[v_{I}(t)=\begin{cases}d+(j-\beta),&\text{for }j=1,\ldots,\beta,\\ j-\beta,&\text{for }j=\beta+1,\ldots,d,\end{cases}\]
from (22) and (23). In particular, \(v_{I}(t)\in C_{\langle j-\beta\rangle_{d}}\) and \(C_{j}\) is connected to \(C_{\langle j-\beta\rangle_{d}}\) in the \(s\)-cycle in \(\widehat{\Gamma}^{(d)}\). Equivalently, \(z(j)\) is followed by \(z(\langle j-\beta\rangle_{d})\) in the partition class \(\mathscr{P}_{u}(B^{d})\).
**Parts of the partition.** To determine the parts that appear in the partition class of \(B^{d}\), we need to determine the lengths of the paths that are intersections of the \(q\)-cycles \(C_{j}\) and the \(s\)-cycle in \(\widehat{\Gamma}^{(d)}\). In other words, the number of vertices in such a path is the same as the number of vertices \(C_{j}\) is contributing to the \(s\)-cycle in \(\widehat{\Gamma}^{(d)}\). It is clear from the discussion so far that \(v_{I}(\langle 1-j\rangle_{d})\) is the first and \(v_{O}(\langle 1-z-j\rangle_{d})\) is the last vertex on this path. From (22) and (23) we get \(v_{I}(\langle 1-j\rangle_{d})=j\) and
\[v_{O}(t) =\begin{cases}qd-z-\beta+j,&\text{ for }j=1,\ldots,\beta\\ qd-z-\beta-d+j,&\text{ for }j=\beta+1,\ldots,d,\end{cases}\] \[=\begin{cases}(q-w-1)d+j,&\text{ for }j=1,\ldots,\beta\\ (q-w-2)d+j,&\text{ for }j=\beta+1,\ldots,d,\end{cases}\]
where \(\beta=d-\langle z\rangle_{d}\) and \(w:=\frac{z-\langle z\rangle_{d}}{d}\).
Let \(k(j)\) denote the number of vertices from \(C_{j}\) that are contained in the \(s\)-cycle. Since vertices in \(C_{j}\) are consecutively numbered by \(j+\ell d,\ell=0,\ldots,q-1\), we have:
\[k(j)=\begin{cases}q-w,&\text{ for }j=1,\ldots,\beta\\ q-w-1,&\text{ for }j=\beta+1,\ldots,d,\end{cases}\]
or equivalently, \(z(j):=q-k(j)\) is the number of vertices from \(C_{j}\) that are not on the \(s\)-cycle. With this, we have determined the numbers that appear in the partition class \(\mathscr{P}_{u}(B^{d})\).
**Remark 5.2**.: For \(B\in\mathcal{M}^{0}_{n}(qd-z,qd)\), let us define the row vector \(\mathbf{z}\) consisting of parts of the partition class \(\mathscr{P}_{u}(B^{d})\): \(\mathbf{z}:=\big{(}z(1)\quad\ldots\quad z(d)\big{)}\) as in Theorem 5.1. Since \(z(j)\) is followed by \(z(\langle j-\beta\rangle_{d})\) in the partition class \(\mathscr{P}_{u}(B^{d})\), we need to permute the elements of \(\mathbf{z}\) to get the row vector:
\[\mathbf{z}^{\prime}:=\big{(}z(1)\quad z(\langle 1-\beta\rangle_{d})\quad\ldots\quad z(\langle 1-(d-1)\beta\rangle_{d})\big{)},\]
which belongs to the partition class \(\mathscr{P}_{u}(B^{d})\).
**Example 5.3**.: For \(n=120\) we have \(\mathcal{K}_{n}(15,109)=\mathcal{K}_{n}(109,120)^{8}\). Below we follow the steps of the proof of Theorem 5.1 for \(B\in\mathcal{M}^{0}_{n}(109,120)\). We assume the notation developed in the proof. In particular, \(q=15\), \(s=109\), \(d=8\), \(\hat{q}=s=109\), \(\hat{s}=qd=120\) and \(z=qd-s=11\).
**Digraph.** Let the digraph of \(B\) be \(\widehat{\Gamma}=C(\mathbf{a}(120))+\{(109,1)\}\). Then \(\widehat{\Gamma}^{(8)}\) consists of \(8\) cycles of order \(15\): \(C(j\cdot\mathbf{e}+8\cdot\mathbf{a}_{0}(15))\), \(j=1,\ldots,8\), and additional edges \(\{(110-t,9-t):t=1,\ldots,8\}\) that connect the \(15\)-cycles to form a \(109\)-cycle in \(\widehat{\Gamma}^{(8)}\). Note that the weights on the edges \((110-t,9-t)\), \(t=1,\ldots,8\), in \(\widehat{\Gamma}^{(8)}\) are the same as the weight on the edge \((109,1)\) in \(\widehat{\Gamma}\).
**Partition.** From \(\beta=5\) and \(w=1\) we determine the parts of the partition class \(\mathscr{P}_{u}(B^{8})\):
\[z(j)=\begin{cases}1&\text{ for }j=1,\ldots,5\\ 2&\text{ for }j=6,7,8.\end{cases}\]
The vector \(\mathbf{z}^{\prime}\) defined in Remark 5.2 is:
\[\mathbf{z}^{\prime} =\begin{pmatrix}z(1)&z(4)&z(7)&z(2)&z(5)&z(8)&z(3)&z(6)\end{pmatrix}\] \[=\begin{pmatrix}1&1&2&1&1&2&1&2\end{pmatrix},\]
and \(\mathscr{P}_{\mathfrak{u}}(B^{8})=\mathscr{T}(\mathbf{z}^{\prime})\).
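The construction in Theorem 5.1 and Remark 5.2 is easily automated. The sketch below (plain Python; the function names are ours) reproduces the vector \(\mathbf{z}^{\prime}\) above, as well as the one appearing in Example 5.5 below.

```python
def bracket(k, n):
    return 1 + (k - 1) % n     # <k>_n

def partition_of_power(d, z):
    """The ordered parts z' of P_u(B^d), following Theorem 5.1 and Remark 5.2."""
    beta = d - bracket(z, d)
    w = (z - bracket(z, d)) // d
    zj = lambda j: w if j <= beta else w + 1
    return [zj(bracket(1 - i * beta, d)) for i in range(d)]

print(partition_of_power(8, 11))   # [1, 1, 2, 1, 1, 2, 1, 2]: Example 5.3
print(partition_of_power(8, 13))   # [1, 2, 1, 2, 2, 1, 2, 2]: Example 5.5
```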
**Corollary 5.4**.: _Let \(A\in\mathcal{M}^{0}_{n}(q,qd-z)\) have the associated partition \(\begin{pmatrix}z_{1}&\ldots&z_{d}\end{pmatrix}\in\mathscr{P}_{\mathfrak{u}}(A)\). Then \(A=B^{d}\) for some \(B\in\mathcal{M}^{0}_{n}(qd-z,qd)\) if and only if:_
1. _There exists_ \(w\) _such that_ \(z_{j}\in\{w,w+1\}\)_. (If all_ \(z_{j}\) _are equal then we say they are all equal to_ \(w+1\)_.) We define_ \(\beta:=|\{j\in\{1,\ldots,d\};z_{j}=w\}|\) _and:_ \[z(j)=\begin{cases}w&\text{for }j=1,\ldots,\beta\\ w+1&\text{for }j=\beta+1,\ldots,d.\end{cases}\]
2. _The partition class_ \(\mathscr{P}_{\mathfrak{u}}(A)\) _consists of the parts_ \(z(j)\)_,_ \(j=1,\ldots,d\)_. Further, the part_ \(z(j)\) _is followed by the part_ \(z(\langle j-\beta\rangle_{d})\) _in_ \(\mathscr{P}_{\mathfrak{u}}(A)\)_._
Proof.: Let \(A\in\mathcal{M}^{0}_{n}(q,qd-z)\) with associated partition \(\begin{pmatrix}z_{1}&\ldots&z_{d}\end{pmatrix}\in\mathscr{P}_{\mathfrak{u}}(A)\) and \(B\in\mathcal{M}^{0}_{n}(qd-z,qd)\). Then \(A=B^{d}\) if and only if the parts \(z_{j}\) correspond to the parts \(z(j)\) in Theorem 5.1, which is true if and only if both conditions of the corollary are satisfied.
**Example 5.5**.: Let \(n=120\), then \(\mathcal{K}_{n}(15,107)=\mathcal{K}_{n}(107,120)^{8}\), where \(q=15\), \(s=107\), \(d=8\) and \(z=13\). Recall, by Definition 4.3, that any \(8\) non-negative integers that sum to \(13\) result in a partition class for a matrix in \(\mathcal{M}^{0}_{n}(15,107)\). So, let \(A\in\mathcal{M}^{0}_{n}(15,107)\) and \(\mathbf{z}=\begin{pmatrix}z_{1}&\ldots&z_{8}\end{pmatrix}\in\mathscr{P}_{\mathfrak{u}}(A)\).
If \(A=B^{8}\) for some \(B\in\mathcal{M}^{0}_{n}(107,120)\), then the elements of \(\mathbf{z}\) must be in the set \(\{w,w+1\}\) for some \(w\in\{1,2,\ldots,13\}\). If \(\mathbf{z}\) either contains three different numbers or numbers that are more than one apart, then \(A\) will not be a power of any matrix \(B\in\mathcal{M}^{0}_{n}(107,120)\). Thus, the necessary condition from the first part of the corollary leaves us just one choice for the vector \(\mathbf{z}\) defined in Remark 5.2:
\[\mathbf{z}=\begin{pmatrix}1&1&1&2&2&2&2&2\end{pmatrix}.\]
The above \(\mathbf{z}\) gives \(\beta=3\) and \(w=1\). The entries of \(\mathbf{z}\) can be permuted in several ways, but only one ordering gives the partition class required by Corollary 5.4. Therefore \(A=B^{8}\) for some \(B\in\mathcal{M}^{0}_{n}(107,120)\) if and only if \(\mathbf{z}^{\prime}\in\mathscr{P}_{\mathfrak{u}}(A)\), where:
\[\mathbf{z}^{\prime} =\begin{pmatrix}z(1)&z(6)&z(3)&z(8)&z(5)&z(2)&z(7)&z(4)\end{pmatrix}\] \[=\begin{pmatrix}1&2&1&2&2&1&2&2\end{pmatrix}.\]
### Type II arc is a power of a Type II arc
Throughout this subsection we assume: \(n=c\hat{d}q\), \(z\in\{1,\ldots,q-1\}\), \(\gcd(cq,z)=1\). By Theorem 3.10 we have:
\[\mathcal{K}_{n}(cq,c\hat{d}q-z)^{c}=\mathcal{K}_{n}(q,c\hat{d}q-z).\]
Given the partition class \(\mathscr{P}_{\mathfrak{u}}(B)\) for \(B\in\mathcal{M}_{n}^{0}(cq,c\hat{d}q-z)\) we want to determine the partition class \(\mathscr{P}_{\mathfrak{u}}(B^{c})\).
**Theorem 5.6**.: _Let \(B\in\mathcal{M}_{n}^{0}(cq,c\hat{d}q-z)\) and \(\big{(}\hat{z}_{1}\quad\ldots\quad\hat{z}_{\hat{d}}\big{)}\in\mathscr{P}_{ \mathfrak{u}}(B)\). For \(i=1\ldots,\hat{d}\) and \(j=1\ldots,c\), we define \(\beta_{i}=c-\langle\hat{z}_{i}\rangle_{c}\), \(w_{i}=\frac{\hat{z}_{i}-\langle\hat{z}_{i}\rangle_{c}}{c}\), and_
\[z(i,j)=\begin{cases}w_{i}&\text{for }j=1,\ldots,\beta_{i}\\ w_{i}+1&\text{for }j=\beta_{i}+1,\ldots,c.\end{cases}\]
_The partition class \(\mathscr{P}_{\mathfrak{u}}(B^{c})\) is defined as follows:_
* _the parts of the partition are equal to_ \(z(i,j)\)_,_ \(i=1,\ldots,\hat{d}\)_,_ \(j=1\ldots,c\)_,_
* _in the partition_ \(z(i,j)\) _is followed by_ \(z(\langle i+1\rangle_{\hat{d}},\langle j-\beta_{i}\rangle_{c})\)_._
Proof.: Let \(B\in\mathcal{M}_{n}^{0}(\hat{q},\hat{s})\), where \(\hat{q}=cq\), \(\hat{s}=c\hat{d}q-z\).
**Digraph of \(\mathbf{B^{c}}\).** Given a directed graph \(\widehat{\Gamma}\) that is isomorphic to \(\Gamma(B)\), we first find a directed graph \(\Gamma=\widehat{\Gamma}^{(c)}\) that is isomorphic to \(\Gamma(B^{c})\). By Theorem 4.2, \(\Gamma(B)\) is isomorphic to
\[\widehat{\Gamma}=\cup_{i=1}^{\hat{d}}\left(C(\hat{q}(i-1)\cdot\mathbf{e}+ \mathbf{a}(\hat{q}))+\{(i\hat{q}-\hat{z}_{i},\langle 1+i\hat{q}\rangle_{n})\} \right),\]
where \(\big{(}\hat{z}_{1}\quad\ldots\quad\hat{z}_{\hat{d}}\big{)}\in\mathscr{P}_{\mathfrak{u}}(B)\). Denoting the \(\hat{q}\)-cycles in \(\widehat{\Gamma}\) by \(\widehat{C}_{i}:=C(\hat{q}(i-1)\cdot\mathbf{e}+\mathbf{a}(\hat{q}))\), \(i=1,\ldots,\hat{d}\), and taking \(k=\hat{q}\), \(k_{1}=q\) in the first part of Remark 4.7, we get \(\widehat{C}_{i}^{(c)}=\cup_{j=1}^{c}C_{i,j},\) where for \(i=1,\ldots,\hat{d}\) and \(j=1,\ldots,c\), we denote:
\[C_{i,j}:=C\left((qc(i-1)+j)\cdot\mathbf{e}+c\cdot\mathbf{a}_{0}(q)\right). \tag{24}\]
Note that each \(C_{i,j}\) is a \(q\)-cycle in \(\widehat{\Gamma}^{(c)}\).
Next, we consider the contribution of the edges \(\hat{e}_{i}=(i\hat{q}-\hat{z}_{i},\langle 1+i\hat{q}\rangle_{n}),i=1,\ldots,\hat{d}\), to \(\widehat{\Gamma}^{(c)}\). Note that the edge \(\hat{e}_{i}\) connects \(\widehat{C}_{i}\) to \(\widehat{C}_{\langle i+1\rangle_{\hat{d}}}\) in \(\widehat{\Gamma}\), and contributes the edges \(\{e_{i,t}:t=1,\ldots,c\}\) to \(\widehat{\Gamma}^{(c)}\), where:
\[e_{i,t}:=(\langle i\hat{q}-\hat{z}_{i}-(t-1)\rangle_{n},\langle i\hat{q}+c-(t -1)\rangle_{n}). \tag{25}\]
Let us denote \(e_{i,t}:=(v_{O}(i,t),v_{I}(i,t))\) for future use. Note that in \(\widehat{\Gamma}^{(c)}\), the edges \(e_{i,t}\) connect the \(q\)-cycles to form an \(s\)-cycle. This \(s\)-cycle is the union of the edges \(e_{i,t}\) and
paths that are part of \(C_{i,j}\)'s. These paths are discussed in the third section of the proof where we talk about the partition parts. At this point we can write down \(\widehat{\Gamma}^{(c)}\) as:
\[\widehat{\Gamma}^{(c)}=\cup_{i=1}^{\hat{d}}\left(\cup_{j=1}^{c}C_{ij}+\{e_{i,t} :t=1,\ldots,c\}\right).\]
If the edges \(\hat{e}_{i}\), \(i=1,\ldots,\hat{d}\), have weight \(\alpha\) in \(\widehat{\Gamma}\), then the edges \(e_{i,t}\), \(i=1,\ldots,\hat{d}\), \(t=1,\ldots,c\), have weight \(\alpha\) in \(\widehat{\Gamma}^{(c)}\). Thus, the weights on all the \(q\)-cycles, \(C_{i,j},i=1,\ldots,\hat{d},j=1,\ldots,c\), are equal to \(1-\alpha\) in \(\widehat{\Gamma}^{(c)}\).
**Connecting the \(q\)-cycles and ordering the parts in the partition.** To determine \(\mathscr{P}_{u}(B^{c})\) we take a closer look at how the edges \(e_{i,t}\) connect the cycles \(C_{i,j}\). In particular, let \(z(i,j)\) be the number of vertices in \(V(C_{i,j})\) that are not on the \(s\)-cycle in \(\widehat{\Gamma}^{(c)}\). To determine the ordering of \(z(i,j)\) in \(\mathscr{P}_{u}(B^{c})\) we fix \(i\) and \(j\) and determine how the \(q\)-cycles follow each other to form the \(s\)-cycle in \(\widehat{\Gamma}^{(c)}\).
For each pair \(i,j\), \(i\in\{1,\ldots,\hat{d}\}\) and \(j\in\{1,\ldots,c\}\), there exists precisely one \(t^{\prime}\in\{1,\ldots,c\}\) so that \(v_{I}(i-1,t^{\prime})\in V(C_{i,j})\). We say that \(e_{i-1,t^{\prime}}\) is _an incoming edge_ for \(C_{i,j}\). From \(v_{I}(i-1,t^{\prime})=\langle(i-1)\hat{q}+c-t^{\prime}+1\rangle_{n}\) and \(V(C_{i,j})=\{(qc(i-1)+\ell c+j),\ell=0,\ldots,q-1\}\), we deduce that \(j+t^{\prime}\) is congruent to \(1\) modulo \(c\). In short, \(e_{i-1,\langle 1-j\rangle_{c}}\) is the incoming edge for \(C_{i,j}\). Similarly, there exists a unique \(t\in\{1,\ldots,c\}\) such that \(v_{O}(i,t)\in V(C_{i,j})\) and we say that \(e_{i,t}\) is an _outgoing edge_ for \(C_{i,j}\). This implies that \(j+t\) is congruent to \(1-\hat{z}_{i}\) modulo \(c\). This relation and the fact that \(t\in\{1,\ldots,c\}\) uniquely define \(t\) to be:
\[t:=\begin{cases}\beta_{i}+1-j&\text{for }j=1,\ldots,\beta_{i}\\ \beta_{i}+1+c-j&\text{for }j=\beta_{i}+1,\ldots,c,\end{cases} \tag{26}\]
where \(\beta_{i}:=c-\langle\hat{z}_{i}\rangle_{c}\). Finally, to determine how the cycles \(C_{i,j}\) are ordered to form the \(s\)-cycle in \(\widehat{\Gamma}^{(c)}\), we use equation (25) and equation (26) to get:
\[v_{I}(i,t)=\begin{cases}i\hat{q}+c+j-\beta_{i}&\text{for }j=1,\ldots,\beta_{i}\\ i\hat{q}+j-\beta_{i}&\text{for }j=\beta_{i}+1,\ldots,c.\end{cases}\]
This implies \(v_{I}(i,t)\in C_{\langle i+1\rangle_{\hat{d}},\langle j-\beta_{i}\rangle_{c}}\). Consequently, \(z(i,j)\) is followed by \(z(\langle i+1\rangle_{\hat{d}},\langle j-\beta_{i}\rangle_{c})\) in \(\mathscr{P}_{u}(B^{c})\).
**Parts of the partition.** To determine the parts that appear in \(\mathscr{P}_{u}(B^{c})\), we want to determine how many vertices from each \(C_{i,j}\) cycle are (are not) contained on the \(s\)-cycle in \(\widehat{\Gamma}^{(c)}\). Equivalently, we want to know the number of vertices on the path that is the intersection between \(C_{i,j}\) and the \(s\)-cycle in \(\widehat{\Gamma}^{(c)}\). Let us denote this number by \(k(i,j)\). From the above discussion, we know that \(v_{I}(i-1,\langle 1-j\rangle_{c})\) is the first and \(v_{O}(i,t)\) is the last vertex on this path for \(t=\langle 1-j-\hat{z}_{i}\rangle_{c}\). Using equation (25) and equation (26), we get:
\[v_{I}(i-1,\langle 1-j\rangle_{c}) =\hat{q}(i-1)+j,\] \[v_{O}(i,t) =\begin{cases}i\hat{q}-\hat{z}_{i}-\beta_{i}+j,&\text{for }j=1,\ldots,\beta_{i}\\ i\hat{q}-\hat{z}_{i}-c-\beta_{i}+j&\text{for }j=\beta_{i}+1,\ldots,c.\end{cases}\]
From here, we can write:
\[v_{O}(i,t)=\begin{cases}(i-1)\hat{q}+(q-w_{i}-1)c+j,&\text{ for }j=1,\ldots, \beta_{i}\\ (i-1)\hat{q}+(q-w_{i}-2)c+j&\text{ for }j=\beta_{i}+1,\ldots,c,\end{cases}\]
where \(\beta_{i}=c-\langle\hat{z}_{i}\rangle_{c}\), and \(w_{i}=\frac{\hat{z}_{i}-\langle\hat{z}_{i}\rangle_{c}}{c}\). Recalling that the vertices of \(C_{i,j}\) are consecutively numbered by \((qc(i-1)+\ell c+j)\), \(\ell=0,\ldots,q-1\), we conclude that:
\[k(i,j)=\begin{cases}q-w_{i},&\text{ for }j=1,\ldots,\beta_{i}\\ q-w_{i}-1,&\text{ for }j=\beta_{i}+1,\ldots,c,\end{cases}\]
or equivalently, \(z(i,j):=q-k(i,j)\) is the number of vertices from \(C_{i,j}\) that are not on the \(s\)-cycle. With this, we have determined the numbers that appear in the partition class \(\mathscr{P}_{\mathpzc{u}}(B^{c})\).
**Remark 5.7**.: Given \(\hat{\mathbf{z}}\in\mathscr{P}_{\mathfrak{u}}(B)\) we can define a matrix \(Z^{\prime}\) that satisfies \(\operatorname{vec}(Z^{\prime})^{T}\in\mathscr{P}_{\mathfrak{u}}(B^{c})\) as follows. From \(\hat{\mathbf{z}}\) we define the parts \(z(i,j)\) as in the statement of Theorem 5.6, and consider the matrix:
\[Z:=\left(\begin{array}{cccc}z(1,1)&z(1,2)&\ldots&z(1,c)\\ \vdots&\vdots&\ddots&\vdots\\ z(\hat{d},1)&z(\hat{d},2)&\ldots&z(\hat{d},c)\end{array}\right).\]
The elements of \(Z\) are the same as the parts of the partitions in \(\mathscr{P}_{\mathpzc{u}}(B^{c})\) (with multiplicities). Next, we determine their ordering in the partition. Using the fact that \(z(i,j)\) is followed by \(z(\langle i+1\rangle_{\hat{d}},\langle j-\beta_{i}\rangle_{c})\), we define the \(j\)-th column, \(z^{\prime}_{\star,j}\), \(j=1,\ldots,c\), of the matrix \(Z^{\prime}\) to be:
\[z^{\prime}_{\star,j}:=\left(\begin{array}{c}z(1,\langle 1-(j-1)\beta\rangle_{c})\\ \vdots\\ z(i,\langle 1-(j-1)\beta-\sum_{k=1}^{i-1}\beta_{k}\rangle_{c})\\ \vdots\\ z(\hat{d},\langle 1-(j-1)\beta-\sum_{k=1}^{\hat{d}-1}\beta_{k}\rangle_{c})\end{array}\right),\] where \(\beta:=\sum_{k=1}^{\hat{d}}\beta_{k}\).
The partition class \(\mathscr{P}_{\mathpzc{u}}(B^{c})\) is equal to \(\mathscr{T}(\mathbf{z})\), where \(\mathbf{z}=\operatorname{vec}(Z^{\prime})^{T}\). Note that the unordered multiset of elements in the \(j\)-th row of \(Z\) is equal to the unordered multiset of elements in the \(j\)-th row of \(Z^{\prime}\) but the elements appear in matrices \(Z\) and \(Z^{\prime}\) in different orders.
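Since the construction in Remark 5.7 is entirely mechanical, it can be checked computationally. The following Python sketch (ours, not part of the paper) builds \(Z\) and \(Z^{\prime}\) from \(\hat{\mathbf{z}}\); here we read the unsubscripted \(\beta\) in the column formula as \(\beta=\sum_{k=1}^{\hat{d}}\beta_{k}\), which is consistent with the worked examples below, and `mod1` implements \(\langle\cdot\rangle_{c}\) with representatives in \(\{1,\ldots,c\}\).

```python
def mod1(a, m):
    """Residue <a>_m with representatives in {1, ..., m}."""
    return (a - 1) % m + 1

def z_matrices(zhat, c):
    """Build the matrices Z and Z' of Remark 5.7 from a partition (zhat_1, ..., zhat_dhat)."""
    dhat = len(zhat)
    beta = [c - mod1(z, c) for z in zhat]         # beta_i = c - <zhat_i>_c
    w = [(z - mod1(z, c)) // c for z in zhat]     # w_i = (zhat_i - <zhat_i>_c) / c
    # parts z(i, j): w_i for j = 1, ..., beta_i and w_i + 1 for j = beta_i + 1, ..., c
    Z = [[w[i] if j <= beta[i] else w[i] + 1 for j in range(1, c + 1)]
         for i in range(dhat)]
    # column rule: entry (i, j) of Z' equals z(i, <1 - (j-1)*B - sum_{k<i} beta_k>_c)
    B = sum(beta)
    Zp = [[Z[i][mod1(1 - (j - 1) * B - sum(beta[:i]), c) - 1]
           for j in range(1, c + 1)] for i in range(dhat)]
    return Z, Zp
```

For instance, `z_matrices([3, 0, 2, 0, 0], c=4)` reproduces the matrices \(Z\) and \(Z^{\prime}\) of the second case in Example 5.8 below.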
**Example 5.8**.: Let \(n=120\), then \(\mathcal{K}_{n}(6,115)=\mathcal{K}_{n}(24,115)^{4}\), where \(q=6\), \(c=4\), \(qc=\hat{q}=24\), \(\hat{s}=115\), \(\hat{d}=5\), \(z=5\) and \(z=\hat{z}_{1}+\hat{z}_{2}+\hat{z}_{3}+\hat{z}_{4}+\hat{z}_{5}\).
Let \(B\in\mathcal{M}^{0}_{n}(24,115)\). In this example, we consider a few possibilities for partitions \(\hat{\mathbf{z}}\in\mathscr{P}_{\mathpzc{u}}(B)\). In each case, we illustrate parts of the proof and write down matrices \(Z\) and \(Z^{\prime}\) defined in Remark 5.7.
1. \(\left(\hat{z}_{1}\quad\hat{z}_{2}\quad\hat{z}_{3}\quad\hat{z}_{4}\quad\hat{z}_{5} \right)=\left(5\quad 0\quad 0\quad 0\quad 0\right).\)
**Digraph of \(\mathbf{B^{4}}\).**\(\widehat{\Gamma}\) consists of \(5\) cycles of order \(24\), \(C(24(i-1)\cdot\mathbf{e}+\mathbf{a}(24))\), \(i=1,\ldots,5\), and the edge set, \(\{(19,25),(48,49),(72,73),(96,97),(120,1)\}\) that connect the \(24\)-cycles to make the \(115\)-cycle.
\(\widehat{\Gamma}^{(4)}\) consists of \(20\) cycles of order \(6\), \(C\left((24(i-1)+j)\cdot\mathbf{e}+c\cdot\mathbf{a}_{0}(6)\right)\) for \(i=1,\ldots,5\), \(j=1,\ldots,4\), and the edge set:
\[(\langle 24i-\hat{z}_{i}-(t-1)\rangle_{n},\langle 24i+4-(t-1)\rangle_{n})\]
for \(i=1,\ldots,5\), \(t=1,\ldots,4\), that connect the \(6\)-cycles to form a cycle of order \(115\). Note that each of these connecting edges has the weight \(\alpha\) in \(\widehat{\Gamma}^{(4)}\) if the connecting edges in \(\widehat{\Gamma}\) have the weight \(\alpha\).
**Parts of the Partition.** From \(\hat{z}_{1}=5\) we get \(\beta_{1}=3\) and \(w_{1}=1\). Similarly, \(\hat{z}_{2}=\hat{z}_{3}=\hat{z}_{4}=\hat{z}_{5}=0\) give \(\beta_{2}=\beta_{3}=\beta_{4}=\beta_{5}=0\) and \(w_{2}=w_{3}=w_{4}=w_{5}=-1\).
Thus, the matrix \(Z\) defined in Remark 5.7 is equal to:
\[Z=\left(\begin{array}{cccc}1&1&1&2\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{array}\right).\]
At this point, we know that the elements in \(\mathscr{P}_{u}(B^{c})\) will have three parts equal to \(1\), one part equal to \(2\), and all other parts equal to \(0\). Since all the nonzero parts appear in the first row, we also know that every nonzero part will be followed by precisely four zeros. As this observation already uniquely defines the partition class, we move on to the next case.
2. \(\left(\hat{z}_{1}\quad\hat{z}_{2}\quad\hat{z}_{3}\quad\hat{z}_{4}\quad\hat{z} _{5}\right)=\left(3\quad 0\quad 2\quad 0\quad 0\right).\)
**Parts of the Partition.** From \(\hat{z}_{1}=3\) we get \(\beta_{1}=1\), \(w_{1}=0\). Next, \(\hat{z}_{3}=2\) gives \(\beta_{3}=2\), \(w_{3}=0\). Finally, \(\hat{z}_{2}=\hat{z}_{4}=\hat{z}_{5}=0\) gives \(\beta_{2}=\beta_{4}=\beta_{5}=0\) and \(w_{2}=w_{4}=w_{5}=-1\). This implies:
\[Z=\left(\begin{array}{cccc}0&1&1&1\\ 0&0&0&0\\ 0&0&1&1\\ 0&0&0&0\\ 0&0&0&0\end{array}\right).\]
The elements of \(\mathscr{P}_{u}(B^{c})\) have \(5\) parts equal to \(1\) and all other parts equal to \(0\).
**Ordering.** This time the matrix \(Z\) does not determine \(\mathscr{P}_{\mathfrak{n}}(B^{c})\), and we also need:
\[Z^{\prime}=\left(\begin{array}{cccc}z(1,1)&z(1,2)&z(1,3)&z(1,4)\\ z(2,4)&z(2,1)&z(2,2)&z(2,3)\\ z(3,4)&z(3,1)&z(3,2)&z(3,3)\\ z(4,2)&z(4,3)&z(4,4)&z(4,1)\\ z(5,2)&z(5,3)&z(5,4)&z(5,1)\end{array}\right)=\left(\begin{array}{cccc}0&1&1 &1\\ 0&0&0&0\\ 1&0&0&1\\ 0&0&0&0\\ 0&0&0&0\end{array}\right).\]
Thus, the partition class \(\mathscr{P}_{\mathfrak{n}}(B^{4})\) is given by \(\mathscr{T}(\operatorname{vec}(Z^{\prime})^{T})\).
3. Taking \(\big{(}\hat{z}_{1}\quad\hat{z}_{2}\quad\hat{z}_{3}\quad\hat{z}_{4}\quad\hat{ z}_{5}\big{)}=\big{(}2\quad 0\quad 2\quad 0\quad 1\big{)}\) we get: \[Z=\left(\begin{array}{cccc}0&0&1&1\\ 0&0&0&0\\ 0&0&1&1\\ 0&0&0&0\\ 0&0&0&1\end{array}\right)\text{ and }Z^{\prime}=\left(\begin{array}{cccc}0&0&1&1\\ 0&0&0&0\\ 1&1&0&0\\ 0&0&0&0\\ 0&0&0&1\end{array}\right),\] and the partition class \(\mathscr{P}_{\mathfrak{n}}(B^{c})\) is given by \(\mathbf{z}=\mathscr{T}(\operatorname{vec}(Z^{\prime})^{T})\).
**Corollary 5.9**.: _Let \(A\in\mathcal{M}_{n}^{0}(q,c\hat{d}q-z)\) and \(\big{(}z_{1}\quad\ldots\quad z_{c\hat{d}}\big{)}\in\mathscr{P}_{\mathfrak{n}}(A)\). Then \(A=B^{c}\) for some \(B\in\mathcal{M}_{n}^{0}(cq,c\hat{d}q-z)\) if and only if:_
1. _For every_ \(i\in\{1,\ldots,\hat{d}\}\) _there exists_ \(w_{i}\) _so that_ \(z_{t\hat{d}+i}\in\{w_{i},w_{i}+1\}\) _for_ \(t=0,\ldots,c-1\)_. (If for some_ \(i\) _all_ \(z_{t\hat{d}+i}\) _are equal, then we say that they are equal to_ \(w_{i}+1\)_.)_ _For_ \(i=1,\ldots,\hat{d}\)_, we define_ \(\beta_{i}:=|\{t\in\{0,\ldots,c-1\};z_{t\hat{d}+i}=w_{i}\}|\)_, and_ \[z(i,j):=\begin{cases}w_{i}&\text{for }j=1,\ldots,\beta_{i}\\ w_{i}+1&\text{for }j=\beta_{i}+1,\ldots,c,\end{cases}\] _where_ \(i=1,\ldots,\hat{d}\)_,_ \(j=1,\ldots,c\)_._
2. _The partition class_ \(\mathscr{P}_{\mathfrak{n}}(A)\) _consists of the parts_ \(z(i,j)\)_,_ \(i=1,\ldots,\hat{d}\)_,_ \(j=1,\ldots,c\)_. Further, in the partition_ \(z(i,j)\) _is followed by_ \(z(\langle i+1\rangle_{\hat{d}},\langle j-\beta_{i}\rangle_{c})\)_._
_If the conditions above are satisfied, then \(A=B^{c}\) for \(B\in\mathcal{M}_{n}^{0}(cq,c\hat{d}q-z)\) with \(\mathscr{P}_{\mathfrak{n}}(B)=\mathscr{T}(\big{(}\hat{z}_{1}\quad\ldots\quad \hat{z}_{\hat{d}}\big{)}),\) where \(\hat{z}_{i}:=c(w_{i}+1)-\beta_{i}\), for \(i=1,\ldots,\hat{d}\)._
Proof.: Let \(A\in\mathcal{M}_{n}^{0}(q,c\hat{d}q-z)\) and \(\big{(}z_{1}\quad\ldots\quad z_{c\hat{d}}\big{)}\in\mathscr{P}_{\mathfrak{n}}(A)\). By Theorem 5.6, \(A=B^{c}\) for some \(B\in\mathcal{M}_{n}^{0}(cq,c\hat{d}q-z)\) if and only if items 1. and 2. in Corollary 5.9 hold.
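For completeness, the test in Corollary 5.9 can be phrased as a short algorithm. The sketch below is ours; it assumes, as the examples below suggest, that a partition class is closed under cyclic shifts, so the candidate is compared with the reference ordering up to rotation, and that the successor map of item 2 traverses all \(c\hat{d}\) parts in one cycle, which holds under the standing assumptions of this subsection.

```python
def mod1(a, m):
    """Residue <a>_m with representatives in {1, ..., m}."""
    return (a - 1) % m + 1

def is_cth_power_typeII(parts, c, dhat):
    """Decide, via Corollary 5.9, whether a partition (z_1, ..., z_{c*dhat}) can arise from B^c."""
    rows = [[parts[t * dhat + i] for t in range(c)] for i in range(dhat)]  # entries z_{t*dhat+i}
    w, beta = [], []
    for row in rows:
        lo = min(row)
        if any(v not in (lo, lo + 1) for v in row):
            return False                       # condition 1 fails
        if all(v == lo for v in row):          # "all equal" convention: common value is w_i + 1
            w.append(lo - 1)
            beta.append(0)
        else:
            w.append(lo)
            beta.append(row.count(lo))         # beta_i = number of entries equal to w_i
    def zpart(i, j):                           # 0-based i, 1-based j
        return w[i] if j <= beta[i] else w[i] + 1
    # reference ordering: z(i, j) is followed by z(<i+1>_dhat, <j - beta_i>_c)
    ref, i, j = [], 0, 1
    for _ in range(c * dhat):
        ref.append(zpart(i, j))
        j = mod1(j - beta[i], c)               # <j - beta_i>_c uses the current i
        i = (i + 1) % dhat
    return any(ref[k:] + ref[:k] == list(parts) for k in range(c * dhat))
```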
**Example 5.10**.: Let \(n=120\), \(q=6\) and \(s=115\). Then, \(d=20\), \(z=5\) and \(n=qd\). By Definition 4.3, any partition class containing \(20\) non-negative integers that sum to \(5\) is a partition class for some matrix in \(\mathcal{M}_{n}^{0}(6,115)\).
Let \(A\in\mathcal{M}_{n}^{0}(6,115)\) and \(\mathbf{z}=\left(z_{1}\quad\ldots\quad z_{20}\right)\in\mathscr{P}_{\mathpzc{n}}(A)\). Let \(Z_{A}^{\prime}\) be a \(\hat{d}\times c\) matrix satisfying \(\operatorname{vec}(Z_{A}^{\prime})^{T}=\mathbf{z}\):
\[Z_{A}^{\prime}=\left(\begin{array}{cccc}z_{1}&z_{6}&z_{11}&z_{16}\\ z_{2}&z_{7}&z_{12}&z_{17}\\ z_{3}&z_{8}&z_{13}&z_{18}\\ z_{4}&z_{9}&z_{14}&z_{19}\\ z_{5}&z_{10}&z_{15}&z_{20}\end{array}\right).\]
We want to determine when \(A=B^{c}\) is a power of some matrix \(B\in\mathcal{M}_{n}^{0}(24,115)\).
From the first part of the corollary, we know that each row in \(Z_{A}^{\prime}\) must have elements from the set \(\{w_{i},w_{i}+1\}\). That is, if any row of \(Z_{A}^{\prime}\) either contains three different numbers or numbers that are more than one apart, we know that \(A\) is not a power of any matrix \(B\in\mathcal{M}_{n}^{0}(24,115)\). The necessary condition to have entries at most one apart in each row of \(Z_{A}^{\prime}\), and the fact that all elements of \(Z_{A}^{\prime}\) sum up to \(5\), imply that the maximal possible entry in any one row of \(Z_{A}^{\prime}\) is \(2\), and if \(2\) is an element of \(Z_{A}^{\prime}\), then the row that contains it is the only nonzero row in the matrix. We consider two cases:
1. \(2\) **is an element of \(Z_{A}^{\prime}\).** If \(2\) is an entry in a row, then the other entries in that row have to be \(1\). Without loss of generality, we can put the entries \(2\) and \(1\) in the first row of \(Z_{A}^{\prime}\). This gives us \[Z=\left(\begin{array}{cccc}1&1&1&2\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{array}\right),\] and a unique partition class \(\mathscr{P}_{\mathpzc{n}}(A)\). Referring back to Example 5.8, we see that \(\mathscr{P}_{\mathpzc{n}}(A)\) is equivalent to \(\mathscr{P}_{\mathpzc{n}}(B^{4})\) for \(B\in\mathcal{M}_{n}^{0}(24,115)\) with associated partition class \(\mathscr{T}(\hat{\mathbf{z}})\), where \(\hat{\mathbf{z}}=\left(5\quad 0\quad 0\quad 0\quad 0\right)\). In other words, \(A=B^{4}\) for some matrix \(B\in\mathcal{M}_{n}^{0}(24,115)\) with \(\left(5\quad 0\quad 0\quad 0\quad 0\right)\in\mathscr{P}_{\mathpzc{n}}(B)\).
2. **All elements of \(Z_{A}^{\prime}\) are either \(0\) or \(1\).** Under this constraint the first item in Corollary 5.9 automatically holds, and we have several options for the matrix \(Z\) as defined in Remark 5.7. In other words, we can choose the row sums of \(Z\) arbitrarily, as long as the sum of all the entries in \(Z\) is equal to \(5\). Let us look at a few specific examples: * Letting \[Z=\left(\begin{array}{cccc}z(1,1)&z(1,2)&z(1,3)&z(1,4)\\ z(2,1)&z(2,2)&z(2,3)&z(2,4)\\ z(3,1)&z(3,2)&z(3,3)&z(3,4)\\ z(4,1)&z(4,2)&z(4,3)&z(4,4)\\ z(5,1)&z(5,2)&z(5,3)&z(5,4)\end{array}\right)=\left(\begin{array}{cccc}1&1& 1&1\\ 0&0&0&1\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{array}\right),\]
we get \(w_{1}=w_{2}=0\), \(w_{3}=w_{4}=w_{5}=-1\) and \(\beta_{1}=0\), \(\beta_{2}=3\), \(\beta_{3}=\beta_{4}=\beta_{5}=0\). In this case \(Z^{\prime}=Z\), and any matrix \(A\in\mathcal{M}_{n}^{0}(6,115)\) with \(\mathscr{P}_{\mathpzc{n}}(A)=\mathscr{T}(\operatorname{vec}(Z^{\prime})^{T})\) is a fourth power of some matrix \(B\in\mathcal{M}_{n}^{0}(24,115)\).
* For \[Z=\left(\begin{array}{cccc}0&0&1&1\\ 0&0&1&1\\ 0&0&0&0\\ 0&0&0&1\\ 0&0&0&0\end{array}\right),\] we get \(w_{1}=w_{2}=0\), \(\beta_{1}=\beta_{2}=2\), \(w_{3}=w_{5}=-1\), \(\beta_{3}=\beta_{5}=0\), \(w_{4}=0\) and \(\beta_{4}=3\). We can cyclically permute the rows of \(Z\) in many ways, but this \(Z\) will give us only one partition class as in Corollary 5.9. Specifically, \(A=B^{4}\) for some \(B\in\mathcal{M}_{n}^{0}(24,115)\) if and only if \(\operatorname{vec}(Z^{\prime})^{T}\in\mathscr{P}_{\mu}(A)\), where: \[Z^{\prime}=\left(\begin{array}{cccc}z(1,1)&z(1,2)&z(1,3)&z(1,4)\\ z(2,3)&z(2,4)&z(2,1)&z(2,2)\\ z(3,1)&z(3,2)&z(3,3)&z(3,4)\\ z(4,1)&z(4,2)&z(4,3)&z(4,4)\\ z(5,2)&z(5,3)&z(5,4)&z(5,1)\end{array}\right)=\left(\begin{array}{cccc}0&0&1& 1\\ 1&1&0&0\\ 0&0&0&0\\ 0&0&0&1\\ 0&0&0&0\end{array}\right).\] In that case \(\mathscr{P}_{\mu}(B)=\mathscr{T}(\hat{\mathbf{z}})\), where \(\hat{\mathbf{z}}=\begin{pmatrix}2&2&0&1&0\end{pmatrix}\).
### Type III arc is a power of a Type III arc
Throughout this subsection we assume \(n=qc\hat{d}+y\), \(d=c\hat{d}\), \(\gcd(qc,y)=1\). By Theorem 3.10:
\[\mathcal{K}_{n}(q,qc\hat{d}+y)=\mathcal{K}_{n}(qc,qc\hat{d}+y)^{c}.\]
The theorem below determines the partition class of \(B^{c}\) for \(B\in\mathcal{M}_{n}^{0}(qc,qc\hat{d}+y)\).
**Theorem 5.11**.: _Let \(B\in\mathcal{M}_{n}^{0}(qc,qc\hat{d}+y)\) and \(\begin{pmatrix}\hat{y}_{1}&\ldots&\hat{y}_{\hat{d}}\end{pmatrix}\in\mathscr{P}_{\mathfrak{m}}(B)\). For \(i=1,\ldots,\hat{d}\), let \(u_{i}\), \(\beta_{i}\), \(\eta_{i}\) and \(\gamma_{i}\) be defined by:_
\[\hat{y}_{i}=u_{i}c+\beta_{i}\text{ and }\sum_{k=1}^{i}\beta_{k}=\eta_{i}c+ \gamma_{i}\]
_where \(\beta_{i},\gamma_{i}\in\{0,\ldots,c-1\}\), and we set \(\gamma_{0}:=0\). For \(i=1,\ldots,\hat{d}\) and \(j=1,\ldots,c\), we define:_
\[y(i,j)=\begin{cases}u_{i}+1,&j=\langle\gamma_{i-1}+1\rangle_{c},\ldots,\langle \gamma_{i-1}+\beta_{i}\rangle_{c}\\ u_{i},&j=\langle\gamma_{i-1}+\beta_{i}+1\rangle_{c},\ldots,\langle\gamma_{i-1}+ c\rangle_{c}.\end{cases} \tag{27}\]
_The partition class \(\mathscr{P}_{\mathfrak{m}}(B^{c})\) is defined as follows:_
* _the parts of the partition are equal to_ \(y(i,j)\)_,_ \(i=1,\ldots,\hat{d}\)_,_ \(j=1,\ldots,c\)
* _in the partition_ \(y(i,j)\) _is followed by_ \(y(i+1,j)\) _for_ \(i=1,\ldots,\hat{d}-1\)_. Further,_ \(y(\hat{d},j)\) _is followed by_ \(y(1,\langle j-y\rangle_{c})\)_._
Proof.: Let \(B\in\mathcal{M}^{0}_{n}(\hat{q},\hat{s})\), \(\hat{q}=qc\), \(\hat{s}=qc\hat{d}+y\), and \(\left(\hat{y}_{1}\quad\ldots\quad\hat{y}_{\hat{d}}\right)\in\mathscr{P}_{\textit{in}}(B)\).
**Digraph of \(B^{c}\).** Let \(\widehat{\Gamma}\) be a digraph isomorphic to \(\Gamma(B)\). In this first step, we will find a directed graph \(\widehat{\Gamma}^{(c)}\) that is isomorphic to \(\Gamma(B^{c})\). By Theorem 4.4, \(\Gamma(B)\) is isomorphic to:
\[\widehat{\Gamma}=C(\mathbf{a}(n))+\{\hat{e}_{1},\ldots,\hat{e}_{\hat{d}}\},\]
where \(\hat{e}_{i}:=\{(i\hat{q}+\sum_{k=1}^{i}\hat{y}_{k},1+(i-1)\hat{q}+\sum_{k=1}^ {i}\hat{y}_{k})\}\).
Since \(\gcd(n,c)=1\), \(C(\mathbf{a}(n))^{(c)}\) is an \(n\)-cycle by Remark 4.7:
\[C(\mathbf{a}(n))^{(c)}=C(\langle c\cdot\mathbf{a}(n)\rangle_{n}). \tag{28}\]
In addition, each edge \(\hat{e}_{i}\) contributes the following \(c\) edges to \(\widehat{\Gamma}^{(c)}\):
\[e_{i,t}:=(\langle 1+i\hat{q}+\sum_{k=1}^{i}\hat{y}_{k}-t\rangle_{n}, \langle 1+(i-1)\hat{q}+\sum_{k=1}^{i}\hat{y}_{k}+c-t\rangle_{n}),\,t=1,\ldots,c. \tag{29}\]
We denote \(e_{i,t}:=(v_{O}(i,t),v_{I}(i,t))\) for future use. We have:
\[\widehat{\Gamma}^{(c)}=C(\langle c\cdot\mathbf{a}(n)\rangle_{n})+\{e_{i,t}:i= 1,\ldots,\hat{d},t=1,\ldots c\}.\]
Note that \(\widehat{\Gamma}^{(c)}\) consists of an \(n\)-cycle \(C(\langle c\cdot\mathbf{a}(n)\rangle_{n})\) together with \(\hat{d}c\) \(q\)-cycles, where each \(q\)-cycle is formed by an edge \(e_{i,t}\) connecting two vertices of the \(n\)-cycle. Also, the weight \(1-\alpha\) on the edges \(\hat{e}_{i},i=1,\ldots,\hat{d}\), in \(\widehat{\Gamma}\) gives the weight \(1-\alpha\) to the edges \(e_{i,t},i=1,\ldots,\hat{d},t=1,\ldots,c\), in \(\widehat{\Gamma}^{(c)}\). Thus, the weights on all the \(q\)-cycles, \(C_{i,j},i=1,\ldots,\hat{d},j=1,\ldots,c\), are equal to \(1-\alpha\) in \(\widehat{\Gamma}^{(c)}\).
**Congruence modulo \(c\).** To determine the partition class of \(B^{c}\), we will study separately each part of the graph \(\widehat{\Gamma}^{(c)}\) that (for a fixed \(j\)) involves vertices congruent to \(j\) modulo \(c\). Let \(\Pi_{j}:=P(j+c\mathbf{a}_{0}(h(j)))\), \(j=1,\ldots,c\), where
\[h(j):=\begin{cases}\lfloor\frac{n}{c}\rfloor+1&\text{ for }j=1,\ldots, \langle n\rangle_{c}\\ \lfloor\frac{n}{c}\rfloor&\text{ for }j=\langle n\rangle_{c}+1,\ldots,c,\end{cases}\]
or equivalently:
\[h(j)=1+\hat{d}q+\left\lfloor\frac{y-j}{c}\right\rfloor.\]
With this notation, we have:
\[C(\mathbf{a}(n))^{(c)}=\cup_{j=1}^{c}\left(\Pi_{j}+\{(j+y+(q\hat{d}-1)c,j)\} \right).\]
From \(j+y+(q\hat{d}-1)c\in V(\Pi_{\langle j+y\rangle_{c}})\) and \(j\in V(\Pi_{j})\), we deduce that \(\Pi_{\langle j+y\rangle_{c}}\) is connected to \(\Pi_{j}\) with the edge \((j+y+(q\hat{d}-1)c,j)\). In particular, \(\Pi_{j}\) is followed by \(\Pi_{\langle j-y\rangle_{c}}\) in \(\widehat{\Gamma}^{(c)}\).
From equation (29) we notice that \(\langle v_{O}(i,t)\rangle_{c}=\langle v_{I}(i,t)\rangle_{c}\). This implies that for any pair \(i\in\{1,\ldots,\hat{d}\}\) and \(t\in\{1,\ldots,c\}\), there exists a unique \(j\in\{1,\ldots,c\}\) such that the edge \(e_{i,t}\) connects two vertices in \(\Pi_{j}\). Moreover, the vertices of \(e_{i,t}\) belong to \(\Pi_{j}\) precisely when \(t\) is congruent to \((1-j+\sum_{k=1}^{i}\hat{y}_{k})\mod c\). We define \(e^{\prime}_{i,j}:=e_{i,\langle 1-j+\sum_{k=1}^{i}\hat{y}_{k}\rangle_{c}}\) with \(e^{\prime}_{i,j}=(v^{\prime}_{O}(i,j),v^{\prime}_{I}(i,j))\), and note that
\[\{e_{i,t}:i=1,\ldots,\hat{d},t=1,\ldots c\}=\{e^{\prime}_{i,j}:i=1,\ldots,\hat {d},j=1,\ldots c\}.\]
From now on we will work with edges \(e^{\prime}_{i,j}\). We write: \(v^{\prime}_{O}(i,j)=j+k(i,j)c\) and \(v^{\prime}_{I}(i,j)=j+l(i,j)c\), where:
\[k(i,j)=iq+\left\lfloor\frac{\sum_{k=1}^{i}\hat{y}_{k}-j}{c}\right\rfloor, \tag{30}\]
\[l(i,j)=1+(i-1)q+\left\lfloor\frac{\sum_{k=1}^{i}\hat{y}_{k}-j}{c}\right\rfloor. \tag{31}\]
**Parts of the partition.** We fix \(j\), and consider those \(q\)-cycles in \(\widehat{\Gamma}^{(c)}\) whose vertices are contained in \(V(\Pi_{j})=\{j,j+c,\ldots,j+(h(j)-1)c\}\). From above we already know that those are precisely the \(q\)-cycles in \(\widehat{\Gamma}^{(c)}\) that contain an edge \(e^{\prime}_{i,j}\) for some \(i=1,\ldots,\hat{d}\).
The first vertex in \(\Pi_{j}\) is \(j+0c\), followed by vertices \(j+c\), \(j+2\cdot c\),..., \(j+(h(j)-1)\cdot c\), in this order. From \(k(1,j)<k(2,j)<\ldots<k(\hat{d},j)\) we deduce that the first \(q\)-cycle on \(V(\Pi_{j})\) will contain \(e^{\prime}_{1,j}\), followed by the \(q\)-cycle containing \(e^{\prime}_{2,j}\), etc. The last \(q\)-cycle in \(\Pi_{j}\), made by the edge \(e^{\prime}_{\hat{d},j}\), contains the vertex \(v^{\prime}_{O}(\hat{d},j)=j+k(\hat{d},j)c\). Since \(k(\hat{d},j)=h(j)-1\), we conclude that \(v^{\prime}_{O}(\hat{d},j)\) is the last vertex in \(\Pi_{j}\). In other words, there are no vertices in \(\Pi_{j}\) after the last \(q\)-cycle.
We define \(y(1,j):=l(1,j)-0\) to be the number of vertices that lie before \(v^{\prime}_{I}(1,j)\) in \(\Pi_{j}\). Next, we focus on the path between two neighbouring \(q\)-cycles inside our fixed \(\Pi_{j}\). More specifically, for \(i=2,\ldots,\hat{d}\), we want to determine the number of vertices that are not contained in any \(q\)-cycle and are on the path connecting the \(q\)-cycles made by the edges \(e^{\prime}_{i-1,j}\) and \(e^{\prime}_{i,j}\). The first vertex on this path is \(v^{\prime}_{O}(i-1,j)=j+k(i-1,j)c\) and the last vertex is \(v^{\prime}_{I}(i,j)=j+l(i,j)c\). The number of vertices on \(\Pi_{j}\) (strictly) between \(v^{\prime}_{O}(i-1,j)\) and \(v^{\prime}_{I}(i,j)\) is equal to: \(y(i,j):=l(i,j)-k(i-1,j)-1\).
In summary, the vector \(\mathbf{y}(j):=\left(y(1,j)\quad\ldots\quad y(\hat{d},j)\right)\) contains the contribution to partitions in the partition class of \(\mathscr{P}_{\mathit{in}}(B^{c})\) coming from the part of the graph involving \(V(\Pi_{j})\).
To determine \(y(i,j)\) we write \(\hat{y}_{i}=u_{i}c+\beta_{i}\), where \(\beta_{i}\in\{0,\ldots,c-1\}\), and
\(\sum_{k=1}^{i}\beta_{k}=\eta_{i}c+\gamma_{i}\), where \(\gamma_{i}\in\{0,\ldots,c-1\}\). Inserting \(i=1\) in (31) we now get:
\[y(1,j)=1+\left\lfloor\frac{\hat{y}_{1}-j}{c}\right\rfloor=\begin{cases}u_{1}+1,& \text{ for }j\in\{1,\ldots,\beta_{1}\}\\ u_{1}&\text{ for }j\in\{\beta_{1}+1,\ldots,c\}.\end{cases}\]
To determine \(y(i,j)\) from (30) and (31) we compute:
\[y(i,j) =l(i,j)-k(i-1,j)-1\] \[=u_{i}+\left\lfloor\frac{\sum_{k=1}^{i}\beta_{k}-j}{c}\right\rfloor -\left\lfloor\frac{\sum_{k=1}^{i-1}\beta_{k}-j}{c}\right\rfloor\] \[=u_{i}+\left\lfloor\frac{\gamma_{i-1}+\beta_{i}-j}{c}\right\rfloor -\left\lfloor\frac{\gamma_{i-1}-j}{c}\right\rfloor.\]
We distinguish between two cases: \(\gamma_{i-1}+\beta_{i}<c\), and \(\gamma_{i-1}+\beta_{i}\geq c\). If \(\gamma_{i-1}+\beta_{i}<c\), then \(\eta_{i}=\eta_{i-1}\), \(\gamma_{i}=\gamma_{i-1}+\beta_{i}\), and
\[y(i,j)=\begin{cases}u_{i},&j=1,\ldots,\gamma_{i-1}\\ u_{i}+1,&j=\gamma_{i-1}+1,\ldots,\gamma_{i-1}+\beta_{i}\\ u_{i},&j=\gamma_{i-1}+\beta_{i}+1,\ldots,c.\end{cases}\]
For \(\gamma_{i-1}+\beta_{i}\geq c\) we get \(\eta_{i}=\eta_{i-1}+1\), \(\gamma_{i}=\gamma_{i-1}+\beta_{i}-c\), and
\[y(i,j)=\begin{cases}u_{i}+1,&j=1,\ldots,\gamma_{i-1}+\beta_{i}-c\\ u_{i},&j=\gamma_{i-1}+\beta_{i}-c+1,\ldots,\gamma_{i-1}\\ u_{i}+1,&j=\gamma_{i-1}+1,\ldots,c.\end{cases}\]
Both cases can be written in one expression as given by (27).
**Final ordering.** So far we have proved that \(\mathbf{y}\in\mathscr{P}_{\textit{in}}(B^{c})\) consists of \(\mathbf{y}(j)\), \(j=1,\ldots,c\), in some order. Since we also know that \(\Pi_{j}\) is followed by \(\Pi_{\langle j-y\rangle_{c}}\) in \(\widehat{\Gamma}^{(c)}\), we conclude that \(\mathbf{y}(j)\) is followed by \(\mathbf{y}(\langle j-y\rangle_{c})\) in \(\mathbf{y}\in\mathscr{P}_{\textit{in}}(B^{c})\). With this final observation, the partition class is uniquely defined.
**Remark 5.12**.: Given \(\hat{\mathbf{y}}\in\mathscr{P}_{\textit{in}}(B)\) we can define a matrix \(Y^{\prime}\) that satisfies \(\operatorname{vec}(Y^{\prime})^{T}\in\mathscr{P}_{\textit{in}}(B^{c})\) as follows. From \(\hat{\mathbf{y}}\) we define \(y(i,j)\) as in the statement of Theorem 5.11, and form the matrix:
\[Y:=\left(\begin{array}{cccc}y(1,1)&y(1,2)&\ldots&y(1,c)\\ \vdots&\vdots&\ddots&\vdots\\ y(\hat{d},1)&y(\hat{d},2)&\ldots&y(\hat{d},c)\end{array}\right).\]
The matrix \(Y^{\prime}\) is obtained from \(Y\) by a permutation of columns such that the \(j\)-th column of \(Y^{\prime}\) is equal to \(\mathbf{y}(\langle 1-(j-1)y\rangle_{c})\), which is the \(\langle 1-(j-1)y\rangle_{c}\)-th column of \(Y\). The partition class \(\mathscr{P}_{\textit{in}}(B^{c})\) is then equal to \(\mathscr{T}(\mathbf{y})\), where \(\mathbf{y}=\operatorname{vec}(Y^{\prime})^{T}\).
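As with the Type II case, the construction of Remark 5.12 is mechanical; the following sketch (ours, not part of the paper) computes \(Y\) and \(Y^{\prime}\) from \(\hat{\mathbf{y}}\) and the parameters \(c\) and \(y\).

```python
def mod1(a, m):
    """Residue <a>_m with representatives in {1, ..., m}."""
    return (a - 1) % m + 1

def y_matrices(yhat, c, y):
    """Build the matrices Y and Y' of Remark 5.12 from a partition (yhat_1, ..., yhat_dhat)."""
    dhat = len(yhat)
    u = [v // c for v in yhat]                 # yhat_i = u_i * c + beta_i
    beta = [v % c for v in yhat]
    gamma = [0]                                # gamma_0 := 0
    for b in beta:
        gamma.append((gamma[-1] + b) % c)      # sum_{k<=i} beta_k = eta_i * c + gamma_i
    def ypart(i, j):                           # equation (27); 1-based i and j
        return u[i - 1] + 1 if mod1(j - gamma[i - 1], c) <= beta[i - 1] else u[i - 1]
    Y = [[ypart(i, j) for j in range(1, c + 1)] for i in range(1, dhat + 1)]
    # the j-th column of Y' is the <1 - (j-1)*y>_c-th column of Y
    Yp = [[Y[i][mod1(1 - (j - 1) * y, c) - 1] for j in range(1, c + 1)]
          for i in range(dhat)]
    return Y, Yp
```

For instance, `y_matrices([5, 3, 3, 2], c=3, y=13)` reproduces the matrices of the first case of Example 5.13 below.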
Let \(n=337\) and consider the arc \(\mathcal{K}_{n}(q,s)=\mathcal{K}_{n}(27,337)\). Using Theorem 3.10 we have:
\[\mathcal{K}_{n}(27,337)=\mathcal{K}_{n}(337,324)^{12}=\mathcal{K}_{n}(81,337)^{3 }=\mathcal{K}_{n}(108,337)^{4}. \tag{32}\]
We will consider the case \(\mathcal{K}_{n}(27,337)=\mathcal{K}_{n}(337,324)^{12}\) after Theorem 5.16. In the example below we illustrate the proof of Theorem 5.11 for \(\mathcal{K}_{n}(27,337)=\mathcal{K}_{n}(81,337)^{3}\).
**Example 5.13**.: Let \(n=337\), then \(\mathcal{K}_{n}(27,337)=\mathcal{K}_{n}(81,337)^{3}\), where \(q=27\), \(d=12\), \(s=337\), \(c=3\), \(\hat{q}=81\), \(\hat{d}=4\), \(y=13\) and \(y=\hat{y}_{1}+\hat{y}_{2}+\hat{y}_{3}+\hat{y}_{4}\). Let \(B\in\mathcal{M}_{n}^{0}(81,337)\). We will illustrate the parts of the proof of Theorem 5.11 for a few possible choices for \(\hat{\mathbf{y}}\in\mathscr{P}_{\mathit{in}}(B)\).
1. Let \(\left(\hat{y}_{1}\quad\hat{y}_{2}\quad\hat{y}_{3}\quad\hat{y}_{4}\right)= \left(5\quad 3\quad 3\quad 2\right).\) **Digraph of \(B^{c}\).**\(\Gamma(B)\) is isomorphic to the following digraph \(\widehat{\Gamma}\), \[\widehat{\Gamma}=C(\mathbf{a}(337))+\{\hat{e}_{i}:i=1,\ldots,4\},\] where \(\hat{e}_{i}=\{(81i+\sum_{k=1}^{i}\hat{y}_{k},1+81(i-1)+\sum_{k=1}^{i}\hat{y}_{k})\}.\) Note that \(\widehat{\Gamma}\) consists of a \(337\)-cycle \(C(\mathbf{a}(337))\) and four cycles of order \(81\) that are made by the edges \(\hat{e}_{i}\). The \(337\)-cycle and the edges \(\hat{e}_{i}\) of \(\widehat{\Gamma}\) give another \(337\)-cycle and the edges \(e_{i,t}\), respectively, in \(\widehat{\Gamma}^{(3)}\). These are explained below in detail. By Theorem 5.11, \(\Gamma(B^{3})\) is isomorphic to \(\widehat{\Gamma}^{(3)}\), given by: \[\widehat{\Gamma}^{(3)}=C(\langle 3\cdot\mathbf{a}(337)\rangle_{n})+\{e_{i,t}:i=1,\ldots,4,t=1,\ldots,3\},\] where \(e_{i,t}:=(\langle 1+81i+\sum_{k=1}^{i}\hat{y}_{k}-t\rangle_{n},\langle 1+81(i-1)+\sum_{k=1}^{i}\hat{y}_{k}+3-t\rangle_{n})\). Table 1 shows the edges \(e_{i,t}\). **Congruence modulo \(c\).** The \(n\)-cycle of \(\widehat{\Gamma}^{(3)}\) can be written in terms of the paths \(\Pi_{j},j=1,2,3\), as follows: \[C(\langle 3\cdot\mathbf{a}(337)\rangle_{n})=\cup_{j=1}^{3}\left(\Pi_{j}+\{(j+334,j)\}\right),\] where \(\Pi_{j}=P(j+3\mathbf{a}_{0}(h(j)))\) and \(h(j)=109+\left\lfloor\frac{13-j}{3}\right\rfloor\). Also, recall that the vertices of the edge \(e_{i,j}^{\prime}\) belong to \(\Pi_{j}\), where \(e_{i,j}^{\prime}=e_{i,\langle 1-j+\sum_{k=1}^{i}\hat{y}_{k}\rangle_{c}}\).
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & \(t=1\) & \(t=2\) & \(t=3\) \\ \hline \(e_{1,t}\) & \((86,8)\) & \((85,7)\) & \((84,6)\) \\ \hline \(e_{2,t}\) & \((170,92)\) & \((169,91)\) & \((168,90)\) \\ \hline \(e_{3,t}\) & \((254,176)\) & \((253,175)\) & \((252,174)\) \\ \hline \(e_{4,t}\) & \((337,259)\) & \((336,258)\) & \((335,257)\) \\ \hline \end{tabular}
\end{table}
Table 1: The edges \(e_{i,t}\)
For example, \(j=1\) gives us the path \(\Pi_{1}=P(1+3{\bf a}_{0}(113))\), and the edges \(e^{\prime}_{i,1}\), \(i=1,2,3,4\), connect the vertices in \(\Pi_{1}\) to form four cycles of order \(27\).
Writing \(e^{\prime}_{i,1}=(1+3k(i,1),1+3l(i,1))\), we get \(k(1,1)=28\), \(k(2,1)=56\), \(k(3,1)=84\), \(k(4,1)=112\), and \(l(1,1)=2\), \(l(2,1)=30\), \(l(3,1)=58\), \(l(4,1)=86\).
**Parts of the partition.** To illustrate the next part of the proof, let us continue with \(j=1\). Considering the vertices of \(\Pi_{1}\) in the natural path order, i.e., \(V(\Pi_{1})=\{1,1+3,\ldots,1+3(113-1)\}\), we see that the first vertex in \(\Pi_{1}\) contained in a \(27\)-cycle (made by the edge \(e^{\prime}_{1,1}\)) is the vertex \(v^{\prime}_{I}(1,1)=7=1+3(2)\). This implies \(l(1,1)=2\), and \(v^{\prime}_{I}(1,1)\) is the third vertex of \(\Pi_{1}\). Therefore, \(y(1,1)=2\) (recalling that \(y(i,j)\) is defined to be the number of vertices in \(\Pi_{j}\) before the vertex \(v^{\prime}_{I}(i,j)\)).
Next, \(y(2,1)\) is the number of vertices not contained in a \(27\)-cycle strictly between \(v^{\prime}_{O}(1,1)\) and \(v^{\prime}_{I}(2,1)\). Since, \(v^{\prime}_{O}(1,1)=85=1+3(28)\) and \(v^{\prime}_{I}(2,1)=91=1+3(30)\), we have \(k(1,1)=28\) and \(l(2,1)=30\). This implies, \(y(2,1)=l(2,1)-k(1,1)-1=1\). Similarly, \(y(3,1)=l(3,1)-k(2,1)-1=1\) and \(y(4,1)=l(4,1)-k(3,1)-1=1\). Also, note that \(v^{\prime}_{O}(4,1)=337=1+3(112)\) is the last vertex of \(\Pi_{1}\).
More generally, to get all the parts of the partition we express the \(k(i,j)\) and \(l(i,j)\) in terms of the parameters \(u_{i}\), \(\beta_{i}\), and \(\gamma_{i}\). In this case, we write \(\hat{y}_{i}=3u_{i}+\beta_{i}\) and \(\sum_{k=1}^{i}\beta_{k}=3\eta_{i}+\gamma_{i}\), \(\beta_{i},\gamma_{i}\in\{0,1,2\}\), to determine the following parameters:
\[u_{1}=u_{2}=u_{3}=1,\,u_{4}=0,\] \[\beta_{1}=\beta_{4}=2,\,\beta_{2}=\beta_{3}=0,\] \[\gamma_{1}=\gamma_{2}=\gamma_{3}=2,\,\gamma_{4}=1.\]
Using equation (27), we get \(y(i,j)\), \(i=1,\ldots,4\), \(j=1,\ldots,3\). Thus, the \(Y\) matrix (as defined in Remark 5.12) is equal to:
\[Y=\left(\begin{array}{cccc}y(1,1)&y(1,2)&y(1,3)\\ y(2,1)&y(2,2)&y(2,3)\\ y(3,1)&y(3,2)&y(3,3)\\ y(4,1)&y(4,2)&y(4,3)\end{array}\right)=\left(\begin{array}{cccc}2&2&1\\ 1&1&1\\ 1&1&1\\ 1&0&1\end{array}\right).\]
In particular, the elements of \(\mathscr{P}_{\it m}(B^{3})\) have one part equal to \(2\), one part equal to \(0\) and all other parts equal to \(1\).
**Final ordering.** To determine \(\mathscr{P}_{\textit{in}}(B^{3})\), we still need the ordering of the parts. For this we look at the \(Y^{\prime}\) matrix in Remark 5.12: \[Y^{\prime}=\left(\begin{array}{ccc}y(1,1)&y(1,3)&y(1,2)\\ y(2,1)&y(2,3)&y(2,2)\\ y(3,1)&y(3,3)&y(3,2)\\ y(4,1)&y(4,3)&y(4,2)\end{array}\right)=\left(\begin{array}{ccc}2&1&2\\ 1&1&1\\ 1&1&1\\ 1&1&0\end{array}\right).\] Thus, the partition class \(\mathscr{P}_{\textit{in}}(B^{3})\) is given by \(\mathbf{y}=\mathscr{T}(\mathrm{vec}(Y^{\prime})^{T})\).
2. Let \(\left(\hat{y}_{1}\quad\hat{y}_{2}\quad\hat{y}_{3}\quad\hat{y}_{4}\right)= \left(11\quad 2\quad 0\quad 0\right).\) Writing \(\hat{y}_{i}=3u_{i}+\beta_{i}\) and \(\sum_{k=1}^{i}\beta_{k}=3\eta_{i}+\gamma_{i}\), \(\beta_{i},\gamma_{i}\in\{0,1,2\}\), gives: \(u_{1}=3\), \(u_{2}=u_{3}=u_{4}=0\); \(\beta_{1}=\beta_{2}=2\), \(\beta_{3}=\beta_{4}=0\) and \(\gamma_{1}=2\), \(\gamma_{2}=\gamma_{3}=\gamma_{4}=1\). Using equation (27) to determine \(y(i,j)\), \(i=1,\ldots,4\), \(j=1,\ldots,3\), we get the following matrices (defined in Remark 5.12): \[Y=\left(\begin{array}{ccc}4&4&3\\ 1&0&1\\ 0&0&0\\ 0&0&0\end{array}\right)\ \text{and}\ Y^{\prime}=\left(\begin{array}{ccc}4&3&4 \\ 1&1&0\\ 0&0&0\\ 0&0&0\end{array}\right).\] The partition class \(\mathscr{P}_{\textit{in}}(B^{3})\) is given by \(\mathbf{y}=\mathscr{T}(\mathrm{vec}(Y^{\prime})^{T})\).
3. Let \(\left(\hat{y}_{1}\quad\hat{y}_{2}\quad\hat{y}_{3}\quad\hat{y}_{4}\right)= \left(13\quad 0\quad 0\quad 0\right).\) From \(\hat{y}_{i}=3u_{i}+\beta_{i}\) and \(\sum_{k=1}^{i}\beta_{k}=3\eta_{i}+\gamma_{i}\), \(\beta_{i},\gamma_{i}\in\{0,1,2\}\), we get: \(u_{1}=4\), \(u_{2}=u_{3}=u_{4}=0\); \(\beta_{1}=1\), \(\beta_{2}=\beta_{3}=\beta_{4}=0\) and \(\gamma_{1}=\gamma_{2}=\gamma_{3}=\gamma_{4}=1\). In this case: \[Y=Y^{\prime}=\left(\begin{array}{ccc}5&4&4\\ 0&0&0\\ 0&0&0\\ 0&0&0\end{array}\right),\] and the partition class \(\mathscr{P}_{\textit{in}}(B^{3})\) is given by \(\mathbf{y}=\mathscr{T}(\mathrm{vec}(Y^{\prime})^{T})\).
4. Let \(\left(\hat{y}_{1}\quad\hat{y}_{2}\quad\hat{y}_{3}\quad\hat{y}_{4}\right)= \left(4\quad 3\quad 3\quad 3\right).\) Writing \(\hat{y}_{i}=3u_{i}+\beta_{i}\) and \(\sum_{k=1}^{i}\beta_{k}=3\eta_{i}+\gamma_{i}\), \(\beta_{i},\gamma_{i}\in\{0,1,2\}\), gives: \(u_{1}=u_{2}=u_{3}=u_{4}=1\); \(\beta_{1}=1\), \(\beta_{2}=\beta_{3}=\beta_{4}=0\) and \(\gamma_{1}=\gamma_{2}=\gamma_{3}=\gamma_{4}=1\). Again, \[Y=Y^{\prime}=\left(\begin{array}{ccc}2&1&1\\ 1&1&1\\ 1&1&1\\ 1&1&1\end{array}\right)\] and the partition class \(\mathscr{P}_{\textit{in}}(B^{3})\) is given by \(\mathbf{y}=\mathscr{T}(\mathrm{vec}(Y^{\prime})^{T})\).
**Corollary 5.14**.: _Let \(A\in\mathcal{M}_{n}^{0}(q,qc\hat{d}+y)\) and \(\left(y_{1}\quad\ldots\quad y_{c\hat{d}}\right)\in\mathscr{P}_{\textit{in}}(A)\). Then \(A=B^{c}\) for some \(B\in\mathcal{M}_{n}^{0}(qc,qc\hat{d}+y)\) if and only if:_
1. _For every_ \(i\in\{1,\ldots,\hat{d}\}\) _there exists_ \(u_{i}\) _so that_ \(y_{t\hat{d}+i}\in\{u_{i},u_{i}+1\}\) _for_ \(t=0,\ldots,c-1\)_. (If for some_ \(i\) _all_ \(y_{t\hat{d}+i}\) _are equal, then we say that they are equal to_ \(u_{i}\)_.)_ _We define_ \(\beta_{i}:=|\{t\in\{0,\ldots,c-1\};y_{t\hat{d}+i}=u_{i}+1\}|\)_,_ \(i=1,\ldots,\hat{d}\)_, and_ \(\sum_{k=1}^{i}\beta_{k}=\eta_{i}c+\gamma_{i}\)_, where_ \(\gamma_{i}\in\{0,\ldots,c-1\}\)_. In addition,_ \(\gamma_{0}:=0\)_._ _With these parameters we define_ \(y(i,j)\) _as follows:_ \[y(i,j):=\begin{cases}u_{i}+1,&j=\langle\gamma_{i-1}+1\rangle_{c},\ldots, \langle\gamma_{i-1}+\beta_{i}\rangle_{c}\\ u_{i},&j=\langle\gamma_{i-1}+\beta_{i}+1\rangle_{c},\ldots,\langle\gamma_{i-1}+c\rangle_{c},\end{cases}\] _where_ \(i=1,\ldots,\hat{d}\)_,_ \(j=1,\ldots,c\)_._
2. _The partition class_ \(\mathscr{P}_{\mathfrak{m}}(A)\) _consists of the parts_ \(y(i,j)\)_,_ \(i=1,\ldots,\hat{d}\)_,_ \(j=1,\ldots,c\)_. Further, the part_ \(y(i,j)\) _is followed by_ \(y(i+1,j)\) _for_ \(i=1,\ldots,\hat{d}-1\) _and_ \(y(\hat{d},j)\) _is followed by_ \(y(1,\langle j-y\rangle_{c})\) _in_ \(\mathscr{P}_{\mathfrak{m}}(A)\)_._
_If the conditions above are satisfied, then \(A=B^{c}\) for \(B\in\mathcal{M}_{n}^{0}(qc,qc\hat{d}+y)\) with \(\mathscr{P}_{\mathfrak{m}}(B)=\mathscr{T}(\big{(}\hat{y}_{1}\quad\ldots\quad \hat{y}_{\hat{d}}\big{)}),\) where \(\hat{y}_{i}:=\sum_{j=1}^{c}y(i,j)\), for \(i=1,\ldots,\hat{d}\)._
Proof.: The result follows directly from Theorem 5.11.
**Example 5.15**.: Let \(n=s=337\), \(q=27\), \(d=12\), \(c=3\), \(y=13\) and \(n=s=qd+y\). By Definition 4.5, any partition class containing \(12\) non-negative integers that sum to \(13\) is a partition class for some matrix in \(\mathcal{M}_{n}^{0}(27,337)\).
Let \(A\in\mathcal{M}_{n}^{0}(27,337)\) and \(\mathbf{y}=\big{(}y_{1}\quad\ldots\quad y_{12}\big{)}\in\mathscr{P}_{ \mathfrak{m}}(A)\). Let \(Y_{A}^{\prime}\) be a \(\hat{d}\times c\) matrix satisfying \(\operatorname{vec}(Y_{A}^{\prime})^{T}=\mathbf{y}\):
\[Y_{A}^{\prime}=\left(\begin{array}{ccc}y_{1}&y_{5}&y_{9}\\ y_{2}&y_{6}&y_{10}\\ y_{3}&y_{7}&y_{11}\\ y_{4}&y_{8}&y_{12}\end{array}\right).\]
We want to determine when \(A=B^{3}\) for some matrix \(B\in\mathcal{M}_{n}^{0}(81,337)\).
From the first part of Corollary 5.14, we know that for \(A\) to be a power of any matrix \(B\in\mathcal{M}_{n}^{0}(81,337)\), each row in \(Y_{A}^{\prime}\) must have elements from a set of the form \(\{u_{i},u_{i}+1\}\) for some integer \(u_{i}\geq 0\). Under this constraint, we can choose the row sums of \(Y_{A}^{\prime}\) in many possible ways taking care that the sum of all the entries in \(Y_{A}^{\prime}\) is equal to \(13\). Let us look at a few specific examples.
1. Let \[Y_{A}^{\prime}=\left(\begin{array}{ccc}1&1&0\\ 0&0&0\\ 2&2&2\\ 1&2&2\end{array}\right).\] From the first item of Corollary 5.14, we get: \(u_{1}=u_{2}=0\), \(u_{3}=2\), \(u_{4}=1\), \(\beta_{1}=2\), \(\beta_{2}=\beta_{3}=0\), \(\beta_{4}=2\) and \(\gamma_{1}=\gamma_{2}=\gamma_{3}=2\), \(\gamma_{4}=1\).
Thus,
\[Y=\left(\begin{array}{cccc}y(1,1)&y(1,2)&y(1,3)\\ y(2,1)&y(2,2)&y(2,3)\\ y(3,1)&y(3,2)&y(3,3)\\ y(4,1)&y(4,2)&y(4,3)\end{array}\right)=\left(\begin{array}{cccc}1&1&0\\ 0&0&0\\ 2&2&2\\ 2&1&2\end{array}\right).\]
Using the second item of Corollary 5.14, we get:
\[Y^{\prime}=\left(\begin{array}{cccc}y(1,1)&y(1,3)&y(1,2)\\ y(2,1)&y(2,3)&y(2,2)\\ y(3,1)&y(3,3)&y(3,2)\\ y(4,1)&y(4,3)&y(4,2)\end{array}\right)=\left(\begin{array}{cccc}1&0&1\\ 0&0&0\\ 2&2&2\\ 2&2&1\end{array}\right).\]
We see that \(\operatorname{vec}(Y^{\prime}_{A})^{T}\) is equal to \(\operatorname{vec}(Y^{\prime})^{T}\) up to a cyclic shift and thus satisfies all the conditions of Corollary 5.14. Therefore, \(A=B^{3}\) for \(B\in\mathcal{M}_{n}^{0}(81,337)\) with \(\mathscr{P}_{\textit{in}}(B)=\mathscr{T}(\hat{\mathbf{y}})\), where \(\hat{\mathbf{y}}=\begin{pmatrix}6&5&2&0\end{pmatrix}\).
2. For \[Y^{\prime}_{A}=\left(\begin{array}{cccc}4&4&3\\ 1&0&1\\ 0&0&0\\ 0&0&0\end{array}\right),\] we get \(u_{1}=3\), \(u_{2}=u_{3}=u_{4}=0\); \(\beta_{1}=\beta_{2}=2\), \(\beta_{3}=\beta_{4}=0\) and \(\gamma_{1}=2\),\(\gamma_{2}=\gamma_{3}=\gamma_{4}=1\). Thus, \[Y=\left(\begin{array}{cccc}4&4&3\\ 1&0&1\\ 0&0&0\\ 0&0&0\end{array}\right).\] Referring back to Example 5.13 (Item 2.) we know that \[Y^{\prime}=\left(\begin{array}{cccc}4&3&4\\ 1&1&0\\ 0&0&0\\ 0&0&0\end{array}\right).\] This time the second item of Corollary 5.14 is not satisfied. Hence, \(A\neq B^{3}\) for any \(B\in\mathcal{M}_{n}^{0}(81,337)\).
3. Let \[Y^{\prime}_{A}=\left(\begin{array}{cccc}2&1&1\\ 1&1&1\\ 1&1&1\\ 1&1&1\end{array}\right).\]
In this case we get: \(u_{1}=u_{2}=u_{3}=u_{4}=1\); \(\beta_{1}=1\), \(\beta_{2}=\beta_{3}=\beta_{4}=0\) and \(\gamma_{1}=\gamma_{2}=\gamma_{3}=\gamma_{4}=1\). Thus,
\[Y=\left(\begin{array}{ccc}2&1&1\\ 1&1&1\\ 1&1&1\\ 1&1&1\end{array}\right).\]
From Item \(4\) of Example 5.13 we note that \(Y^{\prime}=Y\) and \(A=B^{3}\) for \(B\in\mathcal{M}_{n}^{0}(81,337)\) with the \(\mathscr{P}_{\textit{in}}(B)=\mathscr{T}(\hat{\mathbf{y}})\), where \(\hat{\mathbf{y}}=\begin{pmatrix}4&3&3&3\end{pmatrix}.\)
### Type III arc is a power of a Type I arc
In this subsection we assume \(n=qd+y\), \(\gcd(qd,y)=1\), \(y\in\{1,\ldots,q-1\}\). By Theorem 3.10:
\[\mathcal{K}_{n}(qd+y,qd)^{d}=\mathcal{K}_{n}(q,qd+y).\]
The following theorem determines the partition class of \(B^{d}\) for \(B\in\mathcal{M}_{n}^{0}(qd+y,qd)\). We note that the result can also be obtained from Theorem 5.11 by taking \(i=1\), \(\hat{d}=1\), and \(c=d\).
**Theorem 5.16**.: _Let \(B\in\mathcal{M}_{n}^{0}(qd+y,qd)\), where \(n=s=qd+y\), \(\gcd(d,s)=1\) and \(y\in\{1,\ldots,q-1\}\). We define \(u\) and \(\beta\) by writing \(y:=ud+\beta\), where \(\beta\in\{0,\ldots,d-1\}\). For \(j=1,\ldots,d\) we define:_
\[y(j):=\begin{cases}u+1,&j=1,\ldots,\beta\\ u,&j=\beta+1,\ldots,d.\end{cases} \tag{33}\]
_The partition class \(\mathscr{P}_{\textit{in}}(B^{d})\) is defined as follows:_
* _the parts of the partition are equal to_ \(y(j)\)_,_ \(j=1,\ldots,d\)_,_
* _in the partition_ \(y(j)\) _is followed by_ \(y(\langle j-y\rangle_{d})\)_._
Proof.: Let \(B\in\mathcal{M}_{n}^{0}(\hat{q},\hat{s})\), where \(\hat{q}=qd\), \(\hat{s}=qd+y\), \(\gcd(\hat{q},n)=1\). By Theorem 4.1, \(\Gamma(B)\) is isomorphic to \(C(\mathbf{a}(n))+\{(qd,1)\}\), which is isomorphic to:
\[\widehat{\Gamma}:=C(\mathbf{a}(n))+\{(\hat{s},y+1)\}.\]
**Digraph of \(B^{d}\).** The digraph of \(\Gamma(B^{d})\) is isomorphic to \(\widehat{\Gamma}^{(d)}\):
\[\widehat{\Gamma}^{(d)}=C(\langle d\cdot\mathbf{a}(n)\rangle_{n})+\{e_{t}:t=1,\ldots,d\},\]
where
\[e_{t}=(s-t+1,y+d-t+1),\,t=1,\ldots,d.\]
In particular, \(\widehat{\Gamma}^{(d)}\) consists of an \(n\)-cycle \(C(\langle d\cdot\mathbf{a}(n)\rangle_{n})\) together with \(d\)\(q\)-cycles, where each \(q\)-cycle is formed by an edge \(e_{t}\) connecting two vertices of the \(n\)-cycle. The edge
\((\hat{s},y+1)\) in \(\widehat{\Gamma}\) and the edges \(e_{t},t=1,\ldots,d\), in \(\widehat{\Gamma}^{(d)}\) have weights equal to \(1-\alpha\). Equivalently, all the \(q\)-cycles in \(\widehat{\Gamma}^{(d)}\) have the weight \(1-\alpha\).
**Congruence modulo \(d\).** Let \(\Pi_{j}:=P(j+d\mathbf{a}_{0}(h(j)))\), where \(h(j)=1+q+\left\lfloor\frac{y-j}{d}\right\rfloor\). We have:
\[C(\mathbf{a}(n))^{(d)}=\cup_{j=1}^{d}\left(\Pi_{j}+\left\{((q-1)d+y+j,j)\right\} \right),\]
and note that \(\Pi_{j}\) is followed by \(\Pi_{\langle j-y\rangle_{d}}\) (equivalently, \(\Pi_{\langle j+y\rangle_{d}}\) is followed by \(\Pi_{j}\)) in \(\widehat{\Gamma}^{(d)}\). Both vertices of \(e_{t}\) belong to \(\Pi_{j}\), where \(t=\langle 1+y-j\rangle_{d}\). In other words, the edge \(e_{\langle 1+y-j\rangle_{d}}\) connects two vertices in \(\Pi_{j}\) to form a \(q\)-cycle in \(\widehat{\Gamma}^{(d)}\). We define \(e^{\prime}_{j}:=e_{\langle 1+y-j\rangle_{d}}\) and note that \(\{e_{t}:t=1,\ldots,d\}=\{e^{\prime}_{j}:j=1,\ldots,d\}\).
Let \(e^{\prime}_{j}=(v^{\prime}_{O}(j),v^{\prime}_{I}(j))\) and \(y=ud+\beta\), \(\beta\in\{0,\ldots,d-1\}\). Then, \(v^{\prime}_{O}(j)=j+k(j)d\) and \(v^{\prime}_{I}(j)=j+l(j)d\), where
\[k(j):=\begin{cases}q+u&\text{for }j\leq\beta\\ q+u-1&\text{for }j>\beta,\end{cases} \tag{34}\]
\[l(j):=\begin{cases}u+1&\text{for }j\leq\beta\\ u&\text{for }j>\beta.\end{cases} \tag{35}\]
**Parts of the Partition.** For a fixed \(j\), there is only one \(q\)-cycle made by the edge \(e^{\prime}_{j}=(v^{\prime}_{O}(j),v^{\prime}_{I}(j))\) in \(\Pi_{j}\). Also, \(k(j)=h(j)-1\) implies that \(v^{\prime}_{O}(j)=j+k(j)d\) is the last vertex in \(\Pi_{j}\).
We define \(y(j):=l(j)-0\) as the number of vertices that lie before \(v^{\prime}_{I}(j)\) in \(\Pi_{j}\). Thus, \(y(j)\) is equal to \(l(j)\) and is as defined in (33) in the statement of the theorem.
**Ordering of Parts.** Since the \(q\)-cycle with the edge \(e^{\prime}_{j}\) is followed by the \(q\)-cycle containing the edge \(e^{\prime}_{\langle j-y\rangle_{d}}\), the part \(y(j)\) is followed by the part \(y(\langle j-y\rangle_{d})\) in the partition.
**Remark 5.17**.: For \(B\in\mathcal{M}_{n}^{0}(qd,qd+y)\), let us define the row vector \(\mathbf{y}\) consisting of parts of the partition class \(\mathscr{P}_{\mathit{u}}(B^{d})\), i.e., \(\mathbf{y}:=\begin{pmatrix}y(1)&\ldots&y(d)\end{pmatrix}.\) Since \(y(j)\) is followed by \(y(\langle j-y\rangle_{d})\) in the partition class \(\mathscr{P}_{\mathit{u}}(B^{d})\), we can permute the elements of \(\mathbf{y}\) to get the row vector:
\[\mathbf{y}^{\prime}:=\begin{pmatrix}y(1)&y(\langle 1-y\rangle_{d})&\ldots&y(\langle 1-(d-1)y\rangle_{d})\end{pmatrix}\in\mathscr{P}_{\mathit{u}}(B^{d}).\]
The partition class \(\mathscr{P}_{\mathit{u}}(B^{d})\) is equal to \(\mathscr{T}(\mathbf{y}^{\prime})\).
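In this special case the whole construction reduces to a few lines; the following sketch is ours, not part of the paper.

```python
def mod1(a, m):
    """Residue <a>_m with representatives in {1, ..., m}."""
    return (a - 1) % m + 1

def typeI_power_partition(d, y):
    """Ordered parts y' of the partition class of B^d per Theorem 5.16 and Remark 5.17."""
    u, beta = divmod(y, d)                     # y = u*d + beta with beta in {0, ..., d-1}
    ypart = [u + 1 if j <= beta else u for j in range(1, d + 1)]   # equation (33)
    # y' = (y(1), y(<1-y>_d), ..., y(<1-(d-1)y>_d))
    return [ypart[mod1(1 - t * y, d) - 1] for t in range(d)]
```

For instance, `typeI_power_partition(12, 13)` returns the vector \((2,1,\ldots,1)\) of Example 5.18 below.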
**Example 5.18**.: For \(n=337\) we have \(\mathcal{K}_{n}(27,337)=\mathcal{K}_{n}(337,324)^{12}\), where \(q=27\), \(n=s=337\), \(d=12\), \(y=s-qd=13\) and \(\mathcal{M}_{n}^{0}(qd+y,qd)=\mathcal{M}_{n}^{0}(337,324).\) Let \(B\in\mathcal{M}_{n}^{0}(337,324)\).
**Digraph of \(B^{12}\).** The digraph \(\Gamma(B^{12})\) is isomorphic to \(\widehat{\Gamma}^{(12)}\):
\[\widehat{\Gamma}^{(12)}=C(\langle 12\cdot{\bf a}(337)\rangle_{n})+\{e_{t}:t=1, \ldots,12\},\]
where
\[e_{t}=(338-t,26-t),t=1,\ldots,12.\]
**Congruence modulo \(d\).** For \(\Pi_{j}=P(j+12{\bf a}_{0}(h(j))),\) with \(h(j)=28+\lfloor\frac{13-j}{12}\rfloor\), the edge \(e_{j}^{\prime}=e_{\langle 14-j\rangle_{12}}\) connects the vertices in \(\Pi_{j}\) to form a single \(27\)-cycle.
For example, the edge \(e_{1}^{\prime}=e_{1}=(337,25)\) forms a \(27\)-cycle by connecting two vertices of \(\Pi_{1}=P(1+12{\bf a}_{0}(29))\). Similarly, the edge \(e_{2}^{\prime}=e_{12}=(326,14)\) connects vertices in \(\Pi_{2}=P(2+12{\bf a}_{0}(28))\) to form a \(27\)-cycle.
Writing \(e_{1}^{\prime}=(1+12k(1),1+12l(1))\) gives \(k(1)=28\) and \(l(1)=2\). Similarly, \(e_{2}^{\prime}=(2+12k(2),2+12l(2))\) gives \(k(2)=27\) and \(l(2)=1\).
**Parts of the Partition.** Recall that \(y(j)\) is the number of vertices in \(\Pi_{j}\) before \(v_{I}^{\prime}(j)=j+12l(j)\). Continuing with \(\Pi_{1}\), from \(l(1)=2\) we get \(y(1)=2\). Similarly, for \(\Pi_{2}\) we have \(l(2)=1\) which gives \(y(2)=1\). More generally, to get all the parts \(y(j)\), we write \(y\) in terms of \(u\) and \(\beta\): \(y=12u+\beta\), \(\beta\in\{0,\ldots,11\}\). This gives us:
\[y(j):=\begin{cases}2&\text{ for }j\leq 1\\ 1&\text{ for }j>1.\end{cases}\]
**Ordering.** From Remark 5.17, we have:
\[{\bf y}^{\prime}=\left(\begin{array}{cccccccccccc}y(1)&y(12)&y(11)&y(10)&y( 9)&y(8)&y(7)&y(6)&y(5)&y(4)&y(3)&y(2)\end{array}\right).\]
Thus, \({\bf y}^{\prime}=\left(\begin{array}{cccccccccccc}2&1&1&1&1&1&1&1&1&1&1&1\end{array}\right)\) and the partition class \(\mathscr{P}_{\text{\tiny{m}}}(B^{12})\) is given by \(\mathscr{T}({\bf y}^{\prime})\). Note that this uniquely defined partition class for \(B^{12}\) is the same as the partition class \(\mathscr{P}_{\text{\tiny{m}}}(B^{\prime 3})\), where \(B^{\prime}\in\mathcal{M}_{n}^{0}(81,337)\) and \(\mathscr{P}_{\text{\tiny{m}}}(B^{\prime})=\mathscr{T}(\hat{\bf y})\) for \(\hat{\bf y}=\begin{pmatrix}4&3&3&3\end{pmatrix}\) (Item \(4\), Example 5.13).
**Corollary 5.19**.: _Let \(A\in\mathcal{M}_{n}^{0}(q,qd+y)\) have the associated partition \(\left(y_{1}\quad\ldots\quad y_{d}\right)\in\mathscr{P}_{\text{\tiny{m}}}(A)\). Then \(A=B^{d}\) for some \(B\in\mathcal{M}_{n}^{0}(qd,qd+y)\) if and only if:_
1. _There exists_ \(u\) _such that_ \(y_{j}\in\{u,u+1\}\)_. (If all_ \(y_{j}\) _are equal then we say they are all equal to_ \(u\)_)._ _We define_ \(\beta:=|\{j\in\{1,\ldots,d\};y_{j}=u+1\}|\) _and_ \[y(j):=\begin{cases}u+1&\text{ for }j=1,\ldots,\beta\\ u&\text{ for }j=\beta+1,\ldots,d.\end{cases}\]
2. _The partition class_ \(\mathscr{P}_{\text{\tiny{m}}}(A)\) _consists of the parts_ \(y(j)\)_,_ \(j=1,\ldots,d\)_, where the part_ \(y(j)\) _is followed by the part_ \(y(\langle j-y\rangle_{d})\) _in_ \(\mathscr{P}_{\text{\tiny{m}}}(A)\)_._
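Combined with the sketch after Remark 5.17, this test reduces to a rotation check. The sketch below is ours and reuses `typeI_power_partition` (and `mod1`) from that earlier block; it again assumes that a partition class is closed under cyclic shifts, as the examples indicate.

```python
def is_dth_power_typeI(parts, y):
    """Corollary 5.19: (y_1, ..., y_d) comes from some B^d iff it is a rotation of y'."""
    d = len(parts)
    ref = typeI_power_partition(d, y)   # reference vector y' from the earlier sketch
    return any(ref[k:] + ref[:k] == list(parts) for k in range(d))
```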
**Example 5.20**.: For \(n=337\) we have \(\mathcal{K}_{n}(27,337)=\mathcal{K}_{n}(337,324)^{12}\). In this case, the relevant parameters are \(q=27\), \(s=337\), \(d=12\), and \(y=13\). By Definition 4.5 any \(12\) non-negative integers that sum up to \(13\) form a partition class for some matrix in \(\mathcal{M}_{n}^{0}(27,337)\). Let \(A\in\mathcal{M}_{n}^{0}(27,337)\) and \({\bf y}^{\prime}=\left(y_{1}\quad\ldots\quad y_{12}\right)\in\mathscr{P}_{ \text{\tiny{m}}}(A)\).
The first part of Corollary 5.19 restricts the elements in \(\mathbf{y}^{\prime}\) to be in the set \(\{u,u+1\}\) for some integer \(u\). This necessary condition leaves us with just one choice for the vector \(\mathbf{y}\) (as defined in Remark 5.17), which contains the parts of the partition:
\[\mathbf{y}=\left(\begin{array}{cccccccccccc}2&1&1&1&1&1&1&1&1&1&1&1\end{array} \right).\]
The above \(\mathbf{y}\) gives \(\beta=1\) and \(u=1\). With these parameters, we get the unique partition class \(\mathscr{P}_{\mathit{in}}(A)\) which is defined in Corollary 5.19. Therefore \(A=B^{12}\) for some \(B\in\mathcal{M}_{n}^{0}(337,324)\) if and only if \(\mathbf{y}^{\prime}\in\mathscr{P}_{\mathit{in}}(A)\), where:
\[\mathbf{y}^{\prime}=\mathbf{y}=\left(\begin{array}{cccccccccccc}2&1&1&1&1&1&1&1&1&1&1&1\end{array}\right).\]
In that case \(A=B^{12}=B^{\prime 3}\) for \(B\in\mathcal{M}_{n}^{0}(337,324)\) and \(B^{\prime}\in\mathcal{M}_{n}^{0}(81,337)\), where \(\mathscr{P}_{\mathit{in}}(B^{\prime})=\mathscr{T}(\hat{\mathbf{y}})\) for \(\hat{\mathbf{y}}=\left(4\quad 3\quad 3\quad 3\right).\)
**Acknowledgement.** This publication has emanated from research supported in part by a grant from Science Foundation Ireland under Grant number 18/CRT/6049. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. The research of the second author is supported in part by NSERC Discovery Grant RGPIN-2019-05408.
|
2301.07851 | From English to More Languages: Parameter-Efficient Model Reprogramming
for Cross-Lingual Speech Recognition | In this work, we propose a new parameter-efficient learning framework based
on neural model reprogramming for cross-lingual speech recognition, which can
\textbf{re-purpose} well-trained English automatic speech recognition (ASR)
models to recognize the other languages. We design different auxiliary neural
architectures focusing on learnable pre-trained feature enhancement that, for
the first time, empowers model reprogramming on ASR. Specifically, we
investigate how to select trainable components (i.e., encoder) of a
conformer-based RNN-Transducer, as a frozen pre-trained backbone. Experiments
on a seven-language multilingual LibriSpeech speech (MLS) task show that model
reprogramming only requires 4.2% (11M out of 270M) to 6.8% (45M out of 660M) of
its original trainable parameters from a full ASR model to perform competitive
results in a range of 11.9% to 8.1% WER averaged across different languages. In
addition, we discover different setups to make large-scale pre-trained ASR
succeed in both monolingual and multilingual speech recognition. Our methods
outperform existing ASR tuning architectures and their extension with
self-supervised losses (e.g., w2v-bert) in terms of lower WER and better
training efficiency. | Chao-Han Huck Yang, Bo Li, Yu Zhang, Nanxin Chen, Rohit Prabhavalkar, Tara N. Sainath, Trevor Strohman | 2023-01-19T02:37:56Z | http://arxiv.org/abs/2301.07851v1 | From English to More Languages: Parameter-Efficient Model Reprogramming for Cross-Lingual Speech Recognition
###### Abstract
In this work, we propose a new parameter-efficient learning framework based on neural model reprogramming for cross-lingual speech recognition, which can **re-purpose** well-trained English automatic speech recognition (ASR) models to recognize the other languages. We design different auxiliary neural architectures focusing on learnable pre-trained feature enhancement that, for the first time, empowers model reprogramming on ASR. Specifically, we investigate how to select trainable components (i.e., encoder) of a conformer-based RNN-Transducer, as a frozen pre-trained backbone. Experiments on a seven-language multilingual LibriSpeech speech (MLS) task show that model reprogramming only requires \(4.2\)% (11M out of 270M) to \(6.8\)% (45M out of 660M) of its original trainable parameters from a full ASR model to perform competitive results in a range of \(11.9\)% to \(8.1\)% WER averaged across different languages. In addition, we discover different setups to make large-scale pre-trained ASR succeed in both monolingual and multilingual speech recognition. Our methods outperform existing ASR tuning architectures and their extension with self-supervised losses (e.g., w2v-bert) in terms of lower WER and better training efficiency.
Chao-Han Huck Yang\({}^{*1,2}\), Bo Li\({}^{1}\), Yu Zhang\({}^{1}\), Nanxin Chen\({}^{1}\), Rohit Prabhavalkar\({}^{1}\), Tara N. Sainath\({}^{1}\), Trevor Strohman\({}^{1}\)

_\({}^{1}\)Google, USA \({}^{2}\)Georgia Institute of Technology, USA_
**Index Terms**: Cross-lingual speech recognition, model reprogramming, pre-trained adaptation, and foundation speech models
## 1 Introduction
Recent advances [1, 2, 3, 4, 5, 6] in developing large-scale ASR architectures have demonstrated promising results for English speech recognition tasks. Moreover, English ASR models with self-supervised training objectives, such as wav2vec2 [7], w2v-BERT [8], and BigSSL [9], further boost recognition performance as an extension of the existing supervised ASR framework with annotated data. Meanwhile, the success of current neural ASR models is **still tied to the scale** of training data: training large neural ASR models does not always ensure competitive results on medium- or small-scale corpora for non-English and low-resource languages. Since current ASR data are mainly in English [9], how to extend the power of well-trained English ASR systems (e.g., RNN-T [10]) to _other languages_[11] is still an open question that could **benefit more worldwide end-users**.
Previous works on pre-training and fine-tuning encoders of English ASR models have demonstrated recent success in West Germanic languages [12, 13] (e.g., English and German), atypical speech [14], and accented speech recognition [15]. Motivated by this discussion, we aim to investigate how to efficiently transfer large-scale English ASR to both monolingual and multilingual speech recognition in this work. One notorious challenge of applying large-scale ASR to mobile applications is the model complexity (e.g., trainable parameters) in terms of memory. Tuning a large-scale ASR model for a new task or dataset often requires a significant training cost (e.g., time and power), which further makes large pre-trained speech models difficult to deploy for mobile and smart-home voice applications in terms of latency and energy consumption.
Recently, parameter-efficient learning has been recognized as one potential solution to ameliorate the difficulties of adapting a large pre-trained language model, which aims to adapt a frozen pre-trained model by **only training some small additive modules** (e.g., residual adapter [16], neural reprogramming [17] and input prompt [18]). By integrating parameter-efficient learning, pre-trained language models [19] (PLMs) require less training time and computing resources to attain new state-of-the-art performance on different natural language processing tasks.
In sum, how to advance parameter-efficient learning with existing English ASR models is an open topic that could benefit voice applications in languages without data resources as abundant as English. In this work, we propose **three specific designs of ASR model reprogramming** with Conformer [20] based architectures for cross-lingual adaptation. As shown in Figure 1, our proposed **C**onformer-based **A**SR **R**eprogramming (CAR) keeps most of the neural architecture frozen (i.e., non-trainable) and inserts only a few trainable modules for parameter-efficient model training.
### Parameter-Efficient Learning with Frozen ASR Models
We review recent advances in parameter-efficient learning with frozen ASR models and clarify their differences from a neural model reprogramming perspective. Residual adapters [16] were initially introduced for vision applications [21] as a computationally efficient alternative to full model tuning. Houlsby _et al._[16] further advanced the design of residual adapters by developing a non-linear projection mechanism over latent features within frozen feature extractors (e.g., pre-trained transformer layers). Given that acoustic feature encoders are standard components of ASR models, several recent works have demonstrated the effectiveness of applying residual adapters [16] to various speech applications, such as atypical speech [15], multilingual speech [22], and children's speech recognition [14]. Meanwhile, related works from the speaker adaptation [23] and latent-space adversarial reprogramming literature study how to build trainable parameters upon latent features. Bapna and Firat [24] have shown that residual adapters for ASR are much more effective than hidden-unit-modulation-based methods. However, the connections between these solutions deserve more investigation; the model reprogramming literature [17] has recently developed a first theoretical justification, based on population risk analysis, for the success of pre-trained speech model adaptation.
### Neural Reprogramming: from Input to Latent Space
The neural reprogramming (NR) method was first introduced in [25] to re-purpose frozen pre-trained classifiers with a small amount of trainable input noise for out-of-domain image prediction. Recently, NR has been utilized for benchmark sequence prediction tasks, such as classifying time series signals [17] and spoken commands [26]. NR mainly adds trainable parameters at the input level of a pre-trained model and thus offers federated advantages for distributed on-device deployment of pre-trained models. Meanwhile, NR has recently been introduced for latent space optimization [27] and demonstrates state-of-the-art text classification performance. In the next section, we identify the components shared by existing parameter-efficient learning methods and provide a new design that combines these techniques to empower cross-lingual speech recognition from a monolingual model.
## 2 Neural Reprogramming for ASR Model
Building upon the success of the aforementioned parameter-efficient learning techniques, our neural reprogramming has three major components: (1) input reprogramming, (2) latent space reprogramming, and (3) multilingual grapheme pre-training. In short, (1) is associated with standard model reprogramming and input prompting, (2) is related to the effectiveness of reprogramming and residual adapters, and (3) aims to resolve the existing challenge of grapheme mismatch in cross-lingual learning.
### Input Level Reprogramming
Given a pre-trained neural network \(\mathrm{M}\) with frozen model parameters \(\Theta\) and an input feature \(x\), we can access its predicted output \(y^{\prime}=\mathrm{M}_{\Theta}(x)\) by feeding the input into the pre-trained model \(\mathrm{M}_{\Theta}\). The goal of input-level reprogramming is to find a trainable reprogramming function \(\mathcal{R}_{\theta}\) that minimizes a prediction loss (\(\mathcal{L}_{\text{error}}\)) between \(y^{\prime}\) and the true label \(\hat{y}\). In previous speech model reprogramming studies [17, 26], a trainable universal noise has been deployed for cross-domain adaptation, which is equivalent to our feature-independent reprogramming term \(w_{\theta_{2}}\). However, in our empirical study, we find that applying universal noise alone does not yield competitive performance, so we further introduce a feature-dependent trainable feature extractor (\(\mathcal{H}_{\theta_{1}}\)) in Eq. (1).
\[\theta^{*}=\arg\min_{\theta}\left\{\mathcal{L}_{\text{error}}( \mathrm{M}_{\Theta}(\mathcal{R}_{\theta}(x)),\hat{y})\right\} \tag{1}\] \[\text{where}\quad\mathcal{R}_{\theta}(x)=\underbrace{x}_{\text{ original input}}+\underbrace{w_{\theta_{2}}}_{\text{feature-independent}}+\underbrace{\mathcal{H}_{\theta_{1}}(x)}_{\text{ feature-dependent}}\]
We conducted an ablation study and selected a simple 1D lightweight convolution [28] followed by spatial attention [29] encoding as the best-performing extractor setup for cross-lingual ASR.
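To make the design concrete, we include a minimal PyTorch sketch of Eq. (1); this is our illustrative reading, not the paper's implementation. The depthwise 1-D convolution stands in for the lightweight convolution of [28], the per-frame sigmoid gate stands in for the spatial attention encoding of [29], and the feature dimension, maximum length, and kernel size are placeholder choices.

```python
import torch
import torch.nn as nn

class InputReprogram(nn.Module):
    """Sketch of R_theta(x) = x + w_theta2 + H_theta1(x) from Eq. (1)."""

    def __init__(self, feat_dim=80, max_len=2048, kernel=5):
        super().__init__()
        # feature-independent trainable "universal noise" w_theta2
        self.w = nn.Parameter(torch.zeros(max_len, feat_dim))
        # feature-dependent extractor H_theta1: depthwise 1-D conv + per-frame attention gate
        self.conv = nn.Conv1d(feat_dim, feat_dim, kernel,
                              padding=kernel // 2, groups=feat_dim)
        self.attn = nn.Conv1d(feat_dim, 1, kernel_size=1)

    def forward(self, x):                        # x: (batch, time, feat_dim)
        h = self.conv(x.transpose(1, 2))         # (batch, feat_dim, time)
        gate = torch.sigmoid(self.attn(h))       # (batch, 1, time)
        h = (h * gate).transpose(1, 2)           # back to (batch, time, feat_dim)
        return x + self.w[: x.size(1)] + h       # Eq. (1): original + universal + dependent
```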
### Latent Space Reprogramming with Bridged Connections
To further boost the performance of the frozen Conformer-ASR model, we introduce extra trainable features in the latent space between encoders. We call this baseline latent space reprogramming, which can be viewed as drawing on word-level reprogramming [27] and residual adapters [16], with optimization carried out in the latent space. We further introduce a new "bridged connection" design for latent space reprogramming to enhance additive feature learning on the frozen ASR model. Fig. 2(a) shows how bridged-connection reprogramming blocks insert trainable features between frozen Conformer encoders. Given the \(i\)-th frozen Conformer encoder as a function \(\mathcal{F}_{\Theta}^{i}\), we have the latent feature \(h^{i}\) of the \(i\)-th Conformer layer extracted from input \(x\). The bridged-connection mechanism is deployed for the following \((i+1)\)-th Conformer layer \(\mathcal{F}_{\Theta}^{i+1}\), computed as the third term of Eq. (2) with a deterministic dropout rate (\(\hat{\beta}=0.15\)). In this work, we use the same reprogramming generator \(\mathcal{R}_{\theta}\) to generate additive features for cross-lingual adaptation.
\[\underbrace{\mathcal{F}_{\Theta}^{i+1}(h^{i})}_{\text{future encoder}}\to \underbrace{\mathcal{F}_{\Theta}^{i+1}(\mathcal{R}_{\theta}(h^{i}))}_{\text{ latent reprogramming}}\to\underbrace{\mathcal{F}_{\Theta}^{i+1}(\mathcal{R}_{\theta}(h^{i}+ \hat{\beta}h^{i-1}))}_{\text{bridged-connection reprogramming}} \tag{2}\]
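A schematic forward pass for Eq. (2) is sketched below with our own naming: `frozen_layers` are the frozen Conformer encoders \(\mathcal{F}_{\Theta}^{i}\) and `reprogram` is the shared trainable generator \(\mathcal{R}_{\theta}\), the only module that receives gradient updates.

```python
def reprogrammed_encoder(x, frozen_layers, reprogram, beta_hat=0.15):
    """Frozen Conformer stack with bridged-connection reprogramming, per Eq. (2)."""
    h_prev, h = None, x
    for layer in frozen_layers:
        # bridged connection: h^i + beta_hat * h^{i-1} (plain h for the first layer)
        bridged = h if h_prev is None else h + beta_hat * h_prev
        h_prev, h = h, layer(reprogram(bridged))   # F^{i+1}(R(...)), weights of F frozen
    return h
```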
### English Graphemes Pre-training for Multilingual Data
To effectively adapt large-scale pre-trained English ASR models to the recognition of different languages (e.g., English to French), previous research efforts [30] suggest replacing the last prediction layer of the ASR model. However, we find that deploying "multilingual graphemes" is even more effective than directly replacing the final prediction head. As shown in Fig. 2(b), multilingual graphemes with English pre-training also learn discriminative information for unseen utterances (e.g., Portuguese). Table 1 investigates the importance of multilingual graphemes for attaining a lower classification error rate. A unified multilingual grapheme set with \(80\) tokens [4] has been selected as the output vocabulary of our ASR systems. Note that the performance gap between using monolingual (e.g., English-only) and multilingual graphemes is a relatively slight degradation (\(\pm 0.11\)%). Tuning Conformer models with an extra dense layer (F0b & F1b), also known as a linear probe, does not outperform the fine-tuning (F0 & F1) baselines, despite using \(4\)k more trainable parameters.
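For concreteness, one simple way to build such a unified grapheme inventory is sketched below; this frequency-based recipe and its placeholder special tokens are our own assumptions, not the exact procedure behind the 80-token set of [4].

```python
from collections import Counter

def build_grapheme_vocab(corpora, max_tokens=80):
    """Illustrative: pool transcripts from all languages, keep the most frequent graphemes."""
    counts = Counter(ch for text in corpora for ch in text.lower() if not ch.isspace())
    specials = ["<blank>", "<unk>", "<space>"]   # placeholder special tokens
    graphemes = [g for g, _ in counts.most_common(max_tokens - len(specials))]
    return specials + graphemes
```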
Figure 1: A proposed design flow of Conformer-based ASR reprogramming, which includes trainable input reprogramming for acoustic features and trainable feature reprogramming for latent representations.
Figure 2: (a) Bridge-connected reprogramming mechanism. (b) Multilingual grapheme distribution, where discriminative information is learned through English pre-training (green) to improve generalization, with a distribution similar to the ground truth (orange).
## 3 End-to-end conformer-based ASR systems
This section presents our E2E-ASR systems and shows how model reprogramming can outperform other parameter-efficient learning solutions. We study cross-lingual adaptation tasks with two E2E-ASR systems: (1) training with a supervised loss only; and (2) joint supervised and unsupervised training (e.g., with w2v-BERT). Our multilingual ASR model is a Conformer-based RNN-T architecture. We conduct our parameter-efficient learning experiments mainly on the RNN-T; the findings could generalize to other encoder-decoder ASR pre-training in future studies.
### System 1: Supervised Training for Conformer-based ASR
We build our pre-trained Conformer RNN-T system on previous work [4], which attains competitive multilingual recognition performance with only a supervised training objective. The Conformer RNN-T includes an encoder network, a decoder network, and a joint prediction network. For the encoder, we deploy full-context Conformer layers, comprising an input projection layer and a relative position embedding layer, followed by a stack of 17 Conformer layers.
Similar to [4], the stacked Conformer layers can be categorized into three encoder blocks. The first block consists of \(4\) Conformer layers and a time-stacking layer for time reduction; the second block consists of a single Conformer layer (the fifth) and a projection layer that maps the feature dimension back to its original size. The remaining \(12\) Conformer layers comprise the third encoder block. We use the existing convolution module in the Lingvo [31] toolkit to support relative positional information and group normalization in each Conformer layer. A 2-layer unidirectional LSTM network is used as the decoder. The supervised Conformer ASR has 270M trainable parameters in total.
To enable parameter-efficient learning, we first train our model with \(44.6\)k hours of English data from MLS [32]. The Conformer ASR attains a competitive \(5.5\)% WER with the multilingual graphemes discussed in Sec. 2.3. We then make most of the parameters non-trainable (i.e., frozen, as shown in Fig. 1) and train only the reprogramming layers, loading from the pre-trained English Conformer RNN-T; a sketch of this freezing step follows below. We conduct an ablation study inspired by existing residual adapter research to justify our design.
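As one illustration of this setup, a minimal sketch of the freezing step is given below; it assumes, hypothetically, that every inserted reprogramming module carries "reprogram" in its parameter name.

```python
def freeze_backbone(model, trainable_tag="reprogram"):
    """Freeze every parameter except the reprogramming modules and
    return the trainable-parameter count. `trainable_tag` is a
    hypothetical naming convention for the inserted layers."""
    n_trainable = 0
    for name, param in model.named_parameters():
        param.requires_grad = trainable_tag in name
        if param.requires_grad:
            n_trainable += param.numel()
    return n_trainable
```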
### System 2: Supervised ASR with Self-Supervised Losses
We further show that an advanced supervised ASR model with unsupervised pre-training can be trained under a small trainable-parameter budget to attain high cross-lingual recognition performance. We utilize the previous joint unsupervised and supervised training (JUST) ASR model [33], which combines the supervised RNN-T loss with the self-supervised learning (SSL) (i) contrastive and (ii) masked language modeling (MLM) losses. In the JUST ASR model, the self-supervised Contrastive net and MLM net are stacks of 8 and 16 standard Conformer layers, respectively. Following the basic setup in [33], we freeze those Conformer layers and insert the proposed reprogramming layers with bridged connections in between during model training. The total loss of the pre-trained JUST ASR model is defined as \(\mathcal{L}_{\text{JUST}}=\mathcal{L}_{\text{rnnt}}+\gamma(\mathcal{L}_{\text{c}}+\mathcal{L}_{\text{mlm}}+\alpha\mathcal{L}_{\text{div}})\), where \(\mathcal{L}_{\text{rnnt}}\) is the supervised loss discussed in Sec. 3.1, \(\gamma\) is the joint coefficient for the SSL losses (set to 0.01), \(\mathcal{L}_{\text{c}}\) is the loss of the Contrastive net, \(\mathcal{L}_{\text{mlm}}\) is the loss of the MLM net, and the entropy-based diversity loss (\(\mathcal{L}_{\text{div}}\)) is used for codebook-related optimization with \(\alpha=0.1\), following a similar implementation in the previous wav2vec 2.0 study [7].
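To make the weighting explicit, a minimal sketch of the joint objective is given below; the function name and argument types are illustrative only, with the coefficients taken from the text above.

```python
def just_total_loss(l_rnnt, l_contrastive, l_mlm, l_div,
                    gamma=0.01, alpha=0.1):
    """Joint objective of Sec. 3.2: supervised RNN-T loss plus a
    gamma-weighted sum of the SSL losses, with an alpha-weighted
    diversity term inside the SSL sum."""
    return l_rnnt + gamma * (l_contrastive + l_mlm + alpha * l_div)
```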
## 4 Experiment and Results
In this section, we introduce the basic setups and the proposed parameter-efficient reprogramming on three cross-lingual ASR tasks. Note that Para. in Table 2 and Table 4 denotes trainable model parameters.
### Setup and Parameter-Efficient Architectures
**Dataset:** we conduct our experiments on the popular Multilingual LibriSpeech [32] (MLS) benchmark covering eight languages: (1) the Germanic languages English (en) with 44.6k hours, German (de) with 1.96k hours, and Dutch (nl) with 1.55k hours; (2) the Romance languages French (fr) with 1.1k hours, Spanish (es) with 0.9k hours, Italian (it) with 0.2k hours, and Portuguese (pt) with 0.16k hours; and (3) the West Slavic language Polish (pl) with 0.1k hours, which together form a widely studied, standard evaluation set in MLS. We use the official MLS test sets to report test word error rate (WER) for each setup.
**Features and ASR systems:** Log-Mel filterbank features (80-dim) are used as inputs, extracted from MLS acoustic utterances (e.g., 10-20 seconds long). The encoder of the supervised Conformer ASR (System 1) is introduced in Sec. 3.1, with further details presented in [4]. For the JUST-based Conformer ASR, both the Contrastive net and the MLM net have 1024 hidden units with 8 attention heads. The masking ratio for MLM is \(6.5\)% with a codebook size of \(1024\). Please refer to [33] for more JUST-related details.
**Trainable Parameter Budget and Hyperparameters:** we carefully control reprogramming and the other auxiliary tuning baselines under similar parameter budgets and report the best setup for each; a recent memory-efficient ASR study [34] examined the same scale of \(\sim\)10M to 30M trainable parameters. Our final goal is to find the best architecture for adapting large-scale frozen pre-trained ASR models. We train the supervised Conformer-ASR with batch size \(1024\) on 64 TPUs and JUST-ASR with batch size \(1024\) on \(256\) TPUs. For gradient-based training, the Adam optimizer is used with \(\beta_{1}=0.9\) and \(\beta_{2}=0.98\). For JUST training, we use the global learning rate scheduler described in [33]. For residual adapters, we utilize the benchmark design from [16, 15] with a latent dimension of \(256\) after ablations.
### Cross-Lingual Speech Recognition Results
We evaluate different architectures and their performance with the frozen supervised Conformer-ASR (System 1 in Sec. 3.1) on cross-lingual recognition. After identifying the best parameter-efficient learning setup, we further evaluate whether this setup can also train a supervised Conformer model with SSL losses (System 2 in Sec. 3.2).
\(\mathrm{Study}_{1}\)**: Monolingual ASR from English Pre-training**
We first investigate the monolingual ASR results. For evaluation, we independently train on the seven non-en languages (es, it, pt, fr, de, nl, pl) from scratch and report the average WER over 10 runs. Based on
\begin{table}
\begin{tabular}{|l|c|} \hline Setup (from a en-ASR) & WER \\ \hline \hline
**F0**: Fine-tuning all (w/ \(\text{gh}_{\text{en}}^{\text{multi}}\)) & **10.5** \\ \hline \hline F0a: F0 w/o loading \(\text{gh}_{\text{en}}^{\text{multi}}\) & 13.4 \\ \hline F0b: F0 w/ extra dense layer & 12.6 \\ \hline \hline
**F1**: Fine-tuning last conformer (w/ \(\text{gh}_{\text{en}}^{\text{multi}}\)) & **18.2** \\ \hline \hline F1a: F1 w/o loading \(\text{gh}_{\text{en}}^{\text{multi}}\) & 49.2 \\ \hline F1b: F1 w/ extra dense layer & 21.1 \\ \hline \end{tabular}
\end{table}
Table 1: The importance of multilingual grapheme pre-training on English (denoted as \(\text{gh}_{\text{en}}^{\text{multi}}\)) with pre-trained Conformer RNN-T.
the results, directly using a frozen Conformer-ASR pre-trained on en yields a WER above \(90\)%. As shown in the fifth (F0) through ninth (F4) rows of Table 2, the residual adapter-based method attains a lower WER of \(13.5\)% averaged over the seven languages. Tuning the Conformer layers themselves does not perform as well as the residual adapter, whether (F1) directly tuning the last layer of the Conformer encoder or (F2) training an extra Conformer layer appended to the encoder. Note that we also conducted layer-by-layer tuning experiments, and fine-tuning the last Conformer layer outperforms tuning any other single Conformer layer. Meanwhile, we found that the performance gains under the frozen-ASR scheme **come mainly from tuning the _encoder_**: directly tuning the _decoder_ (F4) of the RNN-T produces a WER above 20%. Similarly, _bias-terms-only_ fine-tuning (BitFit) [36] (F5) shows a 33% WER, which does not yet perform competitively in RNN-T-based ASR modeling according to our empirical evaluation on MLS.
Next, we study the impact of architectural differences under the proposed scheme of Conformer-ASR reprogramming (CAR). We first investigate the difference between additive feature generators, finding that the 1D-convolution-based feature extractor performs similarly to residual adapters. We then use spatial attention to encode the input and a 2D convolution over the acoustic features to reduce trainable parameters. This setup boosts performance over its convolution-based ablation and outperforms residual adapters by \(3.7\)% relative WER. Adding the bridged connection to the best attention-based reprogramming setup reduces WER by a further \(5.6\)% relative. Since the bridged connection provides an additional gradient path (between reprogramming layers) apart from the backbone model, we also achieve a \(10.1\)% **reduction** in computing time during model training compared to residual adapters [22]. We select attention-based reprogramming with bridged connections as our best setup (denoted reprogram\({}_{\text{CAR3}}\)) to investigate other adaptation properties further.
\(\mathrm{Study}_{2}\)**: Tuning from Multilingual ASR Pre-trainings**
Whether "multilingual" pre-training could serve as a better format of pre-trained ASR backbone is one open question for cross-lingual ASR. In Table 3, we aim to tackle this question by carefully controlling similar total training hours (44 to 45k hours) for the same supervised Conformer-ASR with different extra mixed languages of "en+fr" or "en+es." We then report its test WER taking the average on five unseen languages of de, nl, it, pt, and pl. Interestingly, both multilingual ASR backbones demonstrate better performance than en-only Conformer-ASR by \(1.6\) to \(0.5\)% absolute WER.
\(\mathrm{Study}_{3}\)**: Tuning from ASR with Self-Supervised Losses**
Since JUST-ASR contains w2v-BERT pre-training on Libri-Light [37] with 60k hours of unannotated en speech, we report WER on en and on the seven languages for recognizing multilingual speech simultaneously, following the setup in [33]. For our parameter-efficient study, we train only (i) the proposed bridged-reprogramming layers and (ii) the original decoder layer to update the unsupervised losses discussed in Sec. 3.2, in a few-shot learning setup with only 100k step updates on TPUs. As an additional finding, either reprogramming or adapters with fully unsupervised w2v-BERT [8] yields poor WERs above \(70\%\), which indicates the importance of the supervised loss. As shown in Table 4, fine-tuning together with reprogramming modules (J1) reduces both en (-2.7%) and unseen-language (-11.6%) WER relative to fine-tuned JUST (J0). We confirm that the performance gains of the parameter-efficient solutions J2 and J3 **come mainly from the reprogramming or adapters**, compared with fine-tuning the decoder only (J4).
**Limitation and Additional Discussion.** While the current Conformer-ASR reprogramming performs well across different cross-lingual evaluations, we remind readers of one remaining challenge: adapting to languages with large grapheme inventories (e.g., Mandarin and Japanese). One potential solution is to use dictionary learning to composite phoneme tokens, which we leave for future work. Note that current prompt-tuning [18] for speech mainly covers cross-task adaptation and has not yet been applied to ASR; input reprogramming can be viewed as a similar approach along this direction.
## 5 Conclusion
This work introduces a novel parameter-efficient learning solution for cross-lingual ASR. Our proposed model reprogramming module adds trainable attributes in both input and latent space, yielding a lightweight solution for adapting large-scale pre-trained ASR models. For the supervised ASR model, we require only 11M trainable parameters (\(4.8\)% of the full pre-trained model) to achieve \(11.9\)% WER across seven languages on the MLS benchmark. Model reprogramming also shows competitive \(8.4\)% and \(9.7\)% WERs when combined with SSL setups, such as transferring a 660M pre-trained model with only \(6.8\)% of its original parameters. Our proposed method and new findings on cross-lingual recognition can be considered a preliminary pathway toward designing a large "foundation speech model" in future studies.
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline Setup & en & 7-lang & Para. \\ \hline
**J0**: Fine-tuning (FT) JUST [33] & 7.5 & 9.5 & 660M \\ \hline J1: FT JUST + Reprogram\({}_{\text{CAR3}}\) & **7.3** & **8.4** & 710M \\ J2: Reprogram\({}_{\text{CAR3}}\) + FT decoder & **8.4** & **9.7** & 45M \\ J3: Adapter + FT decoder & 8.9 & 10.2 & 45M \\ \hline J4: FT decoder only & 17.4 & 22.0 & 20M \\ \hline \end{tabular}
\end{table}
Table 4: Reprogramming JUST for Multilingual ASR (\(\mathrm{Study}_{3}\))
\begin{table}
\begin{tabular}{|l|c|c|} \hline Setup & Avg. WER & Para. \\ \hline
**B0**: Baseline frozen en-Conformer & 92.1 & 0 \\ B1: Conformer training from scratch & 10.7 & 270M \\ B2: Wav2Letter [32] & 11.8 & 100M \\ B3: XLSR-53 (w/ external data) [35] & 10.6 & 300M \\ \hline
**F0**: Fine-tuning from en & 10.5 & 270M \\ \hline F1: Fine-tuning last conformer & 18.2 & 13M \\ F2: Adding an extra conformer & 34.9 & 13M \\ F3: Residual adapters from en & \(13.5\pm 0.2\) & 11M \\ F4: Fine-tuning decoder & 20.9 & 20M \\ F5: Bias-terms fine-tuning (BitFit [36]) & 33.0 & 0.2M \\ \hline
**CAR1**: Attention-**Reprogramming** & 12.6 \(\pm\) 0.4 & 11M \\ CAR2: Conv-Reprogramming & 13.0 \(\pm\) 0.2 & 11M \\ CAR3: CAR1 + Bridged Connection & **11.9**\(\pm\) 0.3 & 11M \\ \hline \end{tabular}
\end{table}
Table 2: Conformer-ASR reprogramming (CAR) for \(\mathrm{Study}_{1}\).
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline Setup \(\backslash\) Pre-Training Languages & en & en+fr & en+es \\ \hline
**M0**: Fine-tuning all & 11.3 & **9.7** & 10.1 \\ \hline M1: Residual Adapter & 13.9 & **12.5** & 12.9 \\ M2: Reprogramming\({}_{\text{CAR3}}\) & 12.8 & **11.8** & 12.3 \\ \hline \hline Total training hours & 44.6k & 45.6k & **45.7k** \\ \hline Covered Graphemes out of 80 & 29 & **51** & 42 \\ \hline \end{tabular}
\end{table}
Table 3: Multi-to-mono ASR (\(\mathrm{Study}_{2}\)) under similar total hours |
2307.15103 | Best Ulam constants for damped linear oscillators with variable
coefficients | This study uses an associated Riccati equation to study the Ulam stability of
non-autonomous linear differential vector equations that model the damped
linear oscillator. In particular, the best (minimal) Ulam constants for these
non-autonomous linear differential vector equations are derived. These robust
results apply to vector equations with solutions that blow up in finite time,
as well as to vector equations with solutions that exist globally on
$(-\infty,\infty)$. Illustrative, non-trivial examples are presented,
highlighting the main results. | Douglas R. Anderson, Masakazu Onitsuka, Donal O'Regan | 2023-07-27T17:16:26Z | http://arxiv.org/abs/2307.15103v1 | # Best Ulam constants for damped linear oscillators with variable coefficients
###### Abstract.
This study uses an associated Riccati equation to study the Ulam stability of non-autonomous linear differential vector equations that model the damped linear oscillator. In particular, the best (minimal) Ulam constants for these non-autonomous linear differential vector equations are derived. These robust results apply to vector equations with solutions that blow up in finite time, as well as to vector equations with solutions that exist globally on \((-\infty,\infty)\). Illustrative, non-trivial examples are presented, highlighting the main results.
Key words and phrases: Ulam stability; best Ulam constant; damped linear oscillator; second-order linear differential equation; Lane-Emden differential equation; Riccati equation; variable coefficient.
## 1. Introduction
Throughout this paper, we consider the second-order non-autonomous linear differential vector equation

\[\alpha(t)\boldsymbol{x}^{\prime\prime}+\beta(t)\boldsymbol{x}^{\prime}+\gamma(t)\boldsymbol{x}=\boldsymbol{f}(t), \tag{1.1}\]

where \(I\subseteq\mathbb{R}\) is an interval, \(\alpha\), \(\beta\), \(\gamma\in C(I,\mathbb{C})\) are complex-valued scalar functions, and \(\boldsymbol{f}\in C(I,\mathbb{C}^{n})\). Equation (1.1) is a vector-valued model of the damped linear oscillator with variable coefficients. We say that (1.1) is Ulam stable on \(I\) if there exists a constant \(L>0\) with the following property: for every \(\varepsilon>0\) and every twice continuously differentiable \(\boldsymbol{\xi}:I\to\mathbb{C}^{n}\) satisfying

\[\sup_{t\in I}\|\alpha(t)\boldsymbol{\xi}^{\prime\prime}+\beta(t)\boldsymbol{\xi}^{\prime}+\gamma(t)\boldsymbol{\xi}-\boldsymbol{f}(t)\|\leq\varepsilon,\]

there exists a solution \(\boldsymbol{x}:I\to\mathbb{C}^{n}\) of (1.1) such that \(\sup_{t\in I}\|\boldsymbol{\xi}(t)-\boldsymbol{x}(t)\|\leq L\varepsilon\). Such a constant \(L\) is called an Ulam constant for (1.1).
In 2020, Cadariu, Popa and Rasa [15] investigated the Ulam stability of the second-order non-autonomous linear differential scalar equation
\[x^{\prime\prime}+\beta(t)x^{\prime}+\gamma(t)x=0, \tag{1.3}\]
where \(\beta\in C^{1}(I,\mathbb{R})\) and \(\gamma\in C(I,\mathbb{R})\) are real-valued scalar functions. They used the existence of solutions to the initial value problem of a particular Riccati equation to give a result that guarantees the Ulam stability of (1.3). Later, this result was extended to the case where (1.1) is limited to scalar equations with real-valued coefficients (see [26]). Unfortunately, the best Ulam constants were not obtained in either of these results. This study analyzes (1.1) using their idea that the existence of solutions to a Riccati equation is useful in analyzing the Ulam stability of second-order linear differential equations. It should be noted that the present study is not an extension of their results: this work does not require continuous differentiability of the coefficient \(\beta\); continuity suffices. In addition, the Riccati equation used in this study differs from the one they proposed, and the method is completely different, so the statements of the obtained theorems also differ from theirs. By proposing a new method, we succeed in deriving the best Ulam constants for (1.1).
This paper is organized as follows. In Section 2, we show that when we guarantee the existence of a solution to a certain Riccati equation, we can use it to describe the solution to the initial value problem of (1.1). In Section 3, we give the main theorem and its proof. Section 4 gives the result of deriving the best Ulam constants, which is the goal of this study. In Section 5, we present various non-trivial examples centering on Lane-Emden differential equations.
## 2. Representation of solution
In this section, we show that, given the existence of a solution to a certain Riccati equation, we can use it to express the general solution of (1.1).
**Lemma 2.1**.: _Suppose that \(\alpha(t)\neq 0\) for all \(t\in I\), and there exists a solution \(\rho\in C^{1}(I,\mathbb{C})\) of the Riccati equation_
\[\alpha(t)(\rho^{\prime}+\rho^{2})+\beta(t)\rho+\gamma(t)=0. \tag{2.1}\]
_Then the solution of (1.1) with \(\mathbf{x}(t_{0})=\mathbf{x}_{0}\) and \(\mathbf{x}^{\prime}(t_{0})=\mathbf{x}^{\prime}_{0}\) is given by_
\[\mathbf{x}(t)=\Bigg{[}\mathbf{x}_{0}+\int_{t_{0}}^{t}\Bigg{(}\mathbf{x}^{\prime}_{0}-\rho( t_{0})\mathbf{x}_{0}+\int_{t_{0}}^{s}\frac{e^{\int_{t_{0}}^{\mu}\big{(}\rho(\nu)+ \frac{\beta(\nu)}{\alpha(\nu)}\big{)}d\nu}}{\alpha(\mu)}\mathbf{f}(\mu)d\mu\Bigg{)} e^{-\int_{t_{0}}^{s}\big{(}2\rho(\mu)+\frac{\beta(\mu)}{\alpha(\mu)}\big{)}d\mu}ds \Bigg{]}e^{\int_{t_{0}}^{t}\rho(s)ds}\]
_for \(t\in I\), where \(t_{0}\in I\)._
Proof.: Assume that \(\alpha(t)\neq 0\) for all \(t\in I\). Let \(\rho(t)\) be a solution of (2.1) on \(I\). Then we see that the function
\[\mathbf{y}(t)=\mathbf{c}e^{\int_{t_{0}}^{t}\rho(s)ds},\quad\mathbf{c}\in\mathbb{C}^{n}\]
is a solution to the damped linear oscillator
\[\alpha(t)\boldsymbol{y}^{\prime\prime}+\beta(t)\boldsymbol{y}^{\prime}+\gamma(t) \boldsymbol{y}=\boldsymbol{0}\]
on \(I\). Now we use the reduction of order method. Letting
\[\boldsymbol{x}(t)=\boldsymbol{z}(t)e^{\int_{t_{0}}^{t}\rho(s)ds},\]
we have
\[\boldsymbol{x}^{\prime}(t)=\left(\boldsymbol{z}^{\prime}(t)+\rho(t) \boldsymbol{z}(t)\right)e^{\int_{t_{0}}^{t}\rho(s)ds},\]
and
\[\boldsymbol{x}^{\prime\prime}(t)=\left[\boldsymbol{z}^{\prime\prime}(t)+2\rho (t)\boldsymbol{z}^{\prime}(t)+\left(\rho^{\prime}(t)+\rho^{2}(t)\right) \boldsymbol{z}(t)\right]e^{\int_{t_{0}}^{t}\rho(s)ds}.\]
Substituting these into (1.1) and using (2.1), we obtain
\[\alpha(t)\boldsymbol{z}^{\prime\prime}(t)+\left(2\alpha(t)\rho(t)+\beta(t) \right)\boldsymbol{z}^{\prime}(t)=\boldsymbol{f}(t)e^{-\int_{t_{0}}^{t}\rho( s)ds}.\]
Since \(\alpha(t)\neq 0\) for all \(t\in I\), we have
\[\left(\boldsymbol{z}^{\prime}(t)e^{\int_{t_{0}}^{t}\left(2\rho(s)+\frac{ \beta(s)}{\alpha(s)}\right)ds}\right)^{\prime}=\frac{e^{\int_{t_{0}}^{t} \left(\rho(s)+\frac{\beta(s)}{\alpha(s)}\right)ds}}{\alpha(t)}\boldsymbol{f}( t).\]
This implies that
\[\boldsymbol{z}^{\prime}(t)=\left(\boldsymbol{z}^{\prime}(t_{0})+\int_{t_{0}}^ {t}\frac{e^{\int_{t_{0}}^{s}\left(\rho(\mu)+\frac{\beta(\mu)}{\alpha(\mu)} \right)d\mu}}{\alpha(s)}\boldsymbol{f}(s)ds\right)e^{-\int_{t_{0}}^{t}\left(2 \rho(s)+\frac{\beta(s)}{\alpha(s)}\right)ds},\]
and that
\[\boldsymbol{z}(t)=\boldsymbol{z}(t_{0})+\int_{t_{0}}^{t}\left(\boldsymbol{z}^ {\prime}(t_{0})+\int_{t_{0}}^{s}\frac{e^{\int_{t_{0}}^{\mu}\left(\rho(\nu)+ \frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{\alpha(\mu)}\boldsymbol{f}(\mu)d \mu\right)e^{-\int_{t_{0}}^{s}\left(2\rho(\mu)+\frac{\beta(\mu)}{\alpha(\mu)} \right)d\mu}ds.\]
From
\[\boldsymbol{z}(t_{0})=\boldsymbol{x}(t_{0})=\boldsymbol{x}_{0},\quad \boldsymbol{z}^{\prime}(t_{0})=\boldsymbol{x}^{\prime}(t_{0})-\rho(t_{0}) \boldsymbol{x}(t_{0})=\boldsymbol{x}_{0}^{\prime}-\rho(t_{0})\boldsymbol{x}_{ 0},\]
we obtain
\[\boldsymbol{x}(t)=\left[\boldsymbol{x}_{0}+\int_{t_{0}}^{t}\left(\boldsymbol{x }_{0}^{\prime}-\rho(t_{0})\boldsymbol{x}_{0}+\int_{t_{0}}^{s}\frac{e^{\int_{t _{0}}^{\mu}\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{ \alpha(\mu)}\boldsymbol{f}(\mu)d\mu\right)e^{-\int_{t_{0}}^{s}\left(2\rho(\mu) +\frac{\beta(\mu)}{\alpha(\mu)}\right)d\mu}ds\right]e^{\int_{t_{0}}^{t}\rho( s)ds}\]
for \(t\in I\). This completes the proof.
**Remark 2.2**.: If we assume \(\alpha(t)\neq 0\) for almost every \(t\in I\), and that all the obvious functions appearing in the integrands of the solution formula in Lemma 2.1 belong to \(L^{1}\), then we obtain a solution of (1.1) in \(W^{1,1}(I,\mathbb{C}^{n})\).
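For a quick illustration of Lemma 2.1, consider the constant-coefficient case \(\alpha\equiv 1\), \(\beta\equiv 0\), \(\gamma\equiv-1\) and \(\boldsymbol{f}\equiv\boldsymbol{0}\). The Riccati equation (2.1) reduces to \(\rho^{\prime}+\rho^{2}-1=0\), which admits the constant solution \(\rho\equiv 1\), and the representation in Lemma 2.1 becomes

\[\boldsymbol{x}(t)=\left[\boldsymbol{x}_{0}+(\boldsymbol{x}_{0}^{\prime}-\boldsymbol{x}_{0})\int_{t_{0}}^{t}e^{-2(s-t_{0})}ds\right]e^{t-t_{0}}=\frac{\boldsymbol{x}_{0}+\boldsymbol{x}_{0}^{\prime}}{2}e^{t-t_{0}}+\frac{\boldsymbol{x}_{0}-\boldsymbol{x}_{0}^{\prime}}{2}e^{-(t-t_{0})},\]

which is the classical general solution of \(\boldsymbol{x}^{\prime\prime}-\boldsymbol{x}=\boldsymbol{0}\) with the prescribed initial values.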
## 3. Ulam stability
The first theorem of this paper is as follows.
**Theorem 3.1**.: _Let \(I\) be either \((\tau,\sigma)\), \((\tau,\sigma]\), \([\tau,\sigma)\) or \([\tau,\sigma]\), where \(-\infty\leq\tau<\sigma\leq\infty\). Suppose that \(\alpha(t)\neq 0\) for all \(t\in I\), and that there exists a solution \(\rho:I\to\mathbb{C}\) of (2.1). Let \(\Re(z)\) denote the real part of \(z\in\mathbb{C}\). Then the following statements (i), (ii) and (iii) hold:_
* _if the functions_ \[f_{1}(t):=\int_{t}^{\sigma}\frac{e^{\int_{t}^{s}\Re\left(\rho(\mu)+\frac{\beta (\mu)}{\alpha(\mu)}\right)d\mu}}{|\alpha(s)|}ds\] (3.1) _and_ \[f_{2}(t):=\int_{t}^{\sigma}e^{-\int_{t}^{s}\Re(\rho(\mu))d\mu}ds\] (3.2) _exist for all_ \(t\in I\)_, and_ \(\sup_{t\in I}f_{1}(t)<\infty\) _and_ \(\sup_{t\in I}f_{2}(t)<\infty\) _hold. Then (_1.1_) is Ulam stable on_ \(I\)_, with an Ulam constant_ \[L_{1}:=\sup_{t\in I}\int_{t}^{\sigma}\left(\int_{s}^{\sigma}\frac{e^{\int_{s} ^{\mu}\Re\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{|\alpha( \mu)|}d\mu\right)e^{-\int_{t}^{s}\Re(\rho(\mu))d\mu}ds;\]
* _if the functions_ \(f_{1}(t)\) _and_ \[f_{3}(t):=\int_{\tau}^{t}e^{\int_{s}^{t}\Re(\rho(\mu))d\mu}ds\] (3.3) _exist for all_ \(t\in I\)_, and_ \(\sup_{t\in I}f_{1}(t)<\infty\) _and_ \(\sup_{t\in I}f_{3}(t)<\infty\) _hold, where_ \(f_{1}(t)\) _is given by (_3.1_). Then (_1.1_) is Ulam stable on_ \(I\)_, with an Ulam constant_ \[L_{2}:=\sup_{t\in I}\int_{\tau}^{t}\left(\int_{s}^{\sigma}\frac{e^{\int_{s}^{ \mu}\Re\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{|\alpha( \mu)|}d\mu\right)e^{\int_{s}^{t}\Re(\rho(\mu))d\mu}ds;\]
* _if the functions_ \(f_{3}(t)\) _and_ \[f_{4}(t):=\int_{\tau}^{t}\frac{e^{-\int_{s}^{t}\Re\left(\rho(\mu)+\frac{\beta (\mu)}{\alpha(\mu)}\right)d\mu}}{|\alpha(s)|}ds\] (3.4) _exist for all_ \(t\in I\)_, and_ \(\sup_{t\in I}f_{3}(t)<\infty\) _and_ \(\sup_{t\in I}f_{4}(t)<\infty\) _hold, where_ \(f_{3}(t)\) _is given by (_3.3_). Then (_1.1_) is Ulam stable on_ \(I\)_, with an Ulam constant_ \[L_{3}:=\sup_{t\in I}\int_{\tau}^{t}\left(\int_{\tau}^{s}\frac{e^{-\int_{\mu}^ {s}\Re\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{|\alpha(\mu )|}d\mu\right)e^{\int_{s}^{t}\Re(\rho(\mu))d\mu}ds.\]
Proof.: Assume that \(\alpha(t)\neq 0\) for all \(t\in I\). Assume also that there exists a solution \(\rho:I\to\mathbb{C}\) of (2.1). Let \(\varepsilon>0\) be given, and let the twice continuously differentiable function \(\boldsymbol{\xi}:I\to\mathbb{C}^{n}\) satisfy
\[\sup_{t\in I}\|\alpha(t)\boldsymbol{\xi}^{\prime\prime}+\beta(t)\boldsymbol{ \xi}^{\prime}+\gamma(t)\boldsymbol{\xi}-\boldsymbol{f}(t)\|\leq\varepsilon.\]
Define
\[\mathbf{g}(t):=\alpha(t)\mathbf{\xi}^{\prime\prime}+\beta(t)\mathbf{\xi}^{\prime}+\gamma(t)\bm {\xi}-\mathbf{f}(t)\]
for \(t\in I\). Then we have \(\sup_{t\in I}\|\mathbf{g}(t)\|\leq\varepsilon\). Let \(\mathbf{p}(t)\) be a solution to (1.1) on \(I\), and let \(\mathbf{q}(t):=\mathbf{\xi}(t)-\mathbf{p}(t)\) for \(t\in I\). Then \(\mathbf{q}(t)\) is a solution to the equation
\[\alpha(t)\mathbf{q}^{\prime\prime}+\beta(t)\mathbf{q}^{\prime}+\gamma(t)\mathbf{q}=\mathbf{g}(t)\]
for \(t\in I\). Therefore, by Lemma 2.1, we see that the function \(\mathbf{q}(t)\) is expressed as
\[\mathbf{q}(t)=\left[\mathbf{q}_{0}+\int_{t_{0}}^{t}\left(\mathbf{q}_{0}^{\prime}-\rho(t_{ 0})\mathbf{q}_{0}+\int_{t_{0}}^{s}\frac{e^{\int_{t_{0}}^{\mu}\left(\rho(\nu)+ \frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{\alpha(\mu)}\mathbf{g}(\mu)d\mu\right) e^{-\int_{t_{0}}^{s}\left(2\rho(\mu)+\frac{\beta(\mu)}{\alpha(\mu)}\right)d\mu} ds\right]e^{\int_{t_{0}}^{t}\rho(s)ds} \tag{3.5}\]
for \(t\in I\), where \(t_{0}\in I\), \(\mathbf{q}_{0}=\mathbf{q}(t_{0})=\mathbf{\xi}(t_{0})-\mathbf{p}(t_{0})\) and \(\mathbf{q}_{0}^{\prime}=\mathbf{q}^{\prime}(t_{0})=\mathbf{\xi}^{\prime}(t_{0})-\mathbf{p}^{ \prime}(t_{0})\). Hereafter, the proofs are given for each of the three cases (i)-(iii).
Case (i). Assume that \(f_{1}(t)\) and \(f_{2}(t)\) given by (3.1) and (3.2) exist on \(I\), and \(\sup_{t\in I}f_{1}(t)<\infty\) and \(\sup_{t\in I}f_{2}(t)<\infty\) are satisfied. Now we define
\[\mathbf{c}_{1}:=\mathbf{q}_{0}^{\prime}-\rho(t_{0})\mathbf{q}_{0}+\int_{t_{0}}^{\sigma} \frac{e^{\int_{t_{0}}^{\mu}\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)} \right)d\nu}}{\alpha(\mu)}\mathbf{g}(\mu)d\mu. \tag{3.6}\]
Note that the integral contained in the right-hand side always converge. Actually, by using \(\sup_{t\in I}f_{1}(t)<\infty\), we can check that
\[\left\|\int_{t_{0}}^{\sigma}\frac{e^{\int_{t_{0}}^{\mu}\left(\rho(\nu)+\frac{ \beta(\nu)}{\alpha(\nu)}\right)d\nu}}{\alpha(\mu)}\mathbf{g}(\mu)d\mu\right\|\leq \int_{t_{0}}^{\sigma}\frac{e^{\int_{t_{0}}^{\mu}\Re\left(\rho(\nu)+\frac{\beta (\nu)}{\alpha(\nu)}\right)d\nu}}{|\alpha(\mu)|}\|\mathbf{g}(\mu)\|d\mu\leq \varepsilon f_{1}(t_{0})<\infty.\]
That is, \(\mathbf{c}_{1}\) is a well-defined constant vector. Therefore, (3.5) can be rewritten as
\[\mathbf{q}(t) =\left[\mathbf{q}_{0}+\int_{t_{0}}^{t}\left(\mathbf{c}_{1}-\int_{s}^{ \sigma}\frac{e^{\int_{t_{0}}^{\mu}\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha( \nu)}\right)d\nu}}{\alpha(\mu)}\mathbf{g}(\mu)d\mu\right)e^{-\int_{t_{0}}^{s}\left( 2\rho(\mu)+\frac{\beta(\mu)}{\alpha(\mu)}\right)d\mu}ds\right]e^{\int_{t_{0}}^ {t}\rho(s)ds}\] \[=\left[\mathbf{q}_{0}+\mathbf{c}_{1}\int_{t_{0}}^{t}e^{-\int_{t_{0}}^{s} \left(2\rho(\mu)+\frac{\beta(\mu)}{\alpha(\mu)}\right)d\mu}ds\right.\] \[\quad-\int_{t_{0}}^{t}\left(\int_{s}^{\sigma}\frac{e^{\int_{s}^{ \mu}\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{\alpha(\mu)} \mathbf{g}(\mu)d\mu\right)e^{-\int_{t_{0}}^{s}\rho(\mu)d\mu}ds\Bigg{]}e^{\int_{t_{0 }}^{t}\rho(s)ds} \tag{3.7}\]
for \(t\in I\). Moreover, we define
\[\mathbf{c}_{2}:=\mathbf{q}_{0}-\int_{t_{0}}^{\sigma}\left(\int_{s}^{\sigma}\frac{e^{ \int_{s}^{\mu}\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{ \alpha(\mu)}\mathbf{g}(\mu)d\mu\right)e^{-\int_{t_{0}}^{s}\rho(\mu)d\mu}ds.\]
Since
\[\left\|\int_{t_{0}}^{\sigma}\left(\int_{s}^{\sigma}\frac{e^{\int_{s} ^{\mu}\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{\alpha(\mu)} \boldsymbol{g}(\mu)d\mu\right)e^{-\int_{t_{0}}^{s}\rho(\mu)d\mu}ds\right\|\] \[\leq\int_{t_{0}}^{\sigma}\left\|\int_{s}^{\sigma}\frac{e^{\int_{s }^{\mu}\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{\alpha(\mu)} \boldsymbol{g}(\mu)d\mu\right\|e^{-\int_{t_{0}}^{s}\Re(\rho(\mu))d\mu}ds\] \[\leq\int_{t_{0}}^{\sigma}\left(\int_{s}^{\sigma}\frac{e^{\int_{s }^{\mu}\Re\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{|\alpha( \mu)|}\|\boldsymbol{g}(\mu)\|d\mu\right)e^{-\int_{t_{0}}^{s}\Re(\rho(\mu))d\mu }ds\] \[\leq\varepsilon\int_{t_{0}}^{\sigma}\left(\int_{s}^{\sigma} \frac{e^{\int_{s}^{\mu}\Re\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)} \right)d\nu}}{|\alpha(\mu)|}d\mu\right)e^{-\int_{t_{0}}^{s}\Re(\rho(\mu))d\mu }ds\] \[\leq\varepsilon\left(\sup_{t\in I}f_{1}(t)\right)\int_{t_{0}}^{ \sigma}e^{-\int_{t_{0}}^{s}\Re(\rho(\mu))d\mu}ds\leq\varepsilon\left(\sup_{t \in I}f_{1}(t)\right)f_{2}(t_{0})<\infty\]
holds, we can conclude that \(\boldsymbol{c}_{2}\) is well-defined. Hence, (3.7) is rewritten as
\[\boldsymbol{q}(t) =\left[\boldsymbol{c}_{2}+\boldsymbol{c}_{1}\int_{t_{0}}^{t}e^{ -\int_{t_{0}}^{s}\left(2\rho(\mu)+\frac{\beta(\mu)}{\alpha(\mu)}\right)d\mu} ds+\int_{t}^{\sigma}\left(\int_{s}^{\sigma}\frac{e^{\int_{s}^{\mu}\left( \rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{\alpha(\mu)}\boldsymbol {g}(\mu)d\mu\right)e^{-\int_{t_{0}}^{s}\rho(\mu)d\mu}ds\right]e^{\int_{t_{0}}^ {t}\rho(s)ds}\] \[=\left(\boldsymbol{c}_{2}+\boldsymbol{c}_{1}\int_{t_{0}}^{t}e^{ -\int_{t_{0}}^{s}\left(2\rho(\mu)+\frac{\beta(\mu)}{\alpha(\mu)}\right)d\mu} ds\right)e^{\int_{t_{0}}^{t}\rho(s)ds}+\int_{t}^{\sigma}\left(\int_{s}^{ \sigma}\frac{e^{\int_{s}^{\mu}\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)} \right)d\nu}}{\alpha(\mu)}\boldsymbol{g}(\mu)d\mu\right)e^{-\int_{t}^{s}\rho( \mu)d\mu}ds,\]
for \(t\in I\).
Next we consider the function
\[\boldsymbol{w}(t):=\left(\boldsymbol{c}_{2}+\boldsymbol{c}_{1}\int_{t_{0}}^{t }e^{-\int_{t_{0}}^{s}\left(2\rho(\mu)+\frac{\beta(\mu)}{\alpha(\mu)}\right)d \mu}ds\right)e^{\int_{t_{0}}^{t}\rho(s)ds}\]
for \(t\in I\). Note that this function is for \(\boldsymbol{g}(t)\equiv\boldsymbol{0}\) in \(\boldsymbol{q}(t)\) above. So it is a solution of the differential equation \(\alpha(t)\boldsymbol{w}^{\prime\prime}+\beta(t)\boldsymbol{w}^{\prime}+\gamma( t)\boldsymbol{w}=\boldsymbol{0}\). Hence we see that the function
\[\boldsymbol{x}_{1}(t):=\boldsymbol{w}(t)+\boldsymbol{p}(t)\]
is a solution of (1.1) for \(t\in I\), where \(\boldsymbol{p}(t)\) is a solution of (1.1) given at the beginning of the proof. This says that
\[\boldsymbol{\xi}(t)-\boldsymbol{x}_{1}(t)=\boldsymbol{q}(t)-\boldsymbol{w}(t) =\int_{t}^{\sigma}\left(\int_{s}^{\sigma}\frac{e^{\int_{s}^{\mu} \left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{\alpha(\mu)} \boldsymbol{g}(\mu)d\mu\right)e^{-\int_{t}^{s}\rho(\mu)d\mu}ds,\]
and so that
\[\|\boldsymbol{\xi}(t)-\boldsymbol{x}_{1}(t)\|\leq\varepsilon\int_{t}^{\sigma} \left(\int_{s}^{\sigma}\frac{e^{\int_{s}^{\mu}\Re\left(\rho(\nu)+\frac{\beta( \nu)}{\alpha(\nu)}\right)d\nu}}{|\alpha(\mu)|}d\mu\right)e^{-\int_{t}^{s}\Re( \rho(\mu))d\mu}ds\leq\varepsilon\left(\sup_{t\in I}f_{1}(t)\right)\left(\sup_{t \in I}f_{2}(t)\right)<\infty\]
for \(t\in I\). Thus, (1.1) is Ulam stable on \(I\). Moreover,
\[L_{1}=\sup_{t\in I}\int_{t}^{\sigma}\left(\int_{s}^{\sigma}\frac{e^{\int_{s}^{\mu }\Re\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{|\alpha(\mu)|}d \mu\right)e^{-\int_{t}^{s}\Re(\rho(\mu))d\mu}ds\]
is an Ulam constant for (1.1).
Case (ii). Assume that \(f_{1}(t)\) and \(f_{3}(t)\) given by (3.1) and (3.3) exist on \(I\), and \(\sup_{t\in I}f_{1}(t)<\infty\) and \(\sup_{t\in I}f_{3}(t)<\infty\) hold. As in Case (i), we can rewrite \(\boldsymbol{q}(t)\) as (3.7), using the constant \(\boldsymbol{c}_{1}\) defined in (3.6). Now we define
\[\boldsymbol{c}_{3}:=\boldsymbol{q}_{0}+\int_{\tau}^{t_{0}}\left(\int_{s}^{ \sigma}\frac{e^{\int_{s}^{\mu}\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)} \right)d\nu}}{\alpha(\mu)}\boldsymbol{g}(\mu)d\mu\right)e^{\int_{s}^{t_{0}} \rho(\mu)d\mu}ds.\]
Since
\[\left\|\int_{\tau}^{t_{0}}\left(\int_{s}^{\sigma}\frac{e^{\int_{s}^{\mu}\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{\alpha(\mu)}\boldsymbol{g}(\mu)d\mu\right)e^{\int_{s}^{t_{0}}\rho(\mu)d\mu}ds\right\|\] \[\leq\varepsilon\left(\sup_{t\in I}f_{1}(t)\right)\int_{\tau}^{t_{0}}e^{\int_{s}^{t_{0}}\Re(\rho(\mu))d\mu}ds\leq\varepsilon\left(\sup_{t\in I}f_{1}(t)\right)f_{3}(t_{0})<\infty\]
holds, \(\boldsymbol{c}_{3}\) is a well-defined constant. Thus, (3.7) is rewritten as
\[\boldsymbol{q}(t) =\left[\boldsymbol{c}_{3}+\boldsymbol{c}_{1}\int_{t_{0}}^{t}e^{-\int_{t_{0}}^{s}\left(2\rho(\mu)+\frac{\beta(\mu)}{\alpha(\mu)}\right)d\mu}ds-\int_{\tau}^{t}\left(\int_{s}^{\sigma}\frac{e^{\int_{s}^{\mu}\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{\alpha(\mu)}\boldsymbol{g}(\mu)d\mu\right)e^{\int_{s}^{t_{0}}\rho(\mu)d\mu}ds\right]e^{\int_{t_{0}}^{t}\rho(s)ds}\] \[=\left(\boldsymbol{c}_{3}+\boldsymbol{c}_{1}\int_{t_{0}}^{t}e^{-\int_{t_{0}}^{s}\left(2\rho(\mu)+\frac{\beta(\mu)}{\alpha(\mu)}\right)d\mu}ds\right)e^{\int_{t_{0}}^{t}\rho(s)ds}-\int_{\tau}^{t}\left(\int_{s}^{\sigma}\frac{e^{\int_{s}^{\mu}\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{\alpha(\mu)}\boldsymbol{g}(\mu)d\mu\right)e^{\int_{s}^{t}\rho(\mu)d\mu}ds\]
for \(t\in I\).
Next we consider the solution of (1.1) given by
\[\boldsymbol{x}_{2}(t):=\left(\boldsymbol{c}_{3}+\boldsymbol{c}_{1}\int_{t_{0} }^{t}e^{-\int_{t_{0}}^{s}\left(2\rho(\mu)+\frac{\beta(\mu)}{\alpha(\mu)} \right)d\mu}ds\right)e^{\int_{t_{0}}^{t}\rho(s)ds}+\boldsymbol{p}(t)\]
for \(t\in I\). Then
\[\boldsymbol{x}_{2}(t)-\boldsymbol{\xi}(t)=\int_{\tau}^{t}\left(\int_{s}^{ \sigma}\frac{e^{\int_{s}^{\mu}\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)} \right)d\nu}}{\alpha(\mu)}\boldsymbol{g}(\mu)d\mu\right)e^{\int_{s}^{t}\rho( \mu)d\mu}ds,\]
and so that
\[\|\boldsymbol{x}_{2}(t)-\boldsymbol{\xi}(t)\|\leq\varepsilon\int_{\tau}^{t} \left(\int_{s}^{\sigma}\frac{e^{\int_{s}^{\mu}\Re\left(\rho(\nu)+\frac{\beta( \nu)}{\alpha(\nu)}\right)d\nu}}{|\alpha(\mu)|}d\mu\right)e^{\int_{s}^{t}\Re( \rho(\mu))d\mu}ds\leq\varepsilon\left(\sup_{t\in I}f_{1}(t)\right)\left(\sup_{t \in I}f_{3}(t)\right)<\infty\]
for \(t\in I\). Thus, (1.1) is Ulam stable on \(I\). Moreover,
\[L_{2}=\sup_{t\in I}\int_{\tau}^{t}\left(\int_{s}^{\sigma}\frac{e^{\int_{s}^{\mu} \Re\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{|\alpha(\mu)|}d \mu\right)e^{\int_{s}^{t}\Re(\rho(\mu))d\mu}ds\]
is an Ulam constant for (1.1).
Case (iii). Assume that \(f_{3}(t)\) and \(f_{4}(t)\) given by (3.3) and (3.4) exist on \(I\), and \(\sup_{t\in I}f_{3}(t)<\infty\) and \(\sup_{t\in I}f_{4}(t)<\infty\) are satisfied. Now we define
\[\mathbf{c}_{4}:=\mathbf{q}_{0}^{\prime}-\rho(t_{0})\mathbf{q}_{0}-\int_{\tau}^{t_{0}}\frac {e^{-\int_{\mu}^{t_{0}}\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d \nu}}{\alpha(\mu)}\mathbf{g}(\mu)d\mu.\]
Since \(\sup_{t\in I}f_{4}(t)<\infty\) holds, we see that
\[\left\|\int_{\tau}^{t_{0}}\frac{e^{-\int_{\mu}^{t_{0}}\left(\rho(\nu)+\frac{ \beta(\nu)}{\alpha(\nu)}\right)d\nu}}{\alpha(\mu)}\mathbf{g}(\mu)d\mu\right\|\leq \int_{\tau}^{t_{0}}\frac{e^{-\int_{\mu}^{t_{0}}\Re\left(\rho(\nu)+\frac{\beta (\nu)}{\alpha(\nu)}\right)d\nu}}{|\alpha(\mu)|}\|\mathbf{g}(\mu)\|d\mu\leq\varepsilon f _{4}(t_{0})<\infty,\]
and that \(\mathbf{c}_{4}\) is a well-defined constant vector. Therefore, (3.5) can be rewritten as
\[\mathbf{q}(t)=\left[\mathbf{q}_{0}+\mathbf{c}_{4}\int_{t_{0}}^{t}e^{-\int_{t_{0}}^{s}\left(2\rho(\mu)+\frac{\beta(\mu)}{\alpha(\mu)}\right)d\mu}ds+\int_{t_{0}}^{t}\left(\int_{\tau}^{s}\frac{e^{-\int_{\mu}^{s}\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{\alpha(\mu)}\mathbf{g}(\mu)d\mu\right)e^{-\int_{t_{0}}^{s}\rho(\mu)d\mu}ds\right]\!e^{\int_{t_{0}}^{t}\rho(s)ds}\]
for \(t\in I\). Moreover, we define
\[\mathbf{c}_{5}:=\mathbf{q}_{0}-\int_{\tau}^{t_{0}}\left(\int_{\tau}^{s}\frac{e^{-\int _{\mu}^{s}\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{\alpha( \mu)}\mathbf{g}(\mu)d\mu\right)e^{\int_{s}^{t_{0}}\rho(\mu)d\mu}ds.\]
Since
\[\left\|\int_{\tau}^{t_{0}}\left(\int_{\tau}^{s}\frac{e^{-\int_{ \mu}^{s}\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{\alpha( \mu)}\mathbf{g}(\mu)d\mu\right)e^{\int_{s}^{t_{0}}\rho(\mu)d\mu}ds\right\|\] \[\leq\int_{\tau}^{t_{0}}\left(\int_{\tau}^{s}\frac{e^{-\int_{\mu}^ {s}\Re\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{|\alpha(\mu) |}\|\mathbf{g}(\mu)\|d\mu\right)e^{\int_{s}^{t_{0}}\Re(\rho(\mu))d\mu}ds\] \[\leq\varepsilon\left(\sup_{t\in I}f_{4}(t)\right)\int_{\tau}^{t_ {0}}e^{\int_{s}^{t_{0}}\Re(\rho(\mu))d\mu}ds\leq\varepsilon\left(\sup_{t\in I }f_{4}(t)\right)f_{3}(t_{0})<\infty\]
holds, we see that \(\mathbf{c}_{5}\) is well-defined. Hence, \(\mathbf{q}(t)\) is rewritten as
\[\mathbf{q}(t)=\left(\mathbf{c}_{5}+\mathbf{c}_{4}\int_{t_{0}}^{t}e^{-\int_{t_{0}}^{s}\left( 2\rho(\mu)+\frac{\beta(\mu)}{\alpha(\mu)}\right)d\mu}ds\right)e^{\int_{t_{0} }^{t}\rho(s)ds}+\int_{\tau}^{t}\left(\int_{\tau}^{s}\frac{e^{-\int_{\mu}^{s} \left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{\alpha(\mu)}\mathbf{g} (\mu)d\mu\right)e^{\int_{s}^{t}\rho(\mu)d\mu}ds\]
for \(t\in I\).
Next we consider the solution of (1.1) given by
\[\mathbf{x}_{3}(t):=\left(\mathbf{c}_{5}+\mathbf{c}_{4}\int_{t_{0}}^{t}e^{-\int_{t_{0}}^{s} \left(2\rho(\mu)+\frac{\beta(\mu)}{\alpha(\mu)}\right)d\mu}ds\right)e^{\int_{t_ {0}}^{t}\rho(s)ds}+\mathbf{p}(t)\]
for \(t\in I\). Then
\[\boldsymbol{\xi}(t)-\boldsymbol{x}_{3}(t)=\int_{\tau}^{t}\left(\int_{\tau}^{s}\frac{e^{-\int_{\mu}^{s}\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{\alpha(\mu)}\boldsymbol{g}(\mu)d\mu\right)e^{\int_{s}^{t}\rho(\mu)d\mu}ds,\]
and so that
\[\|\boldsymbol{\xi}(t)-\boldsymbol{x}_{3}(t)\|\leq\varepsilon\int_{\tau}^{t} \left(\int_{\tau}^{s}\frac{e^{-\int_{\mu}^{s}\Re\left(\rho(\nu)+\frac{\beta( \nu)}{\alpha(\nu)}\right)d\nu}}{|\alpha(\mu)|}d\mu\right)e^{\int_{s}^{t}\Re \left(\rho(\mu)\right)d\mu}ds\leq\varepsilon\left(\sup_{t\in I}f_{3}(t)\right) \left(\sup_{t\in I}f_{4}(t)\right)<\infty\]
for \(t\in I\). Thus, (1.1) is Ulam stable on \(I\). Moreover,
\[L_{3}=\sup_{t\in I}\int_{\tau}^{t}\left(\int_{\tau}^{s}\frac{e^{-\int_{\mu}^{ s}\Re\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{|\alpha(\mu)|}d \mu\right)e^{\int_{s}^{t}\Re\left(\rho(\mu)\right)d\mu}ds\]
is an Ulam constant for (1.1). The proof is now complete.
Consider the constant coefficients linear oscillator
\[a_{0}\boldsymbol{x}^{\prime\prime}+a_{1}\boldsymbol{x}^{\prime}+a_{2} \boldsymbol{x}=0, \tag{3.8}\]
where \(a_{0}\), \(a_{1}\) and \(a_{2}\) are complex-valued constants, and \(a_{0}\neq 0\). Then we obtain the following result.
**Corollary 3.2**.: _Let \(I=\mathbb{R}\), and \(\lambda_{1}\) and \(\lambda_{2}\) be the roots of the characteristic equation_
\[a_{0}\lambda^{2}+a_{1}\lambda+a_{2}=0.\]
_If \(a_{0}\Re(\lambda_{1})\Re(\lambda_{2})\neq 0\), then (3.8) is Ulam stable on \(\mathbb{R}\), and an Ulam constant is \(\frac{1}{|a_{0}\Re(\lambda_{1})\Re(\lambda_{2})|}\)._
Proof.: Assume \(a_{0}\neq 0\). The proof is divided into three cases (i) \(\Re(\lambda_{1})\geq\Re(\lambda_{2})>0\), (ii) \(\Re(\lambda_{1})>0>\Re(\lambda_{2})\), and (iii) \(0>\Re(\lambda_{1})\geq\Re(\lambda_{2})\). First, we notice that, since \(\lambda_{1}\) and \(\lambda_{2}\) are the roots of the characteristic equation, they are constant solutions of (2.1); that is, we can choose \(\rho(t)=\lambda_{1}\) or \(\rho(t)=\lambda_{2}\) for all \(t\in\mathbb{R}\). Moreover, if
\[\rho(t)=\lambda_{2}=\frac{-a_{1}+\sqrt{a_{1}^{2}-4a_{0}a_{2}}}{2a_{0}},\]
then
\[\rho(t)+\frac{\beta(t)}{\alpha(t)}=\lambda_{2}+\frac{a_{1}}{a_{0}}=-\lambda_{ 1}.\]
We will use Theorem 3.1 with \(\tau=-\infty\) and \(\sigma=\infty\).
Case (i). Suppose \(\Re(\lambda_{1})\geq\Re(\lambda_{2})>0\). In this case, Theorem 3.1 (i) will be used. Set \(\rho(t)=\lambda_{2}\). From
\[f_{1}(t)=\int_{t}^{\sigma}\frac{e^{\int_{t}^{s}\Re\left(\rho(\mu)+\frac{\beta( \mu)}{\alpha(\mu)}\right)d\mu}}{|\alpha(s)|}ds=\int_{t}^{\infty}\frac{e^{-\int_ {t}^{s}\Re(\lambda_{1})d\mu}}{|a_{0}|}ds=\frac{1}{|a_{0}|\Re(\lambda_{1})}\]
and
\[f_{2}(t)=\int_{t}^{\sigma}e^{-\int_{t}^{s}\Re(\rho(\mu))d\mu}ds=\int_{t}^{ \infty}e^{-\int_{t}^{s}\Re(\lambda_{2})d\mu}ds=\frac{1}{\Re(\lambda_{2})}\]
for all \(t\in\mathbb{R}\), \(f_{1}(t)\) and \(f_{2}(t)\) exist and are bounded on \(\mathbb{R}\). Hence, by Theorem 3.1 (i), (3.8) is Ulam stable on \(\mathbb{R}\), and an Ulam constant is
\[\sup_{t\in I}\int_{t}^{\sigma}\left(\int_{s}^{\sigma}\frac{e^{\int_{s}^{\mu} \Re\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{|\alpha(\mu)|}d \mu\right)e^{-\int_{t}^{s}\Re(\rho(\mu))d\mu}ds=\sup_{t\in I}f_{1}(t)f_{2}(t)= \frac{1}{|a_{0}|\Re(\lambda_{1})\Re(\lambda_{2})}.\]
Case (ii). Suppose \(\Re(\lambda_{1})>0>\Re(\lambda_{2})\). Set \(\rho(t)=\lambda_{2}\). From \(f_{1}(t)=\frac{1}{|a_{0}|\Re(\lambda_{1})}\) and
\[f_{3}(t)=\int_{\tau}^{t}e^{\int_{s}^{t}\Re(\rho(\mu))d\mu}ds=\int_{-\infty}^{t }e^{\int_{s}^{t}\Re(\lambda_{2})d\mu}ds=\frac{1}{-\Re(\lambda_{2})}\]
for all \(t\in\mathbb{R}\), \(f_{1}(t)\) and \(f_{3}(t)\) exist and are bounded on \(\mathbb{R}\). Hence, by Theorem 3.1 (ii), (3.8) is Ulam stable on \(\mathbb{R}\), and an Ulam constant is
\[\sup_{t\in I}\int_{\tau}^{t}\left(\int_{s}^{\sigma}\frac{e^{\int_{s}^{\mu} \Re\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{|\alpha(\mu)|}d \mu\right)e^{\int_{s}^{t}\Re(\rho(\mu))d\mu}ds=\sup_{t\in I}f_{1}(t)f_{3}(t)= \frac{1}{|a_{0}\Re(\lambda_{1})\Re(\lambda_{2})|}.\]
Case (iii). Suppose \(0>\Re(\lambda_{1})\geq\Re(\lambda_{2})\). Set \(\rho(t)=\lambda_{2}\). From \(f_{3}(t)=\frac{1}{-\Re(\lambda_{2})}\) and
\[f_{4}(t)=\int_{\tau}^{t}\frac{e^{-\int_{s}^{t}\Re\left(\rho(\mu)+\frac{\beta(\mu)}{\alpha(\mu)}\right)d\mu}}{|\alpha(s)|}ds=\int_{-\infty}^{t}\frac{e^{\int_{s}^{t}\Re(\lambda_{1})d\mu}}{|a_{0}|}ds=\frac{1}{-|a_{0}|\Re(\lambda_{1})}\]
for all \(t\in\mathbb{R}\), \(f_{3}(t)\) and \(f_{4}(t)\) exist and are bounded on \(\mathbb{R}\). Hence, by Theorem 3.1 (iii), (3.8) is Ulam stable on \(\mathbb{R}\), and an Ulam constant is
\[\sup_{t\in I}\int_{\tau}^{t}\left(\int_{\tau}^{s}\frac{e^{-\int_{\mu}^{s}\Re \left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{|\alpha(\mu)|}d \mu\right)e^{\int_{s}^{t}\Re(\rho(\mu))d\mu}ds=\sup_{t\in I}f_{3}(t)f_{4}(t)= \frac{1}{|a_{0}\Re(\lambda_{1})\Re(\lambda_{2})|}.\]
Therefore, in any case, an Ulam constant is \(\frac{1}{|a_{0}\Re(\lambda_{1})\Re(\lambda_{2})|}\).
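Although Corollary 3.2 only asserts that this is an Ulam constant (minimality is treated in Section 4), its sharpness can be checked numerically. The following SciPy sketch, our illustration rather than part of the original argument, perturbs \(x^{\prime\prime}+3x^{\prime}+2x=0\) (roots \(\lambda_{1}=-1\), \(\lambda_{2}=-2\)) by a constant forcing of size \(\varepsilon\); the supremum deviation from the nearest true solution approaches \(\varepsilon/|a_{0}\lambda_{1}\lambda_{2}|=\varepsilon/2\).

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 1e-2
a0, l1, l2 = 1.0, -1.0, -2.0  # x'' + 3x' + 2x = 0, roots -1 and -2

def rhs(t, y):
    # Approximate solution: xi'' + 3*xi' + 2*xi = eps (constant forcing).
    return [y[1], eps - 3.0 * y[1] - 2.0 * y[0]]

sol = solve_ivp(rhs, (0.0, 50.0), [0.0, 0.0], max_step=0.01)
# The nearest true solution of the unforced equation is x = 0 here, so
# sup |xi - x| should approach eps / |a0 * l1 * l2| = eps / 2.
print(np.max(np.abs(sol.y[0])) / eps)  # ~0.5
```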
## 4. Minimum Ulam constants
In this section, we show that the Ulam constants given in Theorem 3.1 are the minimum Ulam constants by restricting to real-valued scalar functions.
**Theorem 4.1**.: _Let \(I\) be either \((\tau,\sigma)\), \((\tau,\sigma]\), \([\tau,\sigma)\) or \([\tau,\sigma]\), where \(-\infty\leq\tau<\sigma\leq\infty\). Suppose that \(\alpha\), \(\beta\), \(\gamma:I\to\mathbb{R}\) are real-valued continuous functions, that \(\alpha(t)\neq 0\) for all \(t\in I\), and that there exists a real-valued solution \(\rho:I\to\mathbb{R}\) of (2.1). Then the following statements (i), (ii) and (iii) hold:_
* _suppose that_ \(f_{1}(t)\) _and_ \(f_{2}(t)\) _given by (_3.1_) and (_3.2_) exist for all_ \(t\in I\)_, and_ \(\sup_{t\in I}f_{1}(t)<\infty\) _and_ \(\sup_{t\in I}f_{2}(t)<\infty\) _hold. If_ \[\lim_{t\to\sigma^{-}}\int_{t_{0}}^{t}\rho(s)ds=\infty\quad\text{and}\quad \lim_{t\to\sigma^{-}}e^{\int_{t_{0}}^{t}\rho(s)ds}\int_{t_{0}}^{t}e^{-\int_{t_ {0}}^{s}\left(2\rho(\mu)+\frac{\beta(\mu)}{\alpha(\mu)}\right)d\mu}ds=\infty, \quad t_{0}\in(\tau,\sigma),\] (4.1)
_then (1.1) is Ulam stable on \(I\), and the minimum Ulam constant is_
\[B_{1}:=\sup_{t\in I}\int_{t}^{\sigma}\left(\int_{s}^{\sigma}\frac{e^{\int_{s}^{ \mu}\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{|\alpha(\mu)|}d \mu\right)e^{-\int_{t}^{s}\rho(\mu)d\mu}ds; \tag{4.2}\]
* _suppose that_ \(f_{1}(t)\) _and_ \(f_{3}(t)\) _given by (_3.1_) and (_3.3_) exist for all_ \(t\in I\)_, and_ \(\sup_{t\in I}f_{1}(t)<\infty\) _and_ \(\sup_{t\in I}f_{3}(t)<\infty\) _hold. If_ \[\lim_{t\to\tau^{+}}\int_{t_{0}}^{t}\rho(s)ds=\infty\quad\text{and}\quad\lim_{t \to\sigma^{-}}e^{\int_{t_{0}}^{t}\rho(s)ds}\int_{t_{0}}^{t}e^{-\int_{t_{0}}^{s }\left(2\rho(\mu)+\frac{\beta(\mu)}{\alpha(\mu)}\right)d\mu}ds=\infty,\quad t _{0}\in(\tau,\sigma),\] (4.3) _then (_1.1_) is Ulam stable on_ \(I\)_, and the minimum Ulam constant is_ \[B_{2}:=\sup_{t\in I}\int_{\tau}^{t}\left(\int_{s}^{\sigma}\frac{e^{\int_{s}^{ \mu}\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{|\alpha(\mu)| }d\mu\right)e^{\int_{s}^{t}\rho(\mu)d\mu}ds;\]
* _suppose that_ \(f_{3}(t)\) _and_ \(f_{4}(t)\) _given by (_3.3_) and (_3.4_) exist for all_ \(t\in I\)_, and_ \(\sup_{t\in I}f_{3}(t)<\infty\) _and_ \(\sup_{t\in I}f_{4}(t)<\infty\) _hold. If_ \[\lim_{t\to\tau^{+}}\int_{t_{0}}^{t}\rho(s)ds=\infty\quad\text{and}\quad\lim_{ t\to\tau^{+}}e^{\int_{t_{0}}^{t}\rho(s)ds}\int_{t_{0}}^{t}e^{-\int_{t_{0}}^{s} \left(2\rho(\mu)+\frac{\beta(\mu)}{\alpha(\mu)}\right)d\mu}ds=-\infty,\quad t _{0}\in(\tau,\sigma),\] (4.4) _then (_1.1_) is Ulam stable on_ \(I\)_, and the minimum Ulam constant is_ \[B_{3}:=\sup_{t\in I}\int_{\tau}^{t}\left(\int_{\tau}^{s}\frac{e^{-\int_{\mu} ^{s}\left(\rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{|\alpha(\mu)| }d\mu\right)e^{\int_{s}^{t}\rho(\mu)d\mu}ds.\] (4.5)
Proof.: Assume that \(\alpha\), \(\beta\) and \(\gamma\) are real-valued continuous functions, and \(\alpha(t)\neq 0\) for all \(t\in I\). Let \(\rho\) be a real-valued solution of (2.1). Throughout this proof, let \(f_{1}\), \(f_{2}\), \(f_{3}\) and \(f_{4}\) be the functions defined by (3.1)-(3.4), respectively. Note that \(f_{1}\), \(f_{2}\), \(f_{3}\) and \(f_{4}\) are real-valued functions on \(I\). Let \(t_{0}\in(\tau,\sigma)\).
Case (i). Assume that \(f_{1}(t)\) and \(f_{2}(t)\) exist for all \(t\in I\), and \(\sup_{t\in I}f_{1}(t)<\infty\) and \(\sup_{t\in I}f_{2}(t)<\infty\) hold. By Theorem 3.1, we see that (1.1) is Ulam stable on \(I\), with an Ulam constant \(B_{1}\), where \(B_{1}\) is defined by (4.2). Let \(\varepsilon>0\). Now we consider the function
\[\boldsymbol{q}(t):=\left(\boldsymbol{c}_{2}+\boldsymbol{c}_{1}\int_{t_{0}}^{t }e^{-\int_{t_{0}}^{s}\left(2\rho(\mu)+\frac{\beta(\mu)}{\alpha(\mu)}\right)d \mu}ds\right)e^{\int_{t_{0}}^{t}\rho(s)ds}+\varepsilon\left[\int_{t}^{\sigma} \left(\int_{s}^{\sigma}\frac{e^{\int_{s}^{\mu}\left(\rho(\nu)+\frac{\beta(\nu )}{\alpha(\nu)}\right)d\nu}}{\alpha(\mu)}d\mu\right)e^{-\int_{t}^{s}\rho(\mu)d \mu}ds\right]\boldsymbol{u}\]
for all \(t\in I\), where \(\boldsymbol{c}_{1}\) and \(\boldsymbol{c}_{2}\) are well-defined constants given in the proof of Theorem 3.1 (i) and \(\boldsymbol{u}\) is the unit vector. From the proof of Theorem 3.1 (i), we find that \(\boldsymbol{q}(t)\) is a solution of the equation
\[\alpha(t)\boldsymbol{q}^{\prime\prime}+\beta(t)\boldsymbol{q}^{\prime}+\gamma( t)\boldsymbol{q}=\varepsilon\boldsymbol{u}.\]
Let \(\boldsymbol{p}(t)\) be a solution to (1.1) on \(I\), and let \(\boldsymbol{\xi}(t):=\boldsymbol{q}(t)+\boldsymbol{p}(t)\) for \(t\in I\). Then
\[\|\alpha(t)\boldsymbol{\xi}^{\prime\prime}+\beta(t)\boldsymbol{\xi}^{\prime}+ \gamma(t)\boldsymbol{\xi}-\boldsymbol{f}(t)\|=\varepsilon \tag{4.6}\]
is satisfied for \(t\in I\). By the Ulam stability for (1.1), we find that there exists a solution \(\mathbf{x}_{1}:I\to\mathbb{R}^{n}\) of (1.1) such that
\[\sup_{t\in I}\|\mathbf{\xi}(t)-\mathbf{x}_{1}(t)\|\leq B_{1}\varepsilon. \tag{4.7}\]
More precisely, from the proof of Theorem 3.1 (i), we know that \(\mathbf{x}_{1}(t)\) is given as
\[\mathbf{x}_{1}(t)=\left(\mathbf{c}_{2}+\mathbf{c}_{1}\int_{t_{0}}^{t}e^{-\int_{t_{0}}^{s} \left(2\rho(\mu)+\frac{\beta(\mu)}{\alpha(\mu)}\right)d\mu}ds\right)e^{\int_{t _{0}}^{t}\rho(s)ds}+\mathbf{p}(t).\]
We show that \(B_{1}\) is the minimum Ulam constant by using the following two steps.
Step 1. We first show that \(\mathbf{x}_{1}(t)\) is the unique solution of (1.1) satisfying (4.7). Suppose, for contradiction, that there exists a solution \(\mathbf{y}_{1}(t)\) of (1.1), distinct from \(\mathbf{x}_{1}(t)\), that also satisfies (4.7). That is, \(\mathbf{y}_{1}(t)\) is written as
\[\mathbf{y}_{1}(t)=\left(\mathbf{d}_{2}+\mathbf{d}_{1}\int_{t_{0}}^{t}e^{-\int_{t_{0}}^{s} \left(2\rho(\mu)+\frac{\beta(\mu)}{\alpha(\mu)}\right)d\mu}ds\right)e^{\int_{t _{0}}^{t}\rho(s)ds}+\mathbf{p}(t)\]
with \((\mathbf{d}_{1},\mathbf{d}_{2})\neq(\mathbf{c}_{1},\mathbf{c}_{2})\). Thus, we have
\[\|\mathbf{y}_{1}(t)-\mathbf{x}_{1}(t)\|\leq\|\mathbf{\xi}(t)-\mathbf{y}_{1}(t)\|+\|\mathbf{\xi}(t) -\mathbf{x}_{1}(t)\|\leq 2B_{1}\varepsilon\]
for all \(t\in I\). However, with (4.1), the following holds:
\[\lim_{t\to\sigma^{-}}\|\mathbf{y}_{1}(t)-\mathbf{x}_{1}(t)\|=\lim_{t\to\sigma^{-}}\left \|\left[(\mathbf{d}_{2}-\mathbf{c}_{2})+(\mathbf{d}_{1}-\mathbf{c}_{1})\int_{t_{0}}^{t}e^{- \int_{t_{0}}^{s}\left(2\rho(\mu)+\frac{\beta(\mu)}{\alpha(\mu)}\right)d\mu}ds \right]e^{\int_{t_{0}}^{t}\rho(s)ds}\right\|=\infty.\]
This contradicts the above inequality.
Step 2. We next show that \(B_{1}\) is the minimum Ulam constant. By way of contradiction, we assume that there exists \(0<U_{1}<B_{1}\) such that
\[\sup_{t\in I}\|\mathbf{\xi}(t)-\mathbf{x}_{1}(t)\|\leq U_{1}\varepsilon.\]
Note that \(\mathbf{\xi}(t)\) satisfies (4.6), and that, by Step 1, no solution of (1.1) other than \(\mathbf{x}_{1}(t)\) satisfies this inequality. However, we see that
\[\sup_{t\in I}\|\mathbf{\xi}(t)-\mathbf{x}_{1}(t)\|=\sup_{t\in I}\left\|\varepsilon \left[\int_{t}^{\sigma}\left(\int_{s}^{\sigma}\frac{e^{\int_{s}^{\mu}\left( \rho(\nu)+\frac{\beta(\nu)}{\alpha(\nu)}\right)d\nu}}{\alpha(\mu)}d\mu\right)e ^{-\int_{t}^{s}\rho(\mu)d\mu}ds\right]\mathbf{u}\right\|=B_{1}\varepsilon,\]
and thus,
\[\sup_{t\in I}\|\mathbf{\xi}(t)-\mathbf{x}_{1}(t)\|\leq U_{1}\varepsilon<B_{1} \varepsilon=\sup_{t\in I}\|\mathbf{\xi}(t)-\mathbf{x}_{1}(t)\|.\]
This is a contradiction. Hence we can conclude that \(B_{1}\) is the minimum Ulam constant.
Cases (ii) and (iii) can be shown by the same technique as in Case (i). The proof is now complete.
Consider the linear oscillator (3.8) again, where \(a_{0}\), \(a_{1}\) and \(a_{2}\) are real-valued constants, and \(a_{0}\neq 0\). Theorem 4.1 implies the following result.
**Corollary 4.2**.: _Let \(I=\mathbb{R}\), and \(\lambda_{1}\) and \(\lambda_{2}\) be non-zero real roots of the characteristic equation_
\[a_{0}\lambda^{2}+a_{1}\lambda+a_{2}=0.\]
_If \(a_{0}\neq 0\), then (3.8) is Ulam stable on \(\mathbb{R}\), and the minimum Ulam constant is \(\frac{1}{|a_{0}\lambda_{1}\lambda_{2}|}\)._
Proof.: The proof is divided into three cases (i) \(\lambda_{1}\geq\lambda_{2}>0\), (ii) \(\lambda_{1}>0>\lambda_{2}\), and (iii) \(0>\lambda_{1}\geq\lambda_{2}\). Recall the proof of Corollary 3.2. Since \(\lambda_{1}\) and \(\lambda_{2}\) are the roots of the characteristic equation, they are constant solutions of (2.1), and if \(\rho(t)=\lambda_{2}\), then \(\rho(t)+\frac{\beta(t)}{\alpha(t)}=-\lambda_{1}\). We will use Theorem 4.1 with \(\tau=-\infty\) and \(\sigma=\infty\). Let \(t_{0}\in(-\infty,\infty)\).
Case (i). Suppose \(\lambda_{1}\geq\lambda_{2}>0\). Set \(\rho(t)=\lambda_{2}\). From the facts obtained in Case (i) in the proof of Corollary 3.2, we have \(f_{1}(t)=\frac{1}{|a_{0}|\lambda_{1}}\) and \(f_{2}(t)=\frac{1}{\lambda_{2}}\), and thus, \(f_{1}(t)\) and \(f_{2}(t)\) exist and are bounded on \(\mathbb{R}\). Moreover,
\[\lim_{t\to\sigma^{-}}\int_{t_{0}}^{t}\rho(s)ds=\lim_{t\to\infty}\int_{t_{0}}^{ t}\lambda_{2}ds=\infty\]
and
\[\lim_{t\to\sigma^{-}}e^{\int_{t_{0}}^{t}\rho(s)ds}\int_{t_{0}}^{t}e^{-\int_{t_{ 0}}^{s}\left(2\rho(\mu)+\frac{\beta(\mu)}{\alpha(\mu)}\right)d\mu}ds=\lim_{t \to\infty}e^{\int_{t_{0}}^{t}\lambda_{2}ds}\int_{t_{0}}^{t}e^{\int_{t_{0}}^{s} (\lambda_{1}-\lambda_{2})d\mu}ds=\infty\]
are satisfied; that is, (4.1) holds. Hence, by Theorem 4.1 (i), (3.8) is Ulam stable on \(I\), and the minimum Ulam constant is \(\frac{1}{|a_{0}|\lambda_{1}\lambda_{2}}\).
Case (ii). Suppose \(\lambda_{1}>0>\lambda_{2}\). Set \(\rho(t)=\lambda_{2}\). From the facts obtained in Case (ii) in the proof of Corollary 3.2, we have \(f_{1}(t)=\frac{1}{|a_{0}|\lambda_{1}}\) and \(f_{3}(t)=\frac{1}{-\lambda_{2}}\), and thus, \(f_{1}(t)\) and \(f_{3}(t)\) exist and are bounded on \(\mathbb{R}\). Moreover,
\[\lim_{t\to\tau^{+}}\int_{t_{0}}^{t}\rho(s)ds=\lim_{t\to-\infty}\int_{t_{0}}^{ t}\lambda_{2}ds=\infty\]
and
\[\lim_{t\to\sigma^{-}}e^{\int_{t_{0}}^{t}\rho(s)ds}\int_{t_{0}}^{t}e^{-\int_{t_ {0}}^{s}\left(2\rho(\mu)+\frac{\beta(\mu)}{\alpha(\mu)}\right)d\mu}ds=\lim_{t \to\infty}e^{\int_{t_{0}}^{t}\lambda_{2}ds}\int_{t_{0}}^{t}e^{\int_{t_{0}}^{s} (\lambda_{1}-\lambda_{2})d\mu}ds=\infty\]
are satisfied; that is, (4.3) holds. Hence, by Theorem 4.1 (ii), (3.8) is Ulam stable on \(I\), and the minimum Ulam constant is \(\frac{1}{|a_{0}\lambda_{1}\lambda_{2}|}\).
Case (iii). Suppose \(0>\lambda_{1}\geq\lambda_{2}\). Set \(\rho(t)=\lambda_{2}\). From the facts obtained in Case (iii) in the proof of Corollary 3.2, we have \(f_{3}(t)=\frac{1}{-\lambda_{2}}\) and \(f_{4}(t)=\frac{1}{-|a_{0}|\lambda_{1}}\), and thus, \(f_{3}(t)\) and \(f_{4}(t)\) exist and are bounded on \(\mathbb{R}\). Moreover, \(\lim_{t\to\tau^{+}}\int_{t_{0}}^{t}\rho(s)ds=\infty\) and
\[\lim_{t\to\tau^{+}}e^{\int_{t_{0}}^{t}\rho(s)ds}\int_{t_{0}}^{t}e^{-\int_{t_{0 }}^{s}\left(2\rho(\mu)+\frac{\beta(\mu)}{\alpha(\mu)}\right)d\mu}ds=\lim_{t \to-\infty}e^{\int_{t_{0}}^{t}\lambda_{2}ds}\int_{t_{0}}^{t}e^{\int_{t_{0}}^{s} (\lambda_{1}-\lambda_{2})d\mu}ds=-\infty\]
are satisfied; that is, (4.4) holds. Hence, by Theorem 4.1 (iii), (3.8) is Ulam stable on \(I\), and the minimum Ulam constant is \(\frac{1}{|a_{0}\lambda_{1}\lambda_{2}|}\).
**Remark 4.3**.: When \(a_{0}=1\), Corollary 4.2 is completely consistent with the results given in [10, 25].
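For a concrete illustration of Corollary 4.2 (this verification is ours, built only from the corollary's data), take \(a_{0}=1\), \(a_{1}=-3\) and \(a_{2}=2\), so that \(\lambda_{1}=2\) and \(\lambda_{2}=1\). Given \(\varepsilon>0\), the constant function \(\xi(t)\equiv\frac{\varepsilon}{2}\) satisfies \(|\xi''(t)-3\xi'(t)+2\xi(t)|=\varepsilon\) for all \(t\in\mathbb{R}\), while every solution of the homogeneous equation has the form \(x(t)=c_{1}e^{t}+c_{2}e^{2t}\) and is unbounded unless \(c_{1}=c_{2}=0\). Hence the only solution within a bounded distance of \(\xi\) is \(x\equiv 0\), and
\[\sup_{t\in\mathbb{R}}|\xi(t)-x(t)|=\frac{\varepsilon}{2}=\frac{\varepsilon}{|a_{0}\lambda_{1}\lambda_{2}|},\]
so the minimum Ulam constant \(\frac{1}{|a_{0}\lambda_{1}\lambda_{2}|}=\frac{1}{2}\) is attained.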
## 5. Examples
We now present some examples that utilize the main results of this work. In a few examples, we apply the previous theorems directly to guarantee Ulam stability of the given equation, and to find the minimal Ulam constant. In other examples, we show how the criteria of the theorems do not hold, in the cases of Ulam instability.
**Example 5.1**.: For dimension \(n=1\), consider (1.1) in the form of a homogeneous singular differential equation given by
\[t(1-t)x^{\prime\prime}(t)+(2-t)x^{\prime}(t)+x(t)=0,\quad t\in I=(0,1), \tag{5.1}\]
where \(\alpha(t)=t(1-t)\), \(\beta(t)=(2-t)\), and \(\gamma(t)=1\) are continuous scalar functions with \(\alpha(t)\neq 0\) for all \(t\in I=(0,1)\). The associated Riccati equation (2.1) for (5.1) is
\[t(1-t)\left(\rho^{\prime}+\rho^{2}\right)+(2-t)\rho+1=0,\]
which has as a solution the function
\[\rho(t)=-\frac{1}{t}.\]
We then find that the general solution for (5.1) with \(x\left(\frac{1}{2}\right)=x_{0}\) and \(x^{\prime}\left(\frac{1}{2}\right)=x^{\prime}_{0}\) is
\[x(t)=2x_{0}+x^{\prime}_{0}-\frac{2x_{0}+3x^{\prime}_{0}}{8t}-\frac{1}{2}\left( 2x_{0}+x^{\prime}_{0}\right)t,\quad t\in I=(0,1),\]
for arbitrary constants \(x_{0},x^{\prime}_{0}\in\mathbb{R}\). Using (3.3) and (3.4), we calculate that both
\[f_{3}(t)=\int_{0}^{t}e^{\int_{s}^{t}\left(-\frac{1}{\mu}\right)d\mu}ds=\frac{ t}{2}\]
and
\[f_{4}(t)=\int_{0}^{t}\frac{e^{-\int_{s}^{t}\left(-\frac{1}{\mu}+\frac{2-\mu}{ \mu(1-\mu)}\right)d\mu}}{|s(1-s)|}ds=\int_{0}^{t}\left(\frac{1}{s(1-s)}\right) \left(\frac{s}{1-s}\right)\left(\frac{1-t}{t}\right)ds=1\]
are bounded on \(I=(0,1)\). Moreover, we have
\[\lim_{t\to 0^{+}}\int_{t_{0}}^{t}\left(-\frac{1}{s}\right)ds=\lim_{t\to 0^{+}} \ln\frac{t_{0}}{t}=\infty\]
and
\[\lim_{t\to 0^{+}}e^{\int_{t_{0}}^{t}\left(-\frac{1}{s}\right)ds}\int_{t_{0}}^{t} e^{-\int_{t_{0}}^{s}\left(-\frac{2}{\mu}+\frac{2-\mu}{\mu(1-\mu)}\right)d\mu}ds= \lim_{t\to 0^{+}}\frac{t_{0}}{t}\int_{t_{0}}^{t}\frac{1-s}{1-t_{0}}ds=-\infty,\]
where \(t_{0}\in I\). Thus, (4.4) holds, so that Theorems 3.1 (iii) and 4.1 (iii) apply. It follows that (5.1) is Ulam stable on \(I=(0,1)\) in this case, with minimum Ulam constant
\[B_{3} :=\sup_{t\in I}\int_{0}^{t}\left(\int_{0}^{s}\frac{e^{-\int_{\mu}^{s}\left(-\frac{1}{\nu}+\frac{2-\nu}{\nu(1-\nu)}\right)d\nu}}{|\mu(1-\mu)|}d\mu\right)e^{\int_{s}^{t}\left(-\frac{1}{\mu}\right)d\mu}ds\] \[=\sup_{t\in I}\int_{0}^{t}1\cdot e^{\int_{s}^{t}\left(-\frac{1}{\mu}\right)d\mu}ds=\sup_{t\in I}\frac{t}{2}=\frac{1}{2}\]
by (4.5). This best constant from Theorem 4.1 (iii) can be verified directly. Given \(\varepsilon>0\), let \(x(t)=\varepsilon\left(1-\frac{t}{2}\right)\) and \(\xi(t)\equiv\varepsilon\). Then \(x\) is a solution of (5.1) and \(\xi\) satisfies
\[t(1-t)\xi^{\prime\prime}(t)+(2-t)\xi^{\prime}(t)+\xi(t)=\varepsilon,\quad t\in I =(0,1),\]
with
\[\sup_{t\in(0,1)}|\xi(t)-x(t)|=\sup_{t\in(0,1)}\frac{t\varepsilon}{2}=\frac{1} {2}\varepsilon.\]
In summary, (5.1) is Ulam stable with minimum Ulam stability constant \(B_{3}=\frac{1}{2}\). As can be seen from the general solution, our theorems are strong enough to apply even to solutions that blow up at \(t=0\).
The following four examples all deal with the Lane-Emden differential equation, either of index \(0\) or index \(1\). See also the recent paper [18].
**Example 5.2**.: Consider the Lane-Emden differential equation (1.1) given by
\[x^{\prime\prime}(t)+\frac{2}{t}x^{\prime}(t)+1=0,\quad t\in I=(1,\infty), \tag{5.2}\]
where \(\alpha(t)=1\), \(\beta(t)=\frac{2}{t}\), \(\gamma(t)=0\), and \(f(t)\equiv-1\) are continuous scalar functions with \(\alpha(t)\neq 0\) for all \(t\in I=(1,\infty)\). The associated Riccati equation (2.1) is
\[1\left(\rho^{\prime}+\rho^{2}\right)+\frac{2}{t}\rho=0,\]
which has as a solution the function
\[\rho(t)=-\frac{1}{t}.\]
We then find that the general solution for (5.2) with \(x(1)=x_{0}\) and \(x^{\prime}(1)=x^{\prime}_{0}\) is
\[x(t)=\frac{-2+3t-t^{3}+6tx_{0}-6x^{\prime}_{0}+6tx^{\prime}_{0}}{6t},\quad t \in(1,\infty),\]
for arbitrary constants \(x_{0},x^{\prime}_{0}\in\mathbb{R}\). Using (3.1) and (3.3), we calculate that
\[f_{1}(t)=\int_{t}^{\infty}\frac{e^{\int_{t}^{s}\Re\left(-\frac{1}{\mu}+\frac{2 }{\mu}\right)d\mu}}{|1|}ds=\int_{t}^{\infty}\frac{s}{t}ds=\infty\]
and
\[f_{3}(t)=\int_{1}^{t}e^{\int_{s}^{t}\Re(-\frac{1}{\mu})d\mu}ds=\frac{t}{2}- \frac{1}{2t}\]
is unbounded on \(I=(1,\infty)\), so that Theorem 3.1 does not apply. Indeed, given an arbitrary \(\varepsilon>0\),
\[\xi^{\prime\prime}(t)+\frac{2}{t}\xi^{\prime}(t)+1=\varepsilon,\quad t\in I=(1,\infty)\]
has a solution \(\xi(t)=\frac{(\varepsilon-1)t^{2}}{6}\), and thus
\[\sup_{t\in I}|\xi(t)-x(t)|=\sup_{t\in I}\left|\frac{2+6x_{0}^{\prime}-3t(1+2x_ {0}+2x_{0}^{\prime})+t^{3}\varepsilon}{6t}\right|=\infty\]
for any choice of \(x_{0},x_{0}^{\prime}\in\mathbb{R}\), so that in fact (5.2) is not Ulam stable on \((1,\infty)\).
**Example 5.3**.: In this example we modify the interval \(I\) for the Lane-Emden equation (5.2) by considering
\[x^{\prime\prime}(t)+\frac{2}{t}x^{\prime}(t)+1=0,\quad t\in I=(0,\sigma), \quad 0<\sigma<\infty. \tag{5.3}\]
It is easy to verify that \(f_{3}\) in (3.3) and \(f_{4}\) in (3.4) have expression
\[f_{3}(t)=\frac{t}{2}=f_{4}(t),\quad t\in(0,\sigma),\]
as \(\rho(t)=-\frac{1}{t}\), \(\alpha(t)=1\), and \(\beta(t)=-2\rho(t)\). Moreover, we have
\[\lim_{t\to 0^{+}}\int_{t_{0}}^{t}\left(-\frac{1}{s}\right)ds=\lim_{t\to 0^{+}} \ln\frac{t_{0}}{t}=\infty\]
and
\[\lim_{t\to 0^{+}}e^{\int_{t_{0}}^{t}\left(-\frac{1}{s}\right)ds}\int_{t_{0}}^{ t}e^{-\int_{t_{0}}^{s}\left(-\frac{2}{\mu}+\frac{2}{\mu}\right)d\mu}ds=\lim_{t \to 0^{+}}\frac{t_{0}}{t}(t-t_{0})=-\infty,\]
where \(t_{0}\in I\). Then (4.4) holds, so that (5.3) is Ulam stable on \(I=(0,\sigma)\) for any \(\sigma\in(0,\infty)\), with minimum Ulam constant
\[B_{3} :=\sup_{t\in I}\int_{0}^{t}\left(\int_{0}^{s}e^{-\int_{\mu}^{s} \left(\frac{1}{\nu}\right)d\nu}d\mu\right)e^{\int_{s}^{t}\left(-\frac{1}{\mu} \right)d\mu}ds\] \[=\sup_{t\in(0,\sigma)}\int_{0}^{t}\left(\frac{s}{2}\right)\left( \frac{s}{t}\right)ds=\sup_{t\in I}\frac{t^{2}}{6}=\frac{\sigma^{2}}{6}\]
by (4.5), after employing Theorems 3.1 (iii) and 4.1 (iii). In summary, (5.3) is Ulam stable on \((0,\sigma)\) for any \(\sigma\in(0,\infty)\), with minimum Ulam stability constant \(B_{3}=\frac{\sigma^{2}}{6}\).
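The sharpness of this constant can also be verified directly, in the spirit of Example 5.1 (this check is ours). Given \(\varepsilon>0\), the function \(\xi(t)=\frac{(\varepsilon-1)t^{2}}{6}\) satisfies \(\xi''(t)+\frac{2}{t}\xi'(t)+1=\varepsilon\) on \((0,\sigma)\), as in Example 5.2, while \(x(t)=-\frac{t^{2}}{6}\) is a solution of (5.3), and
\[\sup_{t\in(0,\sigma)}|\xi(t)-x(t)|=\sup_{t\in(0,\sigma)}\frac{\varepsilon t^{2}}{6}=\frac{\sigma^{2}}{6}\varepsilon.\]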
**Example 5.4**.: Consider (1.1) in the form of the Lane-Emden differential equation given by
\[x^{\prime\prime}(t)+\frac{2}{t}x^{\prime}(t)+x(t)=0,\quad t\in I=(0,\sigma), \quad\sigma\in\left(0,\frac{\pi}{2}\right), \tag{5.4}\]
where \(\alpha(t)=1\), \(\beta(t)=\frac{2}{t}\), \(\gamma(t)=1\), and \(f(t)\equiv 0\) are continuous scalar functions with \(\alpha(t)\neq 0\) for all \(t\in I=(0,\sigma)\), where \(\sigma\in\left(0,\frac{\pi}{2}\right)\). The associated Riccati equation (2.1) is
\[1\left(\rho^{\prime}+\rho^{2}\right)+\frac{2}{t}\rho+1=0,\]
which has as a solution the function
\[\rho(t)=-\tan(t)-\frac{1}{t},\quad t\in(0,\sigma).\]
We then find that the general solution for (5.4) is
\[x(t)=\frac{c_{1}\cos t}{t}+\frac{c_{2}\sin t}{t},\quad t\in(0,\sigma),\]
for arbitrary constants \(c_{1},c_{2}\in\mathbb{R}\). Using (3.3) and (3.4), we calculate that both
\[f_{3}(t)=\int_{0}^{t}e^{\int_{s}^{t}(-\tan(\mu)-\frac{1}{\mu})d\mu}ds=\int_{0}^ {t}\frac{s\cos(t)}{t\cos(s)}ds\]
and
\[f_{4}(t)=\int_{0}^{t}e^{-\int_{s}^{t}(-\tan(\mu)+\frac{1}{\mu})d\mu}ds=\tan(t) +\frac{\cos(t)-1}{t\cos(t)}\]
are bounded on \(I=(0,\sigma)\), because
\[\lim_{t\to 0^{+}}\int_{0}^{t}\frac{s\cos(t)}{t\cos(s)}ds=0\]
holds. Moreover, we have
\[\lim_{t\to 0^{+}}\int_{t_{0}}^{t}\left(-\tan(s)-\frac{1}{s}\right)ds=\lim_{t \to 0^{+}}\ln\frac{t_{0}\cos(t)}{t\cos(t_{0})}=\infty\]
and
\[\lim_{t\to 0^{+}}e^{\int_{t_{0}}^{t}\left(-\tan(s)-\frac{1}{s}\right)ds}\int_{t_{0}}^{t}e^{-\int_{t_{0}}^{s}\left[2\left(-\tan(\mu)-\frac{1}{\mu}\right)+\frac{2}{\mu}\right]d\mu}ds =\lim_{t\to 0^{+}}\frac{t_{0}\cos(t)}{t\cos(t_{0})}\int_{t_{0}}^{t}\frac{\cos^{2}(t_{0})}{\cos^{2}(s)}ds\] \[=\lim_{t\to 0^{+}}\frac{t_{0}\sin(t-t_{0})}{t}=-\infty,\]
where \(t_{0}\in I\). Then (4.4) holds, so that Theorems 3.1 (iii) and 4.1 (iii) apply. It follows that (5.4) is Ulam stable on \(I=(0,\sigma)\) in this case, with minimum Ulam constant
\[B_{3} :=\sup_{t\in I}\int_{0}^{t}\left(\int_{0}^{s}e^{-\int_{\mu}^{s} \left(-\tan(\nu)-\frac{1}{\nu}+\frac{2}{\nu}\right)d\nu}d\mu\right)e^{\int_{ s}^{t}\left(-\tan(\mu)-\frac{1}{\mu}\right)d\mu}ds\] \[=\sup_{t\in I}\int_{0}^{t}\left(\tan(s)+\frac{\cos(s)-1}{s\cos( s)}\right)\left(\frac{s\cos(t)}{t\cos(s)}\right)ds\] \[=\sup_{t\in I}\left(\frac{\cos(t)}{t}\right)\left(\frac{t-\sin( t)}{\cos(t)}\right)=1-\frac{\sin(\sigma)}{\sigma}\]
by (4.5). In summary, (5.4) is Ulam stable on \((0,\sigma)\) for any \(\sigma\in\left(0,\frac{\pi}{2}\right)\), with minimum Ulam stability constant \(B_{3}=1-\frac{\sin(\sigma)}{\sigma}\).
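The closed form of \(f_{4}\) above can be cross-checked numerically. The sketch below (ours, assuming NumPy and SciPy are available; it is illustrative, not part of the original analysis) compares the defining integral, whose integrand simplifies to \(\frac{s\cos(s)}{t\cos(t)}\), with the stated expression for several values of \(t\in(0,\pi/2)\).

```python
# Numerical sanity check of f_4(t) = tan(t) + (cos(t) - 1)/(t cos(t)) from Example 5.4.
import numpy as np
from scipy.integrate import quad

def f4_integrand(s, t):
    # e^{-\int_s^t (-tan(mu) + 1/mu) d mu} = (cos(s)/cos(t)) * (s/t)
    return (np.cos(s) / np.cos(t)) * (s / t)

def f4_quadrature(t):
    val, _ = quad(f4_integrand, 0.0, t, args=(t,))
    return val

def f4_closed_form(t):
    return np.tan(t) + (np.cos(t) - 1.0) / (t * np.cos(t))

for t in [0.3, 0.7, 1.2]:
    assert abs(f4_quadrature(t) - f4_closed_form(t)) < 1e-6
```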
**Example 5.5**.: In this example we modify the interval \(I\) for the Lane-Emden equation (5.4) by considering
\[x^{\prime\prime}(t)+\frac{2}{t}x^{\prime}(t)+x(t)=0,\quad t\in I=(\tau,\infty), \quad 0\leq\tau<\infty. \tag{5.5}\]
Once again, the corresponding Riccati equation has solution \(\rho(t)=-\tan(t)-\frac{1}{t}\), which fails to exist at infinitely many points of \(I=(\tau,\infty)\) due to the singularities of the tangent function, so Theorems 3.1 and 4.1 cannot be applied. Given \(\varepsilon>0\), consider
\[\xi(t)=\frac{\varepsilon}{8t}\left(\cos(t)+2t\sin(t)-2t^{2}\cos(t)\right), \quad t\in(\tau,\infty).\]
Note that
\[\sup_{t\in I}\left|\xi^{\prime\prime}(t)+\frac{2}{t}\xi^{\prime}(t)+\xi(t) \right|=\sup_{t\in I}\left|\varepsilon\sin(t)\right|=\varepsilon.\]
Since
\[x(t)=\frac{c_{1}\cos t}{t}+\frac{c_{2}\sin t}{t},\quad t\in(\tau,\infty)\]
is the general solution for (5.5), we have
\[\sup_{t\in I}\left|\xi(t)-x(t)\right|=\infty,\]
making (5.5) unstable in the Ulam sense on \(I=(\tau,\infty)\) for any \(\tau\in[0,\infty)\).
**Example 5.6**.: In this example, we consider an extension of Example 5.3 to include more general power functions. We consider the second-order linear differential equation
\[t^{1-a}x^{\prime\prime}(t)+bt^{-a}x^{\prime}(t)+(b-2)t^{-1-a}x(t)+t^{b-2}=0, \quad t\in I=(0,\sigma),\quad 0<\sigma<\infty, \tag{5.6}\]
where \(a\) and \(b\) are real-valued constants with
\[1-a<b\leq 2.\]
If \(a=1\) and \(b=2\), then this equation reduces to (5.3). It is easy to verify that \(\rho(t)=-\frac{1}{t}\) is a solution of the associated Riccati equation
\[t^{1-a}\left(\rho^{\prime}+\rho^{2}\right)+bt^{-a}\rho+(b-2)t^{-1-a}=0,\]
and \(f_{3}\) in (3.3) and \(f_{4}\) in (3.4) have expression
\[f_{3}(t)=\frac{t}{2},\quad f_{4}(t)=\frac{t^{a}}{a+b-1},\quad t\in(0,\sigma),\]
as \(\alpha(t)=t^{1-a}\), and \(\beta(t)=bt^{-a}\). Moreover, we have
\[\lim_{t\to 0^{+}}\int_{t_{0}}^{t}\left(-\frac{1}{s}\right)ds=\lim_{t\to 0^{+}} \ln\frac{t_{0}}{t}=\infty\]
and
\[\lim_{t\to 0^{+}}e^{\int_{t_{0}}^{t}\left(-\frac{1}{s}\right)ds}\int_{t_{0}}^{ t}e^{-\int_{t_{0}}^{s}\left(-\frac{2}{\mu}+\frac{b}{\mu}\right)d\mu}ds=\lim_{t \to 0^{+}}\frac{t_{0}}{t}\int_{t_{0}}^{t}\left(\frac{s}{t_{0}}\right)^{2-b}ds= \lim_{t\to 0^{+}}\frac{t_{0}^{b-1}}{3-b}\left(t^{2-b}-\frac{t_{0}^{3-b}}{t} \right)=-\infty,\]
where \(t_{0}\in I\). Then (4.4) holds, so that (5.6) is Ulam stable on \(I=(0,\sigma)\) for any \(\sigma\in(0,\infty)\), with minimum Ulam constant
\[B_{3} :=\sup_{t\in I}\int_{0}^{t}\left(\int_{0}^{s}\frac{e^{-\int_{\mu}^{s}\left(-\frac{1}{\nu}+\frac{b}{\nu}\right)d\nu}}{|\mu^{1-a}|}d\mu\right)e^{\int_{s}^{t}\left(-\frac{1}{\mu}\right)d\mu}ds\] \[=\sup_{t\in(0,\sigma)}\int_{0}^{t}\left(\frac{s^{a}}{a+b-1}\right)\left(\frac{s}{t}\right)ds=\sup_{t\in I}\frac{t^{a+1}}{(a+2)(a+b-1)}=\frac{\sigma^{a+1}}{(a+2)(a+b-1)}\]
by (4.5), after employing Theorems 3.1 (iii) and 4.1 (iii). In summary, (5.6) is Ulam stable on \((0,\sigma)\) for any \(\sigma\in(0,\infty)\), with minimum Ulam stability constant \(B_{3}=\frac{\sigma^{a+1}}{(a+2)(a+b-1)}\).
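That \(\rho(t)=-\frac{1}{t}\) solves the Riccati equation above for all admissible \(a\) and \(b\) can also be confirmed symbolically; a minimal sketch (ours, assuming SymPy is available) follows.

```python
# Symbolic check that rho(t) = -1/t solves the Riccati equation associated with (5.6).
import sympy as sp

t, a, b = sp.symbols('t a b', positive=True)
rho = -1 / t
riccati = (t**(1 - a) * (sp.diff(rho, t) + rho**2)
           + b * t**(-a) * rho
           + (b - 2) * t**(-1 - a))
assert sp.simplify(riccati) == 0
```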
## 6. Conclusions
This work investigated the Ulam stability of second-order linear differential vector equations with variable coefficients. Sufficient conditions for Ulam stability and explicit Ulam stability constants are given. In particular, when restricted to real-valued coefficients, the best Ulam constants are derived. To the best of the authors' knowledge, no best Ulam constants are known so far for second-order non-autonomous equations other than periodic systems. Therefore, this is the first study to derive best Ulam constants for second-order non-periodic non-autonomous linear differential equations. Various non-trivial examples, mainly involving Lane-Emden differential equations, were provided to illustrate the results obtained. Some examples show that Ulam stability can be guaranteed even for solutions that blow up in finite time, and that best Ulam constants can be derived in such cases. We emphasize that this is also the first time that a Ulam stability analysis of blow-up solutions of second-order equations has been presented. In addition, examples of instability are also presented.
## Acknowledgments
M. O. was supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI (grant number JP20K03668).
|
2302.01970 | Efficient Gradient Approximation Method for Constrained Bilevel
Optimization | Bilevel optimization has been developed for many machine learning tasks with
large-scale and high-dimensional data. This paper considers a constrained
bilevel optimization problem, where the lower-level optimization problem is
convex with equality and inequality constraints and the upper-level
optimization problem is non-convex. The overall objective function is
non-convex and non-differentiable. To solve the problem, we develop a
gradient-based approach, called gradient approximation method, which determines
the descent direction by computing several representative gradients of the
objective function inside a neighborhood of the current estimate. We show that
the algorithm asymptotically converges to the set of Clarke stationary points,
and demonstrate the efficacy of the algorithm by the experiments on
hyperparameter optimization and meta-learning. | Siyuan Xu, Minghui Zhu | 2023-02-03T19:34:56Z | http://arxiv.org/abs/2302.01970v1 | # Efficient Gradient Approximation Method for Constrained Bilevel Optimization
###### Abstract
Bilevel optimization has been developed for many machine learning tasks with large-scale and high-dimensional data. This paper considers a constrained bilevel optimization problem, where the lower-level optimization problem is convex with equality and inequality constraints and the upper-level optimization problem is non-convex. The overall objective function is non-convex and non-differentiable. To solve the problem, we develop a gradient-based approach, called gradient approximation method, which determines the descent direction by computing several representative gradients of the objective function inside a neighborhood of the current estimate. We show that the algorithm asymptotically converges to the set of Clarke stationary points, and demonstrate the efficacy of the algorithm by the experiments on hyperparameter optimization and meta-learning.
## 1 Introduction
A general constrained bilevel optimization problem is formulated as follows:
\[\min_{x\in\mathbb{R}^{d_{x}}}\,\varPhi(x)=f\left(x,y^{*}(x)\right) \tag{1}\] \[\text{s.t.}\,\,\,r\left(x,y^{*}(x)\right)\leq 0;\,\,s\left(x,y^{*}(x )\right)=0;\] \[y^{*}(x)=\underset{y\in\mathbb{R}^{d_{y}}}{\arg\min}\{g(x,y):p \left(x,y\right)\leq 0;q\left(x,y\right)=0\}.\]
The bilevel optimization minimizes the overall objective function \(\varPhi(x)\) with respect to (w.r.t.) \(x\), where \(y^{*}(x)\) is the optimal solution of the lower-level optimization problem and parametric in the upper-level decision variable \(x\). In this paper, we assume that \(y^{*}(x)\) is unique for any \(x\in\mathbb{R}^{d_{x}}\).
Existing methods to solve problem (1) can be categorized into two classes: single-level reduction methods [1, 1, 2, 3] and descent methods [1, 2, 3]. Single-level reduction methods use the KKT conditions to replace the lower-level optimization problem when it is convex. Then, they reformulate the bilevel optimization problem (1) as a single-level constrained optimization problem. Descent methods aim to find descent directions in which the new point is feasible and meanwhile reduces the objective function. Paper [1] computes a descent direction of the objective function by solving a quadratic program. Paper [1] applies the gradient of the objective function computed in [1, 2] to compute a generalized Clarke Jacobian, and uses a bundle method [1] for the optimization. When applied to machine learning, bilevel optimization faces additional challenges as the dimensions of decision variables in the upper-level and lower-level problems are high [1].
Gradient-based methods have been shown to be effective in handling large-scale and high-dimensional data in a variety of machine learning tasks [1]. They have been extended to solve the bilevel optimization problem where there is no constraint in the lower-level optimization. The methods can be categorized into the approximate implicit differentiation (AID) based approaches [1, 1, 2, 3] and the iterative differentiation (ITD) approaches [1, 2, 3, 4, 5]. The AID based approaches evaluate the gradients of \(y^{*}(x)\) and \(\varPhi(x)\) based on implicit differentiation (Bengio 2000). The ITD based approaches treat the iterative optimization steps in the lower-level optimization as a dynamical system, impose \(y^{*}(x)\) as its stationary point, and compute \(\nabla y^{*}(x)\) at each iterative step. The gradient-based algorithms have been applied to solve several machine learning tasks, including meta-learning [1, 2, 3], hyperparameter optimization [1, 2], reinforcement learning [1, 2], and network architecture search [1]. The above methods are limited to unconstrained bilevel optimization and require the objective function to be differentiable. They cannot be directly applied when constraints are present in the lower-level optimization, as the objective function is non-differentiable.
**Contributions.** In this paper, we consider a special case of problem (1) where the upper-level constraints \(r\) and \(s\) are not included. In general, the objective function \(\varPhi\) is nonconvex and non-differentiable, even if the upper-level and lower-level problems are convex and functions \(f\), \(g\), \(p\), \(q\) are differentiable [1, 2]. Most methods for this bilevel optimization problem are highly complicated and computationally expensive, especially when the dimension of the problem is large [11, 12]. Addressing the challenge, we determine the descent direction by computing several gradients which can represent the gradients of the objective function at all points in a ball, and develop a computationally efficient algorithm with convergence guarantee for the constrained bilevel optimization problem. The overall contributions are summarized as follows. (i) Firstly, we derive the conditions under which the lower-level optimal solution \(y^{*}(x)\) is continuously differentiable or directional differentiable. In addition, we provide analytical expressions for the gradient of \(y^{*}(x)\) when it is continuously differentiable and the directional derivative of \(y^{*}(x)\) when it is directional differentiable. (ii) Secondly, we propose the gradient approximation method, which applies the Clarke subdifferential approximation of the non-convex and non-differentiable objective function \(\varPhi\) to the line search method. In particular, a set of derivatives is used to approximate the gradients or directional derivatives on all points in a neighborhood of the current estimate. Then, the Clarke subdifferential is approximated by the derivatives, and the approximate Clarke subdifferential is employed as the descent direction for line search. (iii) It is shown that the Clarke subdifferential approximation errors are small, the line search is always feasible, and the algorithm asymptotically converges to the set of Clarke stationary points. (iv) We empirically verify the efficacy of the proposed algorithm by conducting experiments on hyperparameter optimization and meta-learning.
**Related Works.** Differentiation of the optimal solution of a constrained optimization problem has been studied for a long time. Sensitivity analysis of constrained optimization [12, 13] shows the optimal solution \(y^{*}(x)\) of a convex optimization problem is directional differentiable but may not differentiable at all points. It implies that the objective function \(\varPhi(x)\) in problem (1) may not be differentiable. Based on the implicit differentiation of the KKT conditions, the papers also compute \(\nabla y^{*}(x)\) when \(y^{*}\) is differentiable at \(x\). Optnet [10, 11, 1] applies the gradient computation to the constrained bilevel optimization, where a deep neural network is included in the upper-level optimization problem. In particular, the optimal solution \(y^{*}(x)\) serves as a layer in the deep neural network and \(\nabla y^{*}(x)\) is used as the backpropagation gradients to optimize the neural network parameters. However, all the above methods do not explicitly consider the non-differentiability of \(y^{*}(x)\) and \(\varPhi(x)\), and cannot guarantee convergence. Recently, papers [11, 12] consider that the lower-level optimization problem has simple constraints, such that projection onto the constraint set can be easily computed, and require that the constraint set is bounded. In this paper, we consider inequality and equality constraints, which are more general than those in [11, 12].
**Notations.** Denote \(a>b\) for vectors \(a,b\in\mathbb{R}^{n}\), when \(a_{i}>b_{i}\) for all \(1\leq i\leq n\). Notations \(a\geq b\), \(a=b\), \(a\leq b\), and \(a<b\) are defined in an analogous way. Denote the \(l_{2}\) norm of vectors by \(\|\cdot\|\). The directional derivative of a function \(f\) at \(x\) on the direction \(d\) with \(\|d\|=1\) is defined as \(\nabla_{d}f(x)\triangleq\lim_{h\to 0^{+}}\frac{f(x+hd)-f(x)}{h}\). A ball centered at \(x\) with radius \(\epsilon\) is denoted as \(\mathcal{B}(x,\epsilon)\). The complementary set of a set \(S\) is denoted as \(S^{C}\). The distance between the point \(x\) and the set \(S\) is defined as \(d(x,S)\triangleq\inf\{\|x-a\|\mid a\in S\}\). The convex hull of \(S\) is denoted by \(\operatorname{conv}S\). For set \(S\) and function \(f\), we define the image set \(f(S)\triangleq\{f(x)\mid x\in S\}\). For a finite positive integer set \(I\) and a vector function \(p\), we denote the subvector function \(p_{I}\triangleq[p_{k_{1}},\cdots,p_{k_{j}},\cdots]^{\top}\) where \(k_{j}\in I\).
## 2 Problem Statement
Consider the constrained bilevel optimization problem:
\[\min_{x\in\mathbb{R}^{d_{x}}}\varPhi(x)=f\left(x,y^{*}(x)\right) \tag{2}\] \[\text{s.t.}\ y^{*}(x)=\operatorname*{arg\,min}_{y\in\mathbb{R}^{ d_{y}}}\{g(x,y):p\left(x,y\right)\leq 0;q\left(x,y\right)=0\},\]
where \(f,g:\mathbb{R}^{d_{x}}\times\mathbb{R}^{d_{y}}\rightarrow\mathbb{R}\); \(p:\mathbb{R}^{d_{x}}\times\mathbb{R}^{d_{y}}\rightarrow\mathbb{R}^{m}\); \(q:\mathbb{R}^{d_{x}}\times\mathbb{R}^{d_{y}}\rightarrow\mathbb{R}^{n}\). Given \(x\in\mathbb{R}^{d_{x}}\), we denote the lower-level optimization problem in (2) as \(P(x)\). The feasible set of \(P(x)\) is defined as \(K\left(x\right)\triangleq\{y\in\mathbb{R}^{d_{y}}:p\left(x,y\right)\leq 0,q \left(x,y\right)=0\}\). Suppose the following assumptions hold.
**Assumption 1**.: _The functions \(f\), \(g\), \(p\) and \(q\) are twice continuously differentiable._
**Assumption 2**.: _For all \(x\in\mathbb{R}^{d_{x}}\), the function \(g(x,y)\) is \(\mu\)-strongly-convex w.r.t. \(y\); \(p_{j}(x,y)\) is convex w.r.t. \(y\) for each \(j\); \(q_{i}(x,y)\) is affine w.r.t. \(y\) for each \(i\)._
Note that the upper-level objective function \(f(x,y)\) and the overall objective function \(\varPhi(x)\) are non-convex. The lower-level problem \(P(x)\) is convex and its Lagrangian is \(\mathcal{L}(y,\lambda,\nu,x)\triangleq g(x,y)+\lambda^{\top}p(x,y)+\nu^{\top}q(x,y)\), where \((\lambda,\nu)\) are Lagrange multipliers and \(\lambda\geq 0\).
**Definition 1**.: _Suppose that the KKT conditions hold at \(y\) for \(P(x)\) with the Lagrangian multipliers \(\lambda\) and \(\nu\). The set of active inequality constraints at \(y\) for \(P(x)\) is defined as: \(J(x,y)\triangleq\{j:1\leq j\leq m,\ p_{j}(x,y)=0\}\). An inequality constraint is called inactive if it is not included in \(J(x,y)\) and the set of inactive constraints is denoted as \(J(x,y)^{C}\). The set of strictly active inequality constraints at \(y\) is defined as: \(J^{+}(x,y,\lambda)\triangleq\{j:j\in J\left(x,y\right),\ \lambda_{j}>0\}\). The set of non-strictly active inequality constraints at \(y\) is defined as: \(J^{0}(x,y,\lambda)\triangleq J(x,y)\setminus J^{+}(x,y,\lambda)\). Notice that \(\lambda_{j}\geq 0\) for \(j\in J(x,y)\) and \(\lambda_{j}=0\) for \(j\in J^{0}(x,y,\lambda)\)._
**Definition 2**.: _The Linear Independence Constraint Qualification (LICQ) holds at \(y\) for \(P(x)\) if the vectors \(\{\nabla_{y}p_{j}\left(x,y\right),j\in J\left(x,y\right);\nabla_{y}q_{i}\left(x, y\right),1\leq i\leq n\}\) are linearly independent._
**Assumption 3**.: _Suppose that for all \(x\in\mathbb{R}^{d_{x}}\), the solution \(y^{*}(x)\) exists for \(P\left(x\right)\), and the LICQ holds at \(y^{*}(x)\) for \(P(x)\)._
## 3 Differentiability and Gradient of \(y^{*}(x)\)
In this section, we provide sufficient conditions under which the lower-level optimal solution \(y^{*}(x)\) is continuously differentiable or directional differentiable. We compute the gradient of \(y^{*}(x)\) when it is continuously differentiable and the directional derivative of \(y^{*}(x)\) when it is directional differentiable. Moreover, we give a necessary condition that \(y^{*}(x)\) is not differentiable and illustrate it by a numerical example.
In problem (2), if the upper-level objective function \(f\) and the solution of lower-level problem \(y^{*}\) are continuously differentiable, so is \(\varPhi\), and by the gradient computation of composite functions, we have
\[\nabla\varPhi(x)=\nabla_{x}f(x,y^{*}(x))+\nabla y^{*}(x)^{\top}\nabla_{y}f(x,y ^{*}(x)). \tag{3}\]
It is shown in [10] that, when \(p\) and \(q\) are absent, \(y^{*}\) and \(\varPhi\) are differentiable under certain assumptions. The differentiability of \(y^{*}\) and \(\varPhi\) is used by the AID based approaches in [11, 12, 13, 14] to approximate \(\nabla y^{*}\) and minimize \(\varPhi\) by gradient descent. However, it is not the case as the lower-level problem (2) is constrained.
Theorem 1 states the conditions under which \(y^{*}(x)\) is directional differentiable.
**Theorem 1**.: _Suppose Assumptions 1, 2, 3 hold. The following properties hold for any \(x\)._
1. _The global minimum_ \(y^{*}(x)\) _of_ \(P\left(x\right)\) _exists and is unique. The KKT conditions hold at_ \(y^{*}(x)\) _with unique Lagrangian multipliers_ \(\lambda(x)\) _and_ \(\nu(x)\)_._
2. _The vector function_ \(z(x)\triangleq[y^{*}(x)^{\top},\lambda(x)^{\top},\nu(x)^{\top}]^{\top}\) _is continuous and locally Lipschitz. The directional derivative of_ \(z(x)\) _on any direction exists._
As shown in part (i) of Theorem 1, \(y^{*}(x)\), \(\lambda(x)\) and \(\nu(x)\) are uniquely determined by \(x\). So we simplify the notations of Definition 1 in the rest of this paper: \(J(x,y^{*}(x))\) is denoted as \(J(x)\), \(J^{+}(x,y^{*}(x),\lambda(x))\) is denoted as \(J^{+}(x)\), and \(J^{0}(x,y^{*}(x),\lambda(x))\) is denoted as \(J^{0}(x)\). In part (ii), the computation of the directional derivative of \(z(x)\) is given in Theorem 6 in Appendix C.
**Definition 3**.: _Suppose that the KKT conditions hold at \(y\) for \(P(x)\) with the Lagrangian multipliers \(\lambda\) and \(\nu\). The Strict Complementarity Slackness Condition (SCSC) holds at \(y\) w.r.t. \(\lambda\) for \(P(x)\), if \(\lambda_{j}>0\) for all \(j\in J(x,y)\)._
**Remark 1**.: _The KKT conditions include the Complementarity Slackness Condition (CSC). The SCSC is stronger than the CSC, which only requires that \(\lambda_{j}\geq 0\) for all \(j\in J(x,y)\)._
Theorem 2 states the conditions under which \(y^{*}(x)\) is continuously differentiable and derives \(\nabla y^{*}(x)\).
**Theorem 2**.: _Suppose Assumptions 1, 2, 3 hold. If the SCSC holds at \(y^{*}(x)\) w.r.t. \(\lambda(x)\), then \(z(x)\) is continuously differentiable at \(x\) and the gradient is computed as_
\[\left[\nabla_{x}y^{*}(x)^{\top},\nabla_{x}\lambda_{J(x)}^{\top}(x),\nabla_{x }\nu(x)^{\top}\right]^{\top}=-M_{+}^{-1}(x)N_{+}(x) \tag{4}\]
_and \(\nabla_{x}\lambda_{J(x)^{C}}(x)=0\), where \(M_{+}(x)\triangleq\)_
\[\left[\begin{array}{ccc}\nabla_{y}^{2}\mathcal{L}&\nabla_{y}p_{J^{+}(x)}^{ \top}&\nabla_{y}q^{\top}\\ \nabla_{y}p_{J^{+}(x)}&0&0\\ \nabla_{y}q&0&0\end{array}\right](x,y^{*}(x),\lambda(x),\nu(x))\]
_is nonsingular and \(N_{+}(x)\triangleq\)_
\[[\nabla_{xy}^{2}\mathcal{L}^{\top},\nabla_{x}p_{J^{+}(x)}^{\top},\nabla_{x}q ^{\top}]^{\top}(x,y^{*}(x),\lambda(x),\nu(x)).\]
Theorem 2 shows that, if \(z(x)\) is not continuously differentiable, then the SCSC does not hold at \(y^{*}(x)\) w.r.t. \(\lambda(x)\). Definition 3 implies that the SCSC holds at \(y\) w.r.t. \(\lambda\) for \(P(x)\) if and only if \(J^{0}(x)=\emptyset\). It concludes that if \(y^{*}(x)\) is not continuously differentiable at \(x\), \(J^{0}(x)\neq\emptyset\), i.e., the non-differentiability of \(y^{*}(x)\) occurs at points with non-strictly active constraints. Example 1 illustrates such claim.
**Example 1**.: _Consider a bilevel optimization problem \(\varPhi(x)=y^{*}(x)\) and the lower-level problem \(P(x)\): \(y^{*}(x)=\arg\min_{y}\{(y-x^{2})^{2}:p_{1}(x,y)=-x-y\leq 0\}\), where \(x\), \(y\in\mathbb{R}\). The analytical solution of \(z(x)=[y^{*}(x),\lambda(x)]\) is given by: \(y^{*}(x)=x^{2}\), \(\lambda(x)=0\) when \(x\in(-\infty,-1]\cup[0,+\infty)\); \(y^{*}(x)=-x\), \(\lambda(x)=-2x(1+x)\) when \(x\in(-1,0]\). Correspondingly, when \(x\in(-1,0)\), \(J(x)=\{1\}\), \(J^{+}(x)=\{1\}\), \(J^{0}(x)=\emptyset\); when \(x\in(-\infty,-1)\cup(0,+\infty)\), \(J(x)=\emptyset\), \(J^{+}(x)=\emptyset\), \(J^{0}(x)=\emptyset\); when \(x\in\{-1,0\}\), \(J(x)=\{1\}\), \(J^{+}(x)=\emptyset\), \(J^{0}(x)=\{1\}\). As shown in Fig. 1, \(y^{*}(x)\) is continuously differentiable everywhere except when \(J^{0}(x)\neq\emptyset\)._
The computation of the gradient of \(z(x)\) in (4) is derived from the implicit differentiation of the KKT conditions of problem \(P(x)\), which is also used in [11, 12, 13, 14]. Compared with these papers, Theorem 2 directly determines \(\nabla_{x}\lambda_{J(x)^{C}}(x)=0\) and excludes \(\lambda_{J(x)^{C}}(x)\) from the computation of the inverse matrix in (4), when \(z(x)\) is continuously differentiable. Theorem 6 in Appendix C derives the directional derivative of \(z(x)\) when it is not differentiable.
Consider a special case where the lower-level optimization problem \(P(x)\) is unconstrained. Since the SCSC is not needed anymore, the assumptions in Theorem 2 reduce to that \(g\) is twice continuously differentiable and \(g(x,y)\) is \(\mu\)-strongly-convex w.r.t. \(y\) for \(x\in\mathbb{R}^{d_{x}}\). By Theorem 2, the optimal solution \(y^{*}(x)\) is continuously differentiable, the matrix \(\nabla_{y}^{2}g(x,y)\) is non-singular, and the gradient is computed as \(\nabla y^{*}(x)=-[\nabla_{y}^{2}g(x,y)]^{-1}\nabla_{xy}^{2}g(x,y)\). These results are well-known and widely used in unconstrained bilevel optimization analysis and applications [11, 12, 13, 14].
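To make formula (4) concrete, the following sketch (an illustration of ours, assuming NumPy; it is not the authors' implementation) assembles \(M_{+}\) and \(N_{+}\) for Example 1 at \(x=-0.5\), where the constraint \(p_{1}\) is strictly active and the SCSC holds, and checks the result against the analytical expressions \(y^{*}(x)=-x\) and \(\lambda(x)=-2x(1+x)\).

```python
# NumPy sketch of the gradient formula (4) on Example 1:
# g(x, y) = (y - x^2)^2, p_1(x, y) = -x - y <= 0, no equality constraints.
import numpy as np

x = -0.5
y = -x                      # y*(x) = -x on (-1, 0)
lam = -2 * x * (1 + x)      # lambda(x) = 0.5 > 0, so SCSC holds

# Second derivatives of the Lagrangian L = (y - x^2)^2 + lam * (-x - y)
d2L_yy = 2.0                # d^2 L / dy^2
d2L_xy = -4.0 * x           # d^2 L / dx dy
dp_y = -1.0                 # d p_1 / dy
dp_x = -1.0                 # d p_1 / dx

# KKT matrix M_+ and right-hand side N_+ from (4); only the strictly
# active inequality constraint enters the bordered system.
M = np.array([[d2L_yy, dp_y],
              [dp_y,   0.0]])
N = np.array([[d2L_xy],
              [dp_x]])
grad = -np.linalg.solve(M, N)              # rows: [dy*/dx, dlambda/dx]

assert np.isclose(grad[0, 0], -1.0)        # matches y*(x) = -x
assert np.isclose(grad[1, 0], -2 - 4 * x)  # matches lambda'(x) = -2 - 4x
```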
## 4 The Gradient Approximation Method
In this section, we develop the gradient approximation method to efficiently solve problem (2), whose objective function is non-differentiable and non-convex. First, we define the Clarke subdifferential (Section 4.1) and efficiently approximate the Clarke subdifferential of the objective function \(\Phi(x)\) (Section 4.2). Next, we propose the gradient approximation algorithm, provide its convergence guarantee (Section 4.3), and present its implementation details (Section 4.4).
Figure 1: Occurrence of non-differentiability.
### Clarke Subdifferential of \(\Phi\)
As shown in Section 2 and also shown in [11, 10], the objective function \(\Phi\left(x\right)\) of problem (2) is usually non-differentiable and non-convex. To deal with the non-smoothness and non-convexity, we introduce Clarke subdifferential and Clarke stationary point.
**Definition 4** (Clarke subdifferential and Clarke stationary point [10]).: _For a locally Lipschitz function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\), the Clarke subdifferential of \(f\) at \(x\) is defined by the convex hull of the limits of gradients of \(f\) on sequences converging to \(x\), i.e., \(\bar{\partial}f(x)\triangleq\operatorname{conv}\left\{\lim_{j\rightarrow\infty}\nabla f\left(y^{j}\right):\left\{y^{j}\right\}\to x,\ f\text{ is differentiable at }y^{j}\text{ for all }j\in\mathbb{N}\right\}\). The Clarke \(\epsilon\)-subdifferential of \(f\) at \(x\) is defined by \(\bar{\partial}_{\epsilon}f(x)\triangleq\operatorname{conv}\{\bar{\partial}f(x^{\prime}):x^{\prime}\in\mathcal{B}(x,\epsilon)\}\). A point \(x\) is Clarke stationary for \(f\) if \(0\in\bar{\partial}f(x)\)._
If \(y^{*}\) is differentiable at \(x\), we have \(\bar{\partial}y^{*}(x)=\{\nabla y^{*}(x)\}\) and \(\bar{\partial}\Phi(x)=\{\nabla_{x}f(x,y^{*}(x))+\nabla y^{*}(x)^{\top}\nabla_{ y}f(x,y^{*}(x))\}\); otherwise, \(\bar{\partial}\Phi(x)=\{\nabla_{x}f(x,y^{*}(x))+w^{\top}\nabla_{y}f(x,y^{*}(x )):w\in\bar{\partial}y^{*}(x)\}\). Take the functions shown in Example 1 and Fig. 1 as an example, \(\bar{\partial}_{\epsilon}\Phi(-1)=\bar{\partial}_{\epsilon}y^{*}(-1)= \operatorname{conv}\{[-2-2\epsilon,-2]\cup\{-1\}\}=[-2-2\epsilon,-1]\), and \(\bar{\partial}_{\epsilon}\Phi(0)=\bar{\partial}_{\epsilon}y^{*}(0)= \operatorname{conv}\{[0,2\epsilon]\cup\{-1\}\}=[-1,2\epsilon]\).
### Clarke Subdifferential Approximation
Gradient-based methods have been applied to convex and non-convex optimization problems [1, 10]. The convergence requires that the objective function is differentiable. If there exist points where the objective function is not differentiable, the probability for the algorithms to visit these points is non-zero and the gradients at these points are not defined [1]. Moreover, oscillation may occur even if the objective function is differentiable at all visited points [1].
To handle the non-differentiability, the gradient sampling method [1, 10, 11] uses gradients in a neighborhood of the current estimate to approximate the Clarke subdifferential and determine the descent direction. Specifically, the method samples a set of points inside the neighborhood \(\mathcal{B}(x^{0},\epsilon)\), select the points where the objective function is differentiable, and then compute the convex hull of the gradients on the sampled points.
However, in problem (2), the point sampling is highly computationally expensive. For each sampled point \(x^{j}\), to check the differentiability of \(\Phi\), we need to solve the lower-level optimization \(P(x^{j})\) to obtain \(y^{*}(x^{j})\), \(\lambda(x^{j})\) and \(\nu(x^{j})\), and check the SCSC. Moreover, after the points are sampled, the gradient on each point is computed by (4). As the dimension \(d_{x}\) increases, the sampling number increases to ensure the accuracy of the approximation. More specifically, as shown in [11], the algorithm is convergent if the sampling number is larger than \(d_{x}+1\). The above procedure is executed in each optimization iteration.
Addressing the computational challenge, we approximate the Clarke \(\epsilon\)-subdifferential by a small number of gradients, which can represent the gradients on all points in the neighborhood. The following propositions distinguish two cases: \(\Phi\) is continuously differentiable on \(\mathcal{B}(x^{0},\epsilon)\) (Proposition 1) and it is not (Proposition 2).
**Proposition 1**.: _Suppose Assumptions 1, 2, 3 hold. Consider \(x^{0}\in R^{d_{x}}\). There is sufficiently small \(\epsilon>0\) such that, if the SCSC holds at \(y^{*}(x)\) w.r.t. \(\lambda(x)\) for any \(x\in\mathcal{B}(x^{0},\epsilon)\), then \(\nabla\Phi(x^{0})\in\bar{\partial}_{\epsilon}\Phi(x^{0})\) and_
\[\left|\|\nabla\Phi(x^{0})\|-d(0,\bar{\partial}_{\epsilon}\Phi(x^{0}))\right|<o(\epsilon).\]
Proposition 1 shows that the gradient \(\nabla\Phi\) at a single point \(x^{0}\) can be used to approximate the Clarke \(\epsilon\)-subdifferential \(\bar{\partial}_{\epsilon}\Phi(x^{0})\) and the approximation error is in the order of \(\epsilon\). Recall that the gradient \(\nabla\Phi(x^{0})\) can be computed by (3) and (4). Fig. 2 illustrates the approximation on the problem in Example 1. The SCSC holds at \(y^{*}(x)\) and \(\Phi(x)\) is continuously differentiable on \(\mathcal{B}(x^{0},\epsilon)\), then \(\bar{\partial}_{\epsilon}\Phi(x^{0})=[2x^{0}-2\epsilon,2x^{0}+2\epsilon]\) can be approximated by \(\nabla\Phi(x^{0})=2x^{0}\), and the approximation error is \(2\epsilon\). The approximations of \(\bar{\partial}_{\epsilon}\Phi(x^{1})\) and \(\bar{\partial}_{\epsilon}\Phi(x^{2})\) can be done in an analogous way.
Consider the case where \(\Phi(x)\) is not continuously differentiable at some points in \(\mathcal{B}(x^{0},\epsilon)\). Define the set \(I^{\epsilon}(x^{0})\) which contains all \(j\) such that there exist \(x^{\prime}\), \(x^{\prime\prime}\in\mathcal{B}(x^{0},\epsilon)\) with \(j\in J^{+}(x^{\prime})^{C}\) and \(j\in J^{+}(x^{\prime\prime})\). Define the set \(I^{\epsilon}_{+}(x^{0})\) which contains all \(j\) such that \(j\in J^{+}(x)\) for any \(x\in\mathcal{B}(x^{0},\epsilon)\). If \(I^{\epsilon}(x^{0})\) is not empty, there exists a point \(x\in\mathcal{B}(x^{0},\epsilon)\) such that the SCSC does not hold at \(y^{*}(x)\). The power set of \(I^{\epsilon}(x^{0})\) partitions \(\mathcal{B}(x^{0},\epsilon)\) into a number of subsets, where \(\Phi(x)\) and \(y^{*}(x)\) are continuously differentiable in each subset. An illustration on the problem in Example 1 is shown in Fig. 3. The point \(x^{\prime}=-1\) belongs to \((x^{0}-\epsilon,x^{0}+\epsilon)\) and the SCSC does not hold at \(y^{*}(x^{\prime})\). Notice that \(I^{\epsilon}_{+}(x^{0})=\emptyset\) and \(I^{\epsilon}(x^{0})=\{1\}\). Then, \(I^{\epsilon}(x^{0})=\{1\}\) has the power set \(\{S_{(1)},S_{(2)}\}\) with \(S_{(1)}=\emptyset\) and \(S_{(2)}=\{1\}\). Then, \(\mathcal{B}(x^{0},\epsilon)\) is partitioned into two subsets: the subset where the constraint \(p_{1}\) is inactive (blue side in the ball) which corresponds to \(S_{(1)}\), and the subset where the constraint \(p_{1}\) is strictly active (red side in the ball) which corresponds to \(S_{(2)}\). Their boundary is the point \(x^{\prime}\) where the constraint \(p_{1}\) is non-strictly active. It can be seen that \(y^{*}(x)\) is continuously differentiable on each subset and the gradient variations are small inside the subset when \(\epsilon\) is small. In contrast, the gradient variations between two subsets are large. Inspired by Proposition 1, we compute a representative gradient to approximate \(\nabla y^{*}(x)\) inside each subset of \(\mathcal{B}(x^{0},\epsilon)\).
Now we proceed to generalize the above idea. Recall that \(\nabla\Phi(x)\) is computed by (3) and \(f\) is twice continuously differentiable. Define
\[\begin{split} G(x^{0},\epsilon)\triangleq\{\nabla_{x}f\left(x^{0}, y^{*}\left(x^{0}\right)\right)+w^{S}(x^{0})^{\top}\\ \nabla_{y}f\left(x^{0},y^{*}\left(x^{0}\right)\right):S\subseteq I ^{\epsilon}(x^{0})\},\end{split} \tag{5}\]
where \(w^{S}(x^{0})\) is obtained by extracting the first \(d_{x}\) rows
from matrix \(-M_{\epsilon}^{S}(x^{0},y^{*}(x^{0}))^{-1}N_{\epsilon}^{S}(x^{0},y^{*}(x^{0}))\), with
\[M_{\epsilon}^{S}\triangleq\left[\begin{array}{cccc}\nabla_{y}^{2}\mathcal{L}& \nabla_{y}p_{I_{+}^{*}(x^{0})}^{\top}&\nabla_{y}q^{\top}&\nabla_{y}p_{S}^{\top} \\ \nabla_{y}p_{I_{+}^{*}(x^{0})}&0&0&0\\ \nabla_{y}q&0&0&0\\ \nabla_{y}p_{S}&0&0&0\end{array}\right],\]
and \(N_{\epsilon}^{S}\triangleq\left[\nabla_{xy}^{2}\mathcal{L}^{\top},\nabla_{x}p_{ I_{+}^{*}(x^{0})}^{\top},\nabla_{x}q^{\top},\nabla_{x}p_{S}^{\top}\right]^{\top}\). Here, \(S\) is a subset of \(I^{\epsilon}(x^{0})\), and \(w^{S}(x^{0})\) is the representative gradient to approximate \(\nabla y^{*}(x)\) inside the subset of \(\mathcal{B}(x^{0},\epsilon)\) which corresponds \(S\). Proposition 2 shows that the Clarke \(\epsilon\)-subdifferential \(\partial_{\epsilon}y^{*}(x^{0})\) can be approximated by representative gradient set \(G(x^{0},\epsilon)\), and the approximation error is in the order of \(\epsilon\).
**Proposition 2**.: _Suppose Assumptions 1, 2, 3 hold. Consider \(x^{0}\in\mathbb{R}^{d_{x}}\), and assume there exists a sufficiently small \(\epsilon>0\) such that, there exists \(x\in\mathcal{B}(x^{0},\epsilon)\) such that \(y^{*}(x)\) is not continuously differentiable at \(x\). Then, the following inequality holds for any \(z\in\mathbb{R}^{d_{x}}\),_
\[|d(z,\operatorname{conv}G(x^{0},\epsilon))-d(z,\bar{\partial}_{\epsilon} \varPhi(x^{0}))|<o(\epsilon).\]
The computation of the representative gradient \(w^{S}(x^{0})\) of Example 1 is demonstrated in Fig. 4. Since \(x^{0}\) is near the boundary of two subsets, Proposition 2 employs \(w^{S_{(1)}}(x^{0})=\nabla y^{*}(x^{0})\) to approximate the gradients of the subset with the inactive constraint (blue side), and \(w^{S_{(2)}}(x^{0})=\nabla\tilde{y}^{*}(x^{0})\) to approximate the gradients in the subset with the strictly active constraint (red side). The twice-differentiable function \(\tilde{y}^{*}(x)\) is an extension of \(y^{*}(x)\) (refer to the definition of \(x^{I}(\cdot)\) in (12.8) of (Dempe, 1998)). The gradients \(\nabla y^{*}(x^{0})\) and \(\nabla\tilde{y}^{*}(x^{0})\) are computed in the matrices \(-{M_{\epsilon}^{S_{(1)}}}^{-1}N_{\epsilon}^{S_{(1)}}\) and \(-{M_{\epsilon}^{S_{(2)}}}^{-1}N_{\epsilon}^{S_{(2)}}\), respectively. Then, the representative gradients \(w^{S_{(1)}}(x^{0})\) and \(w^{S_{(2)}}(x^{0})\) are used to approximate \(\bar{\partial}_{\epsilon}y^{*}(x^{0})\). Then, we can compute \(G(x^{0},\epsilon)=\{2x^{0},-1\}\) and \(\bar{\partial}_{\epsilon}\varPhi(x^{0})=[2x^{0}-2\epsilon,-1]\) with \(-1\in[x^{0}-\epsilon,x^{0}+\epsilon]\). The approximation error \(|d(z,\operatorname{conv}G(x^{0},\epsilon))-d(z,\bar{\partial}_{\epsilon} \varPhi(x^{0}))|\) is smaller than or equal to \(2\epsilon\) for any \(z\).
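A sketch of how the set \(G(x^{0},\epsilon)\) in (5) can be assembled for this example is given below (our illustration, assuming NumPy; since \(\varPhi(x)=y^{*}(x)\) here, the representative gradient \(w^{S}\) coincides with the corresponding element of \(G\) by the chain rule (3)).

```python
# Sketch of computing G(x0, eps) from (5) for Example 1 near the kink at x = -1,
# where I^eps(x0) = {1} and I_+^eps(x0) is empty, so S ranges over {} and {1}.
import numpy as np

def representative_gradients(x0):
    d2L_yy = 2.0            # g = (y - x^2)^2, so d^2 g / dy^2 = 2
    d2L_xy = -4.0 * x0
    dp_y, dp_x = -1.0, -1.0
    grads = []
    # S = {}: treat p_1 as inactive -> w = -(d2L_yy)^{-1} d2L_xy
    grads.append(-d2L_xy / d2L_yy)
    # S = {1}: treat p_1 as strictly active -> bordered KKT system
    M = np.array([[d2L_yy, dp_y], [dp_y, 0.0]])
    N = np.array([[d2L_xy], [dp_x]])
    grads.append((-np.linalg.solve(M, N))[0, 0])
    return grads

print(representative_gradients(-1.0))   # ~ [2 * x0, -1] = [-2.0, -1.0]
```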
### The Gradient Approximation Algorithm
Our proposed gradient approximation algorithm, summarized in Algorithm 1, is a line search algorithm. It uses the approximation of the Clarke subdifferential as the descent direction for line search. In iteration \(k\), we firstly solve the lower-level optimization problem \(P(x^{k})\) and obtain \(y^{*}(x^{k})\), \(\lambda(x^{k})\) and \(\nu(x^{k})\). To reduce the computation complexity, the solution in iteration \(k\) serves as the initial point to solve \(P(x^{k+1})\) in iteration \(k+1\). Secondly, we check the differentiability of \(y^{*}\) on \(\mathcal{B}\left(x^{k},\epsilon_{k}\right)\) and its implementation details are shown in Section 4.4. If \(y^{*}\) is continuously differentiable on \(\mathcal{B}\left(x^{k},\epsilon_{k}\right)\), we use \(\nabla\Phi(x^{k})\) to approximate \(\bar{\partial}_{\epsilon}\Phi(x^{k})\) which corresponds to Proposition 1. Otherwise, \(G(x^{k},\epsilon_{k})\) is used which corresponds to Proposition 2. The details of computing \(G(x^{k},\epsilon_{k})\) are shown in (5) and Section 4.4. Thirdly, the line search direction \(g^{k}\) is determined by a vector which has the smallest norm over all vectors in the convex hull of \(G(x^{k},\epsilon_{k})\). During the optimization steps, as the iteration number \(k\) increases, the approximation radius \(\epsilon_{k}\) decreases. According to Propositions 1 and 2, the approximation error of the Clarke subdifferential is diminishing. We next characterize the convergence of Algorithm 1.
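The smallest-norm vector over the convex hull of finitely many gradients \(g_{1},\dots,g_{m}\) can be found by minimizing \(\|\sum_{i}w_{i}g_{i}\|\) over the probability simplex. The sketch below (our illustration using SciPy's SLSQP solver, which is an assumption and not necessarily the solver used in Algorithm 1) computes such a minimum-norm element.

```python
# Minimum-norm element of conv{g_1, ..., g_m}, with g_i stored as rows of G.
import numpy as np
from scipy.optimize import minimize

def min_norm_element(G):
    m = G.shape[0]
    obj = lambda w: 0.5 * np.sum((G.T @ w) ** 2)   # 0.5 * ||sum_i w_i g_i||^2
    cons = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0},)
    res = minimize(obj, np.full(m, 1.0 / m), bounds=[(0.0, 1.0)] * m,
                   constraints=cons, method='SLSQP')
    return G.T @ res.x

G = np.array([[-2.0], [-1.0]])      # the two representative gradients from above
print(min_norm_element(G))          # -> about [-1.0]
```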
**Theorem 3**.: _Suppose Assumptions 1, 2, 3 hold and \(\Phi(x)\) is lower bounded on \(\mathbb{R}^{d_{x}}\). Let \(\{x^{k}\}\) be the sequence generated by Algorithm 1 with \(\nu_{\mathrm{opt}}=\epsilon_{\mathrm{opt}}=0\). Then,_
1. _For each_ \(k\)_, the line search in line_ 17 _has a solution_ \(t_{k}\)_._
2. \(\lim_{k\to\infty}\nu_{k}=0\)_,_ \(\lim_{k\to\infty}\epsilon_{k}=0\)_._
3. \(\liminf_{k\to\infty}d(0,\overline{\partial}\Phi(x^{k}))=0\)_._
4. _Every limit point of_ \(\{x^{k}\}\) _is Clarke stationary for_ \(\Phi\)_._
If the objective function \(\Phi\) is non-convex but smooth, property (iii) reduces to \(\liminf_{k\to\infty}\|\nabla\Phi(x^{k})\|=0\), which is a widely used convergence criterion for smooth and non-convex optimization [11, 13]. A sufficient condition for the existence of limit point of \(\{x^{k}\}\) is that the sequence is bounded.
### Implementation Details
**Check differentiability of \(y^{*}\) on \(\mathcal{B}\left(x^{0},\epsilon_{0}\right)\).** We propose Proposition 3 to check differentiability of \(y^{*}\) on \(\mathcal{B}\left(x^{0},\epsilon_{0}\right)\), which is required by line 3 of Algorithm 1.
**Proposition 3**.: _Consider \(x^{0}\in\mathbb{R}^{d_{x}}\) and \(\epsilon>0\). Suppose Assumptions 1, 2, 3 hold. Then, Lipschitz constants of functions \(\Phi(x)\), \(\lambda_{j}(x)\) and \(p_{j}(x,y^{*}(x))\) on \(\mathcal{B}(x^{0},\epsilon)\) exist and are denoted by \(l_{\Phi}(x^{0},\epsilon)\), \(l_{\lambda_{j}}(x^{0},\epsilon)\) and \(l_{p_{j}}(x^{0},\epsilon)\), respectively. Further, suppose the SCSC holds at \(y^{*}(x^{0})\) w.r.t. \(\lambda(x^{0})\). If there exists \(\epsilon_{1}>0\) such that_
\[\lambda_{j}(x^{0})>l_{\lambda_{j}}(x^{0},\epsilon_{1})\epsilon_{ 1}\ \text{ for all }j\in J(x^{0}), \tag{6}\] \[p_{j}(x^{0},y^{*}(x^{0}))<-l_{p_{j}}(x^{0},\epsilon_{1})\epsilon _{1}\ \text{ for all }j\not\in J(x^{0}),\]
_then \(y^{*}\) is continuously differentiable on \(\mathcal{B}(x^{0},\epsilon_{1})\)._
Proposition 3 shows that \(y^{*}\) is continuously differentiable on a neighborhood of \(x^{0}\) if, for any \(j\), either (i) \(\lambda_{j}\) is larger than zero by a non-trivial amount when the constraint \(p_{j}(x^{0},y^{*}(x^{0}))\) is active; or (ii) the satisfaction of \(p_{j}(x^{0},y^{*}(x^{0}))\) is non-trivial when it is inactive. For case (i), \(\lambda_{j}(x)>0\) and the constraint is strictly active for all \(x\in\mathcal{B}(x^{0},\epsilon)\); for case (ii), \(p_{j}(x,y^{*}(x))<0\) and the constraint is inactive for all \(x\in\mathcal{B}(x^{0},\epsilon)\). As an illustration on the problem in Example 1, shown in Fig. 2, \(y^{*}\) is continuously differentiable on \(\mathcal{B}(x^{0},\epsilon)\), \(\mathcal{B}(x^{1},\epsilon)\) and \(\mathcal{B}(x^{2},\epsilon)\), and the constraint is inactive or strictly active in each ball.
We evaluate the differentiability of \(y^{*}(x)\) and \(\Phi(x)\) on \(\mathcal{B}(x^{0},\epsilon)\) by Proposition 3. In particular, we approximatively regard that \(y^{*}\) and \(\Phi\) is continuously differentiable on \(\mathcal{B}(x^{0},\epsilon)\) if (6) is satisfied; otherwise, there exists \(x\in\mathcal{B}(x^{0},\epsilon)\) such that \(y^{*}\) and \(\Phi\) is not continuously differentiable at \(x\). The Lipschitz constants \(l_{\lambda_{j}}(x^{0},\epsilon)\) and \(l_{p_{j}}(x^{0},\epsilon)\) are computed as
\[l_{\lambda_{j}}(x^{0},\epsilon)=\|\nabla\lambda_{j}(x^{0})\|+\delta, \tag{7}\] \[l_{p_{j}}(x^{0},\epsilon)=\|\nabla_{x}p_{j}(x^{0},y^{*}(x^{0}))+\] \[\nabla y^{*}(x^{0})^{\top}\nabla_{y}p_{j}(x^{0},y^{*}(x^{0}))\|+\delta,\]
where \(\delta\) is a small parameter, and \(\nabla_{x}p_{j}(x^{0},y^{*}(x^{0}))\) and \(\nabla\lambda_{j}(x^{0})\) are given in (4). Here, for a function \(f\), we approximate its Lipschitz constant on \(\mathcal{B}(x^{0},\epsilon)\), which is defined as \(l_{f}(x^{0},\epsilon)\triangleq\sup_{x}\{\|\nabla f(x)\|:x\in\mathcal{B}(x^{0},\epsilon)\}\), as \(l_{f}(x^{0},\epsilon)\approx\|\nabla f(x^{0})\|+\delta\). As \(\epsilon\) decreases, \(f\) on \(\mathcal{B}(x^{0},\epsilon)\) approaches an affine function, and the approximation error of \(l_{f}(x^{0},\epsilon)\) decreases.
**Computation of \(G(x^{0},\epsilon)\).** To compute \(G(x^{0},\epsilon)\) in line 7 of Algorithm 1, we need to compute the sets \(I^{\epsilon}_{+}(x^{0})\) and \(I^{\epsilon}(x^{0})\) defined in Proposition 2. Similar to the idea in Proposition 3, we evaluate \(I^{\epsilon}_{+}(x^{0})\) and \(I^{\epsilon}(x^{0})\) as
\[I^{\epsilon}_{+}(x^{0})=\left\{j\in J(x^{0}):\lambda_{j}(x^{0} )>l_{\lambda_{j}}(x^{0},\epsilon)\epsilon\right\}, \tag{8}\] \[I^{\epsilon}_{-}(x^{0})=\left\{j\not\in J(x^{0}):\,p_{j}(x^{0},y^ {*}(x^{0}))<-l_{p_{j}}(x^{0},\epsilon)\epsilon\right\},\] \[I^{\epsilon}(x^{0})=\left\{j:j\not\in I^{\epsilon}_{+}(x^{0})\cup I ^{\epsilon}_{-}(x^{0})\right\}.\]
Recall that the KKT conditions hold at \(y^{*}(x^{0})\) for problem \(P(x^{0})\); then for any \(x\in\mathcal{B}(x^{0},\epsilon)\), \(p_{j}(x,y^{*}(x))=0\) for \(j\in I^{\epsilon}_{+}(x^{0})\) and \(\lambda_{j}(x)=0\) for \(j\in I^{\epsilon}_{-}(x^{0})\). Here, we also use \(l_{\lambda_{j}}\) and \(l_{p_{j}}\) given in (7) as the Lipschitz constants. When \(y^{*}\) and \(\lambda\) are not differentiable at \(x^{0}\), we sample a point \(x^{\prime}\) near \(x^{0}\) such that \(y^{*}\) and \(\lambda\) are differentiable at \(x^{\prime}\), and then replace \(\nabla\lambda(x^{0})\) and \(\nabla y^{*}(x^{0})\) in (7) by \(\nabla\lambda(x^{\prime})\) and \(\nabla y^{*}(x^{\prime})\).
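The index-set tests in (8) amount to simple thresholding with the Lipschitz estimates from (7). A minimal sketch is given below (the function name and inputs are illustrative assumptions of ours, not the authors' interface).

```python
# Sketch of the index-set classification in (8) for the ball B(x0, eps).
import numpy as np

def classify_constraints(lam, p_vals, l_lam, l_p, eps):
    """Return (I_plus, I_minus, I_eps) following the tests in (8)."""
    I_plus, I_minus, I_eps = [], [], []
    for j, (lj, pj) in enumerate(zip(lam, p_vals)):
        if pj == 0.0 and lj > l_lam[j] * eps:       # strictly active on the ball
            I_plus.append(j)
        elif pj < -l_p[j] * eps:                    # inactive on the ball
            I_minus.append(j)
        else:                                       # may switch inside the ball
            I_eps.append(j)
    return I_plus, I_minus, I_eps

# Example: two active constraints, the second close to non-strict activity.
print(classify_constraints(lam=[0.5, 1e-4], p_vals=[0.0, 0.0],
                           l_lam=[1.0, 1.0], l_p=[1.0, 1.0], eps=0.01))
# -> ([0], [], [1])
```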
## 5 Experiments
### Hyperparameter Optimization
Hyperparameter optimization has been widely studied [14, 15, 16]. However, existing methods cannot handle hyperparameter optimization of constrained learning problems, such as the supported vector machine (SVM) classification [15], constrained reinforcement learning [17, 18, 19]. We apply the proposed algorithm to hyperparameter optimization of constrained learning.
**Hyperparameter Optimization of SVM.** We optimize the hyperparameter in the SVM optimization, i.e., the penalty terms of the separation violations. We conduct the experiment on linear SVM and kernelized SVM on the dataset of diabetes in [16]. To the best of our knowledge, this is the first time that hyperparameter optimization is solved for SVM. We provide details of the problem formulation and the implementation setting in Appendix B.1. As shown in Fig. 5, the loss nearly converges for both linear and kernelized SVM, and the final test accuracy is much better than that of randomly selected hyperparameters, which serve as the initial point of the optimization.
**Data Hyper-Cleaning.** Data hyper-cleaning [15, 16] trains a classifier in a setting where the labels of training data are corrupted with a probability \(p\) (i.e., the corruption rate). We formulate the problem as a hyperparameter optimization of SVM and conduct experiments on a breast cancer dataset provided in [16]. The problem formulation and the implementation setting are provided in Appendix B.1. We compare our gradient approximation method with the direct gradient descent used in [1, 1]. Fig. 6 shows that our method converges faster than the benchmark method in terms of the loss and accuracy in both the training and test stages. Moreover, both methods achieve a test accuracy of \(96.2\%\) using the corrupted data (\(p=0.4\)). This accuracy is comparable to the test accuracy of \(96.5\%\) of an SVM model trained on uncorrupted data.
### Meta-Learning
Meta-learning approaches for few-shot learning have been formulated as bilevel optimization problems in [10, 11, 12]. In particular, the problem in MetaOptNet [11] has the form of problem (2) with the lower-level constraints. However, its optimization does not explicitly consider the non-differentiability of the objective function and cannot guarantee convergence. In the experiment, we compare our algorithm with the optimization in MetaOptNet on datasets CIFAR-FS [1] and FC100 [13], which are widely used for few-shot learning. Appendix B.2 provides details of the problem formulation and the experiment setting.
Fig. 7 compares our gradient approximation method and the direct gradient descent in MetaOptNet [11]. The two algorithms share all training configurations, including the network structure, the learning rate in each epoch and the batch size. For both the CIFAR-FS and FC100 datasets, our method converges faster than the optimization in MetaOptNet in terms of the training loss and test accuracy, and achieves a higher final test accuracy. Note that the only difference between the two algorithms in this experiment is the computation of the descent direction. The result shows that the Clarke subdifferential approximation in our algorithm works better than the gradient as the descent direction. This is consistent with Proposition 2, where a set of representative gradients, instead of a single gradient, is more suitable for approximating the Clarke subdifferential. More comparison results with other meta-learning approaches are given in Appendix B.2.
## 6 Conclusion
We develop a gradient approximation method for bilevel optimization where the lower-level problem is convex with equality and inequality constraints and the upper-level problem is non-convex. The proposed method efficiently approximates the Clarke subdifferential of the non-smooth objective function and theoretically guarantees convergence. Our experiments validate the theoretical analysis and demonstrate the effectiveness of the algorithm.
Figure 5: Loss and accuracy vs. running time in hyperparameter optimization of linear and kernelized SVM.
Figure 6: Comparison of gradient descent (GD) and the gradient approximation method (GAM) in data hyper-cleaning with the corruption rate \(p=0.4\). Left: training and test losses of GD and GAM vs. running time; right: training and test accuracy of GD and GAM with \(p=0.4\), and training and test accuracy with \(p=0\), vs. running time.
Figure 7: Comparison of MetaOptNet and the gradient approximation method (GAM). For each dataset, left: training loss vs. running time; right: test accuracy vs. running time.
Acknowledgements
This work was partially supported by NSF awards ECCS 1846706 and ECCS 2140175.
|
2303.13085 | AstroSat and NuSTAR observations of XTE J1739-285 during the 2019-2020
outburst | We report results from a study of XTE J1739-285, a transient neutron star low
mass X-ray binary observed with AstroSat and NuSTAR during its 2019-2020
outburst. We detected accretion-powered X-ray pulsations at 386 Hz during very
short intervals (0.5--1 s) of X-ray flares. These flares were observed during
the 2019 observation of XTE J1739-285. During this observation, we also
observed a correlation between intensity and hardness ratios, suggesting an
increase in hardness with the increase in intensity. Moreover, a thermonuclear
X-ray burst detected in our AstroSat observation during the 2020 outburst
revealed the presence of coherent burst oscillations at 383 Hz during its decay
phase. The frequency drift of 3 Hz during X-ray burst can be explained with r
modes. Thus, making XTE J1739-285 belong to a subset of NS-LMXBs which exhibit
both nuclear- and accretion-powered pulsations. The power density spectrum
created using the AstroSat-LAXPC observations in 2020 showed the presence of a
quasi-periodic oscillation at ~ 0.83 Hz. Our X-ray spectroscopy revealed
significant changes in the spectra during the 2019 and 2020 outburst. We found
a broad iron line emission feature in the X-ray spectrum during the 2020
observation, while this feature was relatively narrow and has a lower
equivalent width in 2019,~when the source was accreting at higher rates than
2020. | Aru Beri, Rahul Sharma, Pinaki Roy, Vishal Gaur, Diego Altamirano, Nils Andersson, Fabian Gittins, T. Celora | 2023-03-23T07:50:33Z | http://arxiv.org/abs/2303.13085v1 | # _AstroSat_ and _NuSTAR_ observations of XTE J1739\(-\)285 during the 2019-2020 outburst
###### Abstract
We report results from a study of XTE J1739\(-\)285, a transient neutron star low mass X-ray binary observed with _AstroSat_ and _NuSTAR_ during its 2019-2020 outburst. We detected accretion-powered X-ray pulsations at 386 Hz during very short intervals (0.5-1 s) of X-ray flares. These flares were observed during the 2019 observation of XTE J1739\(-\)285. During this observation, we also observed a correlation between intensity and hardness ratios, suggesting an increase in hardness with the increase in intensity. Moreover, a thermonuclear X-ray burst detected in our _AstroSat_ observation during the 2020 outburst revealed the presence of coherent burst oscillations at 383 Hz during its decay phase. The frequency drift of 3 Hz during the X-ray burst can be explained with r modes, thus making XTE J1739\(-\)285 a member of the subset of NS-LMXBs which exhibit both nuclear- and accretion-powered pulsations. The power density spectrum created using the _AstroSat_-LAXPC observations in 2020 showed the presence of a quasi-periodic oscillation at \(\sim\) 0.83 Hz. Our X-ray spectroscopy revealed significant changes in the spectra between the 2019 and 2020 outbursts. We found a broad iron line emission feature in the X-ray spectrum during the 2020 observation, while this feature was relatively narrow and had a lower equivalent width in 2019, when the source was accreting at higher rates than in 2020. A hard X-ray tail was observed during the 2019 observations, indicating the presence of a non-thermal component in the X-ray spectra.
keywords: accretion, accretion discs - stars: neutron - X-rays: bursts - X-rays: binaries - X-rays: individual (XTE J1739\(-\)285)
## 1 Introduction
Low-mass X-ray binary (LMXB) systems consist of a neutron star (NS) or a black hole (BH) that accretes matter from a low-mass (\(\leq 1~{}M_{\sun}\)) companion star via Roche-lobe overflow, forming an accretion disc (Shakura and Sunyaev, 1973). Weakly magnetized, accreting NSs in LMXBs can be spun up to rates of several 100 Hz (Alpar et al., 1982). Accretion-powered millisecond X-ray pulsars (AMXPs) (see e.g., Wijnands and van der Klis, 1998; Wijnands, 2006) and nuclear-powered X-ray millisecond pulsars (NMXPs) (see e.g., Strohmayer et al., 1998, 1999; Strohmayer, 2001) belong to this class of NS-LMXB systems. To date, only 25 AMXPs (see e.g., Patruno and Watts, 2012; Campana and Di Salvo, 2018; Di Salvo and Sanna, 2020; Bult et al., 2022; Ng et al., 2022) and 19 confirmed NMXPs (see e.g., Galloway et al., 2008; Bhattacharyya, 2021, for a recent review) are known. All these AMXPs are transient in nature, which means they spend most of their time in quiescence, with an X-ray luminosity of \(L_{X}\sim 10^{30}-10^{33}\) erg s\({}^{-1}\), interrupted by occasional outburst episodes. For the vast majority of AMXPs, \(L_{X}\) during outburst remains below 10% of the Eddington luminosity (see Table 2 of Marino et al., 2019). Spectral state transitions between hard and soft states are not often observed during these outbursts (Di Salvo and Sanna, 2020). In NMXPs, coherent millisecond-period brightness oscillations have been observed during thermonuclear X-ray bursts (sudden eruptions in X-rays, intermittently observed from NS-LMXBs). There also exists a partial overlap between AMXPs and NMXPs, which means some AMXPs are also NMXPs, and vice versa (see e.g., Chakrabarty et al., 2003; Strohmayer et al., 2003; Altamirano et al., 2010; Bhattacharyya, 2021).
XTE J1739\(-\)285 is a transient NS LMXB system, discovered in October 1999 with the Rossi X-ray Timing Explorer (_RXTE_; Markwardt et al., 1999). This source has displayed irregular outburst patterns. During the 1999 outburst, the \(2-10\) keV source flux evolved between \(1-5\times 10^{-9}\) erg s\({}^{-1}\) cm\({}^{-2}\) over a period of roughly two weeks (Markwardt et al., 1999). Bulge scans performed with the _RXTE_-PCA revealed two short and weak outbursts of XTE J1739\(-\)285 in 2001 and 2003 (Kaaret et al., 2007). In August 2005, the source became active again and was first detected with _INTEGRAL_ at a \(3-10\) keV flux of \(\sim 2\times 10^{-9}\) erg s\({}^{-1}\) cm\({}^{-2}\) (Bodaghee et al., 2005). Within about a month, the flux changed by a factor of ten (Shaw et al., 2005). Further observations made with _RXTE_
between October and November 2005 showed that the flux evolved between \(4\times 10^{-10}\) and \(1.5\times 10^{-9}\) erg s\({}^{-1}\) cm\({}^{-2}\). Moreover, after a period of Solar occultation, XTE J1739\(-\)285 was still visible in early 2006 (Chenevez et al., 2006). In 2012, the source underwent another outburst (Sanchez-Fernandez et al., 2012). After a seven-year quiet period, the 2019 outburst occurred; it was first detected with _INTEGRAL_ (Mereminskiy & Grebenev, 2019) and was later followed up with the Neutron Star Interior Composition Explorer (_NICER_) (Bult et al., 2019). The 2-10 keV peak flux was about \(5\times 10^{-9}\) erg s\({}^{-1}\) cm\({}^{-2}\) as measured with _MAXI_-GSC during the 2019 outburst (Negoro et al., 2020). Very recently, in 2020, XTE J1739\(-\)285 was again found to be active with _INTEGRAL_ (Sanchez-Fernandez et al., 2020); the rebrightening phase of XTE J1739\(-\)285 was soon confirmed with _Swift_ (Bozzo et al., 2020), and the source was extensively followed with _NICER_.
Since its discovery, several X-ray bursts have been found in this source. 43 events have been cataloged in the Multi-Instrument Burst Archive (MINBAR, Galloway et al., 2020), including most detections with the JEM-X instrument on _INTEGRAL_ and six with _RXTE_. Kaaret et al. (2007) found oscillations at 1122 Hz in one of these bursts detected with _RXTE_, suggesting it to be the fastest spinning neutron star. However, the burst oscillation at 1122 Hz was never confirmed afterwards for the same burst using independent time windows (Galloway et al., 2008; Bilous & Watts, 2019), casting doubt on the previous detection. Very recently, during the rebrightening phase of XTE J1739\(-\)285 in 2020, _NICER_ detected a total of 32 X-ray bursts (Bult et al., 2020). These authors did not find any evidence of variability near 1122 Hz, and instead found burst oscillations at around 386 Hz in two X-ray bursts. _AstroSat_ also observed two X-ray bursts during the same outburst, but a detailed timing study has not been reported (Chakraborty & Banerjee, 2020).
In this paper, we report our results from _AstroSat_ and _NuSTAR_ observations of XTE J1739\(-\)285 during its 2019 and 2020 outbursts. We have performed a detailed timing and spectral study of this source.
## 2 Observations and Data Analysis
XTE J1739\(-\)285 was observed with _AstroSat_ and _NuSTAR_ on October 1, 2019 and again on February 19, 2020. Table 1 gives the log of the observations used in this work. Figure 1 shows the _MAXI_-GSC light curve of XTE J1739\(-\)285 during the period 2019-2020. During the 2019 outburst, the _AstroSat_ and _NuSTAR_ observations were made close to the peak of the outburst, while during the rebrightening phase in 2020 the source was caught during the early rise. The hardness ratio computed using the _MAXI_ light curves is shown in the bottom panel of Figure 1.
### Laxpc
LAXPC is one of the primary instruments on-board _AstroSat_. It consists of three co-aligned identical proportional counter detectors, viz. LAXPC10, LAXPC20 and LAXPC30. Each of these works in the energy range of 3\(-\)80 keV, independently records the arrival time of each photon with a time resolution of 10 \(\mu\)s, and has five layers, each with 12 detector cells (for details see, Yadav et al., 2016; Antia et al., 2017, 2021).
Due to the gain instability caused by gas leakage, LAXPC10 data were not used, while LAXPC30 was switched off during these observations1. Therefore, we have used data from LAXPC20 for our work. These data were collected in the Event Analysis (EA) mode, which contains information about the time, channel number and anodeID of each event. The LaxpcSoft v3.3 software package2 was used to extract light curves and spectra. LAXPC has a dead-time of 42 \(\mu\)s and the extracted products are dead-time corrected. Background files are generated using blank sky observations (see, Antia et al., 2017, for details). To minimize the contribution of the background in our analysis, we have used data from the top layers (L1, L2) (also see, Beri et al., 2019; Sharma et al., 2020, 2023, for details). Barycentric correction was performed using the tool as1bary3. We used the best available position of the source, R.A. (J2000) = \(17^{h}39^{m}53.95^{s}\) and Dec. (J2000) = \(-28^{\circ}29^{\prime}46.8^{\prime\prime}\), obtained with _Chandra_ (Krauss et al., 2006).
Footnote 1: LAXPC30 is switched off since 8 March 2018, refer to [http://astrosat-ssc.iucaa.in/](http://astrosat-ssc.iucaa.in/)
Footnote 2: http://www.tifr.res.in/~astrosat_laxpc/LaxpcSoft.html
Footnote 3: http://astrosat-ssc.iucaa.in/?q=data_and_analysis
### SXT
The Soft X-ray Telescope (SXT) is a focusing X-ray telescope with a CCD in the focal plane that can perform X-ray imaging and spectroscopy in the 0.3\(-\)7 keV energy range (Singh et al., 2014; Singh et al., 2017; Bhattacharyya et al., 2021). XTE J1739\(-\)285 was observed in the Photon Counting (PC) mode with SXT (Table 1). Level 1 data were processed with the AS1SXTLevel2\(-\)1.4b pipeline to produce level 2 clean event files. Events from each orbit were merged using the SXT Event Merger Tool (Julia Code4). These merged events were used to extract images, light curves and spectra using the ftool task xselect, provided as part of heasoft version 6.29c. A circular region with a radius of 15 arcmin centered on the source was used to extract source events. For the spectral analysis, we used the following files provided by the SXT team4: the background spectrum (SkyBkg_comb_EL3p5Cl_Rd16p0_v01.pha) and the spectral response matrix file (sxt_pc_mat_g0to12.rmf). The ancillary response files (ARF) were generated using sxtARFModule, starting from the standard ARF (sxt_pc_excl00_v04_20190608.arf) file provided by the SXT team. The SXT spectra were grouped to have at least 25 counts/bin.
Footnote 4: http://www.tifr.res.in/~astrosat_sxt/dataanalysis.html
### _NuSTAR_
The Nuclear Spectroscopic Telescope Array (_NuSTAR_; Harrison et al., 2013) consists of two telescopes, which focus X-rays between 3 and 79 keV onto two identical focal planes (FPMA and FPMB). We used the software distributed with heasoft version 6.29c and the latest calibration files (version 20220331) for the _NuSTAR_ data reduction and analysis. The calibrated and screened event files were generated using the task nupipeline. A circular region of radius 80 arcsec centred at the source position was used to extract source events. Background events were extracted from a source-free region. The nuproducts tool was used to generate light curves, spectra, and response files. The spectra were grouped to have a minimum of 25 counts/bin. The FPMA/FPMB light curves were background corrected and averaged using the ftool task lcmath.
## 3 Results
### Timing Results
#### 3.1.1 X-ray Light curves
Figure 2 shows the 3-30 keV _AstroSat_-LAXPC light curves of XTE J1739\(-\)285 during its observations in 2019 (Obs 1) and 2020 (Obs 2). A large variation in the count rate was observed during the 2019 outburst (left plot of Figure 2). To track the spectral evolution during these flares (segments where the count rate varies between 500 and 700 count s\({}^{-1}\)), we computed the hardness ratio (HR), shown in the bottom panels of Figure 2. The HR was computed by taking the ratio of the count rates in the 10-30 keV and 3-10 keV energy bands. We observed a correlation between intensity and hardness ratio, suggesting an increase in hardness with increasing intensity. Similar behaviour was also observed in the _NuSTAR_ light curves (see Figure A1).
On the other hand, the LAXPC light curves in 2020 (right plot of Figure 2) showed an almost constant behaviour in the count rates as well as in the hardness ratio. The average count rate is approximately 65 count s\({}^{-1}\). Moreover, an X-ray burst was also observed during this observation. This is in contrast to the two bursts reported by Chakraborty & Banerjee (2020), as we found that the second burst at \(\sim 60.7\) ks was filtered out due to the Good Time Interval (GTI) selection. The _NuSTAR_ light curves also showed a constant behaviour (Figure A1), along with the presence of two X-ray bursts. However, the X-ray bursts in _NuSTAR_ were not observed at the same time as those with _AstroSat_.
#### 3.1.2 Power Density Spectra
The 3-30 keV LAXPC light curves, created using data from the top layer with a time resolution of 10 ms, were used to create the power density spectra (PDS) shown in Figure 3. We used the ftool task powspec for this purpose. For observations made in 2019 (Obs 1), the PDS could be well fitted using a single Lorentzian. However, modelling the PDS created using observations made in 2020 (Obs 2) required a combination of four Lorentzian components, each given by,
Figure 1: The top panel shows the 2–20 keV _MAXI_/GSC light curve of XTE J1739\(-\)285 during its 2019-2020 outburst. The blue and purple regions represent the time of _AstroSat_ and _NuSTAR_ observations in 2019, respectively, while deep blue and cyan colour show observations in 2020. The red arrow corresponds to the start of the 2019 outburst (MJD 58753.28) as reported in Mereminskiy & Grebenev (2019) while the purple arrow marks the re-brightening phase of XTE J1739\(-\)285 in 2020 (MJD 58887.39; Sanchez-Fernandez et al. 2020). The hardness ratio is plotted in the bottom panel.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
Instrument & Obs ID & Start Time & Stop Time & Exposure Time \\
 & & yyyy-mm-dd hh:mm:ss (MJD) & yyyy-mm-dd hh:mm:ss (MJD) & ks \\ \hline
LAXPC & 9000003208 (Obs 1) & 2019-10-01 02:08:54 (58757.09) & 2019-10-02 03:24:47 (58758.18) & 94.5 \\
SXT & 9000003208 (Obs 1) & 2019-10-01 03:16:54 (58757.18) & 2019-10-02 03:52:55 (58758.18) & 82 \\
_NuSTAR_ & 90501343002 (Obs 1) & 2019-10-01 22:46:26 (58757.94) & 2019-10-02 21:41:33 (58758.90) & 82.5 \\
LAXPC & 9000003524 (Obs 2) & 2020-02-19 22:45:39 (58898.95) & 2020-02-20 23:19:40 (58899.97) & 88.5 \\
SXT & 9000003524 (Obs 2) & 2020-02-19 22:48:26 (58898.95) & 2020-02-20 23:19:38 (58899.97) & 88.2 \\
_NuSTAR_ & 90601307002 (Obs 2) & 2020-02-19 09:30:06 (58898.39) & 2020-02-20 02:31:47 (58899.10) & 61 \\ \hline
\end{tabular}
\end{table}
Table 1: Log of X-ray observations.
\[P(\nu)=\frac{r^{2}\Delta}{2\pi}\frac{1}{(\nu-\nu_{0})^{2}+(\Delta/2)^{2}} \tag{1}\]
where \(\nu_{0}\) is the centroid frequency, \(\Delta\) is the full width at half maximum, and \(r\) is the integrated fractional rms (see Belloni et al., 2002). The quality factor, defined as \(Q=\nu_{0}/\Delta\), was used to establish the presence of a quasi-periodic oscillation (QPO), as \(Q\geq 3\) indicates the presence of a QPO in the PDS (van der Klis, 1989).
Two of the Lorentzian functions were used to model the band-limited noise, while the other two fit the observed QPOs (Table 2). One of these QPOs was found at \(\sim 0.83\) Hz with \(Q\sim 4\) and a fractional rms of \(7\%\). This was detected at \(9\sigma\), where the significance was calculated by dividing the normalization of the Lorentzian function by its negative \(1\sigma\) error. We also found a less significant (\(\sim 2.7\sigma\)) QPO feature at \(0.35\) Hz with \(Q\sim 3.5\) and rms of \(\sim 4\%\).
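For reference, the sketch below fits the Lorentzian of Equation (1) to a PDS and derives \(Q\); the synthetic frequencies and powers here are placeholders for the rms-normalized, white-noise-subtracted LAXPC PDS.

```python
# Fit the Lorentzian of Eq. (1) to a (synthetic) power density spectrum
# and compute the quality factor Q = nu0 / Delta.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(nu, r, nu0, delta):
    return (r**2 * delta / (2 * np.pi)) / ((nu - nu0)**2 + (delta / 2)**2)

# Synthetic PDS: a QPO at 0.83 Hz with ~7% rms, plus a little noise
freq = np.linspace(0.05, 5.0, 400)
power = lorentzian(freq, 0.07, 0.83, 0.20) + 1e-5 * np.random.randn(freq.size)

(r, nu0, delta), _ = curve_fit(lorentzian, freq, power, p0=[0.05, 0.8, 0.3])
print(f"nu0 = {nu0:.2f} Hz, Q = {nu0 / delta:.1f}, rms = {100 * r:.1f}%")
```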
#### 3.1.3 Energy-resolved thermonuclear burst profile
To check the energy dependence of the burst observed in the LAXPC light curves during Obs 2, we created burst profiles in the following energy bands: 3-6 keV, 6-12 keV, 12-18 keV, 18-24 keV and 24-30 keV (see Figure 4). We observed that the burst was significantly detected up to 24 keV. In a few X-ray sources (such as Aql X\(-\)1 and 4U 1728\(-\)34), a dip has been observed in the hard X-ray light curves during bursts (see Maccarone & Coppi, 2003; Chen et al., 2013; Kajava et al., 2017, for details). Therefore, we investigated the presence of any dip in the 30-80 keV light curves (also see Beri et al., 2019). No dips were found in the hard X-ray light curves during the burst. The rise time and exponential decay time measured using the 3-30 keV burst
Figure 3: The rms-normalized power density spectrum (white noise subtracted) of XTE J1739\(-\)285 from the two LAXPC observations. _Left panel:_ PDS from 2019 observation (Obs 1). _Right panel:_ PDS from 2020 observation (Obs 2).
Figure 2: The background-corrected light curves of XTE J1739\(-\)285 obtained from LAXPC20 for the observations of 2019 (left panel) and 2020 (right panel). Both light curves are binned at 20 s and cover the energy range of 3–30 keV. The bottom panels present the hardness ratio, i.e., the ratio of the count rate in the 10–30 keV band to that in the 3–10 keV band.
light curve are \(4.7\pm 0.1\) s and \(11.4\pm 0.3\) s, respectively, consistent with that observed with _NICER_ (Bult et al., 2020).
#### 3.1.4 Burst Oscillations (BOs)
We performed a search for oscillations below 2048 Hz along the entire duration of the burst. Events from only LAXPC20 were taken into account. We performed a Fourier transform (FT) of each successive 1 s segment (shifting the 1 s time window) of the input barycentre-corrected event file corresponding to the burst time interval. The FT scan was repeated with the start time offset by 0.5 s. While we did not see any signal at \(\sim\)1122 Hz, a sharp signal at \(\sim\)383 Hz was clearly seen during the decay phase of the burst in the Leahy-normalized (Leahy et al., 1983) power spectrum (Figure 5). We then examined the region that showed the signal at \(\sim\)383 Hz and attempted to maximize the measured power, \(P_{\rm m}\), by varying the start and end points of the segment in steps of 0.1 s and trying segment lengths of 1 s and 2 s within a time window of 3 s (20+10=30 overlapping segments). We checked two energy bands: 3\(-\)10 keV and 3\(-\)25 keV. The number of trials was thus \(30\times 2=60\). The single-trial chance probability, i.e., the probability of obtaining \(P_{\rm m}\) solely due to noise, is given by the survival function \(e^{-P_{\rm m}/2}\), where \(P_{\rm m}\) is the maximized power obtained through the trials. The trials-corrected chance probability is therefore \(x=e^{-P_{\rm m}/2}\times 60\), and the confidence level is \(X\sigma\), where \(X=\sqrt{2}\,{\rm erf}^{-1}(1-x)\). The signal was detected with \(\sim 3.4\sigma\) (\(P_{\rm m}=23.08\)) confidence in a 1 s window during the decay of the burst. The dynamic power spectrum on the right side of Figure 5 indicates the presence of a strong signal between the 13 and 13.5 s segments.
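The arithmetic behind this estimate is compact enough to restate; in the snippet below the power value and trial count are taken from the text, assuming Leahy powers distributed as \(\chi^{2}\) with 2 degrees of freedom.

```python
# Trials-corrected significance of a Leahy power P_m over n_trials searches.
import numpy as np
from scipy.special import erfinv

P_m, n_trials = 23.08, 60
x = np.exp(-P_m / 2.0) * n_trials       # chance probability after trials
X = np.sqrt(2.0) * erfinv(1.0 - x)      # equivalent Gaussian confidence level
print(f"chance probability = {x:.2e}, significance = {X:.1f} sigma")  # ~3.4 sigma
```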
We also evaluated the significance of the signal using a Monte Carlo simulation that generates Poisson-distributed events following the first 20 s of the burst light curve in 1 s bins. The LAXPC dead time is modelled by removing any event that occurs within 43 \(\mu\)s of a previous event. The number of events generated in each time bin is greater than the observed counts, so that after the dead-time correction the number of events is identical to that in the actual light curve within Poisson fluctuations. We generate 10000 trial bursts and calculate successive 1 s FFTs (i.e., 20 FFTs per burst), searching for peaks in the 10-1000 Hz range. The chance probability of occurrence of the observed signal is obtained by counting the fraction of trial bursts with Leahy powers equal to or exceeding 21.6 (i.e., the Leahy power corresponding to a single-trial probability of \(2\times 10^{-5}\)). For the 3-25 keV light curve simulation, we find 19 bursts which have at least one signal above 21.6 in the frequency range 381-387 Hz. Thus, we estimate the chance probability to be 19/10000 = \(1.9\times 10^{-3}\), which implies a significance of 3.1 \(\sigma\), since \(X=\sqrt{2}\,{\rm erf}^{-1}(1-x)\) where \(x\) is the chance probability. For the 3-10 keV energy band, we find 21 bursts which have at least one signal above 21.6 in the frequency range 381-387 Hz. The chance probability and significance are, thus, \(2.1\times 10^{-3}\) and 3.1 \(\sigma\), respectively. A similar search into the _NuSTAR_ data of the two bursts did not yield any significant feature.
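A pared-down version of this Monte Carlo is sketched below, with a toy exponential-decay burst profile standing in for the observed one and far fewer trial bursts; the dead-time filter is non-paralyzable, matching the description above.

```python
# Minimal Monte Carlo: Poisson events following a 1-s-binned burst profile,
# a 43-microsecond non-paralyzable dead time, and a search of successive
# 1 s Leahy-normalized FFTs for spurious 10-1000 Hz peaks above 21.6.
import numpy as np

def apply_deadtime(times, deadtime=43e-6):
    keep, last = [], -np.inf
    for t in times:
        if t - last >= deadtime:            # drop events too close to the last kept one
            keep.append(t)
            last = t
    return np.array(keep)

def simulate_burst(rate_per_bin, pad=1.2):  # oversample; dead time trims events
    t = np.concatenate([i + np.sort(np.random.rand(np.random.poisson(r * pad)))
                        for i, r in enumerate(rate_per_bin)])
    return apply_deadtime(t)

def max_leahy_power(events, tstart, fs=2048):
    counts, _ = np.histogram(events, bins=fs, range=(tstart, tstart + 1.0))
    leahy = 2.0 * np.abs(np.fft.rfft(counts)[1:])**2 / counts.sum()
    freqs = np.fft.rfftfreq(fs, d=1.0 / fs)[1:]
    return leahy[(freqs > 10) & (freqs < 1000)].max()

rate_per_bin = 1000.0 * np.exp(-np.arange(20) / 11.4)   # toy burst-decay profile
n_trials = 200                                          # the paper uses 10000
n_false = sum(any(max_leahy_power(simulate_burst(rate_per_bin), s) > 21.6
                  for s in range(20)) for _ in range(n_trials))
print("estimated chance probability ~", n_false / n_trials)
```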
#### 3.1.5 Search for accretion-powered oscillations during flares
We looked for the presence of a \(\sim 383\) Hz signal during the flares in the 3-30 keV LAXPC20 light curves observed during Obs 1, and found a few instances which showed a clear feature at nearby frequencies. The flares during which oscillations were found are shown in the shaded region of the left-hand plot in Figure 2. A representative power spectrum with the maximum power and the corresponding dynamic power spectrum are shown in Figure 6. The confidence level of this 386 Hz signal detection is estimated to be \(\sim 3.3\sigma\), considering 30 trials.
#### 3.1.6 X-ray Pulse Profiles
To estimate the fractional amplitude of these oscillations, we constructed the pulse profiles shown in Figure 7. The phase was determined from the folded pulse profiles modelled with the function \(A+B\sin(2\pi\nu t)\). Here, \(B/A\) gives the half-fractional amplitude, and the fractional rms amplitude is given by \(B/(A\sqrt{2})\). We obtained a fractional amplitude of \(29\pm 4\%\) during the burst oscillations, while during the flares it was observed to be \(31\pm 4\%\).
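The amplitude estimate reduces to a sinusoid fit of the folded profile, as in the sketch below; the 16-bin profile is synthetic and the amplitude values are placeholders.

```python
# Fit A + B*sin(2*pi*phi + phi0) to a folded pulse profile; B/A is the
# half-fractional amplitude and B/(A*sqrt(2)) the fractional rms amplitude.
import numpy as np
from scipy.optimize import curve_fit

def model(phi, A, B, phi0):
    return A + B * np.sin(2 * np.pi * phi + phi0)

phase = np.linspace(0.0, 1.0, 16, endpoint=False) + 1.0 / 32
profile = model(phase, 100.0, 29.0, 0.3) + np.random.randn(phase.size)

(A, B, phi0), _ = curve_fit(model, phase, profile, p0=[90.0, 10.0, 0.0])
print(f"B/A = {abs(B) / A:.2f}, rms amplitude = {abs(B) / (A * np.sqrt(2)):.2f}")
```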
### Spectral results
We performed the spectral fitting using xspec 12.12.0 (Arnaud, 1996). To model the hydrogen column density (\(N_{H}\)), we used tbabs with WILM abundances (Wilms et al., 2000). All quoted errors are at the 90% confidence level.
\begin{table}
\begin{tabular}{l l c c} \hline \hline
Model & parameter & 2019 & 2020 \\ \hline
Lorentzian 1 & \(\nu\) (Hz) & 0.01 (fixed) & \(0.83\pm 0.01\) \\
 & \(\Delta\) (Hz) & \(0.14\pm 0.02\) & \(0.20^{+0.01}_{-0.03}\) \\
 & rms (\%) & \(7.3\pm 0.3\) & \(6.9\pm 0.4\) \\
Lorentzian 2 & \(\nu\) (Hz) & -- & \(0.35\pm 0.02\) \\
 & \(\Delta\) (Hz) & -- & \(0.10^{+0.11}_{-0.07}\) \\
 & rms (\%) & -- & \(4.44^{+2.1}_{-0.84}\) \\
Lorentzian 3 & \(\nu\) (Hz) & -- & 0 \\
 & \(\Delta\) (Hz) & -- & \(3.9^{+0.8}_{-0.5}\) \\
 & rms (\%) & -- & \(4.6\pm 0.4\) \\
Lorentzian 4 & \(\nu\) (Hz) & -- & 0 \\
 & \(\Delta\) (Hz) & -- & \(101^{+23}_{-16}\) \\
 & rms (\%) & -- & \(1.8\pm 0.1\) \\ \hline
\end{tabular}
\end{table}
Table 2: Fit parameters obtained using the 2019 and 2020 LAXPC observations. Quoted errors are at the 90% confidence level.
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline
Model & parameter & 2020 & 2019 & 2019 & 2019 \\ \hline
tbabs & \(N_{\rm H}\) (\(10^{22}\) cm\({}^{-2}\)) & \(1.37^{+0.5}_{-0.1}\) & \(2.3\pm 0.1\) & \(1.8\pm 0.1\) & \(2.1\pm 0.1\) \\
 & & \(1.36\pm 0.01\) & \(1.25\pm 0.01\) & \(1.23\pm 0.01\) & \\
 & Norm & \(1.17^{+0.3}_{-0.3}\) & \(68\pm 3\) & \(1.06^{+0.1}_{-0.03}\) & \(1.00^{+0.1}_{-0.03}\) \\
nthcomp & \(\Gamma\) & \(1.75\pm 0.02\) & \(1.68\pm 0.02\) & \(1.68\pm 0.02\) & \(1.68\pm 0.02\) \\
 & & \(1.9^{+1.5}_{-1.5}\) & \(2.95\pm 0.03\) & \(2.36\pm 0.03\) & \(2.33\pm 0.03\) \\
 & & \(0.60\pm 0.05\) & \(0.21\pm 0.04\) & \(0.20\pm 0.04\) & \(0.32\pm 0.04\) \\
 & & \(0.017\pm 0.000\) & \(0.48\pm 0.07\) & \(0.30\pm 0.04\) & \(0.41\pm 0.04\) \\
Gaussian & \(E\) (keV) & \(6.5\pm 0.2\) & & & \\
 & \(\sigma\) (keV) & \(1.0\pm 0.2\) & \(0.15\pm 0.25\) & \(0.12\pm 0.12\) & \(0.09^{+0.10}_{-0.03}\) \\
 & & \(0.21\pm 0.01\) & \(0.20\pm 0.01\) & \(0.02\pm 0.01\) & \(0.02\pm 0.01\) \\
 & & \(0.2\pm 0.1\) & \(0.09^{+0.5}_{-0.5}\) & \(0.9\pm 0.2\) & \(0.7\pm 0.3\) \\
powerlaw & Norm & & \(3.2\pm 1.5\) & \(5.8\pm 1.1\) & \(3.3\pm 0.9\) \\ \hline
\end{tabular}
\end{table}
Table 3: Best-fitting spectral parameters for the 2020 and 2019 observations. Quoted errors are at the 90% confidence level.
Figure 4: _Left panel_: Energy-resolved burst profiles from the LAXPC data. _Right panel_: Time-resolved spectroscopy of the burst, showing, from top to bottom, the burst count rate in 3–20 keV, the blackbody temperature in keV, the blackbody emission radius in km, the absorbed 3–20 keV flux (\(\times 10^{-9}\) erg cm\({}^{-2}\) s\({}^{-1}\)), and the reduced \(\chi^{2}\) for each fit.
Figure 5: _Left panel_: Power spectrum for a 1 s window during the decay phase of the 3–10 keV burst, showing burst oscillations at 383 Hz. The sampling rate is 2048 Hz. _Right panel_: Dynamic power spectra for 5 s window during the decay of the burst.
Figure 6: _Left panel_: Power spectrum for a 1 s window during the flare, showing oscillations at 386 Hz. The sampling rate is 2048 Hz. _Right panel_: Dynamic power spectra for 8 s window during the same. Each segment is 1 s long and overlaps the previous one by 0.5 s.
#### 3.2.1 X-ray spectra during 2019 observations
The LAXPC spectra showed large calibration uncertainties (Figure 10), with the background dominating above 20 keV (see Figure 11). Therefore, we used the better-quality _NuSTAR_ data, together with contemporaneous SXT data for energy coverage below 3 keV, to perform broadband X-ray spectroscopy. The SXT spectra were corrected for gain offset using the gain fit command with the slope fixed at 1.0 and a best-fit offset of \(\sim 0.022\) keV. An offset correction of 0.02\(-\)0.09 keV is needed in quite a few SXT observations (see e.g., Beri et al., 2021). As recommended in the SXT data analysis guide, a systematic error of 2% was also included in the spectral fits.
As a large variation in the count rate as well as in the hardness ratio was observed during the 2019 observations of XTE J1739\(-\)285, we divided the data based on the source count rate (see Figure 10). Two spectra were obtained: one for times when the source count rate was \(\leq\) 120 count s\({}^{-1}\) (spectra 1) and the other for count rates above this value (spectra 2). The FPMA and FPMB spectra were fit simultaneously. A constant model was added to account for flux calibration uncertainties. The value of the constant was fixed at 1 for FPMA and was allowed to vary for FPMB and SXT.
We tried to model the continuum emission observed in both these spectra using the physical thermal Comptonization model nthcomp (Zdziarski et al., 1996; Zycki et al., 1999). A powerlaw model was used to fit the flat residuals above 40 keV. This returned a value of the photon index (\(\Gamma\)) close to zero; therefore, we fixed its value to 0. The resultant fit showed a low-energy excess, indicating the presence of thermal emission. Therefore, we added a thermal component, bbodyrad. The addition of this model component led to a significant improvement of \(\Delta\chi^{2}=-2217\) and \(\Delta\chi^{2}=-5441\) for 2 degrees of freedom for spectra 1 and spectra 2, respectively. This model (TBabs\(\times\)(bbodyrad + nthcomp + pow)) fitted the continuum well. Fe-K\({}_{\alpha}\) emission lines at around 6.4 keV have been observed in various neutron star low-mass X-ray binaries and discussed by several authors (see e.g., Bhattacharyya & Strohmayer, 2007; Cackett et al., 2008; Papitto et al., 2009; Sharma et al., 2019, 2020). Therefore, we added a Gaussian component to model the emission feature observed in the X-ray spectra of XTE J1739\(-\)285. The best-fit parameters indicated the presence of a narrow emission feature at around 6.4 keV. Although an improvement in the spectral fit was observed (\(\Delta\chi^{2}=-33\) and \(\Delta\chi^{2}=-44\) for 3 degrees of freedom for spectra 1 and spectra 2), the equivalent width is low. The best-fitting parameters are given in Table 3. To evaluate the chance probability of the improvement from adding the extra Gaussian component, we simulated 100,000 data sets using simftest in xspec. The evaluated chance probability was \(<10^{-6}\) for both spectra 1 and 2, rejecting the null hypothesis and confirming the presence of an emission feature at 6.4 keV in the spectrum. Since we did not observe significant differences in the best-fit parameters of spectra 1 and spectra 2, we also performed time-averaged spectroscopy using the same model as described above. Figure 8 shows the SXT and _NuSTAR_ spectrum observed during the 2019 outburst along with the best-fit residuals.
Figure 8: SXT (green) and _NuSTAR_ (FPMA, black; FPMB, red) spectra from the observation of 2019, fitted with the best-fit model. The lower panels show the residuals when the absorbed Comptonization model with the power-law, thermal and emission components was used, respectively. The residuals show the presence of a narrow emission feature around 6.4 keV in the spectrum. Spectra were rebinned for plotting purposes only.
Figure 7: _Top panel_: Pulse profile in the 3–10 keV band for a 1 s time window during the decay phase of the X-ray burst. The smooth curve shows the sinusoidal fit with frequency 383.14 Hz. The fractional amplitude is \(29\pm 4\)%. _Bottom panel_: Pulse profile in the 3–30 keV band for a 1 s time window during the flare. The smooth curve shows the sinusoidal fit with frequency 386.15 Hz. The fractional amplitude is \(31\pm 4\)%. In both panels, a second cycle is shown for clarity.
#### 3.2.2 X-ray spectra during 2020 observations
The X-ray bursts observed during Obs 2 were removed from the _NuSTAR_ data for spectroscopy of the persistent emission. No spectral variation was observed; therefore, we used the total spectrum (see Figure 2). Moreover, the contemporaneous SXT observation could also be used to cover low energies (0.5-7 keV). The constant model was kept fixed at 1 for FPMA and was allowed to vary for FPMB and SXT.
The following model best fit the X-ray spectra: tbabs\(\times\)(nthcomp+bbodyrad+Gaussian). In contrast to the 2019 outburst, we did not find a hard power-law tail in the X-ray spectra. A broad emission feature (Figure 9) was, however, needed to obtain the best fit (see Table 3).
#### 3.2.3 Reflection spectrum
We also examined whether the broad iron line feature could be better described using a relativistic reflection model. We fitted the spectra with the self-consistent reflection model relxillCp5 (Dauser et al., 2014; Garcia et al., 2014). This model includes the thermal Comptonization model nthcomp as the illuminating continuum. To limit the number of free parameters, we used a single emissivity profile (\(r^{-q}\)) with the emissivity index fixed at \(q=3\) (Cackett et al., 2010; Wilkins and Fabian, 2012). We fixed the outer radius at \(R_{\rm out}=1000R_{\rm G}\), where \(R_{\rm G}=GM/c^{2}\) is the gravitational radius. The iron abundance \(A_{\rm Fe}\) was fixed to 1 in units of the solar abundance. The dimensionless spin parameter \(a\) can be calculated from the spin period using the relation \(a=0.47/P{\rm [ms]}\) (Braje et al., 2000). Assuming a spin frequency (\(\nu\)) of 386 Hz, we fixed \(a\) at 0.18.
Footnote 5: http://www.sternwarte.uni-erlangen.de/~dauser/research/relxill/
The right panel of Figure 4 presents the time-resolved spectroscopy of the X-ray burst: the top panel shows the variation of the count rate in the 3-20 keV energy band. The temperature (\(kT\)) evolution, the blackbody emission radius in units of km, the absorbed flux in units of \(10^{-9}\) erg cm\({}^{-2}\) s\({}^{-1}\) in the energy range of 3-20 keV, and the reduced \(\chi^{2}\) for each fit are plotted from the second to the bottom panel, respectively. The blackbody emission radius was calculated from the normalization of bbodyrad, \(Norm=(R_{\rm km}/D_{\rm 10\,kpc})^{2}\), and we used a source distance of 7.3 kpc (Galloway et al., 2008). The peak temperature and bolometric flux were found to be \(2.32\pm 0.09\) keV and \(1.1\times 10^{-8}\) erg cm\({}^{-2}\) s\({}^{-1}\), respectively.
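The radius track in Figure 4 follows directly from the bbodyrad normalization, as in the short sketch below; the normalization value is a placeholder, while the 7.3 kpc distance is the one adopted above.

```python
# XSPEC bbodyrad: Norm = (R_km / D_10kpc)^2, so R_km = sqrt(Norm) * D / 10 kpc.
import numpy as np

norm, d_kpc = 50.0, 7.3               # hypothetical fit normalization; adopted distance
R_km = np.sqrt(norm) * (d_kpc / 10.0)
print(f"blackbody emission radius ~ {R_km:.1f} km")
```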
## 4 Discussion
In this work, we performed a detailed timing and spectral analysis of XTE J1739\(-\)285 during its 2019-2020 outburst. We discuss our timing and spectral results as follows.
### Timing Behaviour
The X-ray light curves during the 2019 observations (Obs 1, Figure 2) showed a large variability in the count rates, which has never been reported earlier from this source. This is in contrast to that observed in the X-ray light curves of Obs 2. Moreover, during the observations in 2019, the hardness ratio showed an increase with count rate. However, no significant spectral variation was observed during the 2020 observations. The LAXPC light curves showed a single X-ray burst, while two were observed during the _NuSTAR_ observations in 2020 (Obs 2). The energy-resolved X-ray burst light curve with LAXPC indicates that the burst is significantly detected up to 24 keV (Figure 4). Searching for BOs requires an instrument capable of providing \(\mu\)s time resolution. After the launch of _AstroSat_ (Singh et al., 2016) and _NICER_ (Arzoumanian et al., 2014), the hunt for BOs began once again. We searched for BOs during the burst and found a peak in the PDS around 383.14 Hz. These oscillations were observed during the decay of the burst at a significance of 3.4\(\sigma\). Bult et al. (2020) observed similar oscillations at 386 Hz during the rise phase of the burst. A large fractional half-amplitude of the signal, measured at \(29\pm 4\)% (equivalent to an rms amplitude of 21\(\pm\)3%), was observed, consistent with the _NICER_ measurement (rms amplitude of 26\(\pm\)4%) during the rising phase (Bult et al., 2020). Although a large value of the fractional rms amplitude during the decay phase of an X-ray burst has been observed in other sources such as 4U 1636-536 (see e.g., Mahmoodifar et al., 2019; Roy et al., 2021), the mechanism behind this is not clear. Usually, decay-phase oscillations are explained with surface modes, but the fractional rms amplitude is typically small (about 10%). We could not perform a detailed energy- and phase-resolved analysis due to the limited number of counts, owing to the unavailability of the two other LAXPC detectors.
Since BOs arise due to rotationally induced modulation of a brightness asymmetry on the stellar surface, they are believed to closely track the spin frequency of the neutron star (see, e.g., Strohmayer et al., 1996; Chakrabarty et al., 2003; Watts, 2012). Motivated by this, and by the fact that there exists an overlap between NMXPs and AMXPs, we searched for \(\sim 386\) Hz oscillations during the flares seen in the LAXPC light curves of XTE J1739\(-\)285 (Obs 1). We found a significant detection at around \(\sim 386\) Hz, which strengthened our confidence in the earlier detection of the signal during the burst. To the best of our knowledge, there has been no previous report of an effort to search for a neutron star spin frequency using short segments (1 s) during a flare. It has been found that in AMXPs, coherent X-ray pulsations are present both during the outburst and quiescence phases (see, e.g., Di Salvo and Sanna, 2020, and references therein), and there also exist sources which show intermittent pulsations (see e.g., Galloway et al., 2007; Altamirano et al., 2008; Casella et al., 2008). XTE J1739\(-\)285 is reminiscent of Aql X-1, where coherent X-ray pulsations were detected only during a short snapshot of about 150 s. Perhaps this indicates that XTE J1739\(-\)285 belongs to the class of AMXPs which are also NMXPs.
Frequency drifts of 1-3 Hz have been observed in many thermonuclear X-ray bursts, for example in 4U 1636-536 (Galloway et al., 2008). Therefore, if 386 Hz is the spin frequency of XTE J1739\(-\)285, then the observed BO (\(\nu_{\alpha}\)\(\sim\) 383.14 Hz) during the decay phase can be explained by surface modes (r modes), for which \(\nu_{\alpha}=m\nu_{s}+\nu_{r}\), where \(\nu_{s}\) is the spin frequency of the star, and the sign of \(\nu_{r}\) is positive or negative depending on whether the mode is prograde (eastbound) or retrograde (westbound), respectively. R modes propagating in the retrograde direction may lead to the downward drift that we observe.
XTE J1739\(-\)285 was observed to change its spectral state (soft to hard) during its 2005 outburst (Shaw et al., 2005). This behaviour is in contrast to that observed in AMXPs, which are believed to be hard X-ray transients. Accretion-powered pulsations have been detected in only a few (25) NS-LMXBs. The reason why only a small fraction of these show pulsations is still not clear. There is a possibility that a rigorous search using very narrow time intervals may reveal pulsations in other NS-LMXBs as well.
We also observed significant changes in the PDS between the 2019 and 2020 outbursts. No significant feature was detected in the PDS during the 2019 outburst of XTE J1739\(-\)285; however, a strong QPO at around 0.83 Hz was found in the PDS of the _AstroSat_-LAXPC light curves during the 2020 observations. A QPO around 1 Hz has also been found in other NS-LMXBs such as 4U 1746\(-\)37, 4U 1323\(-\)62 and EXO 0748\(-\)676 (see e.g., Jonker et al., 2000, and references therein). This feature was observed only in the low-intensity state and was absent when the source was in a high accretion state, consistent with our results.
### Spectral Behaviour
The X-ray continuum of XTE J1739\(-\)285 during both the 2019 and 2020 observations could be well described using an absorbed blackbody plus thermal Comptonized emission. The best-fit values of the photon index and the electron temperature indicate that the spectrum was softer in 2019 compared to the observations in 2020. Moreover, a broad iron emission feature was found in Obs 2, in contrast to that observed during Obs 1. The observed iron line feature was quite narrow and had a lower equivalent width during the 2019 observation (see Table 3).
Another difference we observed was that we did not require an additional power-law component to obtain the best fit during the 2020 observation. One of the reasons for the lack of a hard X-ray tail in the spectra during the 2020 observations could be that these were made at lower flux levels compared to the 2019 observations. A power-law-like hard tail is generally observed during the soft state of a source, and can contribute up to a few per cent of the total energy flux (Di Salvo et al., 2000, 2001; D'Aì et al., 2007; Pintore et al., 2016). The X-ray spectra of several LMXBs are known to exhibit the hard power-law tail, but the exact cause is not known yet (e.g., Di Salvo et al., 2000, 2001; D'Aì et al., 2007). Several scenarios have been proposed to explain the hard power-law tails,
such as non-thermal Comptonization emission due to the presence of non-thermal, relativistic electrons in a local outflow (e.g., Di Salvo et al., 2000) or in a corona (Poutanen & Coppi, 1998), or the bulk motion of accreting material close to the NS (e.g., Titarchuk & Zannias, 1998). Another possibility discussed in the literature is synchrotron emission from a relativistic jet escaping from the system (Markoff et al., 2001). Thus, one would also expect to detect radio emission from XTE J1739\(-\)285. Bright et al. (2019) reported a 3\(\sigma\) upper limit of 210 \(\mu\)Jy at the position of XTE J1739\(-\)285 during the 2019 rising phase with the MeerKAT radio telescope.
When fitted using the relativistic reflection model 'relxillCp', the X-ray spectra during the 2020 outburst revealed an inner disc radius of \(3.8R_{\rm ISCO}\) (\(\sim 42.6\) km), with a lower limit of \(1.8R_{\rm ISCO}\) at the 90% confidence limit. \(R_{\rm ISCO}\) can be approximated using \(R_{\rm ISCO}\simeq 6R_{\rm G}(1-0.54a)\) (Miller et al., 1998). This implies \(R_{\rm in}\geq 9.75R_{\rm G}\) (20 km) for a NS mass of \(1.4M_{\sun}\). Thus, this suggests that the accretion disc is probably truncated moderately away from the NS surface during the 2020 outburst. Our spectral results obtained for Obs 2 are also consistent with those reported in Mondal et al. (2022).
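The numbers quoted above follow directly from this approximation, as the short check below shows; only standard constants and the values already given in the text are used.

```python
# Check: R_G = GM/c^2, a = 0.47/P[ms], R_ISCO ~ 6 R_G (1 - 0.54 a).
G, c, M_sun = 6.674e-8, 2.998e10, 1.989e33     # cgs units

M = 1.4 * M_sun
R_G = G * M / c**2 / 1e5                       # gravitational radius in km
a = 0.47 / (1000.0 / 386.0)                    # 386 Hz -> P ~ 2.59 ms -> a ~ 0.18
R_isco = 6.0 * R_G * (1.0 - 0.54 * a)
print(f"R_G = {R_G:.2f} km, R_ISCO = {R_isco:.1f} km")
print(f"R_in = 3.8 R_ISCO = {3.8 * R_isco:.1f} km")    # ~42.6 km, as quoted
```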
In the case of NS LMXBs, the accretion disc has been observed to be truncated at moderate radii due to the pressure exerted by the magnetic field of the NS (Cackett et al., 2009; Degenaar et al., 2014). Thus, if the disc is truncated at the magnetospheric radius, one can estimate the magnetic field strength. The magnetic dipole moment is given by the following expression (Ibragimov & Poutanen, 2009),
\[\mu_{\rm 25}=1.168\,k_{\rm A}^{-7/4}\left(\frac{M}{1.4M_{\sun}}\right)^{1/4}\left(\frac{R_{\rm in}}{10\,{\rm km}}\right)^{7/4}\left(\frac{f_{\rm mag}}{\eta}\,\frac{F}{10^{-9}~{\rm erg~cm^{-2}~s^{-1}}}\right)^{1/2}\frac{D}{7.3\,{\rm kpc}},\]
where \(\mu_{\rm 25}=\mu/10^{25}\) G cm\({}^{3}\), \(\eta\) is the accretion efficiency in the Schwarzschild metric, \(f_{\rm mag}\) is the anisotropy correction (which is close to unity; Ibragimov & Poutanen, 2009) and \(k_{\rm A}\) is a geometry coefficient expected to be \(\simeq 0.5-1.1\) (Psaltis & Chakrabarty, 1999; Long et al., 2005; Kluzniak & Rappaport, 2007). We assumed \(f_{\rm mag}=1\), \(k_{\rm A}=1\) and \(\eta=0.1\) (Cackett et al., 2009; Degenaar et al., 2017; Sharma et al., 2019). We then obtained \(\mu=4.3\times 10^{26}\) G cm\({}^{3}\) for \(R_{\rm in}=42.6\) km; this leads to a magnetic field strength of \(B=4.3\times 10^{8}\) G for a NS radius of 10 km. Our estimate of the magnetic field strength is within the range determined by Cackett et al. (2009); Mukherjee et al. (2015); Ludlam et al. (2017). We would also like to mention that the \(R_{\rm in}\) inferred in AMXPs typically lies within a range of 6-15 \(R_{\rm G}\) (e.g., Papitto et al., 2009), but larger values of about 15-40 \(R_{\rm G}\) have also been observed (e.g., Papitto et al., 2010, 2013).
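Evaluating the expression with the stated assumptions reproduces the quoted dipole moment; in the sketch below the persistent flux value is a placeholder chosen for illustration, since it is the one input not restated in this paragraph.

```python
# Evaluate mu_25 with f_mag = 1, k_A = 1, eta = 0.1, M = 1.4 M_sun,
# R_in = 42.6 km and D = 7.3 kpc (so the mass and distance ratios are 1).
R_in_km = 42.6
f_mag, k_A, eta = 1.0, 1.0, 0.1
F_9 = 0.85                                   # hypothetical flux in 1e-9 erg/cm^2/s

mu_25 = (1.168 * k_A**-1.75 * 1.0**0.25 * (R_in_km / 10.0)**1.75
         * (f_mag / eta * F_9)**0.5 * 1.0)
mu = mu_25 * 1e25                            # G cm^3
B = mu / (1e6)**3                            # equatorial field for R = 10 km = 1e6 cm
print(f"mu ~ {mu:.1e} G cm^3, B ~ {B:.1e} G")  # ~4.3e26 G cm^3, ~4.3e8 G
```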
The time-resolved spectroscopy of the X-ray burst observed with LAXPC did not indicate the presence of photospheric radius expansion. The maximum temperature measured during the X-ray burst is \(2.32\pm 0.09\) keV, at a bolometric flux of about \(1.1\times 10^{-8}\) erg cm\({}^{-2}\) s\({}^{-1}\).
## 5 Conclusions
In this work, we have studied XTE J1739\(-\)285 during its hard and soft X-ray spectral state using observations with _AstroSat_ and _NuSTAR_.
* The X-ray light curves during the 2019 observations indicated the presence of flares. The flares were found to be harder compared to the rest of the emission. Such variability in the X-ray light curves has never been reported earlier from this source. The 2020 observations, made during the hard spectral state, did not exhibit similar variability in the count rates.
* We observed a QPO at 0.83 Hz with an rms variability of about 7% during the hard state of XTE J1739\(-\)285 in 2020 (Obs 2). A similar feature was not found during the soft state of the source, in the observations made in 2019 (Obs 1).
* Coherent X-ray pulsations at 386 Hz were observed during short segments of the X-ray flares, making XTE J1739\(-\)285 an intermittent X-ray pulsar. Moreover, the BOs observed around 383 Hz during the decay phase of the X-ray burst could be explained with r modes.
* Our X-ray spectroscopy results indicate significant changes in the X-ray spectrum of XTE J1739\(-\)285 between Obs 1 and Obs 2. Obs 1, made close to the peak of the outburst, showed a spectrum which is softer compared to that observed in Obs 2, the observation made during the early rise of the rebrightening phase in 2020.
## Acknowledgements
We would like to thank the referee for their comments and useful advice on our manuscript. A.B. is funded by an INSPIRE Faculty grant (DST/INSPIRE/04/2018/001265) by the Department of Science and Technology, Govt. of India. She is also grateful to the Royal Society, U.K. A.B. and P.R. acknowledge the financial support of ISRO under the _AstroSat_ archival data utilization program (No. DS-2B-13013(2)/4/2019-Sec. 2). R.S. was supported by the INSPIRE grant (DST/INSPIRE/04/2018/001265) awarded to A.B. during the course of this project. This research has made use of _AstroSat_, an ISRO mission, and _NuSTAR_, a NASA mission. The data were obtained from the Indian Space Science Data Centre (ISSDC) and the High Energy Astrophysics Science Archive Research Center (HEASARC), provided by NASA's Goddard Space Flight Center.
## Data Availability
Data used in this work can be accessed through the Indian Space Science Data Center (ISSDC) at [https://astrobrowse.issdc.gov.in/astro_archive/archive/Home.jsp](https://astrobrowse.issdc.gov.in/astro_archive/archive/Home.jsp) and HEASARC archive at [https://heasarc.gsfc.nasa.gov/cgi-bin/W3Browse/w3browse.pl](https://heasarc.gsfc.nasa.gov/cgi-bin/W3Browse/w3browse.pl).
|
2307.06619 | Unveiling the origins of quasi-phase matching spectral imperfections in
thin-film lithium niobate frequency doublers | Thin-film lithium niobate (TFLN) based frequency doublers have been widely
recognized as essential components for both classical and quantum optical
communications. Nonetheless, the efficiency of these devices is hindered by
imperfections present in the quasi-phase matching (QPM) spectrum. In this
study, we present a thorough analysis of the spectral imperfections in TFLN
frequency doublers with varying lengths, ranging from 5 mm to 15 mm. Employing
a non-destructive diagnostic method based on scattered light imaging, we
identify the sources and waveguide sections that contribute to the
imperfections in the QPM spectrum. Furthermore, by mapping the TFLN film
thickness across the entire waveguiding regions, we successfully reproduce the
QPM spectra numerically, thus confirming the prominent influence of film
thickness variations on the observed spectral imperfections. This comprehensive
investigation provides valuable insights into the identification and mitigation
of spectral imperfections in TFLN-based frequency doublers, paving the way
toward the realization of nonlinear optical devices with enhanced efficiency
and improved spectral fidelity. | Jie Zhao, Xiaoting Li, Ting-Chen Hu, Ayed Al Sayem, Haochuan Li, Al Tate, Kwangwoong Kim, Rose Kopf, Pouria Sanjari, Mark Earnshaw, Nicolas K. Fontaine, Cheng Wang, Andrea Blanco-Redondo | 2023-07-13T08:36:02Z | http://arxiv.org/abs/2307.06619v1 | Unveiling the origins of quasi-phase matching spectral imperfections in thin-film lithium niobate frequency doublers
###### Abstract
Thin-film lithium niobate (TFLN) based frequency doublers have been widely recognized as essential components for both classical and quantum optical communications. Nonetheless, the efficiency of these devices is hindered by imperfections present in the quasi-phase matching (QPM) spectrum. In this study, we present a thorough analysis of the spectral imperfections in TFLN frequency doublers with varying lengths, ranging from 5 mm to 15 mm. Employing a non-destructive diagnostic method based on scattered light imaging, we identify the sources and waveguide sections that contribute to the imperfections in the QPM spectrum. Furthermore, by mapping the TFLN film thickness across the entire waveguiding regions, we successfully reproduce the QPM spectra numerically, thus confirming the prominent influence of film thickness variations on the observed spectral imperfections. This comprehensive investigation provides valuable insights into the identification and mitigation of spectral imperfections in TFLN-based frequency doublers, paving the way toward the realization of nonlinear optical devices with enhanced efficiency and improved spectral fidelity.
## 1 Introduction
Thin-film periodically poled lithium niobate (PPLN) waveguides offer a compelling platform for achieving highly efficient wavelength conversion devices, leveraging their high second-order nonlinear coefficient (\(\chi^{(2)}\)) and tight confinement of optical modes. These waveguides have found diverse applications in second-harmonic generation (SHG) [1, 2], entangled photon-pair generation [3, 4], optical parametric amplification [5], optical isolation [6], and all-optical switching [7]. However, compared to their bulk counterparts, these waveguides exhibit an increased susceptibility to fabrication inhomogeneities, presenting a significant challenge to their overall performance [8, 9, 10]. These inhomogeneities can give rise to unfavorable effects in the QPM spectrum, including broadened central peaks and unwanted side lobes, resulting in reduced conversion efficiency. Furthermore, as phase errors accumulate along the waveguide's length, longer devices experience more pronounced impacts from fabrication non-uniformity, hindering the further enhancement of power conversion efficiency [11]. To date, the demonstrated thin-film PPLN devices have typically had lengths ranging between 4 and 6 mm. As a result, the overall conversion efficiency of these devices remains lower than what has been reported for their bulk counterparts [12]. Therefore, it is imperative to direct research efforts towards investigating approaches that can enable the fabrication of longer thin-film PPLN devices, while ensuring
good spectral fidelity. This research is crucial to bridge the performance gap with their bulk counterparts and fully unlock the potential of thin-film PPLN technology.
Prominent studies have examined the impact of fabrication inhomogeneities on the QPM spectrum from Ti-indiffused and diced Zinc-indiffused bulk lithium niobate (LN) waveguides [13, 14, 15], with a specific focus on waveguide width variation as the primary error source. In particular, in Ref. [13], 83 mm Ti-indiffused PPLN waveguides were diced into shorter sections to analyze the generated QPM spectra from different regions of the waveguides, thereby revealing the evolution of the QPM spectrum along the waveguide's length. In the context of thin-film PPLN devices, numerical methods have been employed to replicate the measured QPM spectra based on an estimated TFLN film thickness profile [9, 10, 16]. However, comprehensive experimental investigations akin to those described in Ref. [13], which would greatly enhance our understanding of QPM spectral imperfections and benefit the development of longer devices, are currently lacking for thin-film PPLN devices.
Here, we introduce a non-destructive optical diagnostic method that enables the visualization of the QPM spectrum at any location along the thin-film PPLN waveguide. To accomplish this, we utilize a monochrome camera positioned perpendicular to the chip surface to capture the scattered second-harmonic (SH) light across the entire PPLN waveguiding region. By acquiring multiple images while sweeping the pump wavelengths, the local QPM spectra at different locations on the waveguide can subsequently be calculated and obtained. This innovative technique uncovers the contributions from different sections of the waveguide to the final QPM spectrum, facilitating an in-depth understanding of the imperfections observed in the measured spectra. To evaluate the efficacy of our approach, we conducted investigations on thin-film PPLN waveguides with lengths of 5 mm, 7.5 mm, 12.5 mm, and 15 mm. Our findings indicate that the observed imperfections in the spectra primarily stem from variations in TFLN film thickness. Subsequently, we performed a thorough mapping of the film thickness across the entire waveguiding regions. This comprehensive mapping allowed us to successfully numerically reproduce the measured QPM spectra, further confirming the significant influence of film thickness variations on the spectral imperfections.
Figure 1: (a) Schematic of the experimental setup for SHG characterization. (b) Images of the collected scattered SH light from a 5 mm long PPLN waveguide with the pump wavelength at 1594.4 nm and 1599 nm respectively. (c) Cross-section of the PPLN waveguides. (d) Second-harmonic microscope image of the periodically poled lithium niobate thin film.
Methods
The thin-film PPLN waveguides were fabricated using 5 mol% MgO-doped 300 nm x-cut lithium niobate on insulator (LNOI) wafers. In this report, all measured waveguides have a targeted etching depth of 150 nm and a waveguide top width of approximately 850 nm, as shown in Fig. 1(c). These waveguides were designed for efficient SHG from telecommunication wavelengths to near-visible wavelengths. The required poling period for QPM is about 2.46 \(\mu\)m, and all the interacting waves are in the fundamental TE mode.
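For context, the poling period follows from the first-order QPM condition \(\Lambda=\lambda_{\rm pump}/[2(n_{2\omega}-n_{\omega})]\); the sketch below uses placeholder effective indices, not the simulated values for these waveguides.

```python
# First-order QPM poling period for SHG: Lambda = lambda / (2 * (n_SH - n_pump)).
lam_pump_um = 1.55            # pump wavelength in um
n_pump, n_sh = 1.85, 2.165    # hypothetical TE effective indices at omega, 2*omega

Lambda_um = lam_pump_um / (2.0 * (n_sh - n_pump))
print(f"poling period ~ {Lambda_um:.2f} um")   # ~2.46 um for these indices
```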
In the initial fabrication step, poling electrodes with lengths varying from 5 mm to 15 mm were formed on the surface of TFLN using photolithography, while maintaining a fixed poling period of 2.46 \(\mu\)m for each electrode. High-voltage pulses, as described in Ref. [17], were then applied to these electrodes for periodic poling of the TFLN. Figure 1(d) presents a representative second-harmonic (SH) microscope image of the poled area, revealing domain structures with ideal uniformity and duty cycles, which are crucial for efficient frequency conversion. Multiple SH microscope images were generated at different locations along each waveguide. It is worth emphasizing that achieving high-fidelity poling of TFLN with long lengths and small periods has been challenging. As the poling periods decrease, adjacent domains tend to merge, while longer waveguides introduce more variations in the duty cycle. Despite these challenges, we have achieved remarkable results in our devices. To the best of our knowledge, the 7.5 mm long PPLN waveguide we have fabricated is currently the longest demonstrated on the x-cut TFLN platform. Here, even for the 15 mm long poling electrodes, the calculated poling duty cycle remains highly uniform, at 52.83% \(\pm\) 2.28%. Therefore, contributions from imperfect periodic poling of TFLN (as discussed in detail through theoretical calculations in Ref. [18]) to the QPM spectra are not considered in our subsequent discussions and numerical simulations. The waveguides were fabricated in the poled areas through aligned electron beam lithography and dry etching, followed by a wet etching process for sidewall deposition cleaning. Finally, the chip edges were cleaved to ensure optimal edge coupling.
The experimental setup for SHG characterization is depicted in Fig. 1(a). A tunable laser operating at telecommunication wavelengths serves as the pump source for the PPLN waveguides, with its polarization adjusted by a set of polarization controllers to ensure TE mode injection at the waveguide's input facet. Light is coupled into and out of the chip using tapered lensed fibers to optimize the coupling efficiency. While sweeping the pump wavelength, the corresponding SH light is collected at the output waveguide facet using a lensed fiber and detected by a Si photodetector, following the conventional method for obtaining the QPM spectrum. Additionally, we developed and employed a novel approach for SHG characterization, which not only provides QPM spectrum information at any location along the waveguide but also facilitates the identification of sources causing spectral imperfections. Specifically, while sweeping the pump wavelengths, a monochrome camera (Allied Vision Alvium 1800 U-511) was used to image the scattered SH light along the waveguide region. For example, Fig. 1(b) presents the measured scattered SH light images from a 5 mm thin-film PPLN waveguide with pump wavelengths of 1594.4 nm and 1599 nm, respectively. As we can see in these images, SH light with different wavelengths is scattered from distinct parts of the waveguide. In the case of this waveguide, the 799.5 nm SH light forms an unwanted side lobe in the QPM spectrum, and Fig. 1(b) indicates that it is primarily generated near the initial section of the PPLN. Such information proves valuable for comprehending and addressing QPM spectral imperfections, particularly in longer waveguides, as further elaborated in the subsequent sections.
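The acquisition loop behind this approach is conceptually simple. The sketch below outlines it in Python; the `laser` and `camera` objects are hypothetical stand-ins for the instrument drivers, not the actual control code used in this work:

```python
import numpy as np

def acquire_sh_stack(laser, camera, wavelengths_nm):
    """Sweep the pump wavelength and grab one scattered-SH image per step.

    Returns an array of shape (n_wavelengths, height, width), i.e., one
    monochrome frame of the waveguiding region per pump wavelength.
    """
    frames = []
    for wl in wavelengths_nm:
        laser.set_wavelength(wl)       # hypothetical driver call
        frames.append(camera.grab())   # hypothetical driver call
    return np.stack(frames)
```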
## 3 Results
We measured SHG from waveguides with lengths of 5 mm, 7.5 mm, 12.5 mm, and 15 mm. Note that these waveguides are located on the same chip, and their relative positions are depicted in
Fig. 3(a). The QPM spectra obtained by the conventional method (as described in the previous paragraph) are shown as the blue curves in Figs. 2(c), (f), (i), and (l). Upon comparing these plots, it becomes evident that increasing the waveguide length leads to the generation of more side lobes, thereby diminishing the spectrum fidelity and the overall achievable conversion efficiency. The variations in the main peak wavelengths can be attributed to a combination of slight differences in the designed waveguide widths and the film thicknesses from one waveguide to another, as illustrated in Fig. 3(b). The spectrum fidelity values, calculated using the definition from Ref. [11], are 0.46, 0.24, 0.17, and 0.15 respectively for the four lengths. To gain deeper insights into the origins of these side lobes, we captured scattered SH light images for each waveguide while sweeping the pump wavelengths, similar to the representative images shown in Fig. 1(b). By integrating the pixel values in each column of these images (perpendicular to the waveguide's direction), we obtained mappings of the SH light in terms of propagation length and pump wavelength, which are presented as color images in Figs. 2(a), (d), (g), and (j). These visual representations conveniently illustrate the evolution of the QPM spectrum along the waveguide. Moreover, end-point QPM spectra, which refer to the QPM spectra obtained at the end of the PPLN waveguides, can be generated by averaging the final column values of the SH light mappings. They are depicted by the yellow curves in Figs. 2(c), (f), (i), and (l). The close alignment between these curves and the QPM spectra obtained through the conventional method provides compelling evidence that valid QPM spectra can be obtained using the scattered light imaging technique at any location along the PPLN waveguide. Therefore, we analyze the imperfections in the QPM spectra based on the SH light mappings. Notably, these mappings exhibit a consistent trend: side peaks with longer wavelengths than the main peaks predominantly arise from the initial section of the waveguides, while the SH light from the main peaks only begins to emerge at approximately 2-3 mm away from the waveguides' starting points. This similarity suggests the presence of one or several waveguide geometry parameters that consistently vary along the direction of light propagation across the entire chip.
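Given such a stack of frames, the reduction to a spectrum mapping amounts to a column sum; a minimal sketch of this step (the naming is ours, and the waveguide is assumed to run along the image width):

```python
import numpy as np

def sh_mapping(frames):
    """Collapse scattered-SH frames of shape (n_wavelengths, height, width)
    into a (pump wavelength, propagation position) mapping."""
    frames = np.asarray(frames, dtype=float)
    # Integrate pixel values in each column, perpendicular to the waveguide.
    mapping = frames.sum(axis=1)        # shape: (n_wavelengths, width)
    # The end-point QPM spectrum follows from the final column of the mapping.
    endpoint_spectrum = mapping[:, -1]
    return mapping, endpoint_spectrum
```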
Figure 2: Measured and simulated QPM spectrum mappings and end-point QPM spectra from thin-film PPLN waveguides with different lengths of 5 mm (a)-(c), 7.5 mm (d)-(f), 12.5 mm (g)-(i), and 15 mm (j)-(l). (a), (d), (g), and (j): Measured QPM spectrum mappings using scattered-light imaging. (b), (e), (h), and (k): Calculated QPM spectrum mappings based on measured TFLN film thickness profiles. (c), (f), (i), and (l): Measured end-point QPM spectra using the conventional method (blue curves), as well as scattered-light imaging (yellow curves), overlaid with calculated plots (dashed black curves) based on TFLN film thickness profiles.
For uniform waveguides, the QPM spectrum (\(\Phi\), normalized per unit length) can be calculated as [10, 11]:
\[\begin{split}\Phi=\text{sinc}\Bigl(\frac{\Delta\beta L}{2}\Bigr)\exp\Bigl(i\frac{\Delta\beta L}{2}\Bigr),\\ \Delta\beta=\frac{2\pi}{\lambda_{\text{SH}}}n_{\text{SH}}-2\frac{2\pi}{\lambda_{\text{pump}}}n_{\text{pump}}-\frac{2\pi}{\Lambda},\end{split} \tag{1}\]
where \(L\) is the length of the waveguide, \(\lambda_{\text{pump, SH}}\) denotes the pump and SH wavelengths, \(n_{\text{pump, SH}}\) indicates the effective mode indices for the pump and SH light, and \(\Lambda\) is the poling period. To determine the type of fabrication errors causing these spectrum imperfections, we calculated the partial derivatives of the momentum mismatch (\(\Delta\beta\), as defined in Eq. 1) with respect to the waveguide top width \(w\), etching depth \(h_{\text{etch}}\), and film thickness \(t_{\text{LN}}\) as follows:
\[\begin{split}\left(\frac{\partial\Delta\beta}{\partial w}\right)_{h_{\text{etch}}=154\,\text{nm},\;t_{\text{LN}}=303\,\text{nm},\;\lambda_{\text{pump}}=1580\,\text{nm}}&=-0.485~\mu\text{m}^{-2},\\ \left(\frac{\partial\Delta\beta}{\partial h_{\text{etch}}}\right)_{w=850\,\text{nm},\;t_{\text{LN}}=303\,\text{nm},\;\lambda_{\text{pump}}=1580\,\text{nm}}&=1.842~\mu\text{m}^{-2},\\ \left(\frac{\partial\Delta\beta}{\partial t_{\text{LN}}}\right)_{w=850\,\text{nm},\;h_{\text{etch}}=154\,\text{nm},\;\lambda_{\text{pump}}=1580\,\text{nm}}&=-4.018~\mu\text{m}^{-2}.\end{split} \tag{2}\]
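To make these expressions concrete, the sketch below evaluates the uniform-waveguide spectrum of Eq. (1) and uses the derivatives of Eq. (2) to compare how a hypothetical 1 nm error in each parameter shifts \(\Delta\beta\) (NumPy's `sinc` is the normalized variant, hence the division by \(\pi\); the 1 nm figure is purely illustrative):

```python
import numpy as np

def momentum_mismatch(lam_pump, n_pump, n_sh, period):
    """Delta-beta of Eq. (1); all lengths in the same units (e.g., um)."""
    lam_sh = lam_pump / 2.0
    return 2 * np.pi * (n_sh / lam_sh - 2 * n_pump / lam_pump - 1.0 / period)

def qpm_spectrum_uniform(delta_beta, L):
    """Normalized QPM transfer function of a uniform waveguide, Eq. (1)."""
    x = delta_beta * L / 2.0
    return np.sinc(x / np.pi) * np.exp(1j * x)  # np.sinc(t) = sin(pi t)/(pi t)

# Sensitivity of delta-beta to fabrication errors, from Eq. (2) (um^-2):
sensitivities = {"top width w": -0.485,
                 "etching depth h_etch": 1.842,
                 "film thickness t_LN": -4.018}
for name, d in sensitivities.items():
    # A 1 nm (1e-3 um) deviation produces the following delta-beta shift:
    print(f"1 nm error in {name}: {d * 1e-3:+.2e} um^-1")
```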
The calculations reveal that \(\Delta\beta\) of the waveguide structure employed in this study is most sensitive to variations in film thickness. In addition, based on the general framework developed in Ref. [11], we further estimated the contribution from each individual waveguide parameter to the QPM spectrum infidelity for all four waveguides, and the results are summarized in Table 1. This estimation assumes that the spectrum infidelity is primarily influenced by a single waveguide parameter, which exhibits 1/f noise to account for the long-range correlations arising from the fabrication processes [14]. The similarity in the calculated variations for each parameter across all four waveguides provides further evidence of the consistent nature of the fabrication variations throughout the entire chip.
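For the infidelity estimate itself, a long-range-correlated parameter profile can be sampled with a 1/f power spectrum; the following is a minimal construction of such a sample path (our own, not necessarily the exact procedure of Refs. [11, 14]):

```python
import numpy as np

def one_over_f_profile(n, sigma, seed=0):
    """Sample a parameter profile with a 1/f power spectrum along the
    waveguide, normalized to standard deviation `sigma`."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n)
    amplitude = np.zeros_like(freqs)
    amplitude[1:] = 1.0 / np.sqrt(freqs[1:])   # power ~ 1/f
    phases = np.exp(2j * np.pi * rng.random(freqs.size))
    profile = np.fft.irfft(amplitude * phases, n)
    return sigma * profile / profile.std()
```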
We conducted precise measurements using atomic force microscopy (AFM) on a twin chip to assess the variations in waveguide width and etching depth along a 15 mm long waveguide. The calculated errors for both parameters were found to be lower than the estimated values mentioned in Table 1. On the other hand, the estimated variations in film thickness appear more plausible, as such levels of long-range variation can often arise from the chemical-mechanical polishing process and are consistent with the labeled thickness uniformity value provided by the wafer vendor. In order to examine the film thickness variations, we used Filmetrics to measure the thickness of TFLN across the entire etched chip. The measurement was performed with a step size of 200 \(\mu\)m, and the resulting mapping is illustrated in Fig. 3(a). It is worth noting that this measurement was conducted subsequent to the dry-etching process of the waveguide. Therefore, the plotted thickness values in Fig. 3 represent the raw measured data, augmented by a constant etching depth of 154 nm. As depicted in Fig. 3(a), it is evident that the film thickness exhibits consistent variations across the entire chip, which aligns with our hypothesis based on the QPM spectrum mappings shown in Fig. 2. Specifically, the thickness gradually increases from the left edge of the chip (the input port of the PPLN) and subsequently decreases towards the right side (the output port of the PPLN). In addition to the overall mapping, Fig. 3(b) provides a detailed plot of the measured TFLN thickness along four specific waveguides with lengths of 5 mm (blue), 7.5 mm (red), 12.5 mm (yellow), and 15 mm (black). The dashed lines in the figure indicate the end of the PPLN regions. It is important to note that the chip analyzed in this study was obtained from an area near the edge of the wafer. The thickness profile observed in this specific chip, which exhibits a high-order polynomial trend, was not present in subsequent measurements conducted on chips extracted from regions in the center of the wafer.

| Waveguide length | \(\Delta w\) | \(\Delta h_{\text{etch}}\) | \(\Delta t_{\text{LN}}\) |
| --- | --- | --- | --- |
| 5 mm | 13.16 nm | 3.47 nm | 1.59 nm |
| 7.5 mm | 22.04 nm | 5.81 nm | 2.66 nm |
| 12.5 mm | 20.70 nm | 5.46 nm | 2.50 nm |
| 15 mm | 20.42 nm | 5.38 nm | 2.47 nm |

Table 1: Estimated waveguide geometry variation for all four waveguides based on the measured QPM spectra.
Utilizing these film thickness values, we numerically calculated the QPM spectrum mapping for all four waveguides, based on the methods described in Refs. [14, 19]. These mappings are presented in Figs. 2(b), (e), (h), and (k). We also depicted the end-point QPM spectra in Figs. 2(c), (f), (i), and (l), overlaid with the curves acquired experimentally. A thorough comparison of these plots highlights the successful reproduction of the main side lobes observed in the measured QPM spectra for all four waveguides. Moreover, the simulations accurately captured the evolution trend evident in the measured QPM spectrum mappings. Several factors may contribute to the observed discrepancies between the measured and simulated mappings in this study. These factors include local variations in waveguide widths and etching depth, which are considered secondary factors here and thus are not included in the numerical calculations. Furthermore, considering the high sensitivity of the spectra to film thickness variations, even slight deviations from the true values, such as the absence of fine features in the captured thickness profile and errors resulting from imperfect measurement accuracy, can significantly contribute to the observed disparities. Taking the 15 mm PPLN waveguide as an example, in Fig. 4, we show that slight local variations in the film thickness profile can provide better alignment to the measured QPM spectra. The optimized film thickness profile, as shown in Fig. 4(a), was obtained using the Particle Swarm Optimization algorithm, aiming to reproduce the QPM spectra at various locations along the waveguide (Fig. 4(b)). Note that the difference between the optimized thickness profile and the raw data can be introduced by waveguide parameters other than film thickness, such as waveguide width and etching depth. As for the 12.5 mm long waveguide, we could achieve a
Figure 3: (a) Measured TFLN film thickness mapping over the entire chip, with the black lines showing the position of PPLN waveguides. (b) Measured TFLN thickness along the waveguides with lengths of 5 mm (blue), 7.5 mm (red), 12.5 mm (yellow), and 15 mm (black). The black dashed lines indicate the end of the PPLN region for each waveguide.
better matching with the measured QPM spectrum mapping (shown in Fig. 2(h)) by adding 2 nm to the poling period, shifting the whole spectrum to longer wavelengths, though the alignment with the end-point QPM spectrum would be compromised. It is important to emphasize that the goal of this work is not to perfectly replicate the measured QPM spectrum mapping, as achieving this level of precision with a single parameter variation (in this case, film thickness) is unrealistic. Nevertheless, the remarkable agreement observed between the simulated and measured QPM spectra provides strong evidence supporting the hypothesis that the observed imperfections primarily originate from variations in the film thickness along the waveguiding region.
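For reference, the core of such simulations is the undepleted-pump accumulation of the SH amplitude with a position-dependent phase; a minimal sketch of that integral (simplified from the approach of Refs. [14, 19], with coupling prefactors and mode overlaps omitted):

```python
import numpy as np

def sh_amplitude_along_z(delta_beta, z):
    """SH amplitude A(z) ~ int_0^z exp(i*phi(z')) dz' with
    phi(z) = int_0^z delta_beta(z'') dz'' (undepleted pump, unit coupling).

    delta_beta: momentum mismatch sampled on the grid z (both 1D arrays).
    |A(z)| over a pump-wavelength sweep gives a QPM spectrum mapping;
    the last value |A(L)| gives one point of the end-point spectrum.
    """
    dz = np.diff(z)
    # Cumulative phase by trapezoidal integration.
    phi = np.concatenate(([0.0], np.cumsum(0.5 * (delta_beta[1:] + delta_beta[:-1]) * dz)))
    integrand = np.exp(1j * phi)
    A = np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * dz)))
    return A
```

Feeding the measured thickness profile through the \(\partial\Delta\beta/\partial t_{\text{LN}}\) derivative of Eq. (2) is then one way to generate \(\Delta\beta(z)\) for this integral at each pump wavelength.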
## 4 Conclusion
In conclusion, our study presents a non-destructive optical diagnostic method for assessing imperfections in QPM spectra from thin-film PPLN waveguides. This method allows for the extraction of QPM spectra at any position along the waveguide without the need for physically dicing the waveguides. By comparing the measured spectrum mappings with ideal ones, we can identify specific regions that contribute to unwanted distortions in the QPM spectra, providing crucial insights into the sources of these spectral imperfections. We applied this approach to investigate the QPM spectrum imperfections in four fabricated thin-film PPLN waveguides with lengths of 5 mm, 7.5 mm, 12.5 mm, and 15 mm. Our analysis revealed a consistent trend in the QPM spectrum mappings across these waveguides. Combining our experimental results with numerical simulations, we identified variations in TFLN thickness along the waveguide as the primary source of the observed spectral imperfections. Moreover, we successfully replicated the main features of the measured QPM mappings using film thickness data obtained from Filmetrics measurements. The strong alignment between the simulated and measured mappings serves to validate our conclusions.
There are several potential avenues for improving the fidelity of QPM spectra. One straightforward approach is to fabricate PPLN waveguides in an area close to the center of the LNOI wafer, where the film thickness is expected to be more uniform and to show less of the variation trend observed in this study. Another promising strategy involves designing waveguide structures that are less sensitive to thickness variations, as explored in simulations detailed in Ref. [9], which
Figure 4: (a) Comparison of the measured and optimized film thickness profile for the 15 mm long waveguide. (b) Simulated QPM spectrum mapping using the optimized thickness profile. (c) QPM spectra at different locations along the waveguide obtained through the experiment (solid blue curve), simulations with raw thickness profile (solid red curve), as well as simulations with optimized thickness profile (dashed black curve).
may come at the cost of reduced conversion efficiency. Additionally, obtaining a detailed TFLN thickness mapping prior to waveguide fabrication and compensating for thickness variations through adjustments in other waveguide parameters such as poling period and waveguide width is another viable option. It is also possible to design folded waveguides that possess a smaller area with more uniform film thickness [20], although this would require additional components to control the accumulated phase mismatch in the curved waveguides.
Taken together, our research provides new insights into the understanding of imperfections in QPM spectra originating from thin-film PPLN waveguides. The integration of experimental findings with numerical simulations sheds light on the path toward the development of long thin-film PPLN devices with enhanced overall conversion efficiency. These insights hold great potential for benefiting applications in both classical and quantum communication domains, opening up new possibilities for enhanced performance and functionality.
Funding. Research Grants Council, University Grants Committee (CityU 11204820, N_CityU113/20)
Acknowledgments. Jie Zhao would like to thank Haowen Ren, Yanqi Luo, and Yang Li for their valuable discussions and assistance in the fabrication process.
Disclosures. The authors declare no conflicts of interest.
Data Availability Statement. Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
|
2304.07057 | Epitaxial ferroelectric hafnia stabilized by symmetry constraints | Ferroelectric memories experienced a revival in the last decade due to the
discovery of ferroelectricity in HfO$_2$-based nanometer-thick thin films.
These films exhibit exceptional silicon compatibility, overcoming the scaling
and integration obstacles that impeded perovskite ferroelectrics' use in
high-density integrated circuits. The exact phase responsible for
ferroelectricity in hafnia films remains debated with no single factor
identified that could stabilize the ferroelectric phase thermodynamically.
Here, supported by density functional theory (DFT) high-throughput (HT)
calculations that screen a broad range of epitaxial conditions, we demonstrate
conclusively that specific epitaxial conditions achievable with common
substrates such as yttria-stabilized zirconia (YSZ) and SrTiO$_3$ can favor the
polar Pca2$_1$ phase thermodynamically over other polar phases such as R3m and
Pmn2$_1$ and nonpolar P2$_1$/c phase. The substrate's symmetry
constraint-induced shear strain is crucial for the preference of Pca2$_1$. The
strain-stability phase diagrams resolve experiment-theory discrepancies and can
guide the improvement of ferroelectric properties of epitaxial hafnia thin
films. | Tianyuan Zhu, Shiqing Deng, Shi Liu | 2023-04-14T11:26:40Z | http://arxiv.org/abs/2304.07057v2 | # Epitaxial ferroelectric hafnia stabilized by symmetry constraints
###### Abstract
Ferroelectric memories experienced a revival in the last decade due to the discovery of ferroelectricity in HfO\({}_{2}\)-based nanometer-thick thin films. These films exhibit exceptional silicon compatibility, overcoming the scaling and integration obstacles that impeded perovskite ferroelectrics' use in high-density integrated circuits. The exact phase responsible for ferroelectricity in hafnia films remains debated with no single factor identified that could stabilize the ferroelectric phase thermodynamically. Here, supported by density functional theory (DFT) high-throughput (HT) calculations that screen a broad range of epitaxial conditions, we demonstrate conclusively that specific epitaxial conditions achievable with common substrates such as yttria-stabilized zirconia (YSZ) and SrTiO\({}_{3}\) can favor the polar \(Pca2_{1}\) phase thermodynamically over other polar phases such as \(R3m\) and \(Pmn2_{1}\) and nonpolar \(P2_{1}/c\) phase. The substrate's symmetry constraint-induced shear strain is crucial for the preference of \(Pca2_{1}\). The strain-stability phase diagrams resolve experiment-theory discrepancies and can guide the improvement of ferroelectric properties of epitaxial hafnia thin films.
## I Introduction
The fluorite-structured binary oxide, HfO\({}_{2}\), is known to form many nonpolar polymorphs including the monoclinic (\(M\)) \(P2_{1}/c\), tetragonal (\(T\)) \(P4_{2}/nmc\), and cubic \(Fm\overline{3}m\) phases, among which the most stable phase is the \(M\) phase [1]. The observed ferroelectricity in HfO\({}_{2}\)-based thin films has been attributed to the polar orthorhombic \(Pca2_{1}\) phase [2; 3; 4; 5; 6; 7; 8; 9; 10], while other polar phases such as rhombohedral \(R3m\) and \(R3\) phases [11; 12; 13] and orthorhombic \(Pmn2_{1}\) phase [14; 15] have also been proposed (Fig. S1). One of the controversies surrounding ferroelectric HfO\({}_{2}\) stems from the fact that all these polar phases are higher in energy than the nonpolar \(M\) phase (Table S1). Several extrinsic factors have been suggested to explain the stabilization of the polar phases in thin films. Among them, the surface energy effect has been commonly cited as the primary mechanism that favors \(Pca2_{1}\) and \(T\) phases thermodynamically over the \(M\) phase in nanocrystals with a high surface-to-volume ratio [16]. However, comprehensive surface energy calculations involving multiple major crystallographic orientations revealed that the \(M\) phase actually possesses lower surface energy than \(Pca2_{1}\)[17; 18]. The impact of defects such as dopants and oxygen vacancy has been considered. DFT studies predicted that even a high concentration of dopants is not enough to reverse the relative stability between the \(M\) and polar phases [19; 20]. More recently, it was proposed that charged oxygen vacancies could promote nonpolar-polar phase transitions of HfO\({}_{2}\), offering an explanation for the origin of ferroelectricity from the perspective of polymorphism kinetics [21]. We aim to identify a single, readily-tunable parameter that can stabilize the polar phases thermodynamically over the \(M\) phase.
Polycrystalline films of hafnia often exhibit a mixture of polar and nonpolar phases, which poses a challenge in isolating the individual contributions of various factors that influence the ferroelectricity. Thin film epitaxy with precisely controlled substrate-ferroelectric interfaces and microstructures serves as an ideal platform to understand the ferroelectric behavior of hafnia [22]. By leveraging the lattice mismatch between the film and substrate [23], as well as the substrate symmetry and vicinality, it is possible to control the phase stability of hafnia polymorphs for optimal ferroelectric properties and device prototyping. Ferroelectric Y-doped HfO\({}_{2}\) (YHO) thin films with various orientations (\(\{001\}\), \(\{110\}\) and \(\{111\}\)) were grown through lattice-matching epitaxy (LME), each coherently matching the ITO/YSZ substrates (where ITO refers to the indium-tin oxide electrode) [4; 5; 6; 7; 8], and the resulting polar phase was identified as \(Pca2_{1}\). However, Wei _et al._ reported the formation of a compressively strained rhombohedral \(R3m\) phase in (111)-oriented
Hf\({}_{0.5}\)Zr\({}_{0.5}\)O\({}_{2}\) (HZO) thin films deposited on a (001)-oriented (La,Sr)MnO\({}_{3}\) (LSMO) electrode and SrTiO\({}_{3}\) (STO) substrate [11], while another polar rhombohedral \(R3\) phase was suggested in HZO thin films grown on the GaN(0001)/Si(111) substrate [12]. Yun _et al._ instead demonstrated a rhombohedrally distorted \(Pca2_{1}\) phase in YHO(111) thin films on both LSMO/STO(001) and LSMO/STO(110) substrates [10]. Recently, Liu _et al._ suggested both \(R3m\) and \(Pca2_{1}\) phases in HZO(111) on LSMO(110) [24]. Because of the large lattice mismatch between LSMO and HZO, HZO films grown on LSMO/STO likely adopt the domain-matching epitaxy (DME) where \(m\) lattices of film match \(n\) lattices of substrate (Fig. S2 and Table S2).
Several DFT studies have attempted to reveal the impact of epitaxial strains on the relative stability of various hafnia polymorphs, but the findings have been inconsistent. Qi _et al._ proposed that an in-plane shear strain could promote \(T\to Pmn2_{1}\) transition and attributed the ferroelectricity of HZO(111) on LSMO/STO(001) to the kinetically stabilized \(Pmn2_{1}\) phase [15]. Zheng _et al._ suggested that the trigonal symmetry constraint imposed by the ZnO(0001) substrate renders the \(R3m\) phase energetically competitive with the \(M\) phase [13], although the possibility of a distorted \(Pca2_{1}\) phase can not be ruled out. Furthermore, different studies have argued that compressive [25] and tensile strains [26] may be responsible for stabilizing the \(Pca2_{1}\) phase. To summarize, there is currently no consensus either experimentally or theoretically on the following key questions: (i) which polar phase or phases (\(Pca2_{1}\), \(Pmn2_{1}\), \(R3m\), and \(R3\)) are responsible for the ferroelectricity in epitaxial thin films? (ii) what types of strains, tensile or compressive, can stabilize the polar phase? (iii) can a single factor stabilize the ferroelectric phase thermodynamically in hafnia thin films?
In this work, we address the aforementioned questions by performing DFT-based high-throughput (HT) calculations on \(\approx\)3500 configurations to quantitatively assess the influence of a broad range of isotropic and anisotropic epitaxial strains, as well as substrate symmetry, on the phase competitions in thin films of HfO\({}_{2}\). We show that instead of focusing on the type of stain, either compressive or tensile, applied to the ground state of a polymorph (as commonly done in prior studies), a more conceptually straightforward and experimentally relevant approach is to examine the influence of a given substrate on the relative phase stability in the film. Our results provide definitive proof that the \(Pca2_{1}\) phase can be intentionally engineered as the most stable phase across an extensive range of epitaxial conditions that impose orthogonal in-plane lattices, thereby resolving multiple discrepancies between experimental and theoretical observations.
## II Results and discussion
We start by emphasizing that the epitaxial constraints experienced by a film grown on a given substrate depend on the film-substrate matching plane (\(\sigma\)) represented by Miller indices (\(hkl\)) and the crystal symmetry (\(\varphi\)) of the polymorph (see discussions below). \(\sigma\) labels the growth orientation while \(\varphi\) determines the number of unique growth orientations within the same family of \(\{hkl\}\) (referred to as "general orientation"). As shown in Fig. 1, in the case of \(\{110\}\)-oriented thin films, there exist four unique growth orientations for \(M\) but only three for \(Pca2_{1}\). For convenience, we introduce the method of lattice normalization for polymorphs in films (\(f\)), \(\widetilde{L}_{f}=L_{f}/\sqrt{u_{L}^{2}+v_{L}^{2}+w_{L}^{2}}\), where \(L_{f}\) (\(L=X,Y\)) is the length of an in-plane lattice vector \([u_{L}v_{L}w_{L}]\in(hkl)\) that is aligned along the measurement axis \(X\) or \(Y\) (Table S3), and the intrinsic lattice angle is denoted as \(\theta_{f}\). Similarly, the mechanical boundary conditions of a _generic_ substrate (\(s\)) can be specified with normalized in-plane lattice constants (\(\widetilde{X}_{s}\) and \(\widetilde{Y}_{s}\)) and lattice angle \(\theta_{s}\).
A key aspect of hafnia epitaxy, which is often underappreciated, is that a given substrate (\(\widetilde{X}_{s}\), \(\widetilde{Y}_{s}\), \(\theta_{s}\)) can impose drastically different strain conditions depending on the values of \(\sigma\) and \(\varphi\) associated with the crystallized polymorph in the film. This can be understood by examining the ground-state epitaxial conditions of unstrained hafnia polymorphs. We define an anisotropic parameter, \(\lambda_{f}=\widetilde{X}_{f}/\widetilde{Y}_{f}\), and a distortion angle, \(\Delta\theta_{f}=\theta_{f}-\theta_{s}\) (with \(\theta_{s}=90^{\circ}\) for simplicity). The degree of in-plane anisotropy for a polymorph is quantified by the deviation of \(\lambda_{f}\) from unity and the amount of in-plane shear strain experienced by the film scales with the value of \(\Delta\theta_{f}\). The four parameters, \(\widetilde{X}_{f}\), \(\widetilde{Y}_{f}\), \(\lambda_{f}\), and \(\Delta\theta_{f}\), hence collectively characterize the strain-free epitaxial condition for a specific phase and growth orientation. Figure 2 plots the values of ground-state epitaxial conditions for a number of phases of HfO\({}_{2}\) with \(\{001\}\), \(\{110\}\), and \(\{111\}\) orientations, respectively, revealing several important characteristics. First, using the HfO\({}_{2}\)/YSZ heterostructure grown by LME as an example, the normal strain, \(\varepsilon_{L}=(\widetilde{L}_{s}-\widetilde{L}_{f})/\widetilde{L}_{f}\), clearly depends on both \(\sigma\) and \(\varphi\) of the polymorph. Specifically, YSZ (\(a_{\text{YSZ}}=5.15\) Å) with identical \(\{110\}\) orientations induces a non-equibiaxial compressive strain in the (101)-oriented \(M\) phase but results in \(\varepsilon_{X}=-0.8\%\) and \(\varepsilon_{Y}=+7.9\%\) for the \((10\overline{1})\)-oriented \(M\) phase. Second, the \(T\) phase and \(Pca2_{1}\) phase of the same orientation always have similar ground-state epitaxial conditions that are close to isotropic as characterized by \(\lambda_{f}=1\) and \(\Delta\theta_{f}=0\). In contrast, the values of \(\lambda_{f}\) and \(\Delta\theta_{f}\) for \(M\) and \(Pmn2_{1}\) (blue and green markers in Fig. 2) deviate more from the isotropic
condition. Finally, for \(\{111\}\)-oriented films, the ground-state epitaxial conditions of \(Pca2_{1}\), \(T\), \(R3m\), and \(R3\) phases are comparably close to those of the isotropic YSZ substrate. This could make it challenging to distinguish between these phases in experimental settings.
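These descriptors follow directly from the definitions above; a short sketch of how they can be evaluated (the function names are ours, and the actual in-plane lattice vectors are tabulated in Table S3):

```python
import numpy as np

def normalized_length(L_f, uvw):
    """L~_f = L_f / sqrt(u^2 + v^2 + w^2) for an in-plane lattice vector [uvw]."""
    return L_f / np.linalg.norm(uvw)

def epitaxial_descriptors(Xf, Yf, theta_f, Xs, Ys, theta_s=90.0):
    """Strain conditions a substrate (Xs, Ys, theta_s) imposes on one film
    orientation with normalized lattice parameters (Xf, Yf, theta_f)."""
    eps_x = (Xs - Xf) / Xf            # normal strain along X
    eps_y = (Ys - Yf) / Yf            # normal strain along Y
    lam = Xf / Yf                     # anisotropic parameter lambda_f
    dtheta = theta_f - theta_s        # distortion angle Delta-theta_f (shear)
    return eps_x, eps_y, lam, dtheta
```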
Because the same substrate can cause varying strains in the film, it is essential to thoroughly consider all possible growth orientations of competing polymorphs to establish the correct thermodynamic stability order. Our HT DFT calculations are performed by computing phase energetics for an expansive range of (\(\widetilde{X}_{s}\), \(\widetilde{Y}_{s}\)) values with \(\theta_{s}=90^{\circ}\) for three general orientations: \(\{001\}\), \(\{110\}\), and \(\{111\}\), respectively. Given a substrate and general orientation, we optimize the supercell structures for all possible \(\sigma\) and \(\varphi\) values. During the structural optimization, the in-plane lattice parameters remain fixed to those of the substrate (\(\widetilde{X}_{s}\), \(\widetilde{Y}_{s}\)) while the atomic coordinates and out-of-plane lattice parameters are allowed to relax (see Methods).
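Schematically, the scan reduces to a nested loop over substrate conditions and candidate (phase, orientation) pairs. In the sketch below, `relax_energy` is a hypothetical stand-in for one constrained DFT relaxation, not an actual interface to our workflow:

```python
def scan_phase_energetics(candidates, substrate_grid, relax_energy):
    """High-throughput loop: for every substrate condition and phase, find the
    lowest-energy growth orientation.

    candidates: dict mapping phase name -> list of unique growth orientations.
    substrate_grid: iterable of (Xs, Ys) normalized in-plane lattice constants.
    relax_energy(phase, orientation, Xs, Ys): hypothetical stand-in for one
    DFT relaxation with the in-plane lattice fixed (atoms and the out-of-plane
    lattice vector free); returns an energy per formula unit.
    """
    best = {}
    for Xs, Ys in substrate_grid:
        for phase, orientations in candidates.items():
            energies = {o: relax_energy(phase, o, Xs, Ys) for o in orientations}
            best[(Xs, Ys, phase)] = min(energies.items(), key=lambda kv: kv[1])
    return best  # maps (Xs, Ys, phase) -> (best orientation, its energy)
```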
We first investigate the effects of isotropic epitaxial constraints (\(\widetilde{X}_{s}=\widetilde{Y}_{s}=a_{s}\) and \(\theta_{s}=90^{\circ}\)) on the phase competitions in differently-oriented hafnia thin films. Note that most previous theoretical studies considered only equibiaxial strains, which involve equal scaling of the lattice vectors along \(X\) and \(Y\) while conserving the in-plane lattice angle at the ground-state value of \(\theta_{f}\). That is, the in-plane shear strain is not considered, since \(\Delta\theta_{f}=0\). Here, closely resembling LME on isotropic substrates such as YSZ, each hafnia polymorph has orthogonal in-plane lattice vectors with lengths fixed to \(a_{s}\). Depending on the values of \(\sigma\) and \(\varphi\), a hafnia polymorph could be subjected to in-plane shear strain characterized by \(\Delta\theta_{f}\) as discussed above. The value of \(a_{s}\) varies from 4.9 to 5.4 Å. Figure 3 shows the calculated phase energetics for \(M\), \(Pca2_{1}\), \(Pmn2_{1}\), \(T\), \(R3m\), and \(R3\) with different growth orientations as functions of \(a_{s}\). For a specific \(a_{s}\), only the lowest energy value of each phase is plotted.
In the case of \(\{001\}\)-oriented polymorphs (Fig. 3**a**), the most stable phase on YSZ(001) is the \(M\) phase with the same orientation, consistent with the report by Torrejon _et al._ which demonstrated the formation of a slightly distorted \(M\) phase with equal in-plane lattice constants in HZO/YSZ(001) [27]. This is expected because the strain-free epitaxial condition of \(M(001)\) is close to that of YSZ(001) as shown in Fig. 2**a**. In addition, a critical value of \(a_{s}=5.0\) Å is identified below which \(Pca2_{1}(001)\) becomes most stable. However, this film will only exhibit in-plane polarization (\(Pca2_{1}\) has polarization along [010]), which is not convenient for device lateral downscaling. Overall, \(M(001)\) is the most stable polymorph over a wide range of isotropic epitaxial conditions.
For \(\{110\}\)-oriented polymorphs (Fig. 3**b**), \(Pca2_{1}\) becomes more stable than \(M\) within a specific
strain range of \(5.10<a_{s}<5.23\) Å, and the energies of two orientations of \(Pca2_{1}\), (101) and (011), are almost equal, indicating a degenerate energy landscape. This is supported by experimentally observed coexistence of \(Pca2_{1}(101)\) and (011) domains in YHO/YSZ(110) [6]. Specifically, when grown on YSZ(110), our calculations indicate that the energy of \(Pca2_{1}\) is 22 meV per formula unit (f.u.) lower than that of \(M\), and a larger \(a_{s}=5.20\) Å can further increase this energy difference to 35 meV/f.u. Outside the strain range that stabilizes \(Pca2_{1}\), a larger \(a_{s}\) favors the formation of \(M(101)\), while a smaller \(a_{s}\) promotes \(M(10\overline{1})\).
Regarding {111}-oriented polymorphs (Fig. 3**c**), the \(Pca2_{1}\) phase can be stabilized by isotropic substrates with \(a_{s}\) in a wide range of 5.00-5.23 Å, outside which the \(M\) phase is more stable. Importantly, we find that the energies of polar \(R3m\) and \(R3\) phases are considerably higher than those of the other phases. Although \(R3m\) becomes competitive with \(Pca2_{1}\) under large compressive strains (\(a_{s}<4.9\) Å), the nonpolar \(M\) phase remains the most stable at these epitaxial conditions. As pointed out by Fina and Sanchez [28], the assumption of the \(R3m\) phase in (111)-oriented HZO films leads to an important mismatch between DFT calculations and experiments on the required strain for the measured polarization. The strain state can be related to the out-of-plane interplanar spacing \(d_{111}\) in (111)-oriented films. We calculate the energy and polarization of four polar phases (\(Pca2_{1}\), \(Pmn2_{1}\), \(R3m\), and \(R3\)) as a function of \(d_{111}\) (Fig. 4). It is found that when \(d_{111}<3.06\) Å, \(Pca2_{1}\) is most stable with an out-of-plane polarization of \(\sim\)30 \(\mu\)C/cm\({}^{2}\), comparable to experimental values of \(\approx\)4-23 \(\mu\)C/cm\({}^{2}\) considering the depolarization effect. In contrast, a giant value of \(d_{111}\) of 3.4 Å in the \(R3m\) phase is needed to induce a polarization of the same magnitude. Since the experimental values of \(d_{111}\) fall within the range of 2.96-3.05 Å as reported in a few HZO(111) films grown on different substrates, we believe \(Pca2_{1}\) is responsible for the ferroelectricity in (111)-oriented epitaxial hafnia thin films.
Our extensive investigations on all growth orientations in the family of {001}, {110}, and {111} demonstrate that the epitaxial strain can serve as the sole factor that thermodynamically stabilizes \(Pca2_{1}\) over \(M\), offering a straightforward explanation for the origin of ferroelectricity in epitaxial HfO\({}_{2}\)-based thin films on YSZ substrates reported in experiments. This strain effect has been elusive because previous DFT studies either overlooked certain low-energy orientations [25] or disregarded the in-plane shear strain that results from the orthogonal lattice vectors of isotropic substrates [29]. We now prove that the in-plane shear strain is crucial for the stabilization of \(Pca2_{1}\) by performing a series of model calculations that estimate the phase energetics on (hypothetical) substrates with \(a_{s}\) fixed to the YSZ lattice constant but varying \(\theta_{s}\). This enables the
isolation of the shear strain contribution, _i.e._, the (110)-oriented \(M\) phase (\(\theta_{f}=96.8^{\circ}\)) grown on a substrate with \(\theta_{s}=92^{\circ}\) has \(\Delta\theta_{f}=4.8^{\circ}\), thereby experiencing a smaller shear strain compared to that on a substrate of \(\theta_{s}=90^{\circ}\).
The energies of representative polymorphs as a function of \(|\Delta\theta_{s}|=|\theta_{s}-90^{\circ}|\) are presented in the right panels of Fig. 3. In \(\{001\}\)-oriented films, most polymorphs show increasing energy with increasing \(|\Delta\theta_{s}|\) as their values of \(\theta_{f}\) are already close to \(90^{\circ}\), matching well to a substrate of \(\theta_{s}=90^{\circ}\). For \(\Delta\theta_{s}<3.0^{\circ}\), \(Pca2_{1}(010)\) is lower in energy than \(M(010)\) but still higher in energy than \(M(001)\). It may be feasible to obtain ferroelectric (010)-oriented films on YSZ by finding a way to prevent the formation of \(M(001)\). For \(\{110\}\)-oriented polymorphs, when \(\Delta\theta_{s}<1.9^{\circ}\), \(Pca2_{1}\) is preferred over \(M\). Furthermore, as \(\Delta\theta_{s}\) increases and substrate \(\theta_{s}\) approaches the value of \(\theta_{f}=83.1^{\circ}\) of \(M(011)\), the energy of \(M(011)\) reduces considerably. This implies that in the absence of shear strain applied to \(M(011)\), the nonpolar \(M\) phase will be highly favored over \(Pca2_{1}\), as is the case in bulk, underscoring the substantial impact of shear strain on the stability of the \(M\) phase. Another finding is that at \(\Delta\theta_{s}=0^{\circ}\), \(M(10\overline{1})\) having \(\theta_{f}=90^{\circ}\) is higher in energy than \(Pca2_{1}\), mainly due to its large strain-free anisotropy (\(\lambda_{f}=1.09\)). Regarding \(\{111\}\) orientations, \(Pca2_{1}\) is most stable when \(\Delta\theta_{s}\) values are below \(1.7^{\circ}\). Decreasing shear strain by increasing \(\Delta\theta_{s}\) leads to a significant reduction in energy for \(M(11\overline{1})\). Our results also reveal that on a substrate of \(\theta_{s}=90^{\circ}\), the \(T\) phase is more stable than \(Pmn2_{1}\), challenging the hypothesis of spontaneous transition of \(T\to Pmn2_{1}\)[15].
For substrates like LSMO/STO that have a large lattice mismatch with HfO\({}_{2}\), DME results in smaller effective mismatch than LME [30; 31], and the effective strain experienced by the film could be anisotropic. In this regard, we further study the phase competitions under anisotropic epitaxial conditions characterized by \(\widetilde{X}_{s}\neq\widetilde{Y}_{s}\) and \(\theta_{s}=90^{\circ}\). As demonstrated in Fig. 3, \(M\) and \(Pca2_{1}\) consistently have lower energies than the other phases, so here we focus on these two phases. Figure 5 displays the phase diagrams of HfO\({}_{2}\) thin films under anisotropic epitaxial conditions, with color representing the energy difference (\(\Delta E\)) between the most stable \(Pca2_{1}\) phase and the most stable \(M\) phase for a given general orientation. In \(\{001\}\)-oriented films, anisotropic epitaxial conditions accessible in experiments strongly promote the formation of \(M\) in both (001) and (100) orientations, similar to the isotropic case presented in Fig. 3**a**. For \(\{110\}\) orientations, the energy difference \(\Delta E\) is typically more responsive to the strain applied along the \(Y\left\langle 1\overline{1}0\right\rangle\) direction (Fig. 1). A uniaxial tensile strain along \(X\) can further facilitate the formation of \(Pca2_{1}(110)\). In \(\{111\}\) orientations, the \(Pca2_{1}\) phase is stabilized when the normalized lattice
lengths \(\widetilde{X}_{s}\) and \(\widetilde{Y}_{s}\) range from 5.05 to 5.25 Å. Notably, both isotropic YSZ(111) and anisotropic STO(001)/(110) epitaxial conditions fall within this range (Fig. 5**c**). Moreover, the effect of shear strain combined with anisotropic normal strain on the phase stability is further examined. We map out the phase diagrams for {110}- and {111}-oriented polymorphs with \(\Delta\theta_{s}=1.5^{\circ}\) and \(\Delta\theta_{s}=1.0^{\circ}\), respectively (Fig. S3). We find that a substrate featuring non-orthogonal in-plane lattice vectors generally constricts the range of epitaxial conditions that can stabilize \(Pca2_{1}\), due to the diminished shear strain applied to \(M\).
The strain-stability phase diagrams established with HT DFT calculations provide answers to the three questions raised above. First, our findings indicate that the \(Pca2_{1}\) phase is most likely the ferroelectric phase formed in epitaxial hafnia thin films grown by LME and DME. Other polar phases such as \(R3m\), \(R3\), and \(Pmn2_{1}\) are all higher in energy than \(Pca2_{1}\) across a wide range of epitaxial conditions. The experimentally observed rhombohedral symmetry [11] could stem from the lattice distortion of \(Pca2_{1}\) phase. Second, as the same substrate can generate varying strain conditions, classifying the strain type, be it tensile or compressive, that stabilizes the polar phase is not particularly useful. Instead, we recommend focusing on the effective epitaxial conditions of the substrate. Finally, the \(Pca2_{1}\) phase can be stabilized across a broad range of epitaxial conditions in both {110} and {111} growth orientations by imposing orthogonal lattice constraints using substrates with orthogonal in-plane lattice vectors. These epitaxial conditions primarily destabilize the \(M\) phase with a large intrinsic in-plane lattice angle and/or significant anisotropic ratio. Moreover, our results offer a potential explanation for the _reverse size effect_ observed in ferroelectric HfO\({}_{2}\)-based thin films [32]: with the film thickness increasing, the relaxation of epitaxial constraints, particularly the shear strain, restores the thermodynamic stability of \(M\), leading to a suppressed ferroelectricity in thicker films.
In summary, this study demonstrates that the epitaxial conditions presented by common substrates such as YSZ and STO can thermodynamically stabilize the \(\{110\}\)- and \(\{111\}\)-oriented polar \(Pca2_{1}\) phase without relying on other extrinsic factors. The shear strain arising from the symmetry of the substrate, which tends to orthogonalize the in-plane lattices, plays a crucial role in destabilizing the nonpolar \(M\) phase. By clarifying the ambiguities surrounding the field of ferroelectric hafnia, we hope to facilitate the optimization of epitaxial hafnia thin films, ultimately leading to enhanced functionalities.
## III Methods
DFT calculations are performed using the Vienna _ab initio_ simulation package (VASP) [33] with the projector augmented-wave (PAW) method [34; 35] and the Perdew-Burke-Ernzerhof (PBE) exchange correlation functional [36]. The plane-wave cutoff energy is set to 600 eV. The Brillouin zones of the {001}, {110}, and {111} supercells are sampled by \(\Gamma\)-centered (4\(\times\)4\(\times\)4), (4\(\times\)3\(\times\)3), and (3\(\times\)2\(\times\)3) Monkhorst-Pack [37] \(k\)-point meshes, respectively. For a specific epitaxial condition, the initial structures are constructed by accordingly setting the in-plane lattice parameters; then the atomic coordinates and out-of-plane lattices are fully optimized with a force convergence threshold of 0.01 eV/Å with fixed in-plane lattices. The polarization values are calculated using the Berry phase method [38; 39].
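As a small illustration of the k-space sampling quoted above, a \(\Gamma\)-centered mesh can be generated as follows (a generic sketch, independent of any particular DFT code's input format):

```python
import itertools
import numpy as np

def gamma_centered_mesh(n1, n2, n3):
    """Fractional k-points of an (n1 x n2 x n3) Gamma-centered mesh,
    folded into the interval (-0.5, 0.5] along each reciprocal axis."""
    pts = np.array([(i / n1, j / n2, k / n3)
                    for i, j, k in itertools.product(range(n1), range(n2), range(n3))])
    pts[pts > 0.5] -= 1.0
    return pts  # includes the Gamma point (0, 0, 0)
```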
## IV Acknowledgments
T.Z. and S.L. acknowledge the support from the National Key R&D Program of China (2021YFA1202100), the National Natural Science Foundation of China (12074319), and the Westlake Education Foundation. The computational resource is provided by the Westlake HPC Center.
Figure 1: **Epitaxial matching of different HfO\({}_{2}\)\(\{110\}\) films with a generic substrate and resulting strain conditions.** The left panel shows the in-plane lattices for four \(P2_{1}/c\) and three \(Pca2_{1}\) unique growth orientations, each of which is characterized by a set of lattice parameters (\(X_{f}\), \(Y_{f}\), \(\theta_{f}\)). By normalizing the in-plane lattice parameters as (\(\widetilde{X}_{f}\), \(\widetilde{Y}_{f}\), \(\theta_{f}\)), the strain conditions (\(\varepsilon_{X}\), \(\varepsilon_{Y}\), \(\gamma\)) of these films imposed by a generic substrate (\(\widetilde{X}_{s}\), \(\widetilde{Y}_{s}\), \(\theta_{s}\)) depend on both growth orientation and crystal symmetry.
Figure 2: **Ground-state epitaxial conditions of unstrained HfO\({}_{2}\) polymorphs of different growth orientations.** The top panels show the normalized lattice lengths (\(\widetilde{X}_{f}\), \(\widetilde{Y}_{f}\)) of (**a**) \(\{001\}\), (**b**) \(\{110\}\), and (**c**) \(\{111\}\)-oriented polymorphs. Different phases and growth orientations are denoted by different colored and shaped markers, respectively. Note that for the strain-induced polar rhombohedral \(R3m\) phase, the unstrained state is considered as the nonpolar cubic \(P\overline{4}3m\) phase [22] (Table S1). The red dashed lines mark the experimental lattice constant (5.15 Å) of the YSZ substrate [27]. The bottom panels show the distortion angle \(\Delta\theta_{f}=\theta_{f}-90^{\circ}\) and the anisotropic parameter \(\lambda_{f}=\widetilde{X}_{f}/\widetilde{Y}_{f}\) of (**d**) \(\{001\}\), (**e**) \(\{110\}\), and (**f**) \(\{111\}\)-oriented polymorphs. The grey dashed lines in all panels denote the epitaxial conditions of isotropic substrates with \(\widetilde{X}_{s}=\widetilde{Y}_{s}\) and \(\theta_{s}=90^{\circ}\).
Figure 3: **Thermodynamic stability of HfO\({}_{\bf 2}\) thin films under isotropic epitaxial conditions.** The left panels show the energy of the most stable orientation of a given phase as a function of the substrate lattice constant \(a_{s}\) in (**a**) \(\{001\}\), (**b**) \(\{110\}\), and (**c**) \(\{111\}\)-oriented films. Different phases and growth orientations are denoted by different colored and shaped markers, respectively. The red dashed lines mark the YSZ lattice constant (\(a_{\rm YSZ}=5.15\) Å). The right panels display the energy as a function of the substrate distortion angle \(|\Delta\theta_{s}|=|\theta_{s}-90^{\circ}|\) when \(a_{s}\) is fixed as \(a_{\rm YSZ}\). The grey shaded regions mark the ranges of epitaxial conditions where the polar \(Pca2_{1}\) phase is the most stable phase. The energy of the unstrained \(M\) phase is set to zero as reference.
Figure 4: **Energy and polarization of four polar phases in HfO\({}_{2}\)(111) thin films under isotropic epitaxial conditions.** (**a**) Energy and (**b**) out-of-plane polarization \(P_{Z}\) as a function of interplanar spacing \(d_{111}\). The blue circles mark the experimental remanent polarization and \(d_{111}\) of HZO epitaxial thin films grown on different substrates [9].
Figure 5: **Strain-stability phase diagrams of HfO\({}_{2}\) thin films.** The color scales with the energy difference (in unit of eV/f.u.) between the most stable polar \(Pca2_{1}\) phase (labeled as \(O\)) and the most stable nonpolar \(M\) phase for (**a**) \(\{001\}\), (**b**) \(\{110\}\), and (**c**) \(\{111\}\)-oriented films. The red lines denote the phase boundaries between \(Pca2_{1}\) and \(M\) phases, while the blue and orange lines separate the \(M\) phase and \(Pca2_{1}\) phase of different orientations, respectively. Experimental epitaxial conditions (Table S2) of the isotropic YSZ [27] and anisotropic STO [10] substrates are marked by red and pink circles, respectively. |
2304.11635 | Resonant plasmonic detection of terahertz radiation in field-effect
transistors with the graphene channel and the black-As$_x$P$_{1-x}$ gate
layer | We propose the terahertz (THz) detectors based on field-effect transistors
(FETs) with the graphene channel (GC) and the black-Arsenic (b-As)
black-Phosphorus (b-P), or black-Arsenic-Phosphorus (b-As$_x$P$_{1-x}$) gate
barrier layer. The operation of the GC-FET detectors is associated with the
carrier heating in the GC by the THz electric field resonantly excited by
incoming radiation leading to an increase in the rectified current between the
channel and the gate over the b-As$_x$P$_{1-x}$ energy barrier layer (BLs). The
specific feature of the GC-FETs under consideration is relatively low energy
BLs and the possibility to optimize the device characteristics by choosing the
barriers containing a necessary number of the b-As$_x$P$_{1-x}$ atomic layers
and a proper gate voltage. The excitation of the plasma oscillations in the
GC-FETs leads to the resonant reinforcement of the carrier heating and the
enhancement of the detector responsivity. The room temperature responsivity can
exceed the values of $10^3$~A/W. The speed of the GC-FET detector's response to
the modulated THz radiation is determined by the processes of carrier heating.
As shown, the modulation frequency can be in the range of several GHz at room
temperatures. | V. Ryzhii, C. Tang, T. Otsuji, M. Ryzhii, V. Mitin, M. S. Shur | 2023-04-23T12:34:46Z | http://arxiv.org/abs/2304.11635v1 | # Resonant plasmonic detection of terahertz radiation in field-effect transistors with the graphene channel and the black-As\({}_{x}\)P\({}_{1-x}\) gate layer
###### Abstract
We propose the terahertz (THz) detectors based on field-effect transistors (FETs) with the graphene channel (GC) and the black-Arsenic (b-As) black-Phosphorus (b-P), or black-Arsenic-Phosphorus (b-As\({}_{x}\)P\({}_{1-x}\)) gate barrier layer. The operation of the GC-FET detectors is associated with the carrier heating in the GC by the THz electric field resonantly excited by incoming radiation leading to an increase in the rectified current between the channel and the gate over the b-As\({}_{x}\)P\({}_{1-x}\) energy barrier layer (BLs). The specific feature of the GC-FETs under consideration is relatively low energy BLs and the possibility to optimize the device characteristics by choosing the barriers containing a necessary number of the b-As\({}_{x}\)P\({}_{1-x}\) atomic layers and a proper gate voltage. The excitation of the plasma oscillations in the GC-FETs leads to the resonant reinforcement of the carrier heating and the enhancement of the detector responsivity. The room temperature responsivity can exceed the values of \(10^{3}\) A/W. The speed of the GC-FET detector's response to the modulated THz radiation is determined by the processes of carrier heating. As shown, the modulation frequency can be in the range of several GHz at room temperatures.
## Introduction
The emergence of the black-Phosphorus (b-P), black-Arsenic (b-As), and the compounds of these materials (b-AsP), with the energy gap \(\Delta_{BL}\) varying from 0.15 to 1.2 eV (see, for example, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21]) opens new prospects for the creation of different electronic, optoelectronic, and terahertz (THz) devices. The combination of the b-As\({}_{x}\)P\({}_{1-x}\) layers with graphene [22, 23, 24, 25, 26, 27, 28] can be particularly beneficial for the creation of novel devices, including MIR/FIR/THz interband photodetectors. In this paper, we propose and evaluate the THz detectors akin to the field-effect transistors (FETs) with the graphene channel (GC) and the b-As\({}_{x}\)P\({}_{1-x}\) gate barrier layer (BL). The operation of such GC-FETs is associated with the carrier heating in the GC by incoming THz radiation (see, for example, [29]), leading to an increase of the thermionic GC-gate current. This implies that the GC-FETs could operate as hot-carrier bolometric detectors. The main features of the proposed THz detectors are as follows: (a) the b-As\({}_{x}\)P\({}_{1-x}\) BL provides the possibility to choose the desired BL height (and, hence, to optimize the device characteristics) by varying the number of the atomic layers and/or the molar fraction of As [3, 4, 5, 6], (b) the GC exhibits an elevated room-temperature carrier energy relaxation time [30, 31, 32, 33, 34], which promotes high detector responsivities and detectivities, and (c) the plasmonic (PL) properties of the GC-FET [35, 36, 37] can enable the resonant detector response to THz radiation at frequencies close to the GC plasmonic frequencies.
## Device structure
Figure 1(a) shows the GC-FET detector structure (with the number of atomic layers in the BL \(N=20\)) and the related band diagrams for the assumed band alignment. For definiteness, we consider the GC-FET structures with the n-type GC, in which the thermionic current between the GC and the gate is associated with the electrons overcoming the BL barrier in the conduction band. If \(\Delta_{V}\geq\Delta_{C}\), where \(\Delta_{C}\) and \(\Delta_{V}\) are the band offsets between the BL conduction and valence band edges and the Dirac point in the GC (so that \(\Delta_{C}+\Delta_{V}\) is the energy gap of the BL), the electron current exceeds the hole current when the electron Fermi energy, \(\mu_{D}\), in the GC is sufficiently large, so that \(\Delta_{C}-\mu_{D}\lesssim\Delta_{M}\). Here \(\Delta_{M}\) is the difference between the BL
and GC work functions and \(\mu_{D}\simeq\hbar\,v_{W}\sqrt{\pi\Sigma_{D}^{-}}\) is the equilibrium value of the electron Fermi energy counted from the Dirac point at \(V_{G}=0\), \(v_{W}\simeq 10^{8}\) cm/s is the characteristic electron velocity in GCs, and \(\hbar\) is the reduced Planck constant.
Figures 1(b) and 1(c) show the band diagrams for \(\Delta_{C}-\mu_{D}=\Delta_{M}\) at zero gate bias (\(V_{G}=0\) with the GC donor density \(\Sigma_{D}^{-}\) corresponding to the BL flat band) and under the gate bias (\(V_{G}>0\)), respectively. We focus on the GC-FETs with n-doped GC, in which the above equality is met, i.e., assuming that in the absence of the bias gate voltage (\(V_{G}=0\)), the bottom of the BL conduction band and the top of the BL valence band are flat.
Table 1 lists examples of possible combinations of the b-As\({}_{x}\)P\({}_{1-x}\) barriers and the metal gate materials. The pertinent parameters were taken from [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21].
The results obtained below can also be used for devices with relatively large \(\Delta_{C}\), considering the hole transport instead of the electron one.
The local voltage drop across the BL is \(e\Phi=\Delta_{C}-\Delta_{M}-\mu+e(V_{G}+\varphi)=\mu_{D}-\mu+e(V_{G}+\varphi)\), where \(V_{G}\) is the applied DC bias gate voltage, \(\varphi=\varphi(x,t)\) is the local value of the GC potential, and \(\mu\) is the net electron Fermi energy in the GC. At the GC edges, \(\varphi(\pm L,t)=\pm\frac{1}{2}\delta V_{\omega}\exp(-i\omega t)\) or \(\varphi(\pm L,t)=\delta V_{\omega}\exp(-i\omega t)\) for the asymmetric (a) and symmetric (s) THz radiation input, respectively, with \(\delta V_{\omega}\) and \(\omega\) being the amplitude and the frequency of the THz radiation received by an antenna. The asymmetric THz radiation input corresponds to the design in which the antenna leads are connected to the GC-FET side (source and drain) contact pads. In the case of the symmetric input, one of the antenna leads is connected to the gate, whereas the second one is connected to both side contacts.
The variation of the potential difference between the side source and drain contacts (in the case of the asymmetric input) and between the side contacts and the gate (in the case of the symmetric input) leads to the transient electron current along the GC and the transient variation of the self-consistent electron density, i.e., to the excitation of the plasmonic oscillations. In the case of the asymmetric input, the electron current along the GC exists even at very small signal frequencies. As a result, the electron heating by the incoming signals takes place at such frequencies as well. In contrast, in the case of the symmetric input, slow variations of the side-contact potential with respect to the gate potential create a very weak lateral electron current that does not heat the GC electron system. This leads to marked distinctions of the response in the range of low frequencies (as demonstrated below).
The BL energy gap \(\Delta_{G}\) and the dielectric constant \(\kappa_{G}\) depend on the transverse electric field \(\Phi/W\). Accounting for this, one can set \(\Delta_{C}=\eta\Delta_{G}[1-(\Phi/WE_{G})^{2}]\) and \(\kappa=\kappa_{G}/[1-(\Phi/WE_{G})^{2}]\), where \(\Delta_{G}\) and \(\kappa_{G}\) are the BL energy gap and the dielectric constant in the absence of the transverse electric field, \(E_{G}\) is the characteristic electric field, \(W\) is the BL thickness (see, for example, [38, 39, 40, 41]), and \(\eta=\Delta_{C}/(\Delta_{C}+\Delta_{V})<1\) is the fraction of the BL height related to the conduction band. For the b-P BL with \(W=10\) nm (the number of the atomic layers \(N=20\)), \(E_{G}\simeq 0.7-0.8\) V/nm. This implies that the effect of the transverse electric field on \(\Delta_{C}\), \(\Delta_{G}\), and \(\kappa_{G}\) becomes pronounced only at sufficiently high gate voltages (\(\Phi\gtrsim 1\) V). However, such a voltage range is beyond our present consideration. Considering the GC-FETs with a sufficiently thick b-As\({}_{x}\)P\({}_{1-x}\) BL at moderate gate voltages, we disregard the carrier tunneling across this layer. This implies that the GC-gate current is associated with the sufficiently energetic electrons overcoming the BL, i.e., it is of thermionic origin.
## Equations of the model
### Thermionic DC and AC
At not-too-small electron densities in GCs, the characteristic time of the electron-electron collisions \(\tau_{ee}\) is shorter than the pertinent times associated with the optical phonons \(\tau_{0}\), acoustic phonons \(\tau_{ac}\), and impurities \(\tau_{i}\). This implies that the electron distribution function is close to the Fermi distribution function \(f(\varepsilon)=\{\exp[(\varepsilon-\mu)/T]+1\}^{-1}\), characterized by the effective electron temperature \(T\) generally different from the lattice (thermostat) temperature \(T_{0}\) (in the energy units) and the electron Fermi energy \(\mu\). Hence, at \(\varepsilon>\mu\), \(f(\varepsilon)\simeq\exp[(\mu-\varepsilon)/T]\). However, in the energy range \(\varepsilon>\Delta_{C}\), the electron escape over the BL can markedly decrease \(f(\varepsilon)\). To account for this effect, in the range in question, one can set \(f(\varepsilon)\simeq\xi\exp[(\mu-\varepsilon)/T]\), where \(\xi=\tau_{\perp}/(\tau_{ee}+\tau_{\perp})\) with \(\tau_{\perp}\) being the electron try-to-escape time.
Considering that the heights of the potential barriers for the electrons in the GC and in the metal gate are equal to \(\Delta_{C}-\mu\) and \(\Delta_{M}+e(V_{G}+\varphi)\), respectively, the density of the thermionic electron current can be presented as
\[j\simeq j^{max}\left[\exp\left(\frac{\mu-\Delta_{C}}{T}\right)-\exp\left(- \frac{\Delta_{M}+e\Phi}{T_{0}}\right)\right]. \tag{1}\]
Here \(j^{max}=e\Sigma/\tau_{\perp}\) is the characteristic (maximum) GC-gate DC current density, \(\Sigma\) is the electron density in the GC induced by the donors and gate voltage, and \(e=|e|\) is the electron charge. One can assume that \(\tau_{\perp}\) is determined by the momentum relaxation time, associated with the quasi-elastic scattering of the high-energy electrons, i.e., with acoustic phonons (in sufficiently perfect GCs). Due to this, it is natural to assume that \(\tau_{\perp}>\tau_{ac}\gg\tau_{ee}\). The Fermi energy \(\mu\) is determined by both the GC doping and the gate voltage.
Equation (1) leads to the following expression for the thermionic DC density \(\overline{j}\), corresponding to the DC temperature \(\overline{T}\):
\[\overline{j}=j^{max}\left[\exp\left(\frac{\mu-\Delta_{C}}{\overline{T}} \right)-\exp\left(\frac{\mu-\Delta_{C}-eV_{G}}{T_{0}}\right)\right]. \tag{2}\]
Due to the dependence of \(\mu\) on \(V_{G}\), Eq. (2) provides the GC-gate I-V characteristics. Since \(\overline{T}\) also depends on \(V_{G}\) (because of the electron heating in the GC by the lateral DC), the latter dependence can somewhat contribute to the GC-FET characteristics as well.
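As an illustration of the I-V behavior encoded in Eq. (2), the following minimal Python sketch evaluates \(\overline{j}/j^{max}\); the parameter values are assumptions chosen for illustration, and \(\mu\) and \(\overline{T}\) are held fixed here, although in the full model both depend on \(V_{G}\):

```python
# Minimal sketch of the normalized thermionic DC density, Eq. (2).
# All parameter values below are illustrative assumptions (energies in eV);
# mu and T_bar are held fixed, neglecting their V_G dependence.
import numpy as np

def jbar_over_jmax(V_G, mu=0.140, Delta_C=0.165, T_bar=0.025, T0=0.025):
    """Eq. (2); e*V_G enters in eV, like the other energies."""
    return (np.exp((mu - Delta_C) / T_bar)
            - np.exp((mu - Delta_C - V_G) / T0))

for V_G in (0.0, 0.01, 0.05, 0.10):
    print(f"V_G = {V_G*1e3:5.1f} mV: jbar/jmax = {jbar_over_jmax(V_G):.3e}")
```

For \(\overline{T}=T_{0}\) and \(V_{G}=0\), the current vanishes, as it must in equilibrium.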
At sufficiently high GC lateral conductivity in the situations under consideration (large \(\Sigma\) and \(\mu\)), the nonuniformities of the DC potential and the DC effective temperature along the GC are weak (\(\overline{T}\simeq const\)). This implies that we disregard possible DC current crowding. A high electron thermal conductivity additionally suppresses the above nonuniformity.
The AC variation \(\delta j_{\omega}\) due to the potential oscillations leading to the electron heating is given by
\[\delta j_{\omega}=j^{max}\frac{\delta T_{\omega}}{\overline{T}}\frac{(\Delta_{C}-\mu)}{\overline{T}}\exp\biggl{(}\frac{\mu-\Delta_{C}}{\overline{T}}\biggr{)}. \tag{3}\]
Here we omitted the term containing the factor \((e\delta\varphi_{\omega}/T_{0})^{2}/2\) with \(\delta\varphi_{\omega}\) being the GC potential ac component. In this case, the quantity \(\delta j_{\omega}\), given by Eq.(3), does not depend explicitly on the AC variations of the GC potential (only via the effective temperature variation \(\delta T_{\omega}\)). This is due to a specific shape of the energy barrier for the electrons in the GC [see Fig. 1(c)].
### Rectified current and effective carrier temperature
The incoming THz radiation results in variations of the potential in the GC. This leads to extra electron heating and the variation of the electron temperature \(\delta T=T-\overline{T}\). According to Eq. (3), the variation of the net gate current associated with the effect of the incoming THz radiation averaged over its period (rectified photocurrent) is given by
\[<\overline{\delta J_{\omega}}>=J^{max}\mathcal{F}(V_{G})\frac{<\overline{ \delta T_{\omega}}>}{\overline{T}}. \tag{4}\]
Here \(J^{max}=2LH\,j^{max}\), and \(2L\) and \(H\) are the GC length and width,
\[\mathcal{F}(V_{G})=\frac{(\Delta_{C}-\mu)}{\overline{T}}\exp\biggl{(}\frac{ \mu-\Delta_{C}}{\overline{T}}\biggr{)}=\frac{[\Delta_{M}-(\mu-\mu_{D})]}{ \overline{T}}\exp\biggl{[}\frac{(\mu-\mu_{D})-\Delta_{M}}{\overline{T}}\biggr{]} \tag{5}\]
is the barrier factor, and the symbols \(<...>\) and \(\overline{<...>}\) denote the averaging over the signal period \(2\pi/\omega\) and the length of the GC, respectively, with
\[<\overline{\delta T_{\omega}}>=\frac{1}{2L}\int_{-L}^{L}dx<\delta T_{\omega}>. \tag{6}\]
The dependence of the factor \(\mathcal{F}(V_{G})\) on the gate voltage is associated with the voltage dependence of the electron Fermi energy (see below).
The effective electron temperature \(T\) is determined by the balance of the electron energy transfer to the lattice and the energy provided by the electric field along the GC. At room temperature, the emission and absorption of the optical phonons by the electrons in GLs can be considered the main mechanism of electron energy relaxation. In this case, the power transferred from the electrons in the GC to the optical phonons due to the intraband transitions is [30, 31, 32, 33]
\[P_{0}^{intra}=\hbar\omega_{0}R_{0}^{intra}. \tag{7}\]
Here
\[R_{0}^{intra}=R_{0}\frac{\hbar\omega_{0}\mu^{2}}{T_{0}^{3}}\biggl{[}\biggl{(} 1+\frac{1}{\mathcal{N}_{0}}\biggr{)}\exp\biggl{(}-\frac{\hbar\omega_{0}}{T} \biggr{)}-1\biggr{]}, \tag{8}\]
\(\hbar\omega_{0}\sim 200\) meV is the optical phonon energy, \(\mathcal{N}_{0}=[\exp(\hbar\omega_{0}/T_{0})-1]^{-1}\simeq\exp(-\hbar\omega_{ 0}/T_{0})\), \(R_{0}\) is the characteristic rate of the interband absorption of optical phonons, and \(T_{0}\) is the lattice temperature. At moderate THz power, the effective electron temperature \(T\) is close to the optical phonon temperature \(T_{0}\), and Eq. (8) yields for \(R_{0}^{intra}\):
\[R_{0}^{intra}\simeq R_{0}\frac{\hbar\omega_{0}\mu^{2}}{T_{0}^{3}}\biggl{(} \frac{1}{T_{0}}-\frac{1}{T}\biggr{)}. \tag{9}\]
Equating \(R_{0}^{intra}\) given by Eq. (9) and the Joule power associated with the AC in the GC, for the THz range of frequencies (in which one can assume \(\omega\gg 1/\tau_{\epsilon}\)), we arrive at the following energy balance equation:
\[\frac{<\overline{\delta T_{\omega}}>}{\tau_{\epsilon}}=\frac{\mbox{Re}\,\sigma_{\omega}}{2\Sigma L}\int_{-L}^{L}dx\biggl{|}\frac{d\delta\varphi_{\omega}}{dx}\biggr{|}^{2}. \tag{10}\]
Here \(\mbox{Re}\,\sigma_{\omega}=\sigma_{0}\nu^{2}/(\nu^{2}+\omega^{2})\) is the real part of the GC Drude conductivity, \(\sigma_{0}=e^{2}\mu/\pi\hbar^{2}\nu\) is its DC value, \(\nu\) is the frequency of the electron collisions on impurities, acoustic phonons, as well as due to the carrier viscosity (see, [42] and the references therein). Accounting for the deviation of the optical phonon temperature \(T_{0}\) from the lattice temperature \(T_{l}\), the carrier energy relaxation time \(\tau_{\epsilon}\) associated with the interaction with optical phonons is estimated as [32] \(\tau_{\epsilon}=\tau_{0}(1+\xi_{0})(T_{l}/\hbar\omega_{0})^{2}\exp(\hbar\omega_{0}/T_{l})\simeq\tau_{0}(1+\xi_{0})(T_{0}/\hbar\omega_{0})^{2}\exp(\hbar\omega_{0}/T_{l})\), where \(\tau_{0}\) is the characteristic time of the spontaneous optical phonon intraband emission by the electrons and \(\xi_{0}=\tau_{0}^{decay}/\tau_{0}\), and \(\tau_{0}^{decay}\) is the decay time of optical phonons in GCs.
### Plasmonic oscillations factor
The description of the spatio-temporal oscillations of the electron density and the self-consistent electric field, i.e., the plasmonic oscillations in the GLs (see, for example, [32, 33, 34, 35, 36, 37]) forced by the incoming THz signals can be reduced to a differential equation for the AC potential \(\delta\varphi_{\omega}(x)\) of the gated GC filled by the electrons (following from the hydrodynamic electron transport model equations [43, 44, 45] coupled with the Poisson equation):
\[\frac{d^{2}\delta\varphi_{\omega}}{dx^{2}}+\frac{\omega(\omega+i\nu)}{s^{2}} \delta\varphi_{\omega}=0, \tag{11}\]
supplemented by the following boundary conditions:
\[\delta\varphi_{\omega}^{a}|_{x=\pm L}=\pm\frac{\delta V_{\omega}}{2}\exp(-i\omega t),\qquad\delta\varphi_{\omega}^{s}|_{x=\pm L}=\delta V_{\omega}\exp(-i\omega t). \tag{12}\]
Here \(s=\sqrt{4\,e^{2}\mu\,W/\kappa\hbar^{2}}\) is the plasma-wave velocity in the gated GC.
The above equations yield the following formula for the AC potential along the GC
\[\delta\varphi_{\omega}^{a}=\frac{\delta V_{\omega}}{2}\frac{\sin(\gamma_{ \omega}x/L)}{\sin\gamma_{\omega}},\qquad\delta\varphi_{\omega}^{s}=\delta V_ {\omega}\frac{\cos(\gamma_{\omega}x/L)}{\cos\gamma_{\omega}}. \tag{13}\]
Here
\[\gamma_{\omega}=\pi\frac{\sqrt{\omega(\omega+i\nu)}}{\Omega},\qquad\Omega= \sqrt{\frac{4\pi^{2}\,e^{2}\mu\,W}{\kappa\hbar^{2}L^{2}}} \tag{14}\]
are the normalized wavenumber and the characteristic frequency of the plasmonic oscillations of the electron system in the GC-FET under consideration.
The AC electric field along the GC is equal to
\[\frac{d\delta\varphi_{\omega}^{a}}{dx}=\frac{\delta V_{\omega}}{2}\frac{\gamma_{\omega}}{L}\frac{\cos(\gamma_{\omega}x/L)}{\sin\gamma_{\omega}},\qquad\frac{d\delta\varphi_{\omega}^{s}}{dx}=-\delta V_{\omega}\frac{\gamma_{\omega}}{L}\frac{\sin(\gamma_{\omega}x/L)}{\cos\gamma_{\omega}} \tag{15}\]
that, accounting for Eq. (10), yields
\[\frac{<\overline{\delta T_{\omega}}>^{a,s}}{\tau_{\epsilon}}=\left|\frac{ \delta V_{\omega}}{2}\right|^{2}\frac{\sigma_{0}}{\Sigma L^{2}}\mathcal{P}_ {\omega}^{a,s}. \tag{16}\]
Here
\[\mathcal{P}_{\omega}^{a}=\frac{\nu^{2}}{(\nu^{2}+\omega^{2})}\int_{0}^{1}d \zeta\left|\frac{\gamma_{\omega}\cos(\gamma_{\omega}\zeta)}{\sin\gamma_{ \omega}}\right|^{2},\qquad\mathcal{P}_{\omega}^{s}=\frac{\nu^{2}}{(\nu^{2}+ \omega^{2})}\int_{0}^{1}d\zeta\left|\frac{\gamma_{\omega}\sin(\gamma_{\omega} \zeta)}{\cos\gamma_{\omega}}\right|^{2} \tag{17}\]
are the plasmonic factors, which can be also presented as
\[\mathcal{P}_{\omega}^{a}\simeq\left(\frac{\pi\nu}{\Omega}\right)^{2}\frac{ \omega}{\sqrt{(\nu^{2}+\omega^{2})}}\frac{P_{\omega}^{a}}{|\sin\gamma_{\omega }|^{2}},\qquad\mathcal{P}_{\omega}^{s}\simeq\left(\frac{\pi\nu}{\Omega} \right)^{2}\frac{\omega}{\sqrt{(\nu^{2}+\omega^{2})}}\frac{P_{\omega}^{s}}{| \cos\gamma_{\omega}|^{2}}, \tag{18}\]
with \(P_{\omega}^{a}=\int_{0}^{1}d\zeta|\cos(\gamma_{\omega}\zeta)|^{2}\) and \(P_{\omega}^{s}=\int_{0}^{1}d\zeta|\sin(\gamma_{\omega}\zeta)|^{2}\) being functions of the order of unity oscillating with the frequency. If \(\omega\ll\Omega\) (\(\gamma_{\omega}\) tends to zero), Eqs. (17) and (18) yield \(\mathcal{P}_{\omega}^{a}\simeq 1\) and \(\mathcal{P}_{\omega}^{s}\simeq 0\).
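As a numerical cross-check of these limits, the plasmonic factors of Eq. (17) can be evaluated directly; in the following Python sketch, the values of \(\Omega\) and \(\nu\) are illustrative assumptions:

```python
# Sketch: plasmonic factors P^a and P^s of Eq. (17), with gamma_omega
# taken from Eq. (14). Omega and nu below are assumed values.
import numpy as np

Omega = 2 * np.pi * 1.0e12          # fundamental plasmonic frequency (rad/s)
nu = 1.0e12                         # electron collision frequency (1/s)
zeta = np.linspace(0.0, 1.0, 4001)  # dimensionless coordinate x/L

def plasmonic_factors(omega):
    gamma = np.pi * np.sqrt(omega * (omega + 1j * nu)) / Omega
    pref = nu**2 / (nu**2 + omega**2)
    # The uniform-grid mean over zeta approximates the integrals of Eq. (17)
    Pa = pref * np.mean(np.abs(gamma * np.cos(gamma * zeta) / np.sin(gamma))**2)
    Ps = pref * np.mean(np.abs(gamma * np.sin(gamma * zeta) / np.cos(gamma))**2)
    return Pa, Ps

for f in (0.01, 0.5, 1.0, 2.0):     # signal frequency (THz)
    Pa, Ps = plasmonic_factors(2 * np.pi * f * 1e12)
    print(f"f = {f:4.2f} THz: P^a = {Pa:6.3f}, P^s = {Ps:6.3f}")
```

In the low-frequency limit the output approaches \(\mathcal{P}_{\omega}^{a}\simeq 1\) and \(\mathcal{P}_{\omega}^{s}\simeq 0\), while order-of-unity maxima appear near \(\omega\simeq\Omega\) and \(\omega\simeq\Omega/2\), in line with the resonance conditions discussed below.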
Combining Eqs. (4), (6), and (16), we obtain
\[\frac{<\overline{\delta J_{\omega}}>^{a,s}}{J^{max}}=\left|\frac{\delta V_{\omega}}{2}\right|^{2}\frac{\sigma_{0}\tau_{\epsilon}}{\Sigma\overline{T}L^{2}}\mathcal{F}(V_{G})\,\mathcal{P}_{\omega}^{a,s}. \tag{19}\]
The detector response depends on the antenna type (see, for example, [46, 47]). Using an antenna specially designed for the THz range could substantially increase the collected power [47]. Here we define the GC-FET detector current responsivity (in the A/W units) and its voltage responsivity (in the V/W units) as
\[\mathcal{R}_{\omega}=\frac{<\overline{\delta J_{\omega}}>^{a,s}}{S_{\omega}}, \qquad\mathcal{R}_{\omega}^{V}=\frac{<\overline{\delta J_{\omega}}>^{a,s}}{S_ {\omega}}\rho, \tag{20}\]
respectively. Here \(S_{\omega}\) is the THz power collected by an antenna and \(\rho=2L/H\sigma_{0}\) is the GC DC resistance (for the case of a load resistance equal to the GC resistance). This collected power is estimated as \(S_{\omega}=I_{\omega}A_{\omega}\), where \(I_{\omega}\) is the intensity of the impinging radiation and \(A_{\omega}=\lambda_{\omega}^{2}g/4\pi\) is the antenna aperture [46], \(\lambda_{\omega}\) is the radiation wavelength, and \(g\) is the antenna gain. Considering, as an example, the half-wavelength dipole antenna, for which \(|\delta V_{\omega}|^{2}\simeq I_{\omega}(8\pi/c)(\lambda_{\omega}/\pi)^{2}\), where \(c\) is the speed of light in vacuum, we obtain \(|\delta V_{\omega}|^{2}=32S_{\omega}/gc\).
Accounting for Eq. (20) and the relation \(|\delta V_{\omega}|^{2}=32S_{\omega}/gc\), we obtain
\[\mathcal{R}_{\omega}=\frac{32}{gc}\frac{<\overline{\delta J_{\omega}}>^{a,s} }{|\delta V_{\omega}|^{2}},\qquad\mathcal{R}_{\omega}^{V}=\frac{32}{gc}\frac{ <\overline{\delta J_{\omega}}>^{a,s}}{|\delta V_{\omega}|^{2}}\rho. \tag{21}\]
The latter equations yield
\[\mathcal{R}_{\omega}=\mathcal{R}_{0}\mathcal{F}(V_{G})\,\mathcal{P}_{\omega}^{a,s},\qquad\mathcal{R}_{\omega}^{V}=\mathcal{R}_{0}^{V}\mathcal{F}(V_{G})\,\mathcal{P}_{\omega}^{a,s}, \tag{22}\]
where
\[\mathcal{R}_{0}=\frac{16}{g}\frac{e\sigma_{0}}{\overline{T}c}\frac{\tau_{ \epsilon}}{\tau_{\perp}}\frac{H}{L},\qquad\mathcal{R}_{0}^{V}=\frac{32}{g} \frac{e}{\overline{T}c}\frac{\tau_{\epsilon}}{\tau_{\perp}}. \tag{23}\]
According to Eq. (23), the characteristic voltage responsivity \(\mathcal{R}_{0}^{V}\) does not explicitly depend on the frequency of electron collisions \(\nu\).
It is instructive that the responsivity at \(V_{G}=0\) does not turn to zero because of the factor \(\mathcal{F}(0)\neq 0\), so that \(<\overline{\delta J_{\omega}}>\,>0\).
## Method and Results
The equations of the model were analyzed analytically and solved numerically. The resulting GC-FET characteristics, namely, the responsivities found for different device samples, are demonstrated in Figs. 2 - 5.
Figure 2 shows the normalized responsivity at the fundamental plasmonic resonance \(\mathcal{R}_{\omega}/\mathcal{P}_{\omega}^{a}\mathcal{R}_{0}|_{\omega=\Omega}=\mathcal{R}_{\omega}/\mathcal{P}_{\omega}^{s}\mathcal{R}_{0}|_{\omega=\Omega}=\mathcal{F}(V_{G})\) (as a function of the gate voltage \(V_{G}\)) for the devices with different \(\Delta_{C}\), \(\Delta_{V}\), \(\Delta_{M}\), and the GC doping corresponding to the BL flat band at \(V_{G}=0\), calculated using Eqs. (5) and (22). In this case, the thermionic activation energy \(\Delta_{C}-\mu_{D}=\Delta_{V}\). Equations (5) and (22) are supplemented by the following relation for \(\mu\) accounting for the effect of quantum capacitance [48, 49, 50, 51]:
\[(\mu-\mu_{D})(\mu+\mu_{D}+2\mu_{0})=2\mu_{0}eV_{G}, \tag{24}\]
where \(\mu_{0}=\kappa_{G}\hbar^{2}v_{W}^{2}/4e^{2}W\). For small (moderate) voltages, Eq. (24) yields
\[\mu\simeq\mu_{D}+\frac{\mu_{0}}{(\mu_{D}+\mu_{0})}eV_{G}. \tag{25}\]
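The following minimal sketch solves Eq. (24) for \(\mu(V_{G})\) and compares the exact root with the linearized Eq. (25); the values of \(\mu_{D}\) and \(\mu_{0}\) are illustrative assumptions rather than parameters fixed by the text:

```python
# Sketch: Fermi energy vs. gate voltage from the quantum-capacitance
# relation, Eq. (24), compared with its linearization, Eq. (25).
# mu_D and mu_0 are assumed values (all energies in eV).
import numpy as np

mu_D, mu_0 = 0.140, 0.050

def mu_exact(V_G):
    # Positive root of (mu - mu_D)(mu + mu_D + 2 mu_0) = 2 mu_0 e V_G
    return -mu_0 + np.sqrt((mu_D + mu_0)**2 + 2.0 * mu_0 * V_G)

def mu_linear(V_G):
    return mu_D + mu_0 * V_G / (mu_D + mu_0)  # Eq. (25)

for V_G in (0.0, 0.05, 0.10, 0.20):
    print(f"V_G = {V_G:4.2f} V: exact {mu_exact(V_G):.4f} eV, "
          f"linear {mu_linear(V_G):.4f} eV")
```

At \(V_{G}=0\) both expressions reduce to \(\mu=\mu_{D}\), as required.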
As seen from Fig. 2, the normalized responsivity, which might be rather high at \(V_{G}=0\), exhibits a maximum at a certain voltage \(V_{G}^{max}\). The latter differs between samples depending on the device band parameters. A decrease in the temperature \(T_{0}\) leads to a somewhat sharper responsivity-versus-gate-voltage dependence. This is associated with the specifics of the rectified current-voltage dependence given by Eq. (4). The fact that the height of the maximum of the function \(\mathcal{F}(V_{G})\) is independent of \(T_{0}\) is reflected in the dependences shown in Fig. 2.
As follows from Eqs. (18) and (22), the maximal values of \(\mathcal{R}_{\omega}\) and \(\mathcal{R}_{\omega}^{V}\) as functions of the signal frequency \(\omega\) are reached at the plasmonic resonances \(\omega=\sqrt{n^{2}\Omega^{2}-\nu^{2}}\simeq n\Omega\) for the asymmetric input, and \(\omega=\sqrt{(2n-1)^{2}\Omega^{2}/4-\nu^{2}}\simeq(2n-1)\Omega/2\) for the symmetric input, where \(n=1,2,3,...\) is the plasmonic resonance index. At the fundamental resonances, \(\mathcal{P}_{\omega}^{a}|_{\omega=\Omega}\simeq 2\) and \(\mathcal{P}_{\omega}^{s}|_{\omega=\Omega/2}\simeq 1\).
Figures 3 and 4 show the frequency dependence of the plasmonic oscillations factors \(\mathcal{P}_{\omega}^{a}\) and \(\mathcal{P}_{\omega}^{s}\) calculated for different values of the plasmonic frequencies \(\Omega\) and collision frequencies \(\nu\). According to Eq. (22), these factors determine (proportional to) the spectral characteristics of the GC-FET detector responsivity. To account for the electron collisions and the effect of their viscosity on the plasmon damping, we set [42]\(\nu=\nu_{coll}+\nu_{visc}(\omega/\Omega)^{2}\), assuming \(\nu_{coll}=(1-2)\) ps\({}^{-1}\) and \(\nu_{visc}=0.25\) ps\({}^{-1}\). In the GC-FETs with \(L=(0.5-1.0)\)\(\mu\)m, the latter corresponds to the electron viscosity \(h\simeq(250-1000)\) cm\({}^{2}\)/s that is in line with the observed values [42].
In particular, Figs. 3 and 4 demonstrate that [in line with Eqs. (17) and (22)] the responsivity exhibits fairly sharp (resonant) maxima at \(\omega\simeq n\Omega\) and \(\omega\simeq(2n-1)\Omega/2\) when \(\nu_{coll}=(1-2)\) ps\({}^{-1}\).
Although the GC-FETs with different methods of the THz radiation input exhibit the resonant response, the patterns of the spectral characteristics shown in these plots are rather distinct, and the resonance frequencies differ. This is associated with the excitation of different plasmonic modes (with different spatial distributions of the AC potential) using asymmetric and symmetric input. As seen, the amplitude of the plasmonic factor maxima increases with increasing resonance index despite the strengthening of the viscosity effect. This is attributed to an increase in the average AC electric field when the number of its semi-periods, i.e., the index \(n\), increases.
Figure 5 shows the dependences of the GC-FET detector current responsivity \(\mathcal{R}_{\omega}\) corresponding to the plasmonic factors of Figs. 3(b) and 4(b) calculated for \(\nu_{coll}=1\) ps\({}^{-1}\) and \(\nu_{coll}=2\) ps\({}^{-1}\) (solid lines). These dependencies exhibit pronounced plasmonic resonances. Since the responsivity \(\mathcal{R}_{\omega}\propto\sigma_{0}\mathcal{P}_{\omega}\propto\mathcal{P}_ {\omega}/\nu\), the heights of the responsivity peaks for a larger index \(n\) and for a larger collisional frequency \(\nu\) are smaller. It is instructive that the voltage responsivity \(\mathcal{R}_{\omega}^{V}\) for different collisional frequencies exhibits different behavior (at the chosen load resistance, which is assumed to be inversely proportional to \(\sigma_{0}\)). However, as seen from Fig. 5 (dashed lines), the plasmonic resonances in the cases of much stronger electron scattering (\(\nu_{coll}=4\) ps\({}^{-1}\) and \(\nu_{coll}=6\) ps\({}^{-1}\)), are substantially smeared.
Using Eq. (14) and setting \(\mu=120-140\) meV, \(\kappa=4\), \(W=10\) nm, and \(L=(1-2)\)\(\mu\)m, we obtain the following estimate: \(\Omega/2\pi\simeq(0.53-1.14)\) THz.
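This estimate is straightforward to reproduce; the sketch below converts the Gaussian-units expression of Eq. (14) to SI (\(e^{2}\rightarrow e^{2}/4\pi\varepsilon_{0}\)) and uses the parameter values quoted above:

```python
# Sketch: plasma frequency Omega from Eq. (14), converted to SI units.
import numpy as np
from scipy.constants import e, hbar, epsilon_0, pi

kappa, W = 4.0, 10e-9                  # BL dielectric constant and thickness (m)
for mu_meV in (120.0, 140.0):
    mu = mu_meV * 1e-3 * e             # Fermi energy (J)
    # Plasma-wave velocity s = sqrt(4 e^2 mu W / kappa hbar^2) in SI form
    s = np.sqrt(e**2 * mu * W / (pi * epsilon_0 * kappa * hbar**2))
    for L in (1e-6, 2e-6):             # GC half-length (m)
        Omega = pi * s / L             # Eq. (14)
        print(f"mu = {mu_meV:.0f} meV, L = {L*1e6:.0f} um: "
              f"Omega/2pi = {Omega / (2*pi) / 1e12:.2f} THz")
```

The resulting \(\Omega/2\pi\simeq 0.5-1.1\) THz agrees with the quoted range to within roughly ten percent, the residual difference reflecting rounding of the input parameters.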
Considering that the escape of a hot electron with the energy \(\epsilon\gtrsim\Delta_{C}\) from the GC over the BL is possible due to its scattering on an acoustic phonon, we may assume that the electron escape time \(\tau_{\perp}\gg\tau_{ac}\), where \(\tau_{ac}\) is the momentum relaxation time for the electrons with the energy \(\epsilon\gtrsim\Delta_{C}\). The quantity \(\tau_{ac}\) can be estimated as [52, 53, 54] \(\tau_{ac}\simeq 1\) ps. Considering this, for rough estimates at the values of \(\Delta_{C}\) considered above, we set \(\tau_{\perp}\sim 10-20\) ps. The electron energy relaxation time due to the interaction with the GC optical phonons is estimated as \(\tau_{\epsilon}\simeq 32-65\) ps (compare, for example, with [32, 34, 55]). The fast decay of optical phonons and
Figure 2: The normalized detector responsivity \(\mathcal{R}_{\omega}/\mathcal{P}_{\omega}^{a,s}\mathcal{R}_{0}\) (the same for the asymmetric and symmetric THz radiation input) as a function of the gate voltage \(V_{G}\) for the GC-FETs with different band parameters: (a) at \(T_{0}=25\) meV and (b) \(T_{0}=15\) meV.
the interaction of the electrons in the GC with the interface optical phonons can lead to a decrease in \(\tau_{\epsilon}\) to values of about 10 - 20 ps. Setting \(\mu=140\) meV, \(\nu=1\) ps\({}^{-1}\), \(\tau_{\perp}=10\) ps, \(\tau_{\epsilon}=10\) ps, \(g=1.64\), \(H=2L\), and \((\Delta_{C}-\mu)/T_{0}=1\), we arrive at \(\mathcal{R}_{0}\simeq 4.1\times 10^{2}\) A/W. This yields the characteristic voltage responsivity \(\mathcal{R}_{0}^{V}\simeq 3.7\times 10^{4}\) V/W. The latter values are close to the GC-FET current and voltage responsivities, \(\mathcal{R}_{\omega}|_{\omega=\Omega}\) and \(\mathcal{R}_{\omega}^{V}|_{\omega=\Omega}\), at the plasmonic resonances.
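A rough numerical check of these figures, based on Eq. (23) evaluated in Gaussian units with the parameter values just listed, is sketched below; it should be read as an order-of-magnitude estimate, since the result is sensitive to rounding and to the assumed \(\overline{T}\):

```python
# Sketch: characteristic responsivities R_0 and R_0^V from Eq. (23),
# evaluated in Gaussian units and converted to SI at the end.
import numpy as np

e_esu = 4.803e-10                    # electron charge (esu)
hbar = 1.055e-27                     # erg s
c = 2.998e10                         # speed of light (cm/s)
erg_per_meV = 1.602e-15
mu = 140.0 * erg_per_meV             # Fermi energy (erg)
T_bar = 25.0 * erg_per_meV           # effective temperature (erg), assumed
nu = 1.0e12                          # collision frequency (1/s)
tau_eps, tau_perp = 10e-12, 10e-12   # energy relaxation and escape times (s)
g, H_over_L = 1.64, 2.0              # antenna gain; H = 2L

sigma_0 = e_esu**2 * mu / (np.pi * hbar**2 * nu)   # 2D Drude conductivity
R0 = (16.0/g) * (e_esu * sigma_0 / (T_bar * c)) * (tau_eps/tau_perp) * H_over_L
R0_V = (32.0/g) * (e_esu / (T_bar * c)) * (tau_eps/tau_perp)

# Unit conversions: esu/erg -> A/W; statvolt per (erg/s) -> V/W
print(f"R_0   ~ {R0 * 3.336e-3:.0f} A/W")    # text quotes ~4.1e2 A/W
print(f"R_0^V ~ {R0_V * 2.998e9:.1e} V/W")   # text quotes ~3.7e4 V/W
```

The current responsivity is reproduced to within rounding, while the voltage responsivity comes out within a factor of \(\sim\)1.5 of the quoted value, reflecting its sensitivity to the assumed \(\overline{T}\).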
## Discussion
In Eq. (10), which governs the electron energy balance in the GC, we disregarded the electron cooling effect associated with thermionic emission. This effect can be accounted for by replacing the quantity \((\tau_{\epsilon}/\tau_{\perp})\mathcal{F}(V_{G})\) in Eq. (22) by the factor \((\tau_{\epsilon}/\tau_{\perp})\mathcal{F}(V_{G})/[1+(\tau_{\epsilon}/\tau_{\perp})\mathcal{F}(V_{G})]\). The pertinent distinction is small if \(\tau_{\epsilon}\lesssim\tau_{\perp}\).
Using the relation between the GC mobility \(M\) and the Fermi energy \(\mu\): \(M=ev_{W}^{2}/\nu\mu_{D}\), where the quantity \(\mu_{D}/v_{W}^{2}\) is the electron fictitious mass, we find that the values of \(\nu_{coll}=(1-2)\) ps\({}^{-1}\) and \(\mu_{D}=(120-140)\) meV assumed above correspond to \(M\simeq(4.3-7.1)\times 10^{4}\) cm\({}^{2}\)/Vs. For \(\nu_{coll}=(4-6)\) ps\({}^{-1}\) (see the dashed lines in Fig. 5), one obtains \(M\simeq(0.7-1.8)\times 10^{4}\) cm\({}^{2}\)/Vs, which are realistic GC mobilities at room or somewhat lower temperatures [53, 55]. The electron mobility of the GC on b-P studied several years ago [56], at \(T_{0}=(15-25)\) meV (\(T_{0}\simeq(180-300)\) K), reaches values of \(M\simeq(8-9)\times 10^{3}\) cm\({}^{2}\)/Vs. This corresponds to \(\nu_{coll}\simeq(8-10)\) ps\({}^{-1}\). With further improvements in the GC/b-P interface quality and/or using the GC remote doping (see, for example, [57, 58]), one can reduce \(\nu_{coll}\), increasing the plasmonic resonance sharpness. Another option to decrease \(\nu_{coll}\) is to use the positively biased back gate, which can electrically induce a sufficient electron density in the GC and, hence, a proper value of the electron Fermi energy, eliminating the necessity of GC doping. The
Figure 4: The same as in Fig. 3 but for the plasmonic oscillation factor \(\mathcal{P}_{\omega}^{s}\) of GC-FETs with the symmetric THz radiation input.
plasmonic resonances and, hence, the resonant response of the GC-FET detectors might be more pronounced for a larger plasma frequency \(\Omega\), i.e., in the devices with shorter GCs (smaller length \(2L\)). In particular, if \(2L=0.5\)\(\mu\)m, the plasma oscillations quality factor is about 8.2 even at \(\nu_{coll}=10\) ps\({}^{-1}\). One needs to note that even at relatively high values of \(\nu_{coll}\), the overdamped plasmonic oscillations can provide elevated GC-FET detector responsivities despite the resonant peaks vanishing.
Electron thermal conductivity along the GC [59], which leads to the transfer of a portion of the electron heat to the side contacts, can reduce the electron temperature and smooth down the spatial nonuniformities of the electron density. The latter can particularly affect the resonant maxima height with increasing plasmonic mode index \(n\).
Fairly high values of the GC-FET responsivity are due to the long electron energy relaxation time \(\tau_{\epsilon}\) inherent for GCs. However, the speed of the photodetectors using the hot electron bolometric mechanism is limited by the inverse electron energy relaxation time \(\tau_{\epsilon}^{-1}\) (see, for example, [34, 54]). This implies that the operation of the THz GC-FET detectors under consideration (with the parameters used in the above estimates) might be limited to the modulation frequencies in the GHz range.
The GC-FET detector dark current limited detectivity \(D_{\omega}^{*}=\mathcal{R}_{\omega}/\sqrt{2e\overline{j}}\) (see, for example, [60]) depends, in particular, on the dark current density. As follows from Eqs. (2) and (25), the dark current density is
\[\overline{j}\simeq j^{max}\exp\biggl{(}-\frac{\Delta_{M}}{T_{0}}\biggr{)}\exp\biggl{[}\frac{\mu_{0}}{(\mu_{D}+\mu_{0})}\frac{eV_{G}}{T_{0}}\biggr{]}\biggl{[}1-\exp\biggl{(}-\frac{eV_{G}}{T_{0}}\biggr{)}\biggr{]}. \tag{26}\]
For low gate voltages (\(eV_{G}<T_{0}\)), the latter tends to zero as \(\overline{j}\propto V_{G}\). Since the responsivity in the limit of small \(V_{G}\) is a constant (see Fig. 2), this implies that the GC-FET detector detectivity as a function of the gate voltage increases with decreasing \(V_{G}\) as
\[D_{\omega}^{*}\propto\frac{1}{\sqrt{V_{G}}}. \tag{27}\]
This also means that at low values of \(V_{G}\), the GC-FET noises might be determined by other mechanisms (not by the dark current).
If strong electron interaction at the GC/b-AsP interface (leading to high values of \(\nu_{coll}\) and preventing the pronounced plasmonic resonance) turns out to be a critical issue, the GC-FET structure can be modified by using a gate and a b-AsP layer of length markedly smaller than the GC length \(2L\).
In principle, the GC-FET detectors can be based on the p-type GC, in which \(\mu_{D}<0\). This might exhibit advantages associated with smaller \(\Delta_{V}\) compared to \(\Delta_{C}\) (see Table 1). In the detectors with the p-type GC, a proper thermionic activation energy \(\Delta_{V}+\mu_{D}\) can be achieved at smaller \(\mu_{D}\), i.e., at the lower carrier (hole) densities. However, the adequate BL and metal gate band alignment might be a problem. This problem can be avoided in the GC-FET structures, in which both the GC and the gate are made of p-type graphene layers (double-GC-FETs). Such double-GC-FETs can exhibit markedly
Figure 5: The spectral characteristics of GC-FET detectors current responsivity with (a) asymmetric and (b) symmetric THz radiation input (\(\Omega/2\pi=1.0\) THz) with different collision frequencies \(\nu_{coll}\).
different plasmonic properties. This is because of the possible plasmonic response of the carriers in the double-GC (see, for example, [61, 62, 63, 64, 65, 66, 67]). The plasmonic response in the double-GC-FETs depends on the contacts. Depending on the geometry of these contacts, the plasmonic factor can be a fairly different function of the signal frequency. The bolometric detectors based on the double-GC-FET structures with the barrier b-As\({}_{x}\)P\({}_{1-x}\) are beyond the scope of our present study and require a separate treatment.
## Conclusions
We proposed and evaluated the THz graphene-channel FET detectors with the black-Arsenic, black-Phosphorus, or black-Arsenic-Phosphorus barrier gate layers. The operation of these detectors is associated with the hot carrier bolometric effect, i.e., with the carrier heating by incoming THz radiation, causing their thermionic emission from the graphene channel into the gate. Such a THz GC-FET detector can exhibit fairly high characteristics. The excitation of plasmonic oscillations in the graphene channel leads to a strong resonant enhancement of the detector responsivity and detectivity.
The realization of the proposed GC-FET bolometric detectors with elevated characteristics is enabled by the effective carrier heating in graphene accompanied by the effective plasmonic oscillation excitation and the possibility of a proper band alignment between the graphene channel and the barrier layer.
## Acknowledgments
Financial support is provided by The Japan Society for Promotion of Science (KAKENHI Grants # 20K20349 and # 21H04546), Japan; RIEC Nation-Wide Collaborative Research Project # R04/A10; and by AFOSR (contract number FA9550-19-1-0355).
## Author contributions statement
V.R. conceived the device concept and developed its model, C.T. and M.R. conducted the calculations and presented the results, T. O., V. M, and M.S. analyzed the model, obtained results, and provided financial support. All authors reviewed the manuscript.
## Data availability
All data generated or analyzed during this study are included in this published article.
## Additional information
**Competing interests:** The authors declare that they have no competing interests.
|
2303.02188 | Superhabitability of High-Obliquity and High-Eccentricity Planets | Planetary obliquity and eccentricity influence climate by shaping the spatial
and temporal patterns of stellar energy incident at a planet's surface,
affecting both the annual mean climate and magnitude of seasonal variability.
Previous work has demonstrated the importance of both planetary obliquity and
eccentricity for climate and habitability, but most studies have not explicitly
modeled the response of life to these parameters. While exaggerated seasons may
be stressful to some types of life, a recent study found an increase in marine
biological activity for moderately high obliquities <45$^{\circ}$ assuming an
Earth-like eccentricity. However, it is unclear how life might respond to
obliquities >45$^{\circ}$, eccentricities much larger than Earth's, or the
combination of both. To address this gap, we use cGENIE-PlaSim, a 3-D marine
biogeochemical model coupled to an atmospheric general circulation model, to
investigate the response of Earth-like marine life to a large range of
obliquities (0-90$^{\circ}$) and eccentricities (0-0.4). We find that marine
biological activity increases with both increasing obliquity and eccentricity
across the parameter space we considered, including the combination of high
obliquity and high eccentricity. We discuss these results in the context of
remote biosignatures, and we argue that planets with high obliquity and/or
eccentricity may be superhabitable worlds that are particularly favorable for
exoplanet life detection. | Jonathan Jernigan, Émilie Laflèche, Angela Burke, Stephanie Olson | 2023-03-03T19:23:47Z | http://arxiv.org/abs/2303.02188v1 | # Superhabitability of High-Obliquity and High-Eccentricity Planets
###### Abstract
Planetary obliquity and eccentricity influence climate by shaping the spatial and temporal patterns of stellar energy incident at a planet's surface, affecting both the annual mean climate and magnitude of seasonal variability. Previous work has demonstrated the importance of both planetary obliquity and eccentricity for climate and habitability, but most studies have not explicitly modeled the response of life to these parameters. While exaggerated seasons may be stressful to some types of life, a recent study found an increase in marine biological activity for moderately high obliquities \(<\)45\({}^{\circ}\) assuming an Earth-like eccentricity. However, it is unclear how life might respond to obliquities \(>\)45\({}^{\circ}\), eccentricities much larger than Earth's, or the combination of both. To address this gap, we use cGENIE-PlaSim, a 3-D marine biogeochemical model coupled to an atmospheric general circulation model, to investigate the response of Earth-like marine life to a large range of obliquities (0-90\({}^{\circ}\)) and eccentricities (0-0.4). We find that marine biological activity increases with both increasing obliquity and eccentricity across the parameter space we considered, including the combination of high obliquity and high eccentricity. We discuss these results in the context of remote biosignatures, and we argue that planets with high obliquity and/or eccentricity may be superhabitable worlds that are particularly favorable for exoplanet life detection.
Exoplanets (498) -- Habitable planets (695) -- Planetary climates (2184) -- Astrobiology (74) -- Biosignatures (2018)

Jonathan Jernigan, Émilie Laflèche, Angela Burke, Stephanie Olson
## 1 Introduction
Planetary obliquity and orbital eccentricity modulate planetary climate, in part by generating seasons due to time-varying instellation patterns. The effects of these orbital parameters on climate are modest for Earth, but a diversity of annual mean climate and seasonality scenarios are possible on other worlds. The diversity of planetary obliquities is apparent even among terrestrial planets in our own solar system, with Venus and Mercury exhibiting obliquities of 180\({}^{\circ}\) and near 0\({}^{\circ}\) respectively, while Mars' obliquity varies chaotically from 0-60\({}^{\circ}\)(Laskar and Robutel, 1993). A similarly large range of obliquities is expected in other stellar system architectures as a result of complex orbital interactions (Millholland and Laughlin, 2019) and giant impacts (Li and Lai, 2020). Eccentricity is less varied among observed low-mass planets to date, including terrestrial planets in the Habitable Zone (HZ) (Van Eylen and Albrecht, 2015; Eylen et al., 2019; Guendelman and Kaspi, 2020). Nonetheless, eccentricity varies among solar system planets and known exoplanets (Udry and Santos, 2007), and these differences in eccentricity are expected to impact planetary climate, seasonal cycles, and habitability (Williams and Pollard, 2002; Dressing et al., 2010). Considering the diversity of obliquities and eccentricities anticipated among HZ planets, many potentially habitable exoplanets may experience amplified seasonality compared to Earth. This enhanced seasonality may have broad-reaching consequences for planetary habitability and exoplanet life detection efforts.
Exaggerated seasonality on high-obliquity and/or high-eccentricity worlds could be challenging for life. However, higher obliquity planets have warmer poles and more equable climates than lower obliquity planets due to decreased planetary albedo from a reduction in both ice and cloud cover, which may allow higher obliquity planets to maintain surface liquid water at farther distances from their host star (Williams and Kasting, 1997; Williams and Pollard, 2003; Spiegel et al., 2009; Dressing et al., 2010; Armstrong et al., 2014; Ferreira et al., 2014; Linsenmeier et al., 2015; Wang et al., 2016; Kilic et al., 2018; Nowajewski et al., 2018; Guendelman and Kaspi, 2019; Kang, 2019; Colose et al., 2019; Palubski et al., 2020; Komacek et al., 2021). Some authors have even suggested that planets with higher obliquities may be 'superhabitable' worlds more suitable for life than Earth (Heller and Armstrong, 2014; Olson et al., 2020), but most models for planetary habitability to date lack explicit representation of life.
Recent biogeochemical modeling suggests that moderately high planetary obliquity enhances the productivity of marine life (Barnett and Olson, 2022). However, Barnett and Olson (2022) only considered obliquities up to \(45^{\circ}\). It is thus unknown whether marine life would respond positively to even higher obliquities, or if there exists an optimal obliquity beyond which strong seasonal contrasts begin to negatively affect marine life. Their study also neglects the effects of seasonality arising from eccentricity.
The mechanism by which obliquity influenced biospheric productivity in Barnett and Olson (2022)'s study was through enhanced nutrient recycling due to seasonal breakdown of the ocean's thermal stratification, which otherwise tends to limit vertical mixing. This same phenomenon may occur on planets with eccentric orbits, but asymmetry in season duration may yield different dynamical and biological effects. Planets on highly eccentric orbits can also move into and out of their star's HZ with each orbit bearing unclear consequences for habitability and life. Ultimately, it is unknown how marine life might respond to the large range of orbital scenarios we expect to find with current and next-generation observatories.
We address this gap here by using an atmospheric GCM (PlaSim) coupled to a 3D biogeochemical model (cGENIE) to simulate the climates of high-obliquity and high-eccentricity worlds and characterize the response of an Earth-like biosphere to each orbital scenario. We review how obliquity and eccentricity drive seasonality and provide an introduction to key biogeochemical processes regulating the biospheric response to seasons in Section 2. We then describe our climate and biogeochemical modeling approach in Section 3. We present our results regarding the response of surface environments and life to seasons in Section 4, and we discuss implications for exoplanet habitability and life detection on high-obliquity and high-eccentricity worlds in Section 5. Finally, we summarize our findings and offer recommendations for future work in Section 6.
## 2 Background
### Effects of Orbital Parameters on Planetary Climate and Seasonality
Planetary obliquity (\(\theta\) in Figure 1A) shapes the spatial and temporal distribution of incident stellar radiation, generating the familiar hemispheric seasons that we experience on Earth. At low obliquity (herein, 0-23.5\({}^{\circ}\)), the equator receives the most stellar energy, while the poles receive relatively little. Seasonality is also limited, particularly at low latitudes. At moderate obliquity (herein, 23.5-54\({}^{\circ}\)), the spatial distribution of stellar energy becomes more uniform on annual average, but with increasing temporal variability. The result is a climate with smaller equator-to-pole contrasts, less ice
Figure 1: A: Planetary obliquity (\(\theta\)) is the angle between a planet’s orbital and equatorial planes. B: The eccentricity of an elliptical orbit is the ratio of the distance between one focus and the geometric center of the ellipse (c) to the semi-major axis (a). C: The argument of the periastron (\(\omega\)) is the angle between the vernal equinox and the periastron. For \(\omega=282.7^{\circ}\), the present-day Earth value, the Northern Hemisphere (NH) summer occurs near the apoastron, while the Southern Hemisphere (SH) summer occurs near the periastron (Table 1).
on annual average, and larger seasonal contrasts at all latitudes--despite receiving the same stellar energy flux on global average. At high obliquity (herein, 54-90\({}^{\circ}\)), the poles receive more stellar energy than the equator, despite increasingly dramatic temporal variability in irradiance (Ward, 1974). In this scenario, equatorial ice belts (rather than polar ice caps) may become stable (Rose et al., 2017; Kilic et al., 2018).
Eccentric orbits (Figure 1B) generate seasons as the star-planet separation varies throughout the orbit. In this case, summer occurs when the planet is closer to its host star, while winter occurs while it is further away. Unlike seasons arising from obliquity, both hemispheres will experience the same season as the stellar energy received by the planet changes across the year. Moreover, the summer vs. winter seasons will differ in duration. This contrast arises because the planet's orbital speed varies with its distance from the star, as described by Kepler's second law. Consequently, the summer will be relatively short as the planet speeds up close to its star and the winter will be relatively long as the planet orbits slower further from its star. Both the magnitude of seasonal flux variations and the asymmetry in season duration increase with increasing eccentricity.
For planets with a combination of both non-zero planetary obliquity and orbital eccentricity, both seasonality effects will manifest. These effects may be additive or opposing, depending on the timing of obliquity-driven seasons relative to the point in the planet's eccentric orbit closest to the star (periastron) and the point furthest from the star (apoastron). This relationship can be described by the argument of the periastron (\(\omega\) in Figure 1C), i.e., the angle between the orbital position where both hemispheres receive identical instellation during the NH vernal equinox and the periastron.
For \(\omega=0^{\circ}\) or \(180^{\circ}\), the Northern Hemisphere (NH) vernal or autumnal equinox respectively will occur at the periastron, with the opposite occurring at apoastron. The effects of eccentricity and obliquity on the seasons will be out-of-phase from one another, with the global summer and winter generated by eccentricity corresponding with spring and autumn generated by obliquity. The resulting seasonality will therefore be subdued.
For \(\omega=90^{\circ}\) or \(270^{\circ}\), the NH summer solstice (SS) or winter solstice (WS) respectively occurs at the periastron, with the opposite occurring at the apoastron (Table 1). The global summers and winters generated by eccentricity and the hemispheric summers and winters generated by obliquity will coincide for one hemisphere and conflict for the other. The resulting seasonality will therefore be amplified for one hemisphere and muted for the other (Figure 2).
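To make these flux patterns concrete, the short Python sketch below solves Kepler's equation by Newton iteration and evaluates the resulting inverse-square instellation over one orbit; the normalization (semi-major axis \(a=1\), flux of 1 at \(r=a\)) is an illustrative convention, and obliquity is not included:

```python
# Sketch: relative instellation over one eccentric orbit via Kepler's
# equation. Normalized units (a = 1, flux = 1 at r = 1); obliquity omitted.
import numpy as np

def orbit(e, n_steps=3600):
    M = np.linspace(0.0, 2*np.pi, n_steps, endpoint=False)  # mean anomaly (uniform in time)
    E = M.copy()
    for _ in range(50):  # Newton iteration for E - e sin(E) = M
        E -= (E - e*np.sin(E) - M) / (1.0 - e*np.cos(E))
    r = 1.0 - e*np.cos(E)        # star-planet distance in units of a
    return r, 1.0 / r**2         # distance and inverse-square flux

for e in (0.0, 0.2, 0.4):
    r, f = orbit(e)
    print(f"e = {e:.1f}: max/min flux = {f.max()/f.min():5.2f}, "
          f"fraction of orbit with r > a = {np.mean(r > 1.0):.2f}")
```

For \(e=0.4\) the flux varies by a factor of \((1.4/0.6)^{2}\simeq 5.4\) across the orbit, and the planet spends roughly 63% of the orbital period farther than \(a\) from its star; the 'winter' outlasts the 'summer', as described above.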
### Characterizing biological activity
Photosynthesis by marine microorganisms produces biomass via the following chemical reaction:
\[CO_{2}+H_{2}O\xrightarrow{h\nu}CH_{2}O+O_{2} \tag{1}\]
where CH\({}_{2}\)O is geochemical shorthand for biomass that in reality includes a number of other macro- (e.g., N, P) and micro-nutrients (e.g., Fe, Mo). These nutrients, especially P, typically limit photosynthetic rates ('primary productivity') on global and long-term average on Earth (Tyrrell, 1999), but any of these ingredients can be locally or seasonally limiting. For instance, light may limit productivity during winter darkness while nutrients may limit productivity during photon-replete summers. We expect similar spatial and temporal variability in primary productivity on other worlds experiencing seasons.
The majority of biomass produced by photosynthesis is broken down in the surface wind-mixed layer by respiration:
\[CH_{2}O+O_{2}\to CO_{2}+H_{2}O \tag{2}\]
However, a small portion sinks from the mixed layer into the deeper layers of the ocean. This flux is termed biological 'export production', or simply 'export', and is measured in teramoles of particulate organic carbon per year (Tmol POC yr\({}^{-1}\)). Export production is typically greatest when and where primary production is greatest, but export is also sensitive to temperature and thus it is not necessarily a fixed fraction of primary productivity in either space or time.
The process by which export POC is transferred from the surface to depth is referred to as the 'biological pump'. The net consequence of the biological pump is to concentrate bioessential nutrients at depth while depleting them at the surface. As a result, the biosphere
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Eccentricity & Periastron & Apoastron & SS (NH) & WS (NH) \\ \hline
0.0 & - & - & (6, 20) & (12, 20) \\
0.1 & (1, 13) & (7, 14) & (6, 28) & (1, 3) \\
0.2 & (1, 24) & (7, 24) & (7, 6) & (1, 16) \\
0.3 & (2, 4) & (8, 4) & (7, 11) & (1, 27) \\
0.4 & (2, 14) & (8, 13) & (7, 15) & (2, 8) \\ \hline \end{tabular}
\end{table}
Table 1: Timing of Orbital Events (Month, Day)
depends on vertical mixing to replenish nutrients to the sunlit regions of the water column where photosynthesis is viable, also referred to as the photic zone. Greater nutrient recycling ultimately allows greater biological productivity, increasing the production (but not necessarily the survival) of biogenic gases like O\({}_{2}\) and CH\({}_{4}\) that may enter the atmosphere and serve as 'biosignatures' for life. See Olson et al. (2020) for further review of marine biogeochemical cycles as they relate to exoplanet habitability and biosignatures.
## 3 Methods & Model Description
Our experiments leverage cGENIE, a 3D marine biogeochemical model, coupled to PlaSim (Holden et al., 2016). PlaSim is a reduced-complexity 3D atmospheric circulation model, built around the PUMA atmospheric model described by Fraedrich et al. (2005). PlaSim replaces the 2D energy-moisture balance model (EMBM) typically used in cGENIE simulations, and it provides cGENIE with interactive wind forcing, temperature, humidity, surface pressure, divergence and vorticity. The module also includes fractional cloud cover from relative humidity and precipitation. Time step length is 12 hours for cGENIE and 45 minutes for PlaSim; coupling inputs are averaged over the 16 previous PlaSim time steps to account for this difference (Holden et al., 2016).
Ocean circulation in cGENIE is handled by C-GOLDSTEIN and GOLDSTEINSEAICE. C-GOLDSTEIN is a reduced physics, frictional geostrophic 3D ocean circulation model that uses advection and diffusion to transport heat, salinity, and biogeochemical tracers (Edwards & Marsh, 2005). Net precipitation minus evaporation (P-E) is represented by a proportionate salinity flux. C-GOLDSTEIN calculates the mixed-layer depth considering the strength of density stratification arising from temperature and salinity gradients and wind stress from PlaSim. GOLDSTEINSEAICE is a dynamic-thermodynamic sea-ice model. Dynamical equations are solved for the percentage of ice cover and the height of ice in each cell. Sea ice growth and decay are governed by heat flux from the atmosphere and ocean. C-GOLDSTEIN, GOLDSTEINSEAICE, and PlaSim are coupled through heat transfer, wind forcing, and the drifting of sea ice along ocean currents (Holden et al., 2016).
Chemistry and biology in cGENIE are handled by the ATCHEM and BIOGEM modules for atmospheric chemistry and marine biogeochemistry, respectively (Ridgwell et al., 2007). ATCHEM includes parameterized chemistry involving O\({}_{2}\), O\({}_{3}\), CH\({}_{4}\), and CO\({}_{2}\) assuming a Sun-like stellar spectrum (Reinhard et al., 2020). ATCHEM passes pCO\({}_{2}\) to PlaSim for calculation of surface temperature, but PlaSim does not currently receive pCH\({}_{4}\). ATCHEM and BIOGEM are coupled through spatially resolved (2D) sea-air gas exchange that varies with local sea surface saturation state for each species, but ATCHEM assumes a well-mixed atmosphere, homogenizing gaseous species with each time-step. BIOGEM includes photosynthesis, aerobic and anaerobic respiration, methanogenesis (Olson et al., 2013), aerobic and anaerobic methanotrophy (Olson et al., 2016), nitrification, and sulfide oxidation. Photosynthesis is limited by the availability of both N and P, but N\({}_{2}\) fixation occurs when N is scarce relative to P (Ridgwell et al., 2007). Photosynthesis is further constrained by the availability of photosynthetically active radiation (PAR), which is calculated as a fixed fraction of incident radiation. Light is then attenuated in the water column according to an imposed e-folding depth. This calculation implicitly assumes a Sun-like spectrum and that exo-photosynthesis prefers photons of the same wavelengths as photosynthesis on Earth.
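A minimal sketch of this light scheme, with an assumed PAR fraction and e-folding depth (illustrative values, not the cGENIE defaults), is:

```python
# Sketch: PAR available for photosynthesis vs. depth, taking PAR as a fixed
# fraction of incident radiation attenuated over an imposed e-folding depth.
# All parameter values are illustrative assumptions.
import numpy as np

f_PAR = 0.43    # assumed PAR fraction of incident shortwave
z_fold = 20.0   # assumed e-folding depth (m)
I_0 = 340.0     # assumed incident flux at the sea surface (W m^-2)

for z in (0.0, 10.0, 20.0, 50.0, 100.0):
    par = f_PAR * I_0 * np.exp(-z / z_fold)
    print(f"z = {z:5.1f} m: PAR = {par:6.2f} W m^-2")
```

Because PAR decays within a few e-folding depths, photosynthesis is confined to a thin photic zone, which is why the mixed-layer behavior discussed below controls nutrient resupply to the productive surface ocean.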
We run a total of 35 experiments, varying obliquity from 0\({}^{\circ}\) to 90\({}^{\circ}\) and eccentricity from 0 to 0.4 (Table 2). Although obliquity simply shapes the spatial distribution of instellation, annual average instellation varies with eccentricity (\(e\)) by a factor of \((1-e^{2})^{-1/2}\) for planets with equivalent semi-major axes (Table 3) (Laskar
\begin{table}
\begin{tabular}{l l} \hline \hline Parameter & Value \\ \hline Obliquity & _low:_\(0^{\circ}\), \(15^{\circ}\) \\ & _moderate:_\(30^{\circ}\), \(45^{\circ}\) \\ & _high:_\(60^{\circ}\), \(75^{\circ}\), \(90^{\circ}\) \\ Eccentricity & 0, 0.1, 0.2, 0.3, 0.4 \\ \hline \end{tabular}
\end{table}
Table 2: Parameters Varied
\begin{table}
\begin{tabular}{c c} \hline \hline Eccentricity & Instellation (\(S_{\oplus}\)) \\ \hline
0.0 & 1.000 \\
0.1 & 1.005 \\
0.2 & 1.021 \\
0.3 & 1.048 \\
0.4 & 1.091 \\ \hline \end{tabular}
\end{table}
Table 3: Instellation Relative to Earth (\(S_{\oplus}\))
et al., 1993). All other parameters--such as day length, surface pressure, and atmospheric pCO\({}_{2}\) among others--are set to present-day Earth values. We also assume a present-day Earth continental configuration, which allows us to compare the habitability of land vs. marine environments. Finally, we use the Sun's spectrum, which may limit the relevance of our work to more common M-dwarf planets that receive relatively more IR photons and less of the visible photons used by photosynthesis on Earth. This choice is due to current limitations of the photochemistry and photosynthesis codes described above, but it is also physically motivated because HZ planets around M-dwarfs are likely to be tidally locked with near zero obliquity and eccentricity (Heller et al., 2011). Model runs are spun up for 10,000 years until reaching steady-state with respect to surface temperature and chemical tracers in the ocean. Each experiment requires only 1 CPU and completes in 10 days. Data is output every two weeks, and all data presented herein is averaged over the last decade of each simulation.
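As a quick consistency check of the annual-mean instellation scaling quoted above, the snippet below evaluates \((1-e^{2})^{-1/2}\) for the eccentricities in Table 3:

```python
# Sketch: annual-mean instellation relative to a circular orbit,
# S(e)/S(0) = (1 - e^2)^(-1/2), for the eccentricities in Table 3.
for e in (0.0, 0.1, 0.2, 0.3, 0.4):
    print(f"e = {e:.1f}: S/S_circular = {(1.0 - e**2) ** -0.5:.3f}")
```

Even at \(e=0.4\) the annual-mean enhancement is under 10%, while the peak instellation near periastron is larger by a factor of \((1-e)^{-2}\simeq 2.8\).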
## 4 Results
### Extreme surface air temperatures occur locally and seasonally at high obliquity and high eccentricity
Surface air temperature (SAT) decreases on annual and global average by \(\sim\)5\({}^{\circ}\)C with increasing obliquity from 0-90\({}^{\circ}\) for fixed eccentricity (Figure 3). Cooling on global average reflects a \(\sim\)18\({}^{\circ}\)C decrease in equatorial SAT, but SAT at the poles actually increases by \(\sim\)20\({}^{\circ}\)C with increasing obliquity, leading to reductions in ice cover and planetary albedo as in previous studies.
Globally and annually averaged SAT increases dramatically with eccentricity for fixed obliquity. SAT increases by \(\sim\)5\({}^{\circ}\)C between 0 and 0.2 eccentricity and further increases by \(\sim\)13\({}^{\circ}\)C between 0.2 and 0.4 eccentricity on global average. Warming is more pronounced at the poles, which warm by \(\sim\)20\({}^{\circ}\)C over this same eccentricity range. This warming on annual average arises along with higher annual-mean instellation on high-eccentricity planets despite asymmetry in season duration (Table 3).
SAT varies only with latitude for planets with both zero obliquity and eccentricity. For all other orbital scenarios, SAT varies both spatially and temporally. Peak SATs surpass 50\({}^{\circ}\)C during the summers for orbits with an eccentricity of 0.4 or an obliquity of \(\geq 60^{\circ}\), rendering land habitats in the polar regions of planets with high obliquity and the equatorial regions of planets with the combination of high eccentricity and low obliquity habitable only for thermophilic life by Earth standards. Seasonal temperature contrasts of \(\geq 60^{\circ}\)C under these same orbital scenarios may exacerbate this stressor for life on land. However, simulations with both low to moderate obliquity and eccentricity of \(\leq\)0.3 generate SATs \(<\)50\({}^{\circ}\)C year-round at all latitudes, with seasonal temperature variations \(<\)20\({}^{\circ}\)C for equatorial and middle latitudes.
### Sea surface temperatures remain hospitable globally at high obliquity and high eccentricity
Sea surface temperature (SST) more directly impacts marine life than SAT. Globally and annually averaged SST decreases by \(\sim\)5\({}^{\circ}\)C from 0\({}^{\circ}\) to 90\({}^{\circ}\) obliquity for fixed eccentricity. Globally and annually averaged SST increases by \(\sim\)4\({}^{\circ}\)C between 0 and 0.2 eccentricity, and it further increases by \(\sim\)10\({}^{\circ}\)C between 0.2 and 0.4 eccentricity, for fixed obliquity.
Assessing the habitability of marine environments also requires consideration of how seasonal maxima and minima may affect life locally. Even for the most extreme orbits we consider here, two-week average SST does not exceed 46\({}^{\circ}\)C at any latitude, suggesting that high obliquity and/or high eccentricity does not preclude marine environments suitable for life. Two-week average SST above 40\({}^{\circ}\)C occurs only during the summers of simulations with orbits of 0.4 eccentricity and 0\({}^{\circ}\)-30\({}^{\circ}\) obliquity, where it ranges from 43-45\({}^{\circ}\)C. This muted seasonal maxima relative to SAT arises due to the high heat capacity of water, which makes the ocean more resistant to temperature swings than the atmosphere. The seasonal minima is also muted because SST can not fall below the freezing point of seawater regardless of atmospheric temperature, but we note that permanent sea ice is eliminated for obliquities greater than 45\({}^{\circ}\) and eccentricities greater than 0.2 in our simulations. In combination, seasonal SST contrasts are significantly reduced relative to seasonal SAT contrasts. Simulations with low to moderate obliquity and \(\leq\)0.3 eccentricity display seasonal SST variations of \(<\)10\({}^{\circ}\)C. For simulations with both high obliquity and 0.4 eccentricity, seasonal temperature variation in the SH middle and polar latitudes approaches 20\({}^{\circ}\)C. These seasonal effects are large compared to the Earth, but elimination of polar sea ice nonetheless increases open ocean habitats for life.
### Seasonality in the depth of the ocean mixed-layer increases with obliquity and eccentricity
The depth of the wind-mixed layer varies spatially and temporally with SST (Figure 4, 5). The mixed layer is shallowest when and where SST is highest, and the mixed layer is deepest when and where SST is lowest. This relationship reflects density stratification of the ocean that opposes vertical mixing, the strength of which varies with latitude and season as the surface of
the ocean warms/cools in response to spatially and temporally variable instellation.
At low obliquities, the mixed layer is shallowest at the equator and deepest at the winter pole. As obliquity increases, the equatorial mixed layer deepens as thermal stratification weakens as the poles begin to receive more stellar energy at the expense of the equator. The depth of the mixed layer also becomes increasingly variable at all latitudes as seasonal temperature contrasts increase. At high obliquities, the depth of the equatorial mixed layer exceeds that of the summer pole (Figure 5).
### Marine biological productivity increases with obliquity and eccentricity
Both obliquity and eccentricity influence biological export production. However, obliquity effects are much larger within the parameter space we investigated. At lower obliquity and eccentricity values, increases in both obliquity and eccentricity have large positive effects on export production, but increases in either parameter are less significant at higher values of one or both (Figure 6). We use our 0\({}^{\circ}\) obliquity and 0 eccentricity simulation, which yields 1600 Tmol POC yr\({}^{-1}\) export production, as a baseline for comparing the biospheric response of worlds experiencing seasons.
For simulations with 0 eccentricity, export production increases nearly linearly with obliquity from 1600 Tmol POC yr\({}^{-1}\) (baseline) at 0\({}^{\circ}\) obliquity to 5200 Tmol POC yr\({}^{-1}\) (3.3x baseline) at 90\({}^{\circ}\) obliquity. Simulations with fixed eccentricities of 0.1 and 0.2 yield small increases in export production over low obliquity ranges from 2200 and 2600 Tmol POC yr\({}^{-1}\) (1.4 and 1.6x baseline, respectively), converging on 2700 Tmol POC yr\({}^{-1}\) (1.7x baseline) at 30\({}^{\circ}\). At moderate and high obliquities, export production again increases nearly linearly, reaching 5500 Tmol POC yr\({}^{-1}\) (3.4x baseline) at 90\({}^{\circ}\) for these same eccentricities. Simulations with higher eccentricities of 0.3 and 0.4 yield 3300 and 4100 Tmol POC yr\({}^{-1}\) (2.1x and 2.6x baseline) at 0\({}^{\circ}\) obliquity, decrease approximately linearly by 200-300 Tmol POC yr\({}^{-1}\) over low obliquity ranges, then increase to 5300-5400 Tmol POC yr\({}^{-1}\) (3.3-3.4x baseline) at 90\({}^{\circ}\) (Figure 6).
For simulations with low obliquity, export production increases nearly linearly with eccentricity, increasing from 1600 and 1900 Tmol POC yr\({}^{-1}\) (1.0 and 1.2x baseline, respectively) at 0 eccentricity and converging to 4100 Tmol POC yr\({}^{-1}\) (2.6x baseline) at 0.4 eccentricity.
For moderate and high obliquities, export production is initially higher at 0 eccentricity and is less sensitive to eccentricities \(\leq\)0.2, changing by \(\leq\)4%. In simulations
Figure 2: Zonally averaged solar short wave forcing at top of atmosphere across the seasonal cycle for varied obliquity (columns) and eccentricity (rows). Time advances along the x-axis and latitude is represented on the y-axis for each subpanel. Solar forcing peaks at periastron and reaches a minimum at apoastron, the timing of which varies with eccentricity.
with 30-60\({}^{\circ}\) obliquity, export production then increases nearly linearly from 0.2 up to 0.4 eccentricity, yielding 3800-4300 Tmol POC yr\({}^{-1}\) (2.4-2.7x baseline) at 0.4 eccentricity. Meanwhile, export production in higher obliquity simulations with 75\({}^{\circ}\) and 90\({}^{\circ}\) obliquity remains relatively insensitive to eccentricity. Export increases by only 100 Tmol POC yr\({}^{-1}\) in our 75\({}^{\circ}\) simulation, while export decreases by 400 Tmol POC yr\({}^{-1}\) in our 90\({}^{\circ}\) simulation, with both scenarios converging to \(\sim\)5200 Tmol POC yr\({}^{-1}\) at 0.4 eccentricity (Figure 6).
The distribution of export production is closely related to mixed-layer depth, both spatially and temporally. In every simulation, nearly all export production occurs at latitudes with high mixed-layer depth seasonality, during and immediately following the transition from deep to shallow mixed layer as thermal stratification develops with surface warming (Figures 5, 7), a phenomenon referred to as 'spring blooms' on Earth. This relationship between productivity and mixed-layer depth arises due to seasonal differences in nutrient availability as ocean stratification varies (Barnett and Olson, 2022).
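The essence of this mechanism can be captured in a toy two-box sketch in which seasonal deepening of the mixed layer entrains nutrient-rich deep water into the surface box, fueling a subsequent pulse of uptake; all concentrations and rate constants below are illustrative assumptions, not cGENIE parameters:

```python
# Toy sketch: seasonal mixed-layer entrainment of deep nutrients driving a
# pulse of nutrient-limited productivity. Arbitrary, illustrative units.
import numpy as np

days = np.arange(360)
h = 50.0 + 40.0 * np.cos(2*np.pi*days/360)  # mixed-layer depth (m); deep in "winter"
N_deep = 10.0                               # deep-ocean nutrient concentration
N = 2.0                                     # surface nutrient concentration
uptake_rate = 0.05                          # biological uptake rate (1/day)
export = np.zeros(len(days))

for i in range(1, len(days)):
    dh = h[i] - h[i-1]
    if dh > 0:  # deepening entrains nutrient-rich deep water
        N = (N * h[i-1] + N_deep * dh) / h[i]
    uptake = uptake_rate * N                # nutrient-limited productivity
    N -= uptake
    export[i] = uptake                      # uptake as a crude proxy for export

print(f"peak 'bloom' on day {int(np.argmax(export))}; "
      f"annual export = {export.sum():.1f} (arb. units)")
```

If the entrainment step is removed, the surface nutrient pool simply decays away and annual export collapses; this is the toy-model analogue of a permanently stratified, weakly seasonal ocean.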
## 5 Discussion
### Support and extension of previous work
Barnett and Olson (2022) showed that annual biospheric productivity increases with obliquity in cGENIE simulations, and they argued that moderately high obliquity planets may thus be particularly attractive candidates for exoplanet life detection. However, limitations to their modeling methodology precluded them from investigating obliquities higher than 45\({}^{\circ}\) and it was unclear whether the trend of increasing productivity with obliquity would continue to more extreme orbits. We have improved upon their initial study by coupling the PlaSim atmospheric GCM to cGENIE to allow for seasonal reversal of surface winds, and we demonstrate that biospheric productivity increases with obliquity up to 90\({}^{\circ}\) (Figure 6), suggesting that high-obliquity planets may be superhabitable (Heller and Armstrong, 2014). We further show that seasonality arising from high-eccentricity orbits also favorably influences biospheric productivity (Figure 6). Perhaps most surprisingly, the combination of high obliquity and high eccentricity supports marine biospheres that are more productive than present-day Earth in our model (Figure 6)!
As concluded in previous work, ocean life responds favorably to 'extreme' orbits in our simulations due to changes in the distribution and recycling of bioessential nutrients in the oceans (Olson et al., 2020; Barnett
Figure 3: Zonally averaged surface air temperature (SAT) across the seasonal cycle for varied obliquity (columns) and eccentricity (rows). Time advances along the x-axis and latitude is represented on the y-axis for each subpanel. SATs exceed 50\({}^{\circ}\)C during summers for high-obliquity and/or high-eccentricity orbits.
and Olson, 2022). Enhanced nutrient availability in our simulations is the consequence of seasonal breakdown of density stratification within the ocean (Figure 5), allowing nutrient-rich deep waters to mix with nutrient-depleted surface waters and stimulating photosynthetic activity.
### Implications for exoplanet life detection
Higher productivity on high-obliquity and high-eccentricity planets may favorably influence the 'detectability' of remote biosignatures. Higher productivity results in greater production of photosynthetic O\({}_{2}\), a frequently discussed biosignature that signals the presence of life on Earth (Meadows, 2017; Meadows et al., 2018; Schwieterman et al., 2018). Moreover, higher rates of photosynthesis ultimately increase the potential rates of other metabolisms in the ocean interior and/or marine sediments--and thus the production of other metabolic waste products that may also serve as gaseous biosignatures. This relationship arises because photosynthesis translates stellar energy to chemical energy that is then propagated through the biosphere in the form of chemical disequilibrium (Krissansen-Totton et al., 2018). This energy flux on Earth is orders of magnitude greater than geological energy inputs, with the consequence that Earth's biosphere is ultimately solar-powered despite a diversity of non-photosynthetic metabolisms. Photosynthetic rates thus throttle the production of most biogenic gases that may serve as remote biosignatures, including those not directly involved in photosynthesis (e.g., CH\({}_{4}\), N\({}_{2}\)O, H\({}_{2}\)S).
Biosignature gases may be produced in large quantities without influencing planetary spectra, potentially resulting in a 'false negative' for life (Reinhard et al., 2017). As an example, microbial sulfate reduction produces large fluxes of H\({}_{2}\)S in marine sediments on present-day Earth (Canfield, 1991), but the resulting H\({}_{2}\)S forms pyrite (FeS\({}_{2}\)) within sediments or is re-oxidized before it can reach the atmosphere and influence the spectral appearance of our planet. Likewise, the Black Sea--an analog environment for Earth's ancient oceans--supports high rates of methanogenesis, but it is ultimately a negligible source of CH\({}_{4}\) to the atmosphere due to oxidation of CH\({}_{4}\) within the water column (Schmale et al., 2011). Alien observers expecting to find H\({}_{2}\)S or CH\({}_{4}\) on inhabited planets may thus be misled by their low abundances in our atmosphere.
Figure 4: Zonally averaged sea surface temperature (SST) across the seasonal cycle for varied obliquity (columns) and eccentricity (rows). Global average SST increases with increasing eccentricity and decreases slightly with increasing obliquity. Time advances along the x-axis and latitude is represented on the y-axis for each subpanel. Two-week average SST does not exceed 46\({}^{\circ}\)C for any latitude.
Exoplanets with large seasonal cycles may be less vulnerable to these types of false negatives. In addition to replenishing nutrients, seasonal breakdown of the thermal stratification that inhibits vertical mixing in the ocean on high-obliquity and high-eccentricity worlds may increase the fraction of biosignature gases produced in the ocean interior and marine sediments that ultimately make it to the surface and accumulate in the atmosphere where they may affect planetary spectra. The combination of greater biosignature production and enhanced communication between the ocean interior and the atmosphere may thus increase the detectability of life on worlds experiencing large seasonal cycles.
In addition to increasing the atmospheric accumulation of biosignature gases, seasonality may also serve as an independent beacon for life (Olson et al., 2018; Mettler et al., 2022). The composition of Earth's atmosphere oscillates seasonally, reflecting seasonally variable gas fluxes as life responds to its changing environment. Most famously, CO\({}_{2}\) levels rise and fall at Mauna Loa as the balance between CO\({}_{2}\) fixation into biomass by photosynthesis and CO\({}_{2}\) release by respiration shifts with the seasons (Keeling et al., 1976). Similar fluctuations on other worlds may provide a temporal biosignature that reveals the presence of exoplanet life.
These seasonal changes are small on present-day Earth but may be exaggerated on worlds with larger seasonal cycles. However, it remains to be seen whether obliquity- or eccentricity-driven seasonality in atmospheric composition will be more detectable. Our simulations suggest that obliquity is more likely to result in large-magnitude biogenic seasonality compared to eccentricity, but characterizing seasonality on these worlds is complicated by the details of viewing geometry (Olson et al., 2018; Mettler et al., 2022). In some scenarios, views that blend opposing seasons between hemispheres may mask the biogenic signal in disk average. Global seasons on planets with eccentric orbits may thus be easier to characterize than hemispheric seasons resulting from obliquity for many viewing geometries, but our results imply that detectable seasonality may require very high eccentricities compared to Earth's.
A recent study by Schulze-Makuch et al. (2020) generated a list of two dozen potentially superhabitable exoplanets and exoplanet candidates from the Kepler catalogue, taking into account various planet and host star properties previously known to favorably influence habitability (Heller and Armstrong, 2014).
Figure 5: Zonally averaged mixed layer depth across the seasonal cycle for varied obliquity (columns) and eccentricity (rows). Time advances along the x-axis and latitude is represented on the y-axis for each subpanel. Seasonal temperature variations in high-obliquity and, to a lesser extent, high-eccentricity simulations generate extreme fluctuations in mixed layer depth at middle and polar latitudes.
Our results suggest that high-obliquity and/or high-eccentricity worlds should also be considered among the most favorable targets for life detection. By extension, observationally constraining obliquity and eccentricity will be critical for characterizing habitability and for the interpretation of biosignatures. While eccentricity can be derived from radial velocity measurements and low-resolution light curves prior to atmospheric characterization measurements, constraints on planetary obliquity require more detailed and frequent measurements (Kipping, 2008; Van Eylen and Albrecht, 2015; Kane and Torres, 2017). Previous studies have derived planetary obliquity from full thermal phase curves of large hot Jupiters (Adams et al., 2019). Thermal emission may also reveal the obliquities of directly imaged young jovian planets (Cowan et al., 2013), but resolving the obliquities of mature planets with less internal heat is complicated by several additional factors (Gaidos and Williams, 2004). The obliquities of Earth-sized rocky exoplanets are inaccessible with current instrumentation (Kane and Torres, 2017), but full-orbit reflected light curves obtained with future instrumentation may one day constrain the obliquities of potentially habitable worlds (Kawahara and Fujii, 2011). Knowledge of planetary obliquity could provide important context for interpreting atmospheric observations relating to both planetary habitability and biosignatures. This context may provide an opportunity to evaluate exoplanets for future observations and prioritize those that have the highest potential for the detection of biosignatures. Future work and/or instrument design should consider constraining the obliquities of rocky planets in the HZ of Sun-like stars a priority.
### Caveats and opportunities for future work
A potential caveat is that the same compositional seasonality that may signal the presence of exoplanet life may also represent a challenge for certain types of life. For example, seasonally variable ocean chemistry may threaten the development of animal-grade complexity if such life requires stable oxygenation of its environment (Catling et al., 2005; Reinhard et al., 2016). Indeed, ocean deoxygenation is associated with several of the 'big 5' mass extinctions on Earth, highlighting the potential impact of varying oxygen levels on marine animals (Meyer and Kump, 2008). However, if evolution via natural selection favors individuals tolerant of seasonal changes in ocean composition on high-obliquity and/or high-eccentricity worlds, then marine animals may be more resilient against the types of perturbations associated with mass extinctions on Earth. Future work should investigate how seasonality in the composition of the ocean-atmosphere system may specifically affect animals, which are not represented in our simulations.
Figure 6: Annual and global average biological export as a function of obliquity and eccentricity. Colored lines represent simulations with identical eccentricities for varied obliquity (left) and identical obliquities for varied eccentricity (right). Export production increases with both increasing obliquity and eccentricity. Eccentricity effects are significant at both low obliquity and eccentricity but drop off as either parameter increases.
The long-term effect of obliquity on planetary habitability is also an area of ongoing research. Previous studies have shown that increasing planetary obliquity can extend the outer edge of the HZ through enhanced heating at the poles due to increased seasonality, thereby increasing the orbital distance an exoplanet with high obliquity can occupy around its host star and remain habitable (Williams & Kasting, 1997; Spiegel et al., 2009; Dressing et al., 2010; Armstrong et al., 2014; Linsenmeier et al., 2015; Wang et al., 2016; Kilic et al., 2018; Nowajewski et al., 2018; Guendelman & Kaspi, 2019; Kang, 2019; Colose et al., 2019; Palubski et al., 2020; Komacek et al., 2021). However, a GCM study by Kang (2019) showed that planets inside the HZ can experience higher stratospheric water vapor concentrations due to the effects of high obliquity. Higher concentrations of stratospheric water vapor could impact the long-term habitability of high-obliquity planets through enhanced water loss (Kasting, 1988; Meadows, 2017). While high-obliquity planets are attractive candidates for future observation because of their potential to increase biological productivity, enhanced water loss could create false-positive biosignature detections from the subsequent accumulation of abiotic oxygen or potentially threaten long-term habitability (Meadows et al., 2018). Future work is needed to constrain the effect of obliquity on planetary water loss to discern the relative advantage of higher obliquity on overall planetary habitability.
## 6 Conclusions
We used cGENIE-PlaSim, a 3D marine biogeochemical model coupled to an atmospheric GCM, to explore the habitability of high-obliquity and high-eccentricity planets. We considered a large range of orbital scenarios including simulations with 0-90\({}^{\circ}\) obliquity and/or 0-0.4 eccentricity. Sea surface temperatures consistently remained within the bounds considered hospitable for mesophilic life on Earth at all latitudes for the entire year in nearly all of our model scenarios, despite extreme surface air temperatures in some simulations. We therefore conclude that marine life could thrive year-round on high-obliquity and/or high-eccentricity worlds.
In fact, our simulations suggest that seasonal variations driven by obliquity and/or eccentricity can be beneficial for life. Biospheric productivity on a high-obliquity planet typically exceeds that on an otherwise identical low-obliquity planet in our model. The same holds true for high-eccentricity versus low-eccentricity worlds, although the observed increase in productivity with increasing eccentricity is comparatively subdued.
Figure 7: Zonally averaged export density across the seasonal cycle for varied obliquity (columns) and eccentricity (rows). Time advances along the x-axis and latitude is represented on the y-axis for each subpanel. Nearly all biological export occurs in latitudes where the depth of the mixed layer varies seasonally, just after the transition from a deep to a shallow mixed layer.
In both cases, the mechanism by which productivity increases is enhanced nutrient recycling via seasonal breakdown of thermal stratification within the ocean, consistent with previous studies (Olson et al., 2020; Barnett and Olson, 2022). Higher rates of photosynthesis introduce more chemical energy to the global biosphere, benefiting many metabolisms in addition to photosynthesis. High-obliquity and high-eccentricity planets may therefore be 'superhabitable' worlds capable of supporting larger and possibly more diverse biospheres than Earth.
Our results also suggest that high obliquity and high eccentricity can favorably influence biosignatures in addition to habitability. Higher biospheric productivity may ultimately increase the annual production of biosignature gases such as O\({}_{2}\) and CH\({}_{4}\) on high-obliquity and high-eccentricity planets, minimizing the risk of false negative non-detections. At the same time, seasonal oscillations in biological activity on these planets may manifest as remotely detectable oscillations in atmospheric composition. This time-variability provides an independent temporal biosignature unique to planets experiencing seasons that may mitigate against false positives and false negatives (Olson et al., 2018; Schwieterman et al., 2018).
Our study suggests that high-obliquity and high-eccentricity planets may be appealing targets for exoplanet life detection. However, there are still some open questions that require urgent attention. For instance, the extent to which seasonality affects the development of complex life remains to be explored. Long-term water loss and its implications for both habitability and biosignature false positives on high-obliquity planets may also be a concern. Moving forward, observationally constraining planetary obliquity should also be a priority. With these caveats, we argue that high-obliquity and high-eccentricity planets may be among our most promising targets for the search for exoplanet life.
SLO acknowledges support from the NASA Exobiology, Habitable Worlds, and ICAR programs under grants 80NSSC20K1437, 80NSSC20K1409, and 80NSSC21K0594, respectively. We also thank Christopher Colose and Rene Heller for providing helpful comments to improve this manuscript. This project benefited from participation in the NASA NExSS and NASA NOW Research Coordination Networks.
|
2305.06060 | Local Galois representations associated to additive polynomials | For an additive polynomial and a positive integer, we define an irreducible
smooth representation of a Weil group of a non-archimedean local field. We
study several invariants of this representation. We deduce a necessary and
sufficient condition for it to be primitive. | Takahiro Tsushima | 2023-05-10T11:26:02Z | http://arxiv.org/abs/2305.06060v1 | # Local Galois representations associated to additive polynomials
###### Abstract
For an additive polynomial and a positive integer, we define an irreducible smooth representation of a Weil group of a non-archimedean local field. We study several invariants of this representation. We deduce a necessary and sufficient condition for it to be primitive.
+
Footnote †: _Keywords_: Local Galois representations, additive polynomials, symplectic modules
2020 _Mathematics Subject Classification_. Primary: 11F80; Secondary: 14H37.
## 1 Introduction
Let \(p\) be a prime number and \(q\) a power of it. An additive polynomial \(R(x)\) over \(\mathbb{F}_{q}\) is a one-variable polynomial with coefficients in \(\mathbb{F}_{q}\) such that \(R(x+y)=R(x)+R(y)\). It is known that \(R(x)\) has the form \(\sum_{i=0}^{e}a_{i}x^{p^{i}}\) (\(a_{e}\neq 0\)) with an integer \(e\geq 0\). Let \(F\) be a non-archimedean local field with residue field \(\mathbb{F}_{q}\). We take a separable closure \(\overline{F}\) of \(F\). Let \(W_{F}\) be the Weil group of \(\overline{F}/F\). Let \(v_{F}(\cdot)\) denote the normalized valuation on \(F\). We take a prime number \(\ell\neq p\). For a non-trivial character \(\psi\colon\mathbb{F}_{p}\to\overline{\mathbb{Q}}_{\ell}^{\times}\), a non-zero additive polynomial \(R(x)\) over \(\mathbb{F}_{q}\) and a positive integer \(m\) which is prime to \(p\), we define an irreducible smooth \(W_{F}\)-representation \(\tau_{\psi,R,m}\) over \(\overline{\mathbb{Q}}_{\ell}\) of degree \(p^{e}\) if \(v_{F}(p)\) is sufficiently large. This is unconditional if \(F\) has positive characteristic. The integer \(m\) is related to the Swan conductor exponent of \(\tau_{\psi,R,m}\). As \(m\) varies, the isomorphism class of \(\tau_{\psi,R,m}\) varies.
Let \(C_{R}\) denote the algebraic affine curve defined by \(a^{p}-a=xR(x)\) in \(\mathbb{A}_{\mathbb{F}_{q}}^{2}=\operatorname{Spec}\mathbb{F}_{q}[a,x]\). This curve is studied in [6] and [1] in detail. For example, the smooth compactification of \(C_{R}\) is proved to be supersingular if \((p,e)\neq(2,0)\). The automorphism group of \(C_{R}\) contains a semidirect product \(Q_{R}\) of a cyclic group and an extra-special \(p\)-group (see (2.5)). Let \(\mathbb{F}\) be an algebraic closure of \(\mathbb{F}_{q}\). Then the semidirect product \(Q_{R}\rtimes\mathbb{Z}\) acts on the base change \(C_{R,\mathbb{F}}:=C_{R}\times_{\mathbb{F}_{q}}\mathbb{F}\) as endomorphisms, where \(1\in\mathbb{Z}\) acts on \(C_{R,\mathbb{F}}\) as the Frobenius endomorphism over \(\mathbb{F}_{q}\). The center \(Z(Q_{R})\) of \(Q_{R}\) is identified with \(\mathbb{F}_{p}\), which acts on \(C_{R}\) as \(a\mapsto a+\zeta\) for \(\zeta\in\mathbb{F}_{p}\). Let \(H^{1}_{\mathrm{c}}(C_{R,\mathbb{F}},\overline{\mathbb{Q}}_{\ell})\) be the first etale cohomology group of \(C_{R,\mathbb{F}}\) with compact support. Each element of \(Z(Q_{R})\) is fixed by the action of \(\mathbb{Z}\) on \(Q_{R}\). Thus its \(\psi\)-isotypic part \(H^{1}_{\mathrm{c}}(C_{R,\mathbb{F}},\overline{\mathbb{Q}}_{\ell})[\psi]\) is regarded as a \(Q_{R}\rtimes\mathbb{Z}\)-representation.
We construct a concrete Galois extension over \(F\) whose Weil group is isomorphic to a subgroup of \(Q_{R}\rtimes\mathbb{Z}\) associated to the integer \(m\) (Definition 3.1 and (3.8)). Namely we will define a homomorphism \(\Theta_{R,m}\colon W_{F}\to Q_{R}\rtimes\mathbb{Z}\) in (3.12). As a result, we define \(\tau_{\psi,R,m}\) to be the composite
\[W_{F}\xrightarrow{\Theta_{R,m}}Q_{R}\rtimes\mathbb{Z}\to\operatorname{Aut}_{ \overline{\mathbb{Q}}_{\ell}}(H^{1}_{\mathrm{c}}(C_{R,\mathbb{F}},\overline{ \mathbb{Q}}_{\ell})[\psi]).\]
This is a smooth irreducible representation of \(W_{F}\) of degree \(p^{e}\).
We state our motivation and reason why we introduce and study \(\tau_{\psi,R,m}\). It is known that the reductions of concentric affinoids in the Lubin-Tate curve fit into this type of curves \(C_{R}\) with special \(R\). For example, see [16] and [17]. When \(R\) is a monomial and \(m=1\), the representation \(\tau_{\psi,R,m}\) is studied in [9] and [10] in detail. In these papers, the reduction of a certain affinoid in the Lubin-Tate space is related to \(C_{R}\) in some sense, and the supercuspidal representation \(\pi\) of \(\operatorname{GL}_{p^{e}}(F)\) corresponding to \(\tau_{\psi,R,m}\) under the local Langlands correspondence is described explicitly. The homomorphism \(\Theta_{R,1}\) with \(R(x)=x^{p^{e}}\) (\(e\in\mathbb{Z}_{\geq 1}\)) does appear in the work [9]. An irreducible representation of a group is said to be primitive if it is not isomorphic to an induction of any representation of a proper subgroup. The representation \(\tau_{\psi,R,m}\) in [9] and [10] is primitive, and this property makes it complicated to describe \(\pi\) from the viewpoint of type theory. It is an interesting problem to do the same for general \(\tau_{\psi,R,m}\). In this direction, it would be valuable to know when \(\tau_{\psi,R,m}\) is primitive. We expect that other curves \(C_{R}\) will be related to concentric affinoids in the Lubin-Tate spaces as in [9].
We briefly explain the content of each section. In §2, we state several facts on the curves \(C_{R}\) and the extra-special \(p\)-subgroups contained in the automorphism groups of the curves.
In §3.1 and §3.2, we construct the Galois extension mentioned above and define \(\tau_{\psi,R,m}\). Let \(d_{R}:=\gcd\{p^{i}+1\mid a_{i}\neq 0\}\). We show that the Swan conductor exponent of \(\tau_{\psi,R,m}\) equals \(m(p^{e}+1)/d_{R}\) (Corollary 3.15). In §3.3, we study primitivity of \(\tau_{\psi,R,m}\). In particular, we write down a necessary and sufficient condition for \(\tau_{\psi,R,m}\) to be primitive. By this, we give examples such that \(\tau_{\psi,R,m}\) is primitive (Example 3.29). The necessary and sufficient condition is that a symplectic module \((V_{R},\omega_{R})\) associated to \(\tau_{\psi,R,m}\) is completely anisotropic (Corollary 3.28). If \(R\) is a monomial, \((V_{R},\omega_{R})\) is studied in §3.4 in more detail. In Proposition 3.44, a primary module in the sense of [11, §9] is constructed geometrically by using the Künneth formula.
Our aim in §4 is to show the following theorem.
**Theorem 1.1**.: _Assume \(p\neq 2\). The following two conditions are equivalent._
1. _There exists a non-trivial finite etale morphism_ \[C_{R}\to C_{R_{1}};\ (a,x)\mapsto\left(a-\Delta(x),r(x)\right),\] _where_ \(\Delta(x)\in\mathbb{F}_{q}[x]\) _and_ \(r(x),R_{1}(x)\) _are additive polynomials over_ \(\mathbb{F}_{q}\) _such that_ \(d_{R,m}\mid d_{R_{1}}\) _and_ \(r(\alpha x)=\alpha r(x)\) _for any_ \(\alpha\in\mu_{d_{R,m}}\)_._
2. _The_ \(W_{F}\)_-representation_ \(\tau_{\psi,R,m}\) _is imprimitive._
If \(\tau_{\psi,R,m}\) is imprimitive, it can be written as the induction of a certain explicit \(W_{F^{\prime}}\)-representation \(\tau^{\prime}_{\psi,R_{1},m}\) associated to a finite extension \(F^{\prime}/F\). The proof of the above theorem depends on several geometric properties of \(C_{R}\) developed in [6] and [1]. See the beginning of §4 for more details.
### Notation
Let \(k\) be a field. Let \(\mu(k)\) denote the set of all roots of unity in \(k\). For a positive integer \(r\), let \(\mu_{r}(k):=\{x\in k\mid x^{r}=1\}\).
For a positive integer \(i\), let \(\mathbb{A}_{k}^{i}\) and \(\mathbb{P}_{k}^{i}\) be an \(i\)-dimensional affine space and a projective space over \(k\), respectively. For a scheme \(X\) over \(k\) and a field extension \(l/k\), let \(X_{l}\) denote the base change of \(X\) to \(l\). For a closed subset \(Z\) of a variety \(X\), we regard \(Z\) as a closed subscheme with reduced scheme structure.
Throughout this paper, we set \(q:=p^{f}\) with a positive integer \(f\). For a positive integer \(i\), we simply write \(\mathrm{Nr}_{q^{i}/q}\) and \(\mathrm{Tr}_{q^{i}/q}\) for the norm map and the trace map from \(\mathbb{F}_{q^{i}}\) to \(\mathbb{F}_{q}\), respectively.
Let \(X\) be a scheme over \(\mathbb{F}_{q}\) and let \(F_{q}\colon X\to X\) be the \(q\)-th power Frobenius endomorphism. Let \(\mathbb{F}\) be an algebraic closure of \(\mathbb{F}_{q}\). Let \(\mathrm{Fr}_{q}\colon X_{\mathbb{F}}\to X_{\mathbb{F}}\) be the base change of \(F_{q}\). This endomorphism \(\mathrm{Fr}_{q}\) is called the Frobenius endomorphism of \(X\) over \(\mathbb{F}_{q}\).
For a Galois extension \(l/k\), let \(\mathrm{Gal}(l/k)\) denote the Galois group of the extension.
## 2 Extra-special \(p\)-groups and affine curves
**Definition 2.1**.: Let \(k\) be a field. A polynomial \(f(x)\in k[x]\) is called additive if \(f(x+y)=f(x)+f(y)\). Let \(\mathscr{A}_{k}\) be the set of all additive polynomials with coefficients in \(k\).
Let \(p\) be a prime number. We simply write \(\mathscr{A}_{q}\) for \(\mathscr{A}_{\mathbb{F}_{q}}\). Let \(R(x):=\sum_{i=0}^{e}a_{i}x^{p^{i}}\in\mathscr{A}_{q}\) with \(e\in\mathbb{Z}_{\geq 0}\) and \(a_{e}\neq 0\). Let
\[E_{R}(x):=R(x)^{p^{e}}+\sum_{i=0}^{e}(a_{i}x)^{p^{e-i}}\in\mathscr{A}_{q}. \tag{2.1}\]
We always assume
\[(p,e)\neq(2,0). \tag{2.2}\]
We simply write \(\mu_{r}\) for \(\mu_{r}(\mathbb{F})\) for a positive integer \(r\). Let
\[d_{R}:=\gcd\{p^{i}+1\mid a_{i}\neq 0\}.\]
If \(a_{i}\neq 0\), we have \(\alpha^{p^{i}}=\alpha^{-1}\) and \(\alpha^{p^{e-i}}=\alpha\) for \(\alpha\in\mu_{d_{R}}\). Hence we have
\[\alpha R(\alpha x)=R(x),\quad E_{R}(\alpha x)=\alpha E_{R}(x)\quad\text{for } \alpha\in\mu_{d_{R}}. \tag{2.3}\]
Let
\[f_{R}(x,y):=-\sum_{i=0}^{e-1}\left(\sum_{j=0}^{e-i-1}(a_{i}x^{p^{i}}y)^{p^{j}}+ (xR(y))^{p^{i}}\right)\in\mathbb{F}_{q}[x,y].\]
This is \(\mathbb{F}_{p}\)-linear with respect to \(x\) and \(y\). By (2.3), we have
\[f_{R}(\alpha x,\alpha y)=f_{R}(x,y)\quad\text{for }\alpha\in\mu_{d_{R}}. \tag{2.4}\]
**Lemma 2.2**.: _We have \(f_{R}(x,y)^{p}-f_{R}(x,y)=-x^{p^{e}}E_{R}(y)+xR(y)+yR(x).\) In particular, if \(E_{R}(y)=0\), we have \(f_{R}(x,y)^{p}-f_{R}(x,y)=xR(y)+yR(x).\)_
Proof.: The former equality follows from
\[f_{R}(x,y)^{p}-f_{R}(x,y) =xR(y)-(xR(y))^{p^{e}}+\sum_{i=0}^{e-1}(a_{i}x^{p^{i}}y-(a_{i}x^{ p^{i}}y)^{p^{e-i}})\] \[=-x^{p^{e}}E_{R}(y)+xR(y)+yR(x).\]
**Definition 2.3**.:
1. Let \(V_{R}:=\{\beta\in\mathbb{F}\mid E_{R}(\beta)=0\}\), which is a \((2e)\)-dimensional \(\mathbb{F}_{p}\)-vector space.
2. Let \[Q_{R}:=\left\{(\alpha,\beta,\gamma)\in\mathbb{F}^{3}\mid\alpha\in\mu_{d_{R}}, \ \beta\in V_{R},\ \gamma^{p}-\gamma=\beta R(\beta)\right\}\] be the group whose group law is given by \[(\alpha_{1},\beta_{1},\gamma_{1})\cdot(\alpha_{2},\beta_{2},\gamma_{2}):=( \alpha_{1}\alpha_{2},\beta_{1}+\alpha_{1}\beta_{2},\gamma_{1}+\gamma_{2}+f_{R} (\beta_{1},\alpha_{1}\beta_{2}))\,.\] This is well-defined according to (2.3) and Lemma 2.2.
3. Let \(H_{R}:=\{(\alpha,\beta,\gamma)\in Q_{R}\mid\alpha=1\}\), which is a normal subgroup of \(Q_{R}\).
If \(e=0\), we have \(p\neq 2\) by (2.2). We have \(H_{R}=\mathbb{F}_{p}\subset Q_{R}=\mu_{2}\times\mathbb{F}_{p}\) if \(e=0\).
For a group \(G\) and elements \(g,g^{\prime}\in G\), let \([g,g^{\prime}]:=gg^{\prime}g^{-1}g^{\prime-1}\).
**Lemma 2.4**.: _For \(g=(1,\beta,\gamma),\ g^{\prime}=(1,\beta^{\prime},\gamma^{\prime})\in H_{R}\), we have \([g,g^{\prime}]=(1,0,f_{R}(\beta,\beta^{\prime})-f_{R}(\beta^{\prime},\beta))\). In particular, we have \(f_{R}(\beta,\beta^{\prime})-f_{R}(\beta^{\prime},\beta)\in\mathbb{F}_{p}\)._
Proof.: This is directly checked. We omit the details.
For a group \(G\), let \(Z(G)\) denote its center and \([G,G]\) the commutator subgroup of \(G\).
**Definition 2.5**.: A non-abelian \(p\)-group \(G\) is called an _extra-special \(p\)-group_ if \([G,G]=Z(G)\) and \(|Z(G)|=p\).
**Lemma 2.6**.: _Assume \(e\geq 1\)._
1. _The group_ \(H_{R}\) _is non-abelian. We have_ \(Z(H_{R})=Z(Q_{R})=\{(1,0,\gamma)\mid\gamma\in\mathbb{F}_{p}\}\)_. The quotient_ \(H_{R}/Z(H_{R})\) _is isomorphic to_ \(V_{R}\)_._
2. _The group_ \(H_{R}\) _is an extra-special_ \(p\)_-group. The pairing_ \(\omega_{R}\colon V_{R}\times V_{R}\to\mathbb{F}_{p};\ (\beta,\beta^{\prime}) \mapsto f_{R}(\beta,\beta^{\prime})-f_{R}(\beta^{\prime},\beta)\) _is a non-degenerate symplectic form._
Proof.: We show (1). Let \(X_{\beta}:=\{x\in\mathbb{F}\mid f_{R}(\beta,x)=f_{R}(x,\beta)\}\) for \(\beta\in V_{R}\). Then \(X_{\beta}\) is an \(\mathbb{F}_{p}\)-vector space of dimension \(2e-1\) if \(\beta\neq 0\). Since \(V_{R}\) has dimension \(2e\), we have \(V_{R}\nsubseteq X_{\beta}\) for \(\beta\in V_{R}\setminus\{0\}\). This implies that \(H_{R}\) is non-abelian according to Lemma 2.4.
Clearly we have \(Z:=\{(1,0,\gamma)\mid\gamma\in\mathbb{F}_{p}\}\subset Z(Q_{R})\subset Z(H_{R})\). It suffices to show \(Z(H_{R})\subset Z\). Let \((1,\beta,\gamma)\in Z(H_{R})\). We have \(V_{R}\subset X_{\beta}\) by Lemma 2.4. This implies \(\beta=0\). Hence we obtain \(Z(H_{R})\subset Z\). The last claim is easily verified.
We show (2). By Lemma 2.4, we have \([H_{R},H_{R}]\subset Z(H_{R})\). Since \(H_{R}\) is non-abelian, \([H_{R},H_{R}]\) is non-trivial. Hence we have \([H_{R},H_{R}]=Z(H_{R})\) by \(|Z(H_{R})|=p\). Hence \(H_{R}\) is extra-special. Assume \(\omega_{R}(\beta,\beta^{\prime})=0\) for any \(\beta^{\prime}\in V_{R}\). We take an element \((1,\beta,\gamma)\in H_{R}\). By Lemma 2.4, we have \((1,\beta,\gamma)\in Z(H_{R})\). Hence we have \(\beta=0\) by (1).
**Definition 2.7**.:
1. Let \(C_{R}\) be the affine curve over \(\mathbb{F}_{q}\) defined by \(a^{p}-a=xR(x)\).
2. Let \(Q_{R}\) act on \(C_{R,\mathbb{F}}\) by \[(a,x)\cdot(\alpha,\beta,\gamma)=\left(a+f_{R}(x,\beta)+\gamma,\alpha^{-1}(x+ \beta)\right),\] (2.5) for \((a,x)\in C_{R,\mathbb{F}}\) and \((\alpha,\beta,\gamma)\in Q_{R}\). This is well-defined by (2.3) and Lemma 2.2.
The curve \(C_{R}\) is studied in [6] and [1].
We take a prime number \(\ell\neq p\). For a finite abelian group \(A\), let \(A^{\vee}\) denote the character group \(\operatorname{Hom}_{\mathbb{Z}}(A,\overline{\mathbb{Q}}_{\ell}^{\times})\). For a representation \(M\) of \(A\) over \(\overline{\mathbb{Q}}_{\ell}\) and \(\chi\in A^{\vee}\), let \(M[\chi]\) denote the \(\chi\)-isotypic part of \(M\).
According to Lemma 2.6(1), we identify a character \(\psi\in\mathbb{F}_{p}^{\vee}\) with a character of \(Z(H_{R})\).
**Lemma 2.8**.: _Let \(\psi\in\mathbb{F}_{p}^{\vee}\setminus\{1\}\)._
1. _Let_ \(W\subset V_{R}\) _be an_ \(\mathbb{F}_{p}\)_-subspace of dimension_ \(e\)_, which is totally isotropic with respect to_ \(\omega_{R}\)_. Let_ \(W^{\prime}\subset H_{R}\) _be the inverse image of_ \(W\) _by the natural map_ \(H_{R}\to V_{R};\ (1,\beta,\gamma)\mapsto\beta\)_. Let_ \(\xi\in W^{\prime\vee}\) _be an extension of_ \(\psi\in Z(H_{R})^{\vee}\)_. Let_ \(\rho_{\psi}:=\operatorname{Ind}_{W^{\prime}}^{H_{R}}\xi\)_. Then_ \(\rho_{\psi}\) _is a unique (up to isomorphism) irreducible representation of_ \(H_{R}\) _containing_ \(\psi\)_. In particular,_ \(\rho_{\psi}|_{Z(H_{R})}\) _is a multiple of_ \(\psi\)_._
2. _The_ \(\psi\)_-isotypic part_ \(H_{\mathrm{c}}^{1}(C_{R,\mathbb{F}},\overline{\mathbb{Q}}_{\ell})[\psi]\) _is isomorphic to_ \(\rho_{\psi}\) _as_ \(H_{R}\)_-representations._
Proof.: By Lemma 2.6(2) and [8, 16.14(2) Satz], the claim (1) follows. By [16, Remark 3.29], we have \(\dim H_{\mathrm{c}}^{1}(C_{R,\mathbb{F}},\overline{\mathbb{Q}}_{\ell})[\psi]=p ^{e}\). Hence the claim (2) follows from (1).
The representation \(\rho_{\psi}\) induces a projective representation \(\bar{\rho}\colon H_{R}/Z(H_{R})\to\operatorname{PGL}_{p^{e}}(\overline{ \mathbb{Q}}_{\ell})\).
**Lemma 2.9**.: _The map \(\bar{\rho}\) is injective._
Proof.: As in the proof of [14, Theorem 6], we have \(\operatorname{Tr}\rho_{\psi}(x)=0\) for \(x\in H_{R}\setminus Z(H_{R})\). Assume \(\bar{\rho}(xZ(H_{R}))=1\) for \(x\in H_{R}\). Then \(\rho_{\psi}(x)\) is a non-zero scalar matrix. Hence \(\operatorname{Tr}\rho_{\psi}(x)\neq 0\). This implies \(x\in Z(H_{R})\).
Let \(\mathbb{Z}\ni 1\) act on \(H_{\mathrm{c}}^{1}(C_{R,\mathbb{F}},\overline{\mathbb{Q}}_{\ell})\) by the pull-back \(\operatorname{Fr}_{q}^{*}\). Let \(\mathbb{Z}\ni 1\) act on \(Q_{R}\) by \((\alpha,\beta,\gamma)\mapsto(\alpha^{q^{-1}},\beta^{q^{-1}},\gamma^{q^{-1}})\). The semidirect product \(Q_{R}\rtimes\mathbb{Z}\) acts on \(H_{\mathrm{c}}^{1}(C_{R,\mathbb{F}},\overline{\mathbb{Q}}_{\ell})[\psi]\).
Let \(\overline{C}_{R}\) denote the smooth compactification of \(C_{R}\).
**Proposition 2.10**.: _The projective curve \(\overline{C}_{R}\) is supersingular. In particular, this curve has positive genus. The natural map \(H_{\mathrm{c}}^{1}(C_{R,\mathbb{F}},\overline{\mathbb{Q}}_{\ell})\to H^{1}( \overline{C}_{R,\mathbb{F}},\overline{\mathbb{Q}}_{\ell})\) is an isomorphism._
Proof.: The former claim is shown in [6, Theorems (9.4) and (13.7)] ([1, Proposition 8.5]). The last claim follows from [16, Lemmas 3.27 and 3.28(3)].
## 3 Local Galois representation
In this section, we define an irreducible smooth \(W_{F}\)-representation \(\tau_{\psi,R,m}\) and determine several invariants associated to it. In §3.2.2, we determine the Swan conductor exponent of \(\tau_{\psi,R,m}\). In §3.3, we determine the symplectic module associated to \(\tau_{\psi,R,m}\), and give a necessary and sufficient condition for \(\tau_{\psi,R,m}\) to be primitive. As a result, we obtain several examples such that \(\tau_{\psi,R,m}\) is primitive. If \(R\) is a monomial, we calculate invariants of the root system corresponding to \((V_{R},\omega_{R})\) defined in [11] (Lemma 3.36).
### Galois extension
For a valued field \(K\), let \(\mathcal{O}_{K}\) denote the valuation ring of \(K\).
Let \(F\) be a non-archimedean local field. Let \(\overline{F}\) be a separable closure of \(F\). Let \(\widehat{\overline{F}}\) denote the completion of \(\overline{F}\). Let \(v(\cdot)\) denote the unique valuation on \(\widehat{\overline{F}}\) such that \(v(\varpi)=1\) for a uniformizer \(\varpi\) of \(F\), which we now fix. We simply write \(\mathcal{O}\) for \(\mathcal{O}_{\widehat{\overline{F}}}\). Let \(\mathfrak{p}\) be the maximal ideal of \(\mathcal{O}\).
Let \(q\) be the cardinality of the residue field of \(\mathcal{O}_{F}\).
We take \(R(x)=\sum_{i=0}^{e}a_{i}x^{p^{i}}\in\mathscr{A}_{q}\). For an element \(a\in\mathbb{F}_{q}\), let \(\widetilde{a}\in\mu(F)\cup\{0\}\) be its Teichmuller lift. Let
\[\widetilde{R}(x):=\sum_{i=0}^{e}\widetilde{a}_{i}x^{p^{i}},\quad\widetilde{E} _{R}(x):=\widetilde{R}(x)^{p^{e}}+\sum_{i=0}^{e}(\widetilde{a}_{i}x)^{p^{e-i} }\in\mathcal{O}_{F}[x].\]
Similarly as (2.3), we have
\[\alpha\widetilde{R}(\alpha x)=\widetilde{R}(x),\quad\widetilde{E}_{R}(\alpha x )=\alpha\widetilde{E}_{R}(x)\quad\text{for }\alpha\in\mu_{d_{R}}(\overline{F}). \tag{3.1}\]
**Definition 3.1**.: Let \(m\) be a positive integer prime to \(p\). Let \(\alpha_{R,\varpi},\beta_{R,m,\varpi},\gamma_{R,m,\varpi}\in\overline{F}\) be elements such that
\[\alpha_{R,\varpi}^{d_{R}}=\varpi,\quad\widetilde{E}_{R}(\beta_{R,m,\varpi})= \alpha_{R,\varpi}^{-m},\quad\gamma_{R,m,\varpi}^{p}-\gamma_{R,m,\varpi}= \beta_{R,m,\varpi}\widetilde{R}(\beta_{R,m,\varpi}).\]
For simplicity, we write \(\alpha_{R},\beta_{R,m},\gamma_{R,m}\) for \(\alpha_{R,\varpi},\beta_{R,m,\varpi},\gamma_{R,m,\varpi}\), respectively.
**Remark 3.2**.: By \(\deg\widetilde{E}_{R}(x)=p^{2e}\) and \(\deg\widetilde{R}(x)=p^{e}\), we have
\[v(\alpha_{R})=\frac{1}{d_{R}},\quad v(\beta_{R,m})=-\frac{m}{p^{2e}d_{R}}, \quad v(\gamma_{R,m})=-\frac{m(p^{e}+1)}{p^{2e+1}d_{R}}.\]
The integer \(m\) controls the depth of ramification of the resulting field extension \(F(\alpha_{R},\beta_{R,m},\gamma_{R,m})/F\). We will understand this later in SS3.2.2.
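These valuations follow mechanically from the defining equations: since \(v(\beta_{R,m})<0\), the top-degree term of \(\widetilde{E}_{R}\) dominates and \(p^{2e}v(\beta_{R,m})=v(\alpha_{R}^{-m})=-m/d_{R}\); similarly \(v(\gamma_{R,m})<0\) forces \(pv(\gamma_{R,m})=(p^{e}+1)v(\beta_{R,m})\). A short exact-arithmetic check (our illustration, with arbitrary sample parameters):

```python
# Our sketch: recover the valuations of Remark 3.2 with exact rational
# arithmetic; p, e, d_R, m below are arbitrary sample parameters.
from fractions import Fraction

p, e, d_R, m = 3, 2, 2, 1

v_alpha = Fraction(1, d_R)               # alpha_R^{d_R} = varpi
v_beta = Fraction(-m, d_R) / p**(2 * e)  # p^{2e} v(beta) = v(alpha_R^{-m})
v_gamma = (p**e + 1) * v_beta / p        # p v(gamma) = (p^e + 1) v(beta)

assert v_beta == Fraction(-m, p**(2 * e) * d_R)
assert v_gamma == Fraction(-m * (p**e + 1), p**(2 * e + 1) * d_R)
print(v_alpha, v_beta, v_gamma)
```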
Let
\[\widetilde{f}(x,y):=-\sum_{i=0}^{e-1}\left(\sum_{j=0}^{e-i-1}(\widetilde{a}_{i}x^{p^{i}}y)^{p^{j}}+(x\widetilde{R}(y))^{p^{i}}\right).\]
Let \(\mathfrak{p}[x]:=\mathfrak{p}\mathcal{O}[x]\) and \(\mathfrak{p}[x,y]:=\mathfrak{p}\mathcal{O}[x,y]\). We assume that
\[\begin{array}{l}\beta_{R,m}^{p^{e}}(\widetilde{E}_{R}(\beta_{R,m}+x)- \widetilde{E}_{R}(\beta_{R,m})-\widetilde{E}_{R}(x)),\quad\beta_{R,m}( \widetilde{R}(\beta_{R,m}+x)-\widetilde{R}(\beta_{R,m})-\widetilde{R}(x)),\\ \widetilde{f}(\beta_{R,m},x)^{p}-\widetilde{f}(\beta_{R,m},x)+\beta_{R,m}^{p^ {e}}\widetilde{E}_{R}(x)-x\widetilde{R}(\beta_{R,m})-\beta_{R,m}\widetilde{R }(x)\text{ are contained in }\mathfrak{p}[x]\text{ and }\\ (\gamma_{R,m}+\widetilde{f}(\beta_{R,m},y)+x)^{p}-\gamma_{R,m}^{p}-\widetilde{ f}(\beta_{R,m},y)^{p}-x^{p}\in\mathfrak{p}[x,y].\end{array} \tag{3.2}\]
For \(r\in\mathbb{Q}_{\geq 0}\) and \(f,g\in\overline{F}\), we write \(f\equiv g\mod r+\) if \(v(f-g)>r\). For a local field \(K\) contained in \(\overline{F}\), let \(W_{K}\) be the Weil group of \(\overline{F}/K\). For \(\sigma\in W_{K}\), let \(n_{\sigma}\in\mathbb{Z}\) be the integer such that \(\sigma(x)\equiv x^{q^{-n_{\sigma}}}\mod 0+\) for \(x\in\mathcal{O}_{\overline{F}}\). Let \(v_{K}(\cdot)\) denote the normalized valuation on \(K\).
**Definition 3.3**.: For \(\sigma\in W_{F}\), we set
\[\begin{array}{l}a_{R,\sigma}:=\sigma(\alpha_{R})/\alpha_{R}\in\mu_{d_{R}}( \overline{F}),\quad b_{R,\sigma}:=a_{R,\sigma}^{m}\sigma(\beta_{R,m})-\beta_{ R,m},\\ c_{R,\sigma}:=\sigma(\gamma_{R,m})-\gamma_{R,m}-\widetilde{f}(\beta_{R,m},b_{R, \sigma}).\end{array} \tag{3.3}\]
In the following, we simply write \(a_{\sigma},b_{\sigma},c_{\sigma}\) for \(a_{R,\sigma},b_{R,\sigma},c_{R,\sigma}\), respectively.
For an element \(x\in\mathcal{O}\), let \(\bar{x}\) denote the image of \(x\) by the map \(\mathcal{O}\to\mathcal{O}/\mathfrak{p}\). In the following proofs, for simplicity, we often write \(\alpha\), \(\beta\) and \(\gamma\) for \(\alpha_{R}\), \(\beta_{R,m}\) and \(\gamma_{R,m}\), respectively.
**Lemma 3.4**.: _We have \(b_{\sigma},c_{\sigma}\in\mathcal{O}\), \(E_{R}(\bar{b}_{\sigma})=0\) and \(\bar{c}_{\sigma}^{p}-\bar{c}_{\sigma}=\bar{b}_{\sigma}R(\bar{b}_{\sigma})\)._
Proof.: Using (3.1), the equality \(\widetilde{E}_{R}(\beta)=\alpha^{-m}\) in Definition 3.1 and (3.3),
\[\widetilde{E}_{R}(\beta+b_{\sigma})=\widetilde{E}_{R}(a_{\sigma}^{m}\sigma( \beta))=a_{\sigma}^{m}\widetilde{E}_{R}(\sigma(\beta))=a_{\sigma}^{m}\sigma( \alpha)^{-m}=\alpha^{-m}=\widetilde{E}_{R}(\beta).\]
Using \(v(\beta)<0\) in Remark 3.2 and (3.2), we have \(\Delta(x):=\widetilde{E}_{R}(\beta+x)-\widetilde{E}_{R}(\beta)-\widetilde{E} _{R}(x)\in\mathfrak{p}[x]\). By letting \(x=b_{\sigma}\) and applying the previous relationship, we deduce that \(\widetilde{E}_{R}(b_{\sigma})+\Delta(b_{\sigma})=0\). Hence \(b_{\sigma}\in\mathcal{O}\) and \(E_{R}(\bar{b}_{\sigma})=0\).
By (3.2), we have
\[\beta\widetilde{R}(\beta+b_{\sigma})\equiv\beta\widetilde{R}(\beta)+\beta \widetilde{R}(b_{\sigma}),\quad\widetilde{f}(\beta,b_{\sigma})^{p}- \widetilde{f}(\beta,b_{\sigma})\equiv b_{\sigma}\widetilde{R}(\beta)+\beta \widetilde{R}(b_{\sigma})\mod 0+. \tag{3.4}\]
Substituting \(y=b_{\sigma}\in\mathcal{O}\) to (3.2), we obtain
\[\Delta_{1}(x):=(\gamma+\widetilde{f}(\beta,b_{\sigma})+x)^{p}-\gamma^{p}- \widetilde{f}(\beta,b_{\sigma})^{p}-x^{p}\in\mathfrak{p}[x].\]
We have \(\sigma(\beta)\widetilde{R}(\sigma(\beta))=(\beta+b_{\sigma})\widetilde{R}( \beta+b_{\sigma})\) by substituting (3.3) and using (3.1). By multiplying the first congruence in (3.4) by \(b_{\sigma}\beta^{-1}\), we obtain \(b_{\sigma}\widetilde{R}(\beta+b_{\sigma})\equiv b_{\sigma}\widetilde{R}( \beta)+b_{\sigma}\widetilde{R}(b_{\sigma})\mod 0+\). Hence, we compute
\[\sigma(\gamma)^{p}-\sigma(\gamma) =\sigma(\beta)\widetilde{R}(\sigma(\beta))=(\beta+b_{\sigma}) \widetilde{R}(\beta+b_{\sigma})\] \[\equiv\beta\widetilde{R}(\beta)+b_{\sigma}\widetilde{R}(\beta)+ \beta\widetilde{R}(b_{\sigma})+b_{\sigma}\widetilde{R}(b_{\sigma})\] \[\equiv\gamma^{p}-\gamma+\widetilde{f}(\beta,b_{\sigma})^{p}- \widetilde{f}(\beta,b_{\sigma})+b_{\sigma}\widetilde{R}(b_{\sigma})\] \[\equiv\sigma(\gamma)^{p}-\sigma(\gamma)-(c_{\sigma}^{p}-c_{ \sigma}+\Delta_{1}(c_{\sigma}))+b_{\sigma}\widetilde{R}(b_{\sigma})\mod 0+,\]
where we have used (3.3) for the last congruence. Hence we obtain \(c_{\sigma}^{p}-c_{\sigma}+\Delta_{1}(c_{\sigma})\equiv b_{\sigma}\widetilde{ R}(b_{\sigma})\mod 0+\). By \(b_{\sigma}\in\mathcal{O}\), we have \(c_{\sigma}\in\mathcal{O}\) and \(\bar{c}_{\sigma}^{p}-\bar{c}_{\sigma}=\bar{b}_{\sigma}R(\bar{b}_{\sigma})\).
Assume that
\[(x+\beta_{R,m})^{p^{i}}-x^{p^{i}}-\beta_{R,m}^{p^{i}}\in \mathfrak{p}[x]\quad\text{for }1\leq i\leq e-1, \tag{3.5}\] \[\widetilde{f}(\beta_{R,m},x+y)-\widetilde{f}(\beta_{R,m},x)- \widetilde{f}(\beta_{R,m},y)\in\mathfrak{p}[x,y].\]
Let
\[\Theta_{R,m,\varpi}\colon W_{F}\to Q_{R}\rtimes\mathbb{Z};\ \sigma\mapsto((\bar{a}_{\sigma}^{m},\bar{b}_{\sigma},\bar{c}_{\sigma}),n_{ \sigma}). \tag{3.6}\]
**Lemma 3.5**.: _The map \(\Theta_{R,m,\varpi}\) is a homomorphism._
Proof.: Let \(\sigma,\sigma^{\prime}\in W_{F}\). Recall that \(\sigma(x)\equiv x^{q^{-n_{\sigma}}}\mod 0+\) for \(x\in\mathcal{O}_{\overline{F}}\). Using Definition 2.3(2), we reduce to checking that
\[\bar{a}_{\sigma\sigma^{\prime}}=\bar{a}_{\sigma}\bar{a}_{\sigma^{\prime}}^{q^{-n_{\sigma}}},\quad\bar{b}_{\sigma\sigma^{\prime}}=\bar{a}_{\sigma}^{m}\bar{b}_{\sigma^{\prime}}^{q^{-n_{\sigma}}}+\bar{b}_{\sigma},\quad\bar{c}_{\sigma\sigma^{\prime}}=\bar{c}_{\sigma}+\bar{c}_{\sigma^{\prime}}^{q^{-n_{\sigma}}}+f_{R}(\bar{b}_{\sigma},\bar{a}_{\sigma}^{m}\bar{b}_{\sigma^{\prime}}^{q^{-n_{\sigma}}}). \tag{3.7}\]
We easily check that \(a_{\sigma\sigma^{\prime}}=\sigma(a_{\sigma^{\prime}})a_{\sigma}\) and \(b_{\sigma\sigma^{\prime}}=a_{\sigma}^{m}\sigma(b_{\sigma^{\prime}})+b_{\sigma}\). Hence the first equalities in (3.7) follow. We compute
\[c_{\sigma\sigma^{\prime}} =c_{\sigma}+\sigma(c_{\sigma^{\prime}})+\sigma(\widetilde{f}( \beta,b_{\sigma^{\prime}}))+\widetilde{f}(\beta,b_{\sigma})-\widetilde{f}( \beta,b_{\sigma\sigma^{\prime}})\] \[\equiv c_{\sigma}+\sigma(c_{\sigma^{\prime}})+\sigma(\widetilde{f}( \beta,b_{\sigma^{\prime}}))-\widetilde{f}(\beta,a_{\sigma}^{m}\sigma(b_{ \sigma^{\prime}}))\mod 0+,\]
where we use the second condition in (3.5) for the second congruence. We have
\[\sigma(\widetilde{f}(\beta,b_{\sigma^{\prime}})) =-\sum_{i=0}^{e-1}\sum_{j=0}^{e-i-1}(\widetilde{a}_{i}\sigma(\beta)^{p^{i}}\sigma(b_{\sigma^{\prime}}))^{p^{j}}-\sum_{i=0}^{e-1}(\sigma(\beta)\widetilde{R}(\sigma(b_{\sigma^{\prime}})))^{p^{i}}\] \[\equiv\widetilde{f}(b_{\sigma},a_{\sigma}^{m}\sigma(b_{\sigma^{\prime}}))+\widetilde{f}(\beta,a_{\sigma}^{m}\sigma(b_{\sigma^{\prime}}))\mod 0+,\]
where we substitute \(\sigma(\beta)=a_{\sigma}^{-m}(\beta+b_{\sigma})\), (3.5) and (3.1) for the second congruence. The last equality in (3.7) follows from \(\overline{\widetilde{f}(b_{\sigma},a_{\sigma}^{m}\sigma(b_{\sigma^{\prime}})) }=f_{R}(\bar{b}_{\sigma},\bar{a}_{\sigma}^{m}\bar{b}_{\sigma^{\prime}}^{q^{- n_{\sigma}}})\), since \(\widetilde{f}(x,y)\) is a lift of \(f_{R}(x,y)\) to \(\mathcal{O}_{F}[x,y]\).
**Lemma 3.6**.: _If \(v(p)\) is large enough, the conditions (3.2) and (3.5) are satisfied._
Proof.: There exists \(s\in\mathbb{Z}_{\geq 1}\) such that the coefficients of all polynomials in (3.2) and (3.5) have the form \(p\cdot\beta_{R,m}^{s}\cdot a\) with \(a\in\mathcal{O}_{\overline{F}}\) by Remark 3.2. Since the valuation \(v(\beta_{R,m})\) is independent of \(F\), the claim follows.
In the sequel, we assume that the conditions (3.2) and (3.5) are satisfied. Let \(F^{\rm ur}\) denote the maximal unramified extension of \(F\) in \(\overline{F}\).
**Lemma 3.7**.: _The extension \(F^{\rm ur}(\alpha_{R}^{m},\beta_{R,m},\gamma_{R,m})/F\) is Galois._
Proof.: Let \(L_{0}:=F^{\rm ur}(\alpha^{m},\beta,\gamma)\) and \(L:=\widehat{L}_{0}\) be the completion of \(L_{0}\). Let \(\sigma\in G_{F}\). We note that \(a_{\sigma}\in\mu_{d_{R}}(\overline{F})\subset F^{\rm ur}\) by \(p\nmid d_{R}\). Hence \(\sigma(\alpha^{m})=a_{\sigma}^{m}\alpha^{m}\in L_{0}\). We show \(\sigma(\beta),\sigma(\gamma)\in L_{0}\). It suffices to prove
\[b_{\sigma},c_{\sigma}\in L_{0},\]
since
\[\sigma(\beta)=\frac{b_{\sigma}+\beta}{a_{\sigma}^{m}},\quad\sigma(\gamma)= \gamma+c_{\sigma}+\widetilde{f}(\beta,b_{\sigma})\]
by (3.3). As in the proof of Lemma 3.4, we have \((\widetilde{E}_{R}+\Delta)(b_{\sigma})=0\), \(E(x):=(\widetilde{E}_{R}+\Delta)(x)\in\mathcal{O}_{L_{0}}[x]\) and \(\deg E(x)=p^{2e}\). The equation \(E(x)\equiv 0\mod 0+\) has \(p^{2e}\) different roots. Hence by Hensel's lemma, \(E(x)=0\) has \(p^{2e}\) different roots in \(\mathcal{O}_{L}\). Hence we have \(b_{\sigma}\in L\cap\overline{F}=L_{0}\). As in the proof of Lemma 3.4, we have
\[f(c_{\sigma}):=c_{\sigma}^{p}-c_{\sigma}+\Delta_{1}(c_{\sigma})-y=0\text{ with }y\in\mathcal{O}_{L_{0}},\]
where \(f(x)\in\mathcal{O}_{L_{0}}[x]\) with \(\deg f(x)=p\). We have \(y\equiv b_{\sigma}\widetilde{R}(b_{\sigma})\mod 0+\). The equation \(f(x)\equiv x^{p}-x-y\equiv x^{p}-x-b_{\sigma}\widetilde{R}(b_{\sigma})\equiv 0 \mod 0+\) has \(p\) different roots. By Hensel's lemma, \(f(x)=0\) has \(p\) different roots in \(\mathcal{O}_{L}\). Hence we have \(c_{\sigma}\in L\cap\overline{F}=L_{0}\).
**Definition 3.8**.: Let
\[d_{R,m}:=\frac{d_{R}}{\gcd(d_{R},m)},\quad Q_{R,m}:=\{(\alpha,\beta,\gamma) \in Q_{R}\mid\alpha\in\mu_{d_{R,m}}\}.\]
**Lemma 3.9**.: _We have the isomorphism_
\[W(F^{\rm ur}(\alpha_{R}^{m},\beta_{R,m},\gamma_{R,m})/F)\xrightarrow{\sim}Q_{R,m} \rtimes\mathbb{Z};\ \sigma\mapsto((\bar{a}_{\sigma}^{m},\bar{b}_{\sigma},\bar{c}_{\sigma}),n_{ \sigma}). \tag{3.8}\]
Proof.: We use the notation in the proof of Lemma 3.4. Let
\[I:=\operatorname{Gal}(F^{\rm ur}(\alpha^{m},\beta,\gamma)/F^{\rm ur})\]
and \(\Theta\colon I\to Q_{R,m}\) be the restriction of (3.8). First, we show that \(\Theta\) is injective. Assume \(\Theta(\sigma)=1\) for \(\sigma\in I\). We will show \(\sigma=1\). By the assumption, \(\bar{a}_{\sigma}^{m}=1\), \(\bar{b}_{\sigma}=0\) and \(\bar{c}_{\sigma}=0\). We have a natural isomorphism \(\mu_{r}(\overline{F})\xrightarrow{\sim}\mu_{r}\) for an integer \(r\) prime to \(p\). By \(\bar{a}_{\sigma}^{m}=1\) and \(a_{\sigma}\in\mu_{d_{R}}(\overline{F})\) in (3.3), we have \(a_{\sigma}^{m}=1\) and \(\sigma(\alpha^{m})=\alpha^{m}\). We recall the equality \(\widetilde{E}_{R}(b_{\sigma})+\Delta(b_{\sigma})=0\) in the proof of Lemma 3.4, where \(\Delta(x)\in\mathfrak{p}[x]\) has no constant coefficient. We have \(v(b_{\sigma})>0\) by \(\bar{b}_{\sigma}=0\). Assume that \(b_{\sigma}\neq 0\). Then \(v(b_{\sigma})=v(\widetilde{E}_{R}(b_{\sigma})+\Delta(b_{\sigma}))\) by \(E^{\prime}_{R}(0)\neq 0\) and \(v(b_{\sigma})>0\). This implies that \(v(b_{\sigma})=\infty\), which is a contradiction. Hence \(b_{\sigma}=0\) and \(\sigma(\beta)=\beta\). By the last condition in (3.2) with \(y=0\),
\[\Lambda(x):=(\gamma+x)^{p}-\gamma^{p}-x^{p}\in\mathfrak{p}[x].\]
We have \(\sigma(\gamma)^{p}-\sigma(\gamma)=\gamma^{p}-\gamma\) by Definition 3.1. Hence \((\gamma+c_{\sigma})^{p}-\gamma^{p}=c_{\sigma}\) and \(c_{\sigma}^{p}+\Lambda(c_{\sigma})=c_{\sigma}\). Since \(\Lambda(x)\in\mathfrak{p}[x]\) has no constant coefficient, if \(0<v(c_{\sigma})<\infty\), we have \(v(c_{\sigma}^{p}+\Lambda(c_{\sigma}))>v(c_{\sigma})\), which can not occur. Hence \(c_{\sigma}=0\) and \(\sigma(\gamma)=\gamma\). We obtain \(\sigma=1\). Hence \(\Theta\) is injective. We easily check that \(F^{\rm ur}(\alpha^{m},\beta,\gamma)/F^{\rm ur}\) is a totally ramified extension of degree \(d_{R,m}p^{2e+1}\). Hence \(\Theta\) is an isomorphism. By the snake lemma, (3.8) is an isomorphism.
### Galois representations associated to additive polynomials
#### 3.2.1 Construction of Galois representation
We assume that (3.2) and (3.5) are satisfied. If the characteristic of \(F\) is positive, these conditions hold automatically. If the characteristic of \(F\) is zero, these conditions are satisfied if the absolute ramification index of \(F\) is large enough as in Lemma 3.6.
**Definition 3.10**.: Let \(\psi\in\mathbb{F}_{p}^{\vee}\setminus\{1\}\). We define \(\tau_{\psi,R,m,\varpi}\) to be the \(W_{F}\)-representation which is the inflation of the irreducible \(Q_{R}\rtimes\mathbb{Z}\)-representation \(H^{1}_{\rm c}(C_{R,\mathbb{F}},\overline{\mathbb{Q}}_{\ell})[\psi]\) by \(\Theta_{R,m,\varpi}\) in (3.6). For simplicity, we write \(\tau_{\psi,R,m}\) for \(\tau_{\psi,R,m,\varpi}\).
For a non-archimedean local field \(K\), let \(I_{K}\) denote the inertia subgroup of \(K\). Then \(\operatorname{Ker}\tau_{\psi,R,m}\) contains the open compact subgroup \(I_{F(\alpha_{R}^{m},\beta_{R,m},\gamma_{R,m})}\). Hence the representation \(\tau_{\psi,R,m}\) is a smooth irreducible representation of \(W_{F}\) by Lemma 2.8(1).
Let \(G_{F}:=\operatorname{Gal}(\overline{F}/F)\). We consider a general setting in the following lemma.
**Lemma 3.11**.: _Let \(\widetilde{\tau}\) be a continuous representation of \(G_{F}\) over \(\overline{\mathbb{Q}}_{\ell}\) such that there exists an unramified continuous character \(\phi\) of \(G_{F}\) such that \((\widetilde{\tau}\otimes\phi)(G_{F})\) is finite. Assume that \(\tau:=\widetilde{\tau}|_{W_{F}}\) is irreducible. Then \(\widetilde{\tau}\otimes\phi\) is primitive if and only if \(\tau\) is primitive._
Proof.: Let \(\widetilde{\tau}^{\prime}:=\widetilde{\tau}\otimes\phi\) and \(\tau^{\prime}:=\tau\otimes\phi|_{W_{F}}\). The subgroup \(\operatorname{Ker}\widetilde{\tau}^{\prime}\) is open by \(|G_{F}/\operatorname{Ker}\widetilde{\tau}^{\prime}|<\infty\). Hence \(\operatorname{Ker}\tau^{\prime}\subset W_{F}\) is open. Therefore \(\tau^{\prime}\) is smooth. Hence so is \(\tau\). Since \(\tau\) is irreducible and smooth, we have \(\dim\tau<\infty\). We will show that \(\widetilde{\tau}^{\prime}\) is imprimitive if and only if \(\tau\) is imprimitive.
First, assume an isomorphism \(\widetilde{\tau}^{\prime}\simeq\operatorname{Ind}_{H}^{G_{F}}\eta^{\prime}\) with a proper subgroup \(H\). We can check \(\operatorname{Ker}\widetilde{\tau}^{\prime}\subset H\). Hence \(H\) is open. Hence we can write \(H=G_{F^{\prime}}\) with a finite extension \(F^{\prime}/F\). Hence we obtain an isomorphism \(\tau\simeq\operatorname{Ind}_{W_{F^{\prime}}}^{W_{F}}(\eta^{\prime}|_{W_{F^{ \prime}}}\otimes\phi^{-1}|_{W_{F^{\prime}}})\).
To the contrary, assume \(\tau\simeq\operatorname{Ind}_{H}^{W_{F}}\sigma\). In the same manner as above with replacing \(G_{F}\) by \(W_{F}\), the subgroup \(H\) is an open subgroup of \(W_{F}\) of finite index by \(\dim\tau<\infty\). Hence we can write \(H=W_{F^{\prime}}\) with a finite extension \(F^{\prime}/F\). Let \(\sigma^{\prime}:=\sigma\otimes\phi|_{W_{F^{\prime}}}\). We have \(\tau^{\prime}\simeq\operatorname{Ind}_{W_{F^{\prime}}}^{W_{F}}\sigma^{\prime}\). Frobenius reciprocity implies that \(\sigma^{\prime}(W_{F^{\prime}})\subset\tau^{\prime}(W_{F})\). By the assumption, the image \(\sigma^{\prime}(W_{F^{\prime}})\) is finite. Hence the smooth \(W_{F^{\prime}}\)-representation \(\sigma^{\prime}\) extends to a smooth representation of \(G_{F^{\prime}}\), for which we write \(\widetilde{\sigma}\) ([2, Proposition 28.6]). The restriction of \(\operatorname{Ind}_{G_{F^{\prime}}}^{G_{F}}\widetilde{\sigma}\) to \(W_{F}\) is isomorphic to \(\operatorname{Ind}_{W_{F^{\prime}}}^{W_{F}}\sigma^{\prime}\simeq\tau^{\prime}\). Both of \(\operatorname{Ind}_{G_{F^{\prime}}}^{G_{F}}\widetilde{\sigma}\) and \(\widetilde{\tau}^{\prime}\) are smooth irreducible \(G_{F}\)-representations whose restrictions to \(W_{F}\) are isomorphic to \(\tau^{\prime}\). Hence we obtain an isomorphism \(\widetilde{\tau}^{\prime}\simeq\operatorname{Ind}_{G_{F^{\prime}}}^{G_{F}} \widetilde{\sigma}\) as \(G_{F}\)-representations by [2, Lemma 28.6.2(2)].
**Lemma 3.12**.: _The eigenvalues of \(\operatorname{Fr}_{q}^{*}\) on \(H^{1}_{\operatorname{c}}(C_{R,\mathbb{F}},\overline{\mathbb{Q}}_{\ell})[\psi]\) have the forms \(\zeta\sqrt{q}\) with roots of unity \(\zeta\). The automorphism \(\operatorname{Fr}_{q}^{*}\) is semi-simple._
Proof.: The claims follow from Proposition 2.10.
The cohomology group \(H^{1}_{\operatorname{c}}(C_{R,\mathbb{F}},\overline{\mathbb{Q}}_{\ell})[\psi]\) is regarded as a representation of \(Q_{R,m}\rtimes\widehat{\mathbb{Z}}\). By inflating this by a natural map \(G_{F}\to Q_{R,m}\rtimes\widehat{\mathbb{Z}}\) extending \(\Theta_{R,m,\varpi}\), we obtain a continuous representation of \(G_{F}\). We denote this representation by \(\widetilde{\tau}_{\psi,R,m}\). Let \(\phi\colon G_{F}\to\overline{\mathbb{Q}}_{\ell}^{\times}\) be the unramified character sending a geometric Frobenius to \(\sqrt{q}^{-1}\). The image of \(G_{F}\) by the twist \(\widetilde{\tau}^{\prime}:=\widetilde{\tau}_{\psi,R,m}\otimes\phi\) is finite by Lemma 3.12. By fixing an isomorphism \(\overline{\mathbb{Q}}_{\ell}\simeq\mathbb{C}\), we obtain a continuous representation \(\widetilde{\tau}_{\mathbb{C}}^{\prime}\) of \(G_{F}\) over \(\mathbb{C}\) by \(\widetilde{\tau}^{\prime}\). Then \(\widetilde{\tau}_{\mathbb{C}}^{\prime}\) is primitive if and only if \(\widetilde{\tau}_{\psi,R,m}\) is primitive.
**Corollary 3.13**.: _The \(W_{F}\)-representation \(\tau_{\psi,R,m}\) is primitive if and only if the continuous \(G_{F}\)-representation \(\widetilde{\tau}_{\mathbb{C}}^{\prime}\) is primitive._
Proof.: Clearly \(\widetilde{\tau}_{\mathbb{C}}^{\prime}\) is primitive if and only if \(\widetilde{\tau}^{\prime}\) is primitive. We obtain the claim by applying Lemma 3.11 with \(\widetilde{\tau}=\widetilde{\tau}_{\psi,R,m}\) and \(\tau=\tau_{\psi,R,m}\).
#### 3.2.2 Swan conductor exponent
In the sequel, we compute the Swan conductor exponent \(\operatorname{Sw}(\tau_{\psi,R,m})\).
We simply write \(\alpha,\beta,\gamma\) for \(\alpha_{R},\beta_{R,m},\gamma_{R,m}\) in Definition 3.1, respectively. We consider the unramified field extension \(F_{r}/F\) of degree \(r\) such that \(N:=F_{r}(\alpha,\beta,\gamma)\) is Galois over \(F\). Let \(T:=F_{r}(\alpha)\) and \(M:=T(\beta)\). Then we have
\[F\subset F_{r}\subset T\subset M\subset N.\]
Let \(L/K\) be a Galois extension of non-archimedean local fields with Galois group \(G\). Let \(\{G^{i}\}_{i\geq-1}\) denote the upper numbering ramification groups of \(G\) in [15, IV §3]. Let \(\psi_{L/K}\) denote the Herbrand function of \(L/K\).
**Lemma 3.14**.: _Let \(G:=\operatorname{Gal}(N/F)\). Then we have_
\[\psi_{N/F}(t)=\begin{cases}t&\text{if }t\leq 0,\\ d_{R}t&\text{if }0<t\leq\frac{m}{d_{R}},\\ p^{2e}d_{R}t-(p^{2e}-1)m&\text{if }\frac{m}{d_{R}}<t\leq\frac{p^{e}+1}{p^{e}} \frac{m}{d_{R}},\\ p^{2e+1}d_{R}t-(p^{e}+1)(p^{e+1}-1)m&\text{otherwise}\end{cases}\]
_and_
\[G^{i}=\begin{cases}G&\text{if }i=-1,\\ \operatorname{Gal}(N/F_{r})&\text{if }-1<i\leq 0,\\ \operatorname{Gal}(N/T)&\text{if }0<i\leq\frac{m}{d_{R}},\\ \operatorname{Gal}(N/M)&\text{if }\frac{m}{d_{R}}<i\leq\frac{p^{e}+1}{p^{e}} \frac{m}{d_{R}},\\ \{1\}&\text{otherwise}.\end{cases}\]
Proof.: We easily have
\[\psi_{T/F}(t)=\begin{cases}t&\text{if }t\leq 0,\\ d_{R}t&\text{otherwise}.\end{cases}\]
For a finite Galois extension \(L/K\), let \(\{\operatorname{Gal}(L/K)_{u}\}_{u\geq-1}\) be the lower numbering ramification subgroups. Let \(1\neq\sigma\in\operatorname{Gal}(M/T)\). Let \(b_{\sigma}=\sigma(\beta)-\beta\) as before. We have \(\widetilde{E}_{R}(\beta+b_{\sigma})=\widetilde{E}_{R}(\beta)\) by the proof of Lemma 3.4. If \(v(b_{\sigma})>0\), we obtain \(b_{\sigma}=0\) by the same argument in the proof of Lemma 3.9. This implies that \(\sigma=1\). By (3.2), we have \(v(b_{\sigma})=0\). Let \(\varpi_{M}\) be a uniformizer of \(M\). By \(v_{M}(\beta)=-m\), we have \(v_{M}(\sigma(\varpi_{M})-\varpi_{M})=m+1\). Hence
\[\operatorname{Gal}(M/T)_{u}=\begin{cases}\operatorname{Gal}(M/T)&\text{if }u \leq m,\\ \{1\}&\text{otherwise},\end{cases}\quad\psi_{M/T}(t)=\begin{cases}t&\text{if }t \leq m,\\ p^{2e}t-(p^{2e}-1)m&\text{otherwise}.\end{cases}\]
Let \(1\neq\sigma\in\operatorname{Gal}(N/M)\). If \(v_{N}(\sigma(\gamma)-\gamma)>0\), we obtain \(\sigma(\gamma)=\gamma\) in the same way as the proof of Lemma 3.9. This implies that \(\sigma=1\). Hence \(v_{N}(\sigma(\gamma)-\gamma)=0\). Let \(\varpi_{N}\) be a uniformizer of \(N\). By \(v_{N}(\gamma^{-1})=(p^{e}+1)m\), we have \(v_{N}(\sigma(\varpi_{N})-\varpi_{N})=(p^{e}+1)m+1\). Thus
\[\operatorname{Gal}(N/M)_{u} =\begin{cases}\operatorname{Gal}(N/M)&\text{if }u\leq(p^{e}+1)m,\\ \{1\}&\text{otherwise},\end{cases}\] \[\psi_{N/M}(t) =\begin{cases}t&\text{if }t\leq(p^{e}+1)m,\\ pt-(p-1)(p^{e}+1)m&\text{otherwise}.\end{cases}\]
Hence the former claim follows from \(\psi_{N/F}=\psi_{N/M}\circ\psi_{M/T}\circ\psi_{T/F}\).
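For instance, on the last segment \(t>\frac{p^{e}+1}{p^{e}}\frac{m}{d_{R}}\) the composition unwinds as
\[\psi_{N/M}\bigl(\psi_{M/T}(d_{R}t)\bigr)=p\bigl(p^{2e}d_{R}t-(p^{2e}-1)m\bigr)-(p-1)(p^{e}+1)m=p^{2e+1}d_{R}t-(p^{e}+1)(p^{e+1}-1)m,\]
using the identity \(p(p^{2e}-1)+(p-1)(p^{e}+1)=(p^{e}+1)(p^{e+1}-1)\); the other segments are checked in the same way.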
We can check
\[G_{u}=\begin{cases}G&\text{if }u=-1,\\ \operatorname{Gal}(N/F_{r})&\text{if }-1<u\leq 0,\\ \operatorname{Gal}(N/T)&\text{if }0<u\leq m,\\ \operatorname{Gal}(N/M)&\text{if }m<u\leq(p^{e}+1)m,\\ \{1\}&\text{otherwise}\end{cases}\]
by using the former claim and [15, Propositions 12(c), 13(c) and 15 in IV§3]. Hence the latter claim follows from \(G^{i}=G_{\psi_{N/F}(i)}\).
**Corollary 3.15**.: _We have \(\operatorname{Sw}(\tau_{\psi,R,m})=m(p^{e}+1)/d_{R}\)._
Proof.: Recall that the twist \(\tau_{\psi,R,m}\otimes\phi\) factors through a finite group \(Q_{R}\rtimes(\mathbb{Z}/r\mathbb{Z})\simeq\operatorname{Gal}(F_{r}(\alpha,\beta,\gamma)/F)\) with a certain integer \(r\). Since \(\phi\) is unramified, we have \(\operatorname{Sw}(\tau_{\psi,R,m})=\operatorname{Sw}(\tau_{\psi,R,m}\otimes\phi)\). We have \(\operatorname{Sw}(\tau_{\psi,R,m}\otimes\phi)=m(p^{e}+1)/d_{R}\) by Lemma 3.14 and [7, Théorème 7.7] ([15, Exercise 2 in VI§2]).
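In other words, by Lemma 3.14 every \(G^{i}\) with \(0<i\leq\frac{p^{e}+1}{p^{e}}\frac{m}{d_{R}}\) contains \(\operatorname{Gal}(N/M)\), the image of the center \(Z(H_{R})\), which acts on \(V:=\tau_{\psi,R,m}\otimes\phi\) through the non-trivial character \(\psi\); hence \(V^{G^{i}}=0\) up to this break, and \(G^{i}=\{1\}\) beyond it. Therefore
\[\operatorname{Sw}(\tau_{\psi,R,m})=\int_{0}^{\infty}\dim\bigl(V/V^{G^{u}}\bigr)\,du=p^{e}\cdot\frac{p^{e}+1}{p^{e}}\cdot\frac{m}{d_{R}}=\frac{m(p^{e}+1)}{d_{R}}.\]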
### Symplectic module associated to Galois representation
Let \(\rho\colon W_{F}\to\operatorname{PGL}_{p^{e}}(\overline{\mathbb{Q}}_{\ell})\) denote the composite of \(\tau_{\psi,R,m}\colon W_{F}\to\operatorname{GL}_{p^{e}}(\overline{\mathbb{Q}}_{ \ell})\) with the natural map \(\operatorname{GL}_{p^{e}}(\overline{\mathbb{Q}}_{\ell})\to\operatorname{PGL}_{p ^{e}}(\overline{\mathbb{Q}}_{\ell})\).
Let \(\rho^{\prime}\) be the projective representation associated to \(\widetilde{\tau}^{\prime}=\widetilde{\tau}_{\psi,R,m}\otimes\phi\).
**Lemma 3.16**.: _We have \(\rho(W_{F})=\rho^{\prime}(G_{F})\), which is finite._
Proof.: Since \(\widetilde{\tau}^{\prime}\) is a smooth irreducible \(G_{F}\)-representation, we have \(\widetilde{\tau}^{\prime}(G_{F})=(\tau_{\psi,R,m}\otimes\phi)(W_{F})\) ([2, the proof of Lemma 2 in 28.6]). This implies the claim.
Let \(F_{\rho}\) denote the kernel field of \(\rho\) and \(T_{\rho}\) the maximal tamely ramified extension of \(F\) in \(F_{\rho}\). Let
\[H:=\operatorname{Gal}(F_{\rho}/T_{\rho})\subset G:=\operatorname{Gal}(F_{ \rho}/F).\]
The homomorphism \(\rho\) induces an injection \(\bar{\rho}\colon G\to\operatorname{PGL}_{p^{e}}(\overline{\mathbb{Q}}_{\ell})\). Let \(V_{R}\) be as in Lemma 2.6.
Let \(\tau\) denote the composite
\[Q_{R,m}\rtimes\mathbb{Z}\to\operatorname{Aut}_{\overline{\mathbb{Q}}_{\ell}} (H^{1}_{\operatorname{c}}(C_{R,\mathbb{F}},\overline{\mathbb{Q}}_{\ell})[ \psi])\to\operatorname{Aut}_{\overline{\mathbb{Q}}_{\ell}}(H^{1}_{ \operatorname{c}}(C_{R,\mathbb{F}},\overline{\mathbb{Q}}_{\ell})[\psi])/ \overline{\mathbb{Q}}_{\ell}^{\times}.\]
**Lemma 3.17**.: _We have an isomorphism \(\bar{\rho}(H)\simeq V_{R}\)._
Proof.: Let \(L\coloneqq F^{\operatorname{ur}}(\alpha_{R}^{m},\beta_{R,m},\gamma_{R,m})\) and \(K:=F^{\operatorname{ur}}(\alpha_{R}^{m})\). By (3.8), we have
\[W(L/F)\simeq Q_{R,m}\rtimes\mathbb{Z},\quad W(L/K)\simeq H_{R}.\]
The subfield \(K\) is the maximal tamely ramified extension of \(F\) in \(L\). We have \(F_{\rho}\subset L\) and \(T_{\rho}=F_{\rho}\cap K\). We have isomorphisms \(G=W(F_{\rho}/F)\simeq W_{F}/\operatorname{Ker}\rho\simeq(Q_{R,m}\rtimes \mathbb{Z})/\operatorname{Ker}\tau\) and \(H=\operatorname{Gal}(F_{\rho}/T_{\rho})\simeq H_{R}/(H_{R}\cap\operatorname{ Ker}\tau)\). By Lemma 2.9, we have \(H_{R}\cap\operatorname{Ker}\tau=Z(H_{R})\). Hence the claim follows from the isomorphism \(V_{R}\xrightarrow{\sim}H_{R}/Z(H_{R});\ \beta\mapsto(1,\beta,0)\).
Let
\[\mathscr{H}_{0}:=G/H=\operatorname{Gal}(T_{\rho}/F).\]
Let \(\sigma\in\mathscr{H}_{0}\). We take a lifting \(\widetilde{\sigma}\in G\) of \(\sigma\) under the surjection \(G\twoheadrightarrow\mathscr{H}_{0}\). Let \(\mathscr{H}_{0}\) act on \(H\) by \(\sigma\cdot\sigma^{\prime}:=\widetilde{\sigma}\sigma^{\prime}\widetilde{\sigma}^{-1}\) for \(\sigma^{\prime}\in H\). This is well-defined because \(H\) is abelian by Lemma 3.17. We regard \(H\simeq V_{R}\) as an \(\mathbb{F}_{p}[\mathscr{H}_{0}]\)-module.
By Lemma 3.12, we can take a positive integer \(r\) such that \(r\mathbb{Z}\subset\operatorname{Ker}\tau\) and \(x^{q^{r}}=x\) for \(x\in\mu_{d_{R,m}}\). Let \(\mathbb{Z}/r\mathbb{Z}\) act on \(\mu_{d_{R,m}}\) by \(1\cdot x=x^{q^{-1}}\). We take a generator \(\alpha\in\mu_{d_{R,m}}\). Let
\[\mathscr{H}:=\mu_{d_{R,m}}\rtimes(\mathbb{Z}/r\mathbb{Z})\xrightarrow{\sim} \left\langle\sigma,\tau\mid\sigma^{r}=1,\ \tau^{d_{R,m}}=1,\ \sigma\tau\sigma^{-1}=\tau^{q}\right\rangle, \tag{3.9}\]
where the isomorphism is given by \((\alpha,0)\mapsto\tau\) and \((1,-1)\mapsto\sigma\). The groups \(\mathscr{H}_{0}\) and \(\mathscr{H}\) are supersolvable. We consider the commutative diagram in which every map is canonical and surjective: the natural surjections from \(Q_{R,m}\rtimes\mathbb{Z}\) onto \(\mathscr{H}\) and onto \(\mathscr{H}_{0}\) induce a surjection \(\varphi\colon\mathscr{H}\twoheadrightarrow\mathscr{H}_{0}\).
**Lemma 3.18**.: _The elements \(\varphi(\alpha,0)\) and \(\varphi(1,-1)\) in \(\mathscr{H}_{0}\) act on \(H\simeq V_{R}\) by \(x\mapsto\alpha x\) and \(x\mapsto x^{q}\) for \(x\in V_{R}\), respectively._
Proof.: These are directly checked.
We can regard \(V_{R}\) as an \(\mathbb{F}_{p}[\mathscr{H}]\)-module via \(\varphi\). Let \(\omega_{R}\) be as in Lemma 2.6(2).
**Lemma 3.19**.: _We have \(\omega_{R}(hx,hx^{\prime})=\omega_{R}(x,x^{\prime})\) for \(h\in\mathscr{H}\)._
Proof.: The claim for \(h=\alpha\) follows from (2.4). For \(h=(1,-1)\), the claim follows from \(\omega_{R}(x^{q},x^{\prime q})=(f_{R}(x,x^{\prime})-f_{R}(x^{\prime},x))^{q}= f_{R}(x,x^{\prime})-f_{R}(x^{\prime},x)=\omega_{R}(x,x^{\prime})\).
**Definition 3.20**.: Let \(G\) be a finite group. Let \(V\) be an \(\mathbb{F}_{p}[G]\)-module with \(\dim_{\mathbb{F}_{p}}V<\infty\). Let \(\omega\colon V\times V\to\mathbb{F}_{p}\) be a symplectic form. We say that the pair \((V,\omega)\) is _symplectic_ if \(\omega\) is non-degenerate and satisfies \(\omega(gv,gv^{\prime})=\omega(v,v^{\prime})\) for \(g\in G\) and \(v,v^{\prime}\in V\).
**Lemma 3.21**.: _The \(\mathbb{F}_{p}[\mathscr{H}]\)-module \((V_{R},\omega_{R})\) is symplectic._
Proof.: The claim follows from Lemma 2.6(2) and Lemma 3.19.
**Definition 3.22**.: The \(\mathbb{F}_{p}[\mathscr{H}_{0}]\)-module \((V_{R},\omega_{R})\) is called a symplectic module associated to \(\tau_{\psi,R,m}\).
**Definition 3.23**.: Let \(\sigma\colon\mathbb{F}\to\mathbb{F};\ x\mapsto x^{q}\). For \(f(x)=\sum_{i=0}^{n}a_{i}x^{i}\in\mathbb{F}[x]\), we set \(f^{\sigma}(x):=\sum_{i=0}^{n}\sigma(a_{i})x^{i}\).
Let \(k\) be a field. We say that a polynomial \(f(x)\in k[x]\) is reduced if the ring \(k[x]/(f(x))\) is reduced.
**Lemma 3.24**.: _Let \(E(x)\in\mathscr{A}_{\mathbb{F}}\) be a reduced polynomial. Let \(V:=\{\beta\in\mathbb{F}\mid E(\beta)=0\}\)._
1. _Assume that_ \(E(x)\) _is monic and_ \(V\) _is stable under_ \(\sigma\)_. Then we have_ \(E(x)\in\mathbb{F}_{q}[x]\)_._
2. _Let_ \(r\) _be a positive integer. Assume that_ \(V\) _is stable under_ \(\mu_{r}\)_-multiplication. Then we have_ \(E(\alpha x)=\alpha E(x)\) _for_ \(\alpha\in\mu_{r}\)_._
Proof.: Recall that \(f(x)\in\mathscr{A}_{\mathbb{F}}\) is reduced if and only if \(f^{\prime}(0)\neq 0\).
We show (1). By the assumption, we have \(E^{\sigma}(\beta)=(E(\beta^{q^{-1}}))^{q}=0\) for any \(\beta\in V\). Since \(E(x)\) is separable, there exists \(\alpha\in\mathbb{F}^{\times}\) such that \(E^{\sigma}(x)=\alpha E(x)\). Since \(E(x)\) is monic, we have \(\alpha=1\). Hence we have the claim.
We show (2). Let \(\alpha\in\mu_{r}\). By the assumption, \(E(\alpha\beta)=0\) for any \(\beta\in V\). Since \(E(x)\) is separable, we have \(E(\alpha x)=cE(x)\) with a constant \(c\in\mathbb{F}^{\times}\). By considering the derivatives of \(E(\alpha x)\), \(cE(x)\) and substituting \(x=0\), we have \(\alpha=c\) by \(E^{\prime}(0)\neq 0\). Hence the claim follows.
**Definition 3.25**.: Let \(f(x)\in\mathscr{A}_{q}\).
1. A decomposition \(f(x)=f_{1}(f_{2}(x))\) with \(f_{i}(x)\in\mathscr{A}_{q}\) is said to be _non-trivial_ if \(\deg f_{i}>1\) for \(i\in\{1,2\}\).
2. We say that \(f(x)\in\mathscr{A}_{q}\) is _prime_ if it does not admit a non-trivial decomposition \(f(x)=f_{1}(f_{2}(x))\) with \(f_{i}(x)\in\mathscr{A}_{q}\).
**Definition 3.26**.: Let \((V,\omega)\) be a symplectic \(\mathbb{F}_{p}[\mathscr{H}]\)-module. Then \((V,\omega)\) is said to be _completely anisotropic_ if \(V\) does not admit a non-zero totally isotropic \(\mathbb{F}_{p}[\mathscr{H}]\)-submodule.
For an \(\mathbb{F}_{p}\)-subspace \(W\subset V\), let \(W^{\perp}:=\{v\in V\mid\omega(v,w)=0\text{ for all }w\in W\}\).
**Proposition 3.27**.: _The symplectic \(\mathbb{F}_{p}[\mathscr{H}]\)-module \((V_{R},\omega_{R})\) is completely anisotropic if and only if there does not exist a non-trivial decomposition \(E_{R}(x)=f_{1}(f_{2}(x))\) with \(f_{i}(x)\in\mathscr{A}_{q}\) such that \(f_{2}(\alpha x)=\alpha f_{2}(x)\) for \(\alpha\in\mu_{d_{R,m}}\) and \(V_{f_{2}}:=\{\beta\in\mathbb{F}\mid f_{2}(\beta)=0\}\) satisfies \(V_{f_{2}}\subset V_{f_{2}}^{\perp}\)._
Proof.: Assume that there exists such a decomposition \(E_{R}(x)=f_{1}(f_{2}(x))\). Since the decomposition is non-trivial, we have \(V_{f_{2}}\neq\{0\}\). Hence \(V_{f_{2}}\) is a non-zero totally isotropic \(\mathbb{F}_{p}[\mathscr{H}]\)-submodule of \(V_{R}\). Hence \(V_{R}\) is not completely anisotropic.
Assume that \(V_{R}\) is not completely anisotropic. We take a non-zero totally isotropic \(\mathbb{F}_{p}[\mathscr{H}]\)-submodule \(V^{\prime}\subset V_{R}\). By [13, 4 in Chapter 1], there exists a monic reduced polynomial \(f(x)\in\mathscr{A}_{\mathbb{F}}\) such that \(V^{\prime}=\{\beta\in\mathbb{F}\mid f(\beta)=0\}\). Since \(V^{\prime}\) is stable by \(\sigma\), we have \(f(x)\in\mathbb{F}_{q}[x]\) by Lemma 3.24(1). Since \(V^{\prime}\) is stable by \(\tau\), we have \(f(\alpha x)=\alpha f(x)\) for \(\alpha\in\mu_{d_{R,m}}\) by Lemma 3.18 and Lemma 3.24(2). There exist \(f_{1}(x),r(x)\in\mathscr{A}_{q}\) such that \(E_{R}(x)=f_{1}(f(x))+r(x)\) and \(\deg r(x)<\deg f(x)\) by [13, Theorem 1]. For any root \(\beta\in V^{\prime}\) of \(f(x)\), we have \(r(\beta)=0\) by \(E_{R}(\beta)=0\). Since \(f(x)\) is separable, \(r(x)\) is divisible by \(f(x)\). Hence \(r(x)\equiv 0\) by \(\deg r(x)<\deg f(x)\). By definition, we have \(V^{\prime}\subset V^{\prime\perp}\). Hence the converse is shown.
**Corollary 3.28**.:
1. _The_ \(W_{F}\)_-representation_ \(\tau_{\psi,R,m}\) _is primitive if and only if the symplectic_ \(\mathbb{F}_{p}[\mathscr{H}]\)_-module_ \((V_{R},\omega_{R})\) _is completely anisotropic._
2. _The_ \(W_{F}\)_-representation_ \(\tau_{\psi,R,m}\) _is primitive if and only if there does not exist a non-trivial decomposition_ \(E_{R}(x)=f_{1}(f_{2}(x))\) _with_ \(f_{i}(x)\in\mathscr{A}_{q}\) _such that_ \(f_{2}(\alpha x)=\alpha f_{2}(x)\) _for_ \(\alpha\in\mu_{d_{R,m}}\) _and_ \(V_{f_{2}}:=\{\beta\in\mathbb{F}\mid f_{2}(\beta)=0\}\) _satisfies_ \(V_{f_{2}}\subset V_{f_{2}}^{\perp}\)_._
3. _If_ \(E_{R}(x)\in\mathscr{A}_{q}\) _is prime, the_ \(W_{F}\)_-representation_ \(\tau_{\psi,R,m}\) _is primitive._
4. _If_ \(R(x)=a_{e}x^{p^{e}}\) _and_ \(\mathbb{F}_{p}(\mu_{d_{R,m}})=\mathbb{F}_{p^{2e}}\)_, the_ \(\mathbb{F}_{p}[\mathscr{H}]\)_-module_ \(V_{R}\) _is irreducible. The_ \(W_{F}\)_-representation_ \(\tau_{\psi,R,m}\) _is primitive. If_ \(\gcd(p^{e}+1,m)=1\)_, the condition_ \(\mathbb{F}_{p}(\mu_{d_{R,m}})=\mathbb{F}_{p^{2e}}\) _is satisfied._
Proof.: The claim (1) follows from Corollary 3.13, Lemma 3.17, and [11, Theorem 4.1].
The claim (2) follows from the claim (1) and Proposition 3.27. The claim (3) follows from (2) immediately.
We show (4). We assume that there exists a non-zero \(\mathbb{F}_{p}[\mathscr{H}]\)-submodule \(W\subset V_{R}=\{\beta\in\mathbb{F}\mid a_{e}^{p^{e}}\beta^{p^{2e}}+a_{e}\beta=0\}\). We take a non-zero element \(\beta\in W\). By \(\mathbb{F}_{p}(\mu_{d_{R,m}})=\mathbb{F}_{p^{2e}}\), we have \(\mathbb{F}_{p^{2e}}\beta=\mathbb{F}_{p}(\mu_{d_{R,m}})\beta\subset W\). Since \(V_{R}\) is the set of the roots of a separable polynomial \(E_{R}(x)\) of degree \(p^{2e}\), we have \(|V_{R}|=p^{2e}\). Hence \(W=V_{R}=\mathbb{F}_{p^{2e}}\beta\). Thus the first claim follows. The second claim follows from the first one and [11, Theorem 4.1]. If \(\gcd(p^{e}+1,m)=1\), we have \(d_{R,m}=d_{R}=p^{e}+1\). Hence the third claim follows from \(\mathbb{F}_{p}(\mu_{p^{e}+1})=\mathbb{F}_{p^{2e}}\).
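For instance, take \(q=p\), \(e=1\) and \(R(x)=x^{p}\). Then \(d_{R}=p+1\), \(E_{R}(x)=x^{p^{2}}+x\) and \(\mathbb{F}_{p}(\mu_{p+1})=\mathbb{F}_{p^{2}}\), so for every positive integer \(m\) with \(\gcd(p+1,m)=1\) the \(W_{F}\)-representation \(\tau_{\psi,R,m}\) is primitive of dimension \(p\) by Corollary 3.28(4), with \(\operatorname{Sw}(\tau_{\psi,R,m})=m(p+1)/d_{R}=m\) by Corollary 3.15.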
**Example 3.29**.: For a positive integer \(s\), we consider the set
\[\mathcal{A}_{q,s}:=\left\{\varphi(x)\in\mathbb{F}_{q}[x]\ \Big{|}\ \varphi(x)=\sum_{i=0}^{n}c_{i}x^{p^{si}} \right\},\]
which is regarded as a ring with multiplication \(\varphi_{1}\circ\varphi_{2}:=\varphi_{1}(\varphi_{2}(x))\) for \(\varphi_{1},\varphi_{2}\in\mathcal{A}_{q,s}\).
In the following, we give examples such that \(E_{R}(x)\) is prime. We write \(d_{R}=p^{t}+1\) with \(t\geq 0\). Then we have \(E_{R}\in\mathcal{A}_{q,t}\). We write \(q=p^{f}\). Assume \(f\mid t\). We have
\[E_{R}(x)=\sum_{i=0}^{e}a_{i}x^{p^{e+i}}+\sum_{i=0}^{e}a_{i}x^{p^{e-i}}. \tag{3.10}\]
By \(f\mid t\), we have the ring isomorphism \(\Phi\colon\mathcal{A}_{q,t}\xrightarrow{\sim}\mathbb{F}_{q}[y]\); \(\sum_{i=0}^{r}c_{i}x^{p^{ti}}\mapsto\sum_{i=0}^{r}c_{i}y^{i}\), where \(\mathbb{F}_{q}[y]\) is a usual polynomial ring. The polynomial \(E_{R}(x)\in\mathscr{A}_{q}\) is prime if and only if \(\Phi(E_{R}(x))\) is irreducible in \(\mathbb{F}_{q}[y]\) in a usual sense. Recall that a polynomial \(\sum_{i=0}^{r}c_{i}y^{i}\in\mathbb{F}_{q}[y]\) is said to be _reciprocal_ if \(c_{i}=c_{r-i}\) for \(0\leq i\leq r\). By (3.10), we know that \(\Phi(E_{R}(x))\) is a reciprocal polynomial. The number of the monic irreducible reciprocal polynomials is calculated in [3, Theorems 2 and 3].
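For example, let \(R(x)=a_{e}x^{p^{e}}\) with \(a_{e}\in\mathbb{F}_{q}^{\times}\), so that \(d_{R}=p^{e}+1\) and \(t=e\), and assume \(f\mid e\). Then (3.10) reads \(E_{R}(x)=a_{e}x^{p^{2e}}+a_{e}x\), and
\[\Phi(E_{R}(x))=a_{e}(y^{2}+1)\]
is irreducible in \(\mathbb{F}_{q}[y]\) if and only if \(-1\) is a non-square in \(\mathbb{F}_{q}\), that is, if and only if \(q\equiv 3\pmod{4}\). In that case \(E_{R}(x)\) is prime, and \(\tau_{\psi,R,m}\) is primitive by Corollary 3.28(3).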
In general, we do not know a necessary and sufficient condition on \(R(x)\) for \(E_{R}(x)\) to be prime. The number of prime elements in \(\mathcal{A}_{q,s}\) is calculated in [4] and [12].
**Proposition 3.30**.: _Assume \(d_{R,m}\in\{1,2\}\). There exists an unramified finite extension \(F^{\prime}/F\) such that \(\tau_{\psi,R,m}|_{W_{F^{\prime}}}\) is imprimitive._
Proof.: For a positive integer \(r\), let \(F_{r}\) be the unramified extension of \(F\) of degree \(r\). We take a non-zero element \(\beta\in V_{R}\). Let \(t\) be the positive integer such that \(\mathbb{F}_{q^{t}}=\mathbb{F}_{q}(\beta)\). Let \(\mathscr{H}_{t}\subset\mathscr{H}\) be the subgroup generated by \(\sigma^{t},\tau\). The subspace \(W_{R}:=\mathbb{F}_{p}\beta\subset V_{R}\) is a totally isotropic \(\mathbb{F}_{p}[\mathscr{H}_{t}]\)-submodule because of \(d_{R,m}\leq 2\). Hence \(V_{R}\) is not a completely anisotropic \(\mathbb{F}_{p}[\mathscr{H}_{t}]\)-module. Thus \(\tau_{\psi,R,m}|_{W_{F_{t}}}\) is imprimitive by [11, Theorem 4.1].
**Lemma 3.31**.: _The \(W_{T_{\rho}}\)-representation \(\tau_{\psi,R,m}|_{W_{T_{\rho}}}\) is imprimitive._
Proof.: We take a non-zero element \(\beta\in V_{R}\). Then \(\mathbb{F}_{p}\beta\) is a totally isotropic symplectic submodule of the symplectic module \(V_{R}\) associated to \(\tau_{\psi,R,m}|_{W_{T_{\rho}}}\). Hence \(\tau_{\psi,R,m}|_{W_{T_{\rho}}}\) is imprimitive by Corollary 3.28(1).
### Root system associated to irreducible \(\mathbb{F}_{p}[\mathscr{H}]\)-module
A root system associated to an irreducible \(\mathbb{F}_{p}[\mathscr{H}]\)-module is defined in [11]. We determine the root system associated to \(V_{R}\) in the situation of Corollary 3.28(4).
We recall the definition of a root system.
**Definition 3.32**.: ([11, §7])
1. Let \(\Phi\) be the group of the automorphisms of the torus \((\mathbb{F}^{\times})^{2}\) generated by the automorphisms \(\theta\colon(\alpha,\beta)\mapsto(\alpha^{p},\beta^{p})\) and \(\sigma\colon(\alpha,\beta)\mapsto(\alpha^{q^{-1}},\beta)\). A \(\Phi\)-orbit of \((\mathbb{F}^{\times})^{2}\) is called a _root system_.
2. Let \(W=\Phi(\alpha,\beta)\) be a root system. Let \(a=a(W)\) be the minimal positive integer with \(\alpha^{q^{a}}=\alpha\), \(b=b(W)\) the minimal positive integer with \(\alpha^{p^{b}}=\alpha^{q^{x}}\) for some \(x\in\mathbb{Z}\) and \(\beta^{p^{b}}=\beta\), and \(c=c(W)\) the minimal non-negative integer with \(\alpha^{p^{b}}=\alpha^{q^{c}}\). Let \(e^{\prime}=e^{\prime}(W)\) and \(f^{\prime}=f^{\prime}(W)\) be the orders of \(\alpha\) and \(\beta\), respectively. These integers are independent of the choice of \((\alpha,\beta)\) in \(W\).
3. Let \(\mathscr{H}_{d,r}:=\left\langle\sigma,\tau\mid\tau^{d}=1,\ \sigma^{r}=1,\ \sigma\tau\sigma^{-1}=\tau^{q}\right\rangle\) with \(q^{r}\equiv 1\pmod{d}\).
4. We say that a root system \(W\)_belongs to_\(\mathscr{H}_{d,r}\) if \(e^{\prime}\mid d\) and \(af^{\prime}\mid r\).
5. Let \(W=\Phi(\alpha,\beta)\) be a root system which belongs to \(\mathscr{H}_{d,r}\). Let \(\overline{M(W)}\) be the \(\mathbb{F}\)-module with the basis \[\{\theta^{i}\sigma^{j}m\mid 0\leq i\leq b-1,\ 0\leq j\leq a-1\}\] and with the action of \(\mathscr{H}\) by \[\tau m=\alpha m,\quad\sigma^{a}m=\beta m,\quad\theta^{b}m=\sigma^{-c}m.\]
**Theorem 3.33**.: _([11, Theorems 7.1 and 7.2])_
1. _There exists an irreducible_ \(\mathbb{F}_{p}[\mathscr{H}_{d,r}]\)_-module_ \(M(W)\) _such that_ \(M(W)\otimes_{\mathbb{F}_{p}}\mathbb{F}\) _is isomorphic to_ \(\overline{M(W)}\) _as_ \(\mathbb{F}[\mathscr{H}_{d,r}]\)_-modules._
2. _The map_ \(W\mapsto M(W)\) _defines a one-to-one correspondence between the set of root systems belonging to_ \(\mathscr{H}_{d,r}\) _and the set of isomorphism classes of irreducible_ \(\mathbb{F}_{p}[\mathscr{H}_{d,r}]\)_-modules._
We go back to the original situation. Assume that \(R(x)=a_{e}x^{p^{e}}\) and \(\mathbb{F}_{p}(\mu_{d_{R,m}})=\mathbb{F}_{p^{2e}}\). Let \(\mathscr{H}\) be as in (3.9). In the above notation, we have \(\mathscr{H}=\mathscr{H}_{d_{R,m},r}\). As in Corollary 3.28(4), the \(\mathbb{F}_{p}[\mathscr{H}]\)-module \(V_{R}\) is irreducible.
**Proposition 3.34**.: _We write \(q=p^{f}\). Let \(e_{1}:=\gcd(f,2e)\) and \(\beta:=\mathrm{Nr}_{q/p^{e_{1}}}(-a_{e}^{-(p^{e}-1)})\). Let \(\alpha\in\mu_{d_{R,m}}\) be a primitive \(d_{R,m}\)-th root of unity. We consider the root system \(W:=\Phi(\alpha,\beta)\)._
1. _We have_ \(a(W)=2e/e_{1}\) _and_ \(b(W)=e_{1}.\) _Further,_ \(c(W)\) _is the minimal non-negative integer such that_ \(fc(W)\equiv e_{1}\pmod{2e}\)_._
2. _The root system_ \(W\) _belongs to_ \(\mathscr{H}\)_._
3. _We have an isomorphism_ \(V_{R}\simeq M(W)\) _as_ \(\mathbb{F}_{p}[\mathscr{H}]\)_-modules._
Proof.: We show (1). We simply write \(a,b,c\) for \(a(W),b(W),c(W)\), respectively. By definition, \(a\) is the minimal positive integer such that \(\alpha^{q^{a}}=\alpha\). By \(\mathbb{F}_{p}(\alpha)=\mathbb{F}_{p^{2e}}\), \(a\) is the minimal positive integer satisfying \(fa\equiv 0\pmod{2e}\). Thus we obtain \(a=2e/e_{1}\).
By definition, \(b\) is the minimal positive integer such that \(\alpha^{p^{b}}=\alpha^{q^{x}}\) with some integer \(x\) and \(\beta^{p^{b}}=\beta\). The first condition implies that \(fx\equiv b\pmod{2e}\). Hence \(b\) is divisible by \(e_{1}\). By \(\beta\in\mathbb{F}_{p^{e_{1}}}^{\times}\), we have \(\beta^{p^{b}}=\beta\) if \(e_{1}\mid b\). Hence \(b=e_{1}\).
By definition, \(c\) is the minimal non-negative integer such that \(\alpha^{p^{b}}=\alpha^{q^{c}}\). This is equivalent to \(e_{1}=b\equiv fc\pmod{2e}\).
We show (2). The order \(e^{\prime}\) of \(\alpha\) equals \(d_{R,m}\). Let \(f^{\prime}\) be the order of \(\beta\). It suffices to show \(af^{\prime}\mid r\). By the choice of \(r\), we have \(\alpha^{q^{r}}=\alpha\). Hence \(2e\mid fr\) by \(\mathbb{F}_{p^{2e}}=\mathbb{F}_{p}(\alpha)\), and \(a\mid r\). These imply that \(\mathbb{F}_{p^{2e}}\subset\mathbb{F}_{q^{a}}\subset\mathbb{F}_{q^{r}}\).
Let \(\eta\in V_{R}\setminus\{0\}\). By \(\eta^{p^{2e}}=-a_{e}^{-(p^{e}-1)}\eta\), \(a_{e}\in\mathbb{F}_{q}^{\times}\) and \(2e\mid fr\), we compute
\[\eta^{q^{r}}=(\eta^{p^{2e}-1})^{\frac{q^{r}-1}{p^{2e}-1}}\eta=\mathrm{Nr}_{q^ {r}/p^{2e}}(-a_{e}^{-(p^{e}-1)})\eta=\mathrm{Nr}_{q^{a}/p^{2e}}(-a_{e}^{-(p^{ e}-1)})^{r/a}\eta. \tag{3.11}\]
The restriction map \(\operatorname{Gal}(\mathbb{F}_{q^{a}}/\mathbb{F}_{p^{2e}})\to\operatorname{Gal}(\mathbb{F}_{q}/\mathbb{F}_{p^{e_{1}}})\) is an isomorphism because of \(a=2e/e_{1}\). By \(a_{e}\in\mathbb{F}_{q}^{\times}\), we have \(\operatorname{Nr}_{q^{a}/p^{2e}}(-a_{e}^{-(p^{e}-1)})=\beta\). Hence \(\eta^{q^{r}}=\beta^{r/a}\eta\) by (3.11). Since \(\eta^{q^{r}}=\eta\) by Lemma 3.18, we obtain \(\beta^{r/a}=1\). Hence \(f^{\prime}\mid(r/a)\).
We show (3). Let \(\eta\in V_{R}\setminus\{0\}\). Similarly to (3.11), we have \(\sigma^{a}\eta=\eta^{q^{a}}=\beta\eta\). By definition and Lemma 3.18, we have \(\tau\eta=\alpha\eta\). The \(\mathbb{F}_{p}[\mathscr{H}]\)-module \(V_{R}\) satisfies the assumption in [11, Lemma 7.3] by (2). Hence \(\{0\}\neq M(W)\subset V_{R}\) by [11, Lemma 7.3]. By the irreducibility of \(V_{R}\) in Corollary 3.28(4), we obtain \(M(W)=V_{R}\).
A necessary and sufficient condition for an irreducible \(\mathbb{F}_{p}[\mathscr{H}]\)-module to have a symplectic form is determined in [11, Theorem 8.1]. We recall the result.
**Theorem 3.35**.: _([11, Theorem 8.1]) Let \(W=\Phi(\alpha,\beta)\) be a root system. The irreducible \(\mathbb{F}_{p}[\mathscr{H}]\)-module \(M(W)\) has a symplectic form if and only if_
A. \(a(W)\equiv 0\pmod{2}\)_,_ \(\alpha\in\mu_{q^{a(W)/2}+1}\) _and_ \(\beta=-1\)_,_
B. \(b(W),c(W)\equiv 0\pmod{2}\)_,_ \(\alpha\in\mu_{p^{b(W)/2}+q^{c(W)/2}}\) _and_ \(\beta\in\mu_{p^{b(W)/2}+1}\)_, or_
C. \(b(W)\equiv 0\pmod{2}\)_,_ \(c(W)\equiv a(W)\pmod{2}\)_,_ \(\alpha\in\mu_{p^{b(W)/2}+q^{(a(W)+c(W))/2}}\) _and_ \(\beta\in\mu_{p^{b(W)/2}+1}\)_._
_There are two isomorphism classes of symplectic structures on \(M(W)\) in case A with \(p\neq 2\), and one in all other cases._
**Lemma 3.36**.: _Let \(W\) be as in Proposition 3.34. Let \(v_{2}(\cdot)\) denote the \(2\)-adic valuation on \(\mathbb{Q}\)._
1. _Assume_ \(v_{2}(e)\geq v_{2}(f)\)_. Then the module_ \(M(W)\) _is of type A in Theorem_ 3.35_._
2. _Assume_ \(v_{2}(e)<v_{2}(f)\)_. Then we have_ \(a(W)\equiv 1\pmod{2}\)_,_ \(b(W)\equiv 0\pmod{2}\) _and_ \((b(W)/2)\mid e\)_. Hence we have_ \(\beta\in\mu_{p^{b(W)/2}+1}\)_._
    1. _If_ \(c(W)\equiv 0\pmod{2}\)_, the module_ \(M(W)\) _is of type B in Theorem_ 3.35_._
    2. _If_ \(c(W)\equiv 1\pmod{2}\)_, the module_ \(M(W)\) _is of type C in Theorem_ 3.35_._
Proof.: We show (1). Recall that \(e_{1}=\gcd(f,2e)\) and \(\beta=\operatorname{Nr}_{q/p^{e_{1}}}(-a_{e}^{-(p^{e}-1)})\). We have \(a(W)=2e/e_{1}\equiv 0\pmod{2}\). We have \(e_{1}\mid e\) and \(f/e_{1}\equiv 1\pmod{2}\). By \((p^{e_{1}}-1)\mid(p^{e}-1)\),
\[\beta=(-1)^{\frac{f}{e_{1}}}\big{(}a_{e}^{-\frac{p^{e}-1}{p^{e_{1}}-1}}\big{)}^{q-1}=-1,\]
where we use \(a_{e}\in\mathbb{F}_{q}^{\times}\) for the last equality. By \(fa(W)/2=fe/e_{1}\) and \(q=p^{f}\), we have \(\alpha^{q^{a(W)/2}+1}=\alpha^{p^{fe/e_{1}}+1}\). Since \(fe/e_{1}\) is divisible by \(e\) and \(f/e_{1}\) is odd, \(d_{R,m}\mid(p^{e}+1)\mid(p^{fe/e_{1}}+1)\). Hence we obtain \(\alpha\in\mu_{q^{a(W)/2}+1}\). Thus the claim follows.
We show (2). Recall \(b(W)=e_{1}\). The former claims are clear. By \((e_{1}/2)\mid e\), we have \((p^{e_{1}/2}-1)\mid(p^{e}-1)\). By definition of \(\beta\) and \(a_{e}\in\mathbb{F}_{q}^{\times}\), we obtain
\[\beta^{p^{\frac{e_{1}}{2}}+1}=\big{(}a_{e}^{-\frac{p^{e}-1}{p^{e_{1}/2}-1}} \big{)}^{q-1}=1.\]
Hence \(\beta\in\mu_{p^{b(W)/2}+1}\). Assume that \(c(W)\) is even. We write \((c(W)/2)f=(e_{1}/2)+le\) with \(l\in\mathbb{Z}\) by Proposition 3.34(1). Then \(l\) is odd by \(e_{1}=\gcd(f,2e)\). Hence \((p^{e}+1)\mid(p^{le}+1)\). This implies \(\alpha\in\mu_{p^{b(W)/2}+q^{c(W)/2}}\). Hence we obtain (2)(i). The remaining claim is shown similarly.
#### 3.4.1 Künneth formula and primary module
**Classification results in [11].** We recall classification results on completely anisotropic symplectic modules given in [11], restricted to the case \(p\neq 2\).
**Theorem 3.37**.: _([11, Theorem 9.1]) Let \((V,\omega)=\bigoplus_{i=1}^{n}(V_{i},\omega_{i})\) be a direct sum of irreducible symplectic \(\mathbb{F}_{p}[\mathscr{H}]\)-modules. Assume that \(p\neq 2\). Then \((V,\omega)\) is completely anisotropic if and only if, for each isomorphism class, the modules of type B or C occur at most once and of type A at most twice among \(V_{1},\ldots,V_{n}\)._
Assume that \(p\neq 2\). Let \((M(W),0)\) denote the unique symplectic module on \(M(W)\) which is of type B or C by Theorem 3.35. Let \((M(W),0)\), \((M(W),1)\) denote the two symplectic modules on \(M(W)\) in the case where \(p\neq 2\) and \(M(W)\) is of type A. We denote by \((M(W),2)\) the completely anisotropic symplectic module on \(M(W)\oplus M(W)\), where \(M(W)\) is of type A.
**Theorem 3.38**.: _([11, Theorem 8.2]) Each completely anisotropic symplectic \(\mathbb{F}_{p}[\mathscr{H}]\)-module is isomorphic to one and only one symplectic module of the form_
\[\bigoplus_{i=1}^{n}(M(W_{i}),\nu_{i}),\]
_where \(W_{1},\ldots,W_{n}\) are mutually different root systems belonging to \(\mathscr{H}\)._
Let \(k\) be a positive integer. Let \(R:=\{R_{i}\}_{1\leq i\leq k}\) with \(R_{i}\in\mathscr{A}_{q}\). We consider the \(k\)-dimensional affine smooth variety \(X_{R}\) defined by
\[a^{p}-a=\sum_{i=1}^{k}x_{i}R_{i}(x_{i})\]
in \(\mathbb{A}_{\mathbb{F}_{q}}^{k+1}\). The product group \(Q_{R}:=Q_{R_{1}}\times\cdots\times Q_{R_{k}}\) acts on \(X_{R}\) naturally, similarly to (2.5). Let \(\mathbb{Z}\) act on \(Q_{R}\) naturally. Let \(\psi\in\mathbb{F}_{p}^{\vee}\setminus\{1\}\). We regard \(H_{\mathrm{c}}^{k}(X_{R,\mathbb{F}},\overline{\mathbb{Q}}_{\ell})[\psi]\) as a \(Q_{R}\rtimes\mathbb{Z}\)-representation. Let the notation be as in (3.3). Let \(m=\{m_{i}\}_{1\leq i\leq k}\), where \(m_{i}\) is a positive integer. We have the homomorphism
\[\Theta_{R,m}\colon W_{F}\to Q_{R}\rtimes\mathbb{Z};\ \sigma\mapsto((a_{R_{i}, \sigma}^{m_{i}},b_{R_{i},\sigma},c_{R_{i},\sigma})_{1\leq i\leq k},n_{\sigma}). \tag{3.12}\]
**Definition 3.39**.: We define a smooth \(W_{F}\)-representation \(\tau_{\psi,R,m}\) to be the inflation of the \(Q_{R}\rtimes\mathbb{Z}\)-representation \(H_{\mathrm{c}}^{k}(X_{R,\mathbb{F}},\overline{\mathbb{Q}}_{\ell})[\psi]\) by \(\Theta_{R,m}\).
**Lemma 3.40**.: _We have an isomorphism \(\tau_{\psi,R,m}\simeq\bigotimes_{i=1}^{k}\tau_{\psi,R_{i},m_{i}}\) as \(W_{F}\)-representations._
Proof.: Let \(Q_{R_{i},\mathbb{Z}}:=Q_{R_{i}}\rtimes\mathbb{Z}\) and \(\Theta_{R_{i},m_{i}}\colon W_{F}\to Q_{R_{i},\mathbb{Z}}\) be as in (3.6). Let
\[\delta^{\prime}\colon Q_{R}\rtimes\mathbb{Z}\to Q_{R_{1},\mathbb{Z}}\times \cdots\times Q_{R_{k},\mathbb{Z}};\ ((g_{i})_{1\leq i\leq k},n)\mapsto(g_{i},n)_{1\leq i\leq k}.\]
Each \(H_{\mathrm{c}}^{1}(C_{R_{i},\mathbb{F}},\overline{\mathbb{Q}}_{\ell})[\psi]\) is regarded as a \(Q_{R_{i},\mathbb{Z}}\)-representation. By the Künneth formula, we have an isomorphism \(H_{\mathrm{c}}^{k}(X_{R,\mathbb{F}},\overline{\mathbb{Q}}_{\ell})[\psi]\simeq\bigotimes_{i=1}^{k}(H_{\mathrm{c}}^{1}(C_{R_{i},\mathbb{F}},\overline{\mathbb{Q}}_{\ell})[\psi])\) as \(Q_{R}\rtimes\mathbb{Z}\)-representations, where the right hand side is regarded as a \(Q_{R}\rtimes\mathbb{Z}\)-representation via \(\delta^{\prime}\). We consider the commutative diagram expressing \(\delta^{\prime}\circ\Theta_{R,m}=(\Theta_{R_{1},m_{1}}\times\cdots\times\Theta_{R_{k},m_{k}})\circ\delta\), where \(\delta\colon W_{F}\to W_{F}^{k}\) is the diagonal map. Hence the claim follows.
**Remark 3.41**.: Let \(+\colon\prod_{i=1}^{k}Z(Q_{R_{i}})\to\mathbb{F}_{p}\); \((1,0,\gamma_{i})_{1\leq i\leq k}\mapsto\sum_{i=1}^{k}\gamma_{i}\) and \(\overline{Q}_{R}:=Q_{R}/\operatorname{Ker}+\). The action of \(Q_{R}\rtimes\mathbb{Z}\) on \(H_{\mathrm{c}}^{k}(X_{R,\mathbb{F}},\overline{\mathbb{Q}}_{\ell})\) factors through \(\overline{Q}_{R}\rtimes\mathbb{Z}\). Let \(\overline{H}_{R}\) denote the image of \(H_{R_{1}}\times\cdots\times H_{R_{k}}\) under \(Q_{R}\to\overline{Q}_{R}\). The group \(\overline{H}_{R}\) is an extra-special \(p\)-group. The quotient \(\overline{H}_{R}/Z(\overline{H}_{R})\) is isomorphic to \(\bigoplus_{i=1}^{k}V_{R_{i}}\). Moreover, \(\overline{Q}_{R}/\overline{H}_{R}\) is supersolvable.
**Lemma 3.42**.: _The \(W_{F}\)-representation \(\tau_{\psi,R,m}\) is irreducible._
Proof.: The \(\overline{H}_{R}\)-representation \(H_{\mathrm{c}}^{k}(X_{R,\mathbb{F}},\overline{\mathbb{Q}}_{\ell})[\psi]\) is irreducible by [8, Satz 16.14(2)]. The claim follows from this.
Let \(\rho_{\psi,R_{i},m_{i}}\) denote the projective representation associated to \(\tau_{\psi,R_{i},m_{i}}\). Let \(F_{i}\) denote the kernel field of \(\rho_{\psi,R_{i},m_{i}}\) and \(T_{i}\) the maximal tamely ramified extension of \(F\) in \(F_{i}\). The field \(T_{i}\) is called the tame kernel field of \(\rho_{\psi,R_{i},m_{i}}\). Let \(F_{R}:=F_{1}\cdots F_{k}\).
**Lemma 3.43**.: _Let \(\rho_{\psi,R,m}\) be the projective representation associated to \(\tau_{\psi,R,m}\). The kernel field of \(\rho_{\psi,R,m}\) is \(F_{R}\)._
Proof.: By Lemma 3.40, we can check \(\operatorname{Ker}\rho_{\psi,R,m}=\bigcap_{i=1}^{k}\operatorname{Ker}\rho_{ \psi,R_{i},m_{i}}\). The claim follows from this.
Let \(T_{R}\) be the maximal tamely ramified extension of \(F\) in \(F_{R}\), and let \(V_{R}:=\operatorname{Gal}(F_{R}/T_{R})\). We have the restriction map \(V_{R}\hookrightarrow\prod_{i=1}^{k}\operatorname{Gal}(F_{i}/T_{i})\simeq\bigoplus_{i=1}^{k}V_{R_{i}}\). Let \(\omega_{R_{i}}\) be the form on \(V_{R_{i}}\) in Lemma 2.6(2). Then \(V_{R}\) has a bilinear form stable under the action of \(\mathbb{F}_{p}[\operatorname{Gal}(T_{R}/F)]\) ([11, §4]), given by \(\omega_{R}:=\sum_{i=1}^{k}\omega_{R_{i}}\).
We give a recipe for producing an example of \((M(W),2)\) below.
**Proposition 3.44**.: _Assume \(k=2\). Let \(R_{i}(x)=a_{e,i}x^{p^{e}}\neq 0\) for \(i\in\{1,2\}\). Assume_
\[m_{1}\neq m_{2},\quad d:=d_{R_{1},m_{1}}=d_{R_{2},m_{2}}.\]
1. _We have an isomorphism_ \(V_{R}\simeq V_{R_{1}}\oplus V_{R_{2}}\)_._
2. _We have_ \(T_{R}=T_{1}\cdot T_{2}\)_._
3. _Assume that_ \(p\neq 2\)_,_ \(v_{2}(e)\geq v_{2}(f)\) _and_ \(\mathbb{F}_{p}(\mu_{d})=\mathbb{F}_{p^{2e}}\)_. If_ \((V_{R},\omega_{R})\) _is completely anisotropic as a symplectic_ \(\mathbb{F}_{p}[\operatorname{Gal}(T_{R}/F)]\)_-module,_ \(V_{R}\) _is isomorphic to a primary module_ \((M(W),2)\) _with a root system_ \(W\)_._
Proof.: By Lemma 2.9 and Lemma 3.12, there exists an unramified finite extension \(E\) of \(F\) such that \(F_{i}\subset E(\alpha_{R_{i}}^{m_{i}},\beta_{R_{i},m_{i}})\) for \(i=1,2\) and \(E(\alpha_{R_{i}}^{m_{i}},\beta_{R_{i},m_{i}})/E\) is Galois. We put \(T:=E(\alpha_{R_{i}}^{m_{i}})=E(\varpi^{1/d})\) and \(E_{i}:=T(\beta_{R_{i},m_{i}})\) for \(i=1,2\). Let \(n_{i}:=m_{i}d/d_{R}=m_{i}/\gcd(d_{R},m_{i})\). Let \(\{\operatorname{Gal}(E_{i}/T)^{v}\}_{v\geq-1}\) be the upper numbering ramification subgroups of \(\operatorname{Gal}(E_{i}/T)\). As in the proof of Lemma 3.14, we have
\[\operatorname{Gal}(E_{i}/T)^{v}=\begin{cases}\operatorname{Gal}(E_{i}/T)&\text {if $v\leq n_{i}$},\\ \{1\}&\text{if $v>n_{i}$}.\end{cases}\]
Let \(H:=E_{1}\cap E_{2}\). Since \(E_{i}/T\) is Galois, so is \(H/T\). By [15, Proposition 14 in IVSS3], the subgroup \(\operatorname{Gal}(H/T)^{v}\) equals \(\operatorname{Gal}(H/T)\) if \(v\leq n_{i}\) and \(\{1\}\) if \(v>n_{i}\). Hence we conclude \(\operatorname{Gal}(H/T)=\{1\}\) by \(n_{1}\neq n_{2}\). We obtain \(H=T\). Hence we have an isomorphism
\(\operatorname{Gal}(E_{1}E_{2}/T)\simeq\operatorname{Gal}(E_{1}/T)\times\operatorname{Gal}(E_{2}/T)\simeq V_{R_{1}}\oplus V_{R_{2}}\). The extension \(E_{1}E_{2}/T\) is totally ramified of \(p\)-power degree. Hence, \(T\) is the maximal tamely ramified extension of \(E\) in \(E_{1}\cdot E_{2}\). Therefore, \(T_{R}=F_{R}\cap T\). We have a commutative diagram in which every map is the restriction map; its right vertical isomorphism follows from Lemma 3.17. Clearly \(g\) is injective. By the commutative diagram, \(g\) is bijective. Hence we obtain (1).
By the corresponding commutative diagram of restriction maps and the injectivity of \(g_{1}\), the map \(g_{2}\) is injective. Hence \(T_{R}=T_{1}T_{2}\).
We show (3). Let \(r:=[E:F]\) and \(\mathscr{H}_{d,r}:=\mu_{d}\rtimes(\mathbb{Z}/r\mathbb{Z})\) as in (3.9). We identify \(\operatorname{Gal}(T/F)\) with \(\mathscr{H}_{d,r}\). By \(T_{R}\subset T\), the \(V_{R}\), \(V_{R_{i}}\) are naturally regarded as \(\mathbb{F}_{p}[\mathscr{H}_{d,r}]\)-modules. Let \(\alpha\) be a primitive \(d\)-th root of unity. Let \(W:=\Phi(\alpha,-1)\). Then we have an isomorphism \(V_{R_{i}}\simeq M(W)\) as \(\mathbb{F}_{p}[\mathscr{H}_{d,r}]\)-modules and know that \(V_{R_{i}}\) is of type A by Proposition 3.34(3), Lemma 3.36(1) and \(d_{R_{1},m_{1}}=d_{R_{2},m_{2}}\). This implies an isomorphism \(V_{R_{1}}\simeq V_{R_{2}}\) as \(\mathbb{F}_{p}[\mathscr{H}_{d,r}]\)-modules. Hence the claim follows from the assumption that \((V_{R},\omega_{R})\) is completely anisotropic and the definition of \((M(W),2)\).
**Example 3.45**.: Assume \(p\neq 2\). Let \(e=f=1\), \(R_{1}(x)=x^{p}\) and \(R_{2}(x)=ax^{p}\in\mathbb{F}_{p}[x]\setminus\{0\}\). We assume that \(m_{1}\neq m_{2}\) and \(d_{R_{1},m_{1}}=d_{R_{2},m_{2}}=p+1\). We have \(V_{R_{i}}=\{x\in\mathbb{F}\mid x^{p^{2}}+x=0\}\) for \(i=1,2\).
Let \(W\subset V_{R_{1}}\oplus V_{R_{2}}\) be a totally isotropic \(\mathbb{F}_{p}[\operatorname{Gal}(T_{R}/F)]\)-subspace. Assume \(W\neq\{0\}\). We take a non-zero element \((x_{1},x_{2})\in W\). We have \(f_{R_{1}}(x,y)=-xy^{p}\), \(f_{R_{2}}(x,y)=-axy^{p}\) and hence \(\omega_{R}((x_{1},x_{2}),(\xi x_{1},\xi x_{2}))=(x_{1}^{p+1}+ax_{2}^{p+1})(\xi -\xi^{p})=0\) for any \(\xi\in\mu_{p+1}\). Hence \(x_{1}^{p+1}+ax_{2}^{p+1}=0\) and \(x_{2}\neq 0\). There exists \(\eta\in\mathbb{F}\) such that \(\eta^{p+1}=-a\) and \(x_{1}=\eta x_{2}\). By \(\mathbb{F}_{p^{2}}=\mathbb{F}_{p}(\mu_{p+1})\), we have \(W_{1}:=\{(\eta x,x)\mid x\in V_{R_{2}}\}\subset W\). We also have \(W_{2}:=\{(\eta^{p}x,x)\mid x\in V_{R_{2}}\}\subset W\). Let \(\left(\frac{\cdot}{p}\right)\) be the Legendre symbol. If \(W_{1}\cap W_{2}\neq\{0\}\), we have \(\eta\in\mathbb{F}_{p}\) and \(\eta^{2}=-a\). This implies \(\left(\frac{-a}{p}\right)=1\).
Assume \(\left(\frac{-a}{p}\right)=-1\). Then \(W_{1}\cap W_{2}=\{0\}\), so \(W\supseteq W_{1}\oplus W_{2}=V_{R_{1}}\oplus V_{R_{2}}\); this contradicts the total isotropy of \(W\), since \(\omega_{R}\) is non-degenerate. Hence \(V_{R_{1}}\oplus V_{R_{2}}\) is completely anisotropic if \(\left(\frac{-a}{p}\right)=-1\).
If \(\left(\frac{-a}{p}\right)=1\), we may take \(\eta\in\mathbb{F}_{p}\) with \(\eta^{2}=-a\); then \(W_{1}=W_{2}\) is a non-zero totally isotropic \(\mathbb{F}_{p}[\mathscr{H}]\)-subspace. Hence \(V_{R_{1}}\oplus V_{R_{2}}\) is not completely anisotropic.
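The dichotomy can be confirmed by brute force for small \(p\). The following script (an illustrative sketch, not part of the original text; all function names are ours) realizes \(\mathbb{F}_{81}=\mathbb{F}_{3}[t]/(t^{4}+t+2)\) and, for \(p=3\) and \(a\in\{1,2\}\), searches \(V_{R_{1}}\oplus V_{R_{2}}\) for a non-zero totally isotropic \(\mathbb{F}_{p}[\mathscr{H}]\)-submodule; since any non-zero isotropic submodule contains the submodule generated by one of its vectors, it suffices to test cyclic submodules. It reports none for \(a=1\) (where \(\left(\frac{-a}{p}\right)=-1\)) and an isotropic submodule of size \(9\) for \(a=2\).

```python
# Brute-force check of the dichotomy in Example 3.45 for p = 3.
from itertools import product

p = 3
ZERO, ONE = (0, 0, 0, 0), (1, 0, 0, 0)

def add(u, v):
    return tuple((a + b) % p for a, b in zip(u, v))

def mul(u, v):
    prod = [0] * 7
    for i, a in enumerate(u):
        for j, b in enumerate(v):
            prod[i + j] = (prod[i + j] + a * b) % p
    for k in range(6, 3, -1):  # reduce with t^4 = 1 + 2t in F_3[t]/(t^4+t+2)
        c, prod[k] = prod[k], 0
        prod[k - 4] = (prod[k - 4] + c) % p
        prod[k - 3] = (prod[k - 3] + 2 * c) % p
    return tuple(prod[:4])

def fpow(u, n):
    r = ONE
    for _ in range(n):
        r = mul(r, u)
    return r

F81 = list(product(range(p), repeat=4))               # the field F_81
V = [x for x in F81 if add(fpow(x, 9), x) == ZERO]    # roots of x^9 + x
mu4 = [x for x in F81 if fpow(x, 4) == ONE]           # mu_{p+1}
MINUS = (p - 1, 0, 0, 0)

def w1(x, y):  # omega_{R_1}(x, y) = y x^3 - x y^3, from f_{R_1}(x, y) = -x y^p
    return add(mul(y, fpow(x, 3)), mul(MINUS, mul(x, fpow(y, 3))))

def closure(v):  # F_p[H]-submodule of V (+) V generated by v
    S = {v}
    changed = True
    while changed:
        changed = False
        for u in list(S):
            cands = [(fpow(u[0], p), fpow(u[1], p))]                 # Frobenius
            cands += [(mul(xi, u[0]), mul(xi, u[1])) for xi in mu4]  # mu_4-scaling
            cands += [(add(u[0], w[0]), add(u[1], w[1])) for w in list(S)]
            for c in cands:
                if c not in S:
                    S.add(c)
                    changed = True
    return S

for a in (1, 2):
    A = (a, 0, 0, 0)
    omega = lambda v, w: add(w1(v[0], w[0]), mul(A, w1(v[1], w[1])))
    found, seen = None, set()
    for v in ((x1, x2) for x1 in V for x2 in V):
        if v == (ZERO, ZERO) or v in seen:
            continue
        S = closure(v)
        seen |= S
        if all(omega(u, w) == ZERO for u in S for w in S):
            found = S
            break
    print("a =", a, ":",
          "isotropic submodule of size %d" % len(found) if found
          else "completely anisotropic")
```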
## 4 Geometric interpretation of imprimitivity
Throughout this section, we assume \(p\neq 2\). Our aim is to prove Theorem 4.13. To do so, we use the explicit description of the automorphism group of \(C_{R}\) and the mechanism for taking quotients of \(C_{R}\) by certain abelian groups developed in [1] and [6].
### Quotient of \(C_{R}\) and description of \(\tau_{\psi,R,m}\)
Let \(C_{R}\) be as in (2.7). In this subsection, we always assume that there exists a finite etale morphism
\[\phi\colon C_{R}\to C_{R_{1}};\ (a,x)\mapsto(a-\Delta(x),r(x)), \tag{4.1}\]
where \(\Delta(x)\in\mathbb{F}_{q}[x]\) and \(r(x),R_{1}(x)\in\mathscr{A}_{q}\) satisfy \(d_{R,m}\mid d_{R_{1}}\) and \(r(\alpha x)=\alpha r(x)\) for \(\alpha\in\mu_{d_{R,m}}\). Since \(\phi\) is etale, \(r(x)\) is a reduced polynomial. Hence \(r^{\prime}(0)\neq 0\). By the above assumption,
\[xR(x)=r(x)R_{1}(r(x))+\Delta(x)^{p}-\Delta(x) \tag{4.2}\] \[r^{\prime}(0)\neq 0,\quad d_{R,m}\mid d_{R_{1}},\quad r(\alpha x)= \alpha r(x)\quad\text{for }\alpha\in\mu_{d_{R,m}}. \tag{4.3}\]
Let \(e^{\prime}\) be a non-negative integer such that \(\deg R_{1}(x)=p^{e^{\prime}}\) and \(e^{\prime}\leq e\). Then \(\deg r(x)=p^{e-e^{\prime}}\) by (4.2).
We have \(\alpha R_{1}(\alpha x)=R_{1}(x)\) for \(\alpha\in\mu_{d_{R,m}}\) by \(d_{R,m}\mid d_{R_{1}}\) and (2.3). Hence \(\Delta(\alpha x)-\Delta(x)\in\mathbb{F}_{p}\) for \(\alpha\in\mu_{d_{R,m}}\) by (4.2). We have \(\Delta(\alpha x)=\Delta(x)\), since the constant coefficient of \(\Delta(\alpha x)-\Delta(x)\) is zero.
**Lemma 4.1**.: _Let \(\phi\colon C_{R}\to C_{R};\ (a,x)\mapsto(a+g(x),x+c)\) be an automorphism with \(g(x)\in\mathbb{F}_{q}[x]\) and \(c\in\mathbb{F}\). Then we have \(E_{R}(c)=0\)._
Proof.: We have \(g(x)^{p}-g(x)=cR(x)+xR(c)+cR(c)\). Let \(\mathcal{P}\colon\mathbb{F}[x]\to\mathbb{F}[x];\ f(x)\mapsto f(x)^{p}-f(x)\). By the definition of \(E_{R}(x)\), we obtain \(cR(x)+xR(c)+cR(c)\equiv E_{R}(c)^{1/p^{e}}x\mod\mathcal{P}(\mathbb{F}[x])\). Thus we must have \(E_{R}(c)=0\).
**Lemma 4.2**.: _We have \(E_{R_{1}}(r(x))\mid E_{R}(x)\)._
Proof.: Let \(\beta\in\mathbb{F}\) be an element such that \(E_{R_{1}}(r(\beta))=0\). We take an element \(\gamma\in\mathbb{F}\) such that \(\gamma^{p}-\gamma=r(\beta)R_{1}(r(\beta))\). The curve \(C_{R,\mathbb{F}}\) admits the automorphism \(\phi\) defined by
\[\phi(a,x)=(a+f_{R_{1}}(r(x),r(\beta))+\Delta(x+\beta)-\Delta(x)+\gamma,x+\beta )\,.\]
This is well-defined by Lemma 2.2 and (4.2). By Lemma 4.1, we have \(E_{R}(\beta)=0\). Since \(E_{R_{1}}(r(x))\) is separable, the claim follows.
**Lemma 4.3**.: _Let \(\alpha,\alpha^{\prime}\in\mu_{d_{R,m}}\). Assume \(E_{R_{1}}(r(\alpha y))=0\) for a certain \(y\in\mathbb{F}\). Then we have the equality_
\[\Delta(\alpha^{\prime}x+\alpha y)+f_{R_{1}}(r(\alpha^{\prime}x),r(\alpha y))= \Delta(x)+\Delta(y)+f_{R}(\alpha^{\prime}x,\alpha y).\]
Proof.: By \(\Delta(\alpha^{\prime}x+\alpha y)=\Delta(x+(\alpha/\alpha^{\prime})y)\) and (2.4), we may assume \(\alpha^{\prime}=1\) by (4.3). We have \(E_{R}(\alpha y)=0\) by Lemma 4.2. Let \(\Delta_{1}(x)\) and \(\Delta_{2}(x)\) denote the left and right hand sides of the required equality, respectively. We have \(\Delta_{1}(0)=\Delta(\alpha y)=\Delta(y)=\Delta_{2}(0)\), since \(f_{R}(0,x^{\prime})\equiv 0\) in \(\mathbb{F}_{q}[x^{\prime}]\) by definition. Hence it suffices to show \(\Delta_{1}(x)^{p}-\Delta_{1}(x)=\Delta_{2}(x)^{p}-\Delta_{2}(x)\). By Lemma 4.2 and the assumption, \(E_{R_{1}}(r(\alpha y))=E_{R}(\alpha y)=0\). Hence each \(\Delta_{i}(x)^{p}-\Delta_{i}(x)\) for \(i=1,2\) equals \((x+\alpha y)R(x+\alpha y)-r(y)R_{1}(r(y))-r(x)R_{1}(r(x))\) according to Lemma 2.2. Hence the claim follows.
Let
\[U_{R}:=\{x\in\mathbb{F}\mid r(x)=0\}\subset V_{R}^{\prime}:=\{x\in\mathbb{F} \mid E_{R_{1}}(r(x))=0\}.\]
We have \(V_{R}^{\prime}\subset V_{R}\) by Lemma 4.2. Then \(U_{R}\) and \(V_{R}^{\prime}\) are regarded as \(\mathbb{F}_{p}[\mathscr{H}]\)-modules by \(r(x),R_{1}(x)\in\mathbb{F}_{q}[x]\) and (4.3).
**Lemma 4.4**.: _We have \(V_{R}^{\prime}\subset U_{R}^{\perp}\). In particular, the \(\mathbb{F}_{p}[\mathscr{H}]\)-module \(U_{R}\) is totally isotropic._
Proof.: Let \(\beta\in U_{R}\) and \(\beta^{\prime}\in V_{R}^{\prime}\). By Lemma 4.3, \(r(\beta)=0\) and \(E_{R_{1}}(r(\beta^{\prime}))=0\), we have \(f_{R}(\beta^{\prime},\beta)=f_{R}(\beta,\beta^{\prime})=\Delta(\beta+\beta^{ \prime})-\Delta(\beta)-\Delta(\beta^{\prime})\). Hence \(\omega_{R}(\beta,\beta^{\prime})=0\).
Let
\[Q_{R,m}^{\prime}:=\{(\alpha,\beta,\gamma)\in Q_{R,m}\mid\beta\in V_{R}^{\prime}\}.\]
Then \(Q_{R,m}^{\prime}\) is a subgroup of \(Q_{R,m}\) of index \(p^{e-e^{\prime}}\), because of (4.3) and \([V_{R}:V_{R}^{\prime}]=p^{e-e^{\prime}}\). We have the map
\[\pi\colon Q_{R,m}^{\prime}\to Q_{R_{1},m};\ (\alpha,\beta,\gamma)\mapsto( \alpha,r(\beta),\gamma-\Delta(\beta)).\]
**Corollary 4.5**.: _The map \(\pi\) is a homomorphism._
Proof.: The claim follows from Lemma 4.3 and (4.3).
We have
\[U_{R}^{\prime}:=\{(1,\beta,\Delta(\beta))\in Q_{R,m}^{\prime}\mid\beta\in U_{ R}\}=\operatorname{Ker}\pi. \tag{4.4}\]
The space \(V_{R}^{\prime}\) is stable by the \(q\)-th power map. Hence we can consider the semidirect product \(Q_{R,m}^{\prime}\rtimes\mathbb{Z}\). The map \(\pi\) induces \(\pi^{\prime}\colon Q_{R,m}^{\prime}\rtimes\mathbb{Z}\to Q_{R_{1},m}\rtimes \mathbb{Z}\).
**Quotient of \(C_{R}\).** Let \(\phi\) be as in (4.1). We can check that \(\phi\) factors through \(C_{R,\mathbb{F}}\to C_{R,\mathbb{F}}/U_{R}^{\prime}\xrightarrow{\bar{\phi}}C_{R_{1},\mathbb{F}}\) by (2.5). We obtain an isomorphism \(\bar{\phi}\colon C_{R,\mathbb{F}}/U_{R}^{\prime}\xrightarrow{\sim}C_{R_{1},\mathbb{F}}\).
**Lemma 4.6**.: _We have \(\phi((a,x)g)=\phi(a,x)\pi^{\prime}(g)\) for \(g\in Q_{R,m}^{\prime}\rtimes\mathbb{Z}\)._
Proof.: The claim follows from Lemma 4.3.
Let \(\tau_{\psi,R_{1},m}^{\prime}\) denote the \(Q_{R,m}^{\prime}\rtimes\mathbb{Z}\)-representation which is the inflation of the \(Q_{R_{1},m}\rtimes\mathbb{Z}\)-representation \(H_{\mathrm{c}}^{1}(C_{R_{1},\mathbb{F}},\overline{\mathbb{Q}}_{\ell})[\psi]\) by \(\pi^{\prime}\). By (3.6), we have the homomorphism \(\Theta_{R,m}\colon W_{F}\to Q_{R,m}\rtimes\mathbb{Z}\). We define a \(W_{F}\)-representation \(\tau_{\psi,R_{1},m}^{\prime\prime}\) to be the inflation of \(\operatorname{Ind}_{Q_{R,m}^{\prime}\rtimes\mathbb{Z}}^{Q_{R,m}\rtimes \mathbb{Z}}\tau_{\psi,R_{1},m}^{\prime}\) via \(\Theta_{R,m}\). We have \(\dim\tau_{\psi,R_{1},m}^{\prime\prime}=p^{e}\) by \([Q_{R,m}:Q_{R,m}^{\prime}]=p^{e-e^{\prime}}\) and \(\dim\tau_{\psi,R_{1},m}^{\prime}=p^{e^{\prime}}\).
**Proposition 4.7**.: _We have an isomorphism \(\tau_{\psi,R,m}\simeq\tau_{\psi,R_{1},m}^{\prime\prime}\) as \(W_{F}\)-representations._
Proof.: By Lemma 4.6, we have the injection
\[\tau_{\psi,R_{1},m}^{\prime}=H_{\mathrm{c}}^{1}(C_{R_{1},\mathbb{F}},\overline {\mathbb{Q}}_{\ell})[\psi]\xrightarrow{\phi^{*}}H_{\mathrm{c}}^{1}(C_{R, \mathbb{F}},\overline{\mathbb{Q}}_{\ell})[\psi]\]
as \(Q_{R,m}^{\prime}\rtimes\mathbb{Z}\)-representations. Hence we have a non-zero homomorphism
\[\operatorname{Ind}_{Q_{R,m}^{\prime}\rtimes\mathbb{Z}}^{Q_{R,m}\rtimes \mathbb{Z}}\tau_{\psi,R_{1},m}^{\prime}\to H_{\mathrm{c}}^{1}(C_{R, \mathbb{F}},\overline{\mathbb{Q}}_{\ell})[\psi] \tag{4.5}\]
as \(Q_{R,m}\rtimes\mathbb{Z}\)-representations by Frobenius reciprocity. Since the target is irreducible by Lemma 2.8(2), the map (4.5) is surjective. By comparing the dimensions, (4.5) is an isomorphism. By inflating it by \(\Theta_{R,m}\), we obtain the claim.
We consider the open subgroup \(W^{\prime}:=\Theta_{R,m}^{-1}(Q^{\prime}_{R,m}\rtimes\mathbb{Z})\subset W_{F}\) of index \(p^{e-e^{\prime}}\). We can write \(W^{\prime}=W_{F^{\prime}}\) with a finite field extension \(F^{\prime}/F\) of degree \(p^{e-e^{\prime}}\). Let
\[\tau^{\prime}_{\psi,R_{1},m}\colon W_{F^{\prime}}\xrightarrow{\Theta_{R,m}}Q^{ \prime}_{R,m}\rtimes\mathbb{Z}\xrightarrow{\pi^{\prime}}Q_{R_{1},m}\rtimes \mathbb{Z}\to\operatorname{Aut}_{\overline{\mathbb{Q}}_{\ell}}(H^{1}_{\rm c}(C_ {R_{1},\mathbb{F}},\overline{\mathbb{Q}}_{\ell})[\psi]) \tag{4.6}\]
be the composite.
**Corollary 4.8**.: _We have an isomorphism \(\tau_{\psi,R,m}\simeq\operatorname{Ind}_{W_{F^{\prime}}}^{W_{F}}\tau^{\prime}_{\psi,R_{1},m}\) as \(W_{F}\)-representations. If \(e^{\prime}<e\), the \(W_{F}\)-representation \(\tau_{\psi,R,m}\) is imprimitive._
Proof.: The assertion follows from Proposition 4.7.
### Totally isotropic subspace and geometry of \(C_{R}\)
Let \((1,\beta,\gamma)\in H_{R}\); then, as in Definition 2.3(2), we have \(\gamma^{p}-\gamma=\beta R(\beta)\). We obtain \((f_{R}(\beta,\beta)-2\gamma)^{p}=f_{R}(\beta,\beta)-2\gamma\) by the definition of the pairing \(\omega_{R}\) (Lemma 2.6(2)). Assume
\[\beta\neq 0,\quad\gamma=\frac{f_{R}(\beta,\beta)}{2}. \tag{4.7}\]
The following lemma is given in [6, Propositions (9.1) and (13.5)] and [1, Proposition 7.2]. It gives an algorithm for taking quotients of \(C_{R}\) by certain abelian groups.
**Lemma 4.9**.: _Let \(C_{R}\) be as in Definition 2.7. Assume \(e\geq 1\)._
1. _Let_ \[u:=x^{p}-\beta^{p-1}x,\quad v:=a+(x/\beta)(\gamma(x/\beta)-f_{R}(x,\beta)).\] (4.8) _Then there exists_ \(P_{1}(u)\in\mathscr{A}_{\mathbb{F}}\) _of degree_ \(p^{e-1}\) _such that_ \(v^{p}-v=uP_{1}(u)\)_._
2. _Let_ \(U:=\{(1,\xi\beta,\xi^{2}\gamma)\in H_{R}\mid\xi\in\mathbb{F}_{p}\}=\langle(1,\beta,\gamma)\rangle\)_. Then the quotient_ \(C_{R,\mathbb{F}}/U\) _is isomorphic to_ \(C_{P_{1},\mathbb{F}}\)_._
Proof.: We show (1). Let \(x_{1}:=x/\beta\) and \(u_{1}:=u/\beta^{p}\). Then \(u_{1}=x_{1}^{p}-x_{1}\). We compute
\[v^{p}-v =xR(x)+\gamma^{p}x_{1}^{2p}-\gamma x_{1}^{2}-x_{1}^{p}f_{R}(x, \beta)^{p}+x_{1}f_{R}(x,\beta)\] \[=xR(x)+\gamma(x_{1}^{2p}-x_{1}^{2})+\beta^{-2p+1}R(\beta)x^{2p}\] \[\quad-u_{1}f_{R}(x,\beta)-(x/\beta)^{p}(\beta R(x)+xR(\beta))\] \[=u\beta^{-p}(-\beta R(x)+\beta^{-p+1}R(\beta)x^{p}+\gamma(x_{1}^ {p}+x_{1})-f_{R}(x,\beta)),\]
where we use \(\gamma^{p}-\gamma=\beta R(\beta)\) and Lemma 2.2 for the second equality. Let \(P(x):=\beta^{-p}(-\beta R(x)+\beta^{-p+1}R(\beta)x^{p}+\gamma(x_{1}^{p}+x_{1} )-f_{R}(x,\beta))\). Since \(P(x)\) is additive, there exists \(P_{1}(u)\in\mathscr{A}_{\mathbb{F}}\) such that \(P(x)=P_{1}(u)+\alpha x\) with a constant \(\alpha\). By (4.7), we have \(P(\beta)=\beta^{-p}(2\gamma-f_{R}(\beta,\beta))=0\). Hence \(\alpha=0\). By \(\deg P(x)=p^{e}\), we have \(\deg P_{1}(u)=p^{e-1}\). Hence we obtain (1).
We show (2). We easily check that the finite etale morphism of degree \(p\), \(C_{R,\mathbb{F}}\to C_{P_{1},\mathbb{F}};\ (a,x)\mapsto(v,u)\), factors through \(C_{R,\mathbb{F}}\to C_{R,\mathbb{F}}/U\to C_{P_{1},\mathbb{F}}\). Since \(C_{R,\mathbb{F}}\to C_{R,\mathbb{F}}/U\) is also a finite etale morphism of degree \(p\), the induced morphism \(C_{R,\mathbb{F}}/U\to C_{P_{1},\mathbb{F}}\) is an isomorphism. Hence we obtain the claim.
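As a concrete instance, let \(e=1\) and \(R(x)=x^{p}\), so that \(f_{R}(x,y)=-xy^{p}\) and \(E_{R}(x)=x^{p^{2}}+x\) as in Example 3.45. For \(\beta\neq 0\) with \(\beta^{p^{2}}=-\beta\) and \(\gamma=f_{R}(\beta,\beta)/2=-\beta^{p+1}/2\), a routine computation shows that the polynomial \(P(x)\) in the proof above equals \(\gamma\beta^{-2p}u(x)\), so
\[P_{1}(u)=-\tfrac{1}{2}\beta^{1-p}u,\qquad C_{R,\mathbb{F}}/U\colon\ v^{p}-v=-\tfrac{1}{2}\beta^{1-p}u^{2}.\]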
Let
\[\Delta_{0}(x):=-(x/\beta)(\gamma(x/\beta)-f_{R}(x,\beta)).\]
We have
\[xR(x)=uP_{1}(u)+\Delta_{0}(x)^{p}-\Delta_{0}(x) \tag{4.9}\]
by Lemma 4.9(1). We write \(u(x)\) for \(u\).
Let \((1,\beta^{\prime},\gamma^{\prime})\in H_{R}\) be an element satisfying (4.7). Assume \(\omega_{R}(\beta,\beta^{\prime})=0\). Then \((1,\beta,\gamma)\) commutes with \((1,\beta^{\prime},\gamma^{\prime})\). Hence the action of \((1,\beta^{\prime},\gamma^{\prime})\) induces the automorphism of \(C_{P_{1},\mathbb{F}}\simeq C_{R,\mathbb{F}}/U\) ((2.5)).
**Lemma 4.10**.: _Let \(\pi(\beta^{\prime},\gamma^{\prime}):=(1,u(\beta^{\prime}),\gamma^{\prime}- \Delta_{0}(\beta^{\prime}))\)._
1. _We have_ \(\pi(\beta^{\prime},\gamma^{\prime})\in H_{P_{1}}\) _and_ \(f_{P_{1}}(u(\beta^{\prime}),u(\beta^{\prime}))=2(\gamma^{\prime}-\Delta_{0}( \beta^{\prime}))\)_._
2. _The action of_ \((1,\beta^{\prime},\gamma^{\prime})\) _on_ \(C_{R,\mathbb{F}}\) _induces_ \(\pi(\beta^{\prime},\gamma^{\prime})\) _on_ \(C_{P_{1},\mathbb{F}}\)_._
Proof.: Let \(\Delta_{1}(x):=f_{R}(x,\beta^{\prime})-\Delta_{0}(x+\beta^{\prime})+\Delta_{0} (x)\). By (4.8), the action of \((1,\beta^{\prime},\gamma^{\prime})\) on \(C_{R,\mathbb{F}}\) induces the automorphism of \(C_{P_{1},\mathbb{F}}\) given by \(u\mapsto u+u(\beta^{\prime})\) and \(v\mapsto v+\Delta_{1}(x)+\gamma^{\prime}\) on \(C_{P_{1},\mathbb{F}}\). We can easily check that \(\Delta_{1}(x)-\Delta_{1}(0)\) is an additive polynomial such that \(\Delta_{1}(\beta)-\Delta_{1}(0)=\omega_{R}(\beta,\beta^{\prime})=0\). Hence there exists \(g(u)\in\mathbb{F}_{q}[u]\) such that \(\Delta_{1}(x)=g(u(x))+\Delta_{1}(0)\). Lemma 4.1 implies that \(E_{P_{1}}(u(\beta^{\prime}))=0\). Hence \(u(\beta^{\prime})\in V_{P_{1}}\). We show (1). The former claim follows from (4.9). By using \(\Delta_{0}(0)=E_{P_{1}}(u(\beta^{\prime}))=E_{R}(\beta^{\prime})=0\) in the same way as Lemma 4.3, we have
\[\Delta_{0}(x+\beta^{\prime})+f_{P_{1}}(u(x),u(\beta^{\prime}))=\Delta_{0}(x)+ \Delta_{0}(\beta^{\prime})+f_{R}(x,\beta^{\prime}). \tag{4.10}\]
Substituting \(x=\beta^{\prime}\), and using \(\Delta_{0}(2\beta^{\prime})=4\Delta_{0}(\beta^{\prime})\) and (4.7) for \((\beta^{\prime},\gamma^{\prime})\), we obtain the latter claim in (1).
By (4.10), we have
\[v+f_{R}(x,\beta^{\prime})-\Delta_{0}(x+\beta^{\prime})+\Delta_{0}(x)+\gamma^{ \prime}=v+f_{P_{1}}(u(x),u(\beta^{\prime}))+\gamma^{\prime}-\Delta_{0}(\beta^ {\prime}).\]
Hence the claim (2) follows from (2.5).
Assume that \(V_{R}\) is not completely anisotropic. Let \(U_{R}\) be a non-zero totally isotropic \(\mathbb{F}_{p}[\mathscr{H}]\)-submodule in \(V_{R}\). There exists a monic reduced polynomial \(r(x)\in\mathscr{A}_{\mathbb{F}}\) such that \(U_{R}=\{x\in\mathbb{F}\mid r(x)=0\}\) by [13, Theorem 7]. Since \(U_{R}\) is an \(\mathbb{F}_{p}[\mathscr{H}]\)-module, we have
\[r(\alpha x)=\alpha r(x)\text{ for }\alpha\in\mu_{d_{R,m}}\text{ and }r(x)\in\mathbb{F}_{q}[x] \tag{4.11}\]
by Lemma 3.24. We write \(\deg r(x)=p^{e-e^{\prime}}\) with a non-negative integer \(0\leq e^{\prime}<e\).
We take a basis \(\beta_{1},\dots,\beta_{e-e^{\prime}}\) of \(U_{R}\) over \(\mathbb{F}_{p}\). Let \((1,\beta_{i},\gamma_{i})\in H_{R}\) be an element which satisfies (4.7). Let \(U_{i}:=\{(1,\xi\beta_{i},\xi^{2}\gamma_{i})\mid\xi\in\mathbb{F}_{p}\}\subset H _{R}\), which is a subgroup. Since \(U_{R}\) is totally isotropic, we have \(\omega_{R}(\beta_{i},\beta_{j})=0\). Thus \(g_{i}g_{j}=g_{j}g_{i}\) for any \(g_{i}\in U_{i}\) and \(g_{j}\in U_{j}\) by Lemma 2.6(2). Let
\[U_{R}^{\prime}:=U_{1}\cdots U_{e-e^{\prime}}\subset H_{R}, \tag{4.12}\]
which is an abelian subgroup.
**Proposition 4.11**.: _Assume that \(V_{R}\) is not completely anisotropic. Then there exist \(R_{1}(x)\in\mathscr{A}_{\mathbb{F}}\) of degree \(p^{e^{\prime}}\) and a polynomial \(\Delta(x)\in\mathbb{F}[x]\) such that \(\Delta(0)=0\) and the quotient \(C_{R,\mathbb{F}}/U_{R}^{\prime}\) is isomorphic to the affine curve \(C_{R_{1},\mathbb{F}}\) and the isomorphism is induced by \(\pi\colon C_{R,\mathbb{F}}\to C_{R_{1},\mathbb{F}};\ (a,x)\mapsto(a-\Delta(x),r(x))\). In particular, we have \(xR(x)=r(x)R_{1}(r(x))+\Delta(x)^{p}-\Delta(x)\). Furthermore, we have \(d_{R,m}\mid d_{R_{1}}\)._
Proof.: By applying Lemmas 4.9 and 4.10 successively, the quotient \(C_{R,\mathbb{F}}/U^{\prime}_{R}\) is isomorphic to the curve \(C_{R_{1},\mathbb{F}}\) with some \(R_{1}(x)\in\mathscr{A}_{\mathbb{F}}\), and we obtain \(\pi\colon C_{R,\mathbb{F}}\to C_{R_{1},\mathbb{F}};\ (a,x)\mapsto(a-\Delta(x),r(x))\). By (4.8), we have \(\Delta(0)=0\). Since \(U_{R}\) is an \(\mathbb{F}_{p}[\mathscr{H}]\)-module, the subgroup \(A:=\{(\alpha,0,0)\in Q_{R,m}\mid\alpha\in\mu_{d_{R,m}}\}\) normalizes \(U^{\prime}_{R}\). Hence \(A\) acts on the quotient \(C_{R_{1},\mathbb{F}}\). We recall that \(b^{p}-b=yR_{1}(y)\) is the defining equation of \(C_{R_{1},\mathbb{F}}\). Through the morphism \(\pi\), the element \((\alpha,0,0)\in A\) acts on \(C_{R_{1},\mathbb{F}}\) by \(b\mapsto b+\Delta(x)-\Delta(\alpha^{-1}x)\) and \(y=r(x)\mapsto r(\alpha^{-1}x)=\alpha^{-1}y\), by (4.11). By [6, Theorems (4.1) and (13.3)] or [1, Theorem 4.3.2], we must have \(\alpha\in\mu_{d_{R_{1}}}\). Hence the last claim follows.
**Corollary 4.12**.: _Let the assumption be as in Proposition 4.11. We have \(\Delta(x),R_{1}(x)\in\mathbb{F}_{q}[x]\)._
Proof.: We use the same notation as in Definition 3.23. We consider the equality \(xR(x)=r(x)R_{1}(r(x))+\Delta(x)^{p}-\Delta(x)\) in Proposition 4.11. Let \(S(x):=-R_{1}^{\sigma}(x)+R_{1}(x)\) and \(\Pi(x):=\Delta^{\sigma}(x)-\Delta(x)\). We have \(S(x)\in\mathscr{A}_{\mathbb{F}}\). By \(r(x),R(x)\in\mathbb{F}_{q}[x]\),
\[\Pi(x)^{p}-\Pi(x)=r(x)S(r(x)). \tag{4.13}\]
Assume \(S(x)\neq 0\). We have the non-constant morphism \(f\colon\mathbb{A}^{1}_{\mathbb{F}}\to C_{S,\mathbb{F}};\ x\mapsto(\Pi(x),r(x))\) by \(\deg r(x)>0\). Let \(\overline{C}_{S,\mathbb{F}}\) be the smooth compactification of \(C_{S,\mathbb{F}}\). The morphism \(f\) extends to a non-constant morphism \(\mathbb{P}^{1}_{\mathbb{F}}\to\overline{C}_{S,\mathbb{F}}\). Hence this is a finite morphism. By the Riemann-Hurwitz formula, we know that the genus of \(\overline{C}_{S,\mathbb{F}}\) equals zero. This is a contradiction by Lemma 2.10. Hence \(S(x)\equiv 0\) and \(R_{1}(x)\in\mathbb{F}_{q}[x]\). We have \(\Pi(x)\in\mathbb{F}_{p}\) by (4.13). We have \(\Pi(0)=0\) by \(\Delta(0)=0\) as in Proposition 4.11. Hence \(\Pi(x)\equiv 0\). Thus the claim follows.
### Theorem
Finally, we summarize the contents of §4.1 and §4.2 as a theorem.
**Theorem 4.13**.: _Assume \(p\neq 2\). The following conditions are equivalent._
1. _There exists a non-trivial finite etale morphism_ \[C_{R}\to C_{R_{1}};\ (a,x)\mapsto(a-\Delta(x),r(x)),\] _where_ \(\Delta(x)\in\mathbb{F}_{q}[x]\) _and_ \(r(x),R_{1}(x)\in\mathscr{A}_{q}\) _satisfy_ \(d_{R,m}\mid d_{R_{1}}\) _and_ \(r(\alpha x)=\alpha r(x)\) _for_ \(\alpha\in\mu_{d_{R,m}}\)_._
2. _The_ \(\mathbb{F}_{p}[\mathscr{H}]\)_-module_ \((V_{R},\omega_{R})\) _is not completely anisotropic._
3. _The_ \(W_{F}\)_-representation_ \(\tau_{\psi,R,m}\) _is imprimitive._
_If the above equivalent conditions are satisfied, the \(W_{F}\)-representation \(\tau_{\psi,R,m}\) is isomorphic to \(\operatorname{Ind}_{W_{F^{\prime}}}^{W_{F}}\tau^{\prime}_{\psi,R_{1},m}\), where \(\tau^{\prime}_{\psi,R_{1},m}\) is given in (4.6)._
Proof.: Assume (1). Since \(C_{R}\to C_{R_{1}}\) is non-trivial, we have \(e^{\prime}<e\), where \(\deg r(x)=p^{e-e^{\prime}}\). Then we have (2) by Lemma 4.4. Assume (2). We obtain (1) by (4.11), Proposition 4.11 and Corollary 4.12.
The equivalence of (2) and (3) follows from Corollary 3.28(1).
The last claim follows from Corollary 4.8.
### Acknowledgements
This work was supported by JSPS KAKENHI Grant Numbers 20K03529/21H00973.
|
2301.04380 | Ideals with approximate unit in semicrossed products | We characterize the ideals of the semicrossed product $C_0(X) \times_\phi \mathbb{Z}_+$ with left (resp. right) approximate unit. | Charalampos Magiatis | 2023-01-11T09:49:56Z | http://arxiv.org/abs/2301.04380v1 |

# Ideals with approximate unit in semicrossed products
###### Abstract
We characterize the ideals of the semicrossed product \(C_{0}(X)\times_{\phi}\mathbb{Z}_{+}\) with left (resp. right) approximate unit.
## 1 Introduction and Notation
The semicrossed product is a non-selfadjoint operator algebra which is constructed from a dynamical system. We recall the construction of the semicrossed product we will consider in this work. Let \(X\) be a locally compact Hausdorff space and \(\phi:X\to X\) be a continuous and proper surjection (recall that a map \(\phi\) is _proper_ if the inverse image \(\phi^{-1}(K)\) is compact for every compact \(K\subseteq X\)). The pair \((X,\phi)\) is called a _dynamical system_. An action of \(\mathbb{Z}_{+}:=\mathbb{N}\cup\{0\}\) on \(C_{0}(X)\) by isometric \(*\)-automorphisms \(\alpha_{n}\), \(n\in\mathbb{Z}_{+}\), is obtained by defining \(\alpha_{n}(f)=f\circ\phi^{n}\). We write the elements of the Banach space \(\ell^{1}(\mathbb{Z}_{+},C_{0}(X))\) as formal series \(A=\sum_{n\in\mathbb{Z}_{+}}U^{n}f_{n}\) with the norm given by \(\|A\|_{1}=\sum_{n\in\mathbb{Z}_{+}}\|f_{n}\|_{C_{0}(X)}\). Multiplication on \(\ell^{1}(\mathbb{Z}_{+},C_{0}(X))\) is defined by setting
\[(U^{n}f)(U^{m}g)=U^{n+m}(\alpha^{m}(f)g)\,\]
and extending by linearity and continuity. With this multiplication \(\ell^{1}(\mathbb{Z}_{+},C_{0}(X))\) is a Banach algebra.
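As a toy illustration of this multiplication rule (a minimal sketch, not part of the construction; the names `multiply`, `iterate` and the sample data are ours), one can model finitely supported series over a finite set \(X\) as dictionaries and implement the product \((U^{n}f)(U^{m}g)=U^{n+m}(\alpha^{m}(f)g)\) as a twisted convolution:

```python
# Finitely supported series A = sum_n U^n f_n over a finite set X:
# A is a dict {n: f_n}, each f_n a dict {x: value}; phi is a self-map of X.

def iterate(phi, x, m):
    # phi^m(x)
    for _ in range(m):
        x = phi[x]
    return x

def multiply(A, B, phi, X):
    # (U^n f)(U^m g) = U^{n+m}((f o phi^m) * g), extended bilinearly
    C = {}
    for n, f in A.items():
        for m, g in B.items():
            h = {x: f[iterate(phi, x, m)] * g[x] for x in X}
            if n + m in C:
                C[n + m] = {x: C[n + m][x] + h[x] for x in X}
            else:
                C[n + m] = h
    return C

X = [0, 1, 2]
phi = {0: 1, 1: 2, 2: 0}          # a continuous proper surjection of X
f = {0: 1.0, 1: 2.0, 2: 3.0}
g = {0: 1.0, 1: 1.0, 2: 0.0}
print(multiply({1: f}, {1: g}, phi, X))   # (Uf)(Ug) = U^2((f o phi) g)
```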
The Banach algebra \(\ell^{1}(\mathbb{Z}_{+},C_{0}(X))\) can be faithfully represented as a (concrete) operator algebra on a Hilbert space. This is achieved by assuming a faithful action of \(C_{0}(X)\) on a Hilbert space \(\mathcal{H}_{0}\). Then we can define a faithful contractive representation \(\pi\) of \(\ell^{1}(\mathbb{Z}_{+},C_{0}(X))\) on the Hilbert space \(\mathcal{H}=\mathcal{H}_{0}\otimes\ell^{2}(\mathbb{Z}_{+})\) by defining \(\pi(U^{n}f)\) as
\[\pi(U^{n}f)(\xi\otimes e_{k})=\alpha^{k}(f)\xi\otimes e_{k+n}\,.\]
The _semicrossed product_\(C_{0}(X)\times_{\phi}\mathbb{Z}_{+}\) is the closure of the image of \(\ell^{1}(\mathbb{Z}_{+},C_{0}(X))\) in \(\mathcal{B}(\mathcal{H})\) in the representation just defined. We will denote an element \(\pi(U^{n}f)\) of \(C_{0}(X)\times_{\phi}\mathbb{Z}_{+}\) by \(U^{n}f\) to simplify the notation.
For \(A=\sum_{n\in\mathbb{Z}_{+}}U^{n}f_{n}\in\ell^{1}(\mathbb{Z}_{+},C_{0}(X))\) we call \(f_{n}\equiv E_{n}(A)\) the _\(n\)th Fourier coefficient_ of \(A\). The maps \(E_{n}:\ell^{1}(\mathbb{Z}_{+},C_{0}(X))\to C_{0}(X)\) are contractive in the (operator) norm of \(C_{0}(X)\times_{\phi}\mathbb{Z}_{+}\), and therefore they extend to contractions \(E_{n}:C_{0}(X)\times_{\phi}\mathbb{Z}_{+}\to C_{0}(X)\). An element \(A\) of the semicrossed product \(C_{0}(X)\times_{\phi}\mathbb{Z}_{+}\) is \(0\) if and only if \(E_{n}(A)=0\), for all \(n\in\mathbb{Z}_{+}\), and thus \(A\) is completely determined by its Fourier coefficients. We will denote \(A\) by the formal series \(A=\sum_{n\in\mathbb{Z}_{+}}U^{n}f_{n}\), where \(f_{n}=E_{n}(A)\). Note however that the series \(\sum_{n\in\mathbb{Z}_{+}}U^{n}f_{n}\) does not in general converge to \(A\)[6, II.9 p.512]. The \(k\)_th arithmetic mean_ of \(A\) is defined to be \(\bar{A}_{k}=\frac{1}{k+1}\sum_{l=0}^{k}S_{l}(A)\), where \(S_{l}(A)=\sum_{n=0}^{l}U^{n}f_{n}\). Then, the sequence \(\{\bar{A}_{k}\}_{k\in\mathbb{Z}_{+}}\) is norm convergent to \(A\)[6, Remark p.524]. We refer to [6, 4, 3] for more information about the semicrossed product.
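For instance, since \(U^{n}f_{n}\) occurs in \(S_{l}(A)\) exactly when \(l\geq n\), the Fourier coefficients of the arithmetic means are
\[E_{n}(\bar{A}_{k})=\frac{k+1-n}{k+1}f_{n}\quad\text{for }0\leq n\leq k,\qquad E_{n}(\bar{A}_{k})=0\quad\text{for }n>k;\]
this Fejér-type smoothing underlies the norm convergence of \(\{\bar{A}_{k}\}_{k\in\mathbb{Z}_{+}}\) to \(A\).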
Let \(\{X_{n}\}_{n=0}^{\infty}\) be a sequence of closed subsets of \(X\) satisfying
\[X_{n+1}\cup\phi(X_{n+1})\subseteq X_{n}\,\tag{$*$}\]
for all \(n\in\mathbb{Z}_{+}\). Peters proved in [7] that there is a one-to-one correspondence between closed two-sided ideals \(\mathcal{I}\subseteq C_{0}(X)\times_{\phi}\mathbb{Z}_{+}\) and sequences \(\{X_{n}\}_{n=0}^{\infty}\) of closed subsets of \(X\) satisfying \((*)\), under the additional assumptions that \(X\) is metrizable and the dynamical system \((X,\phi)\) contains no periodic points. In fact, the ideal \(\mathcal{I}\) associated with the sequence \(\{X_{n}\}_{n=0}^{\infty}\) is \(\mathcal{I}=\{A\in C_{0}(X)\times_{\phi}\mathbb{Z}_{+}:E_{n}(A)(X_{n})=\{0\},\ \forall n\in\mathbb{Z}_{+}\}\). We will write this as \(\mathcal{I}\sim\{X_{n}\}_{n=0}^{\infty}\). Moreover, under the above assumptions, he characterizes the maximal and prime ideals of the semicrossed product \(C_{0}(X)\times_{\phi}\mathbb{Z}_{+}\).
Donsig, Katavolos and Manoussos obtained in [4] a characterization of the Jacobson radical for the semicrossed product \(C_{0}(X)\times_{\phi}\mathbb{Z}_{+}\), where \(X\) is a locally compact metrizable space and \(\phi:X\to X\) is a continuous and proper surjection. Andreolas, Anoussis and the author characterized in [2] the ideal generated by the compact elements and in [1] the hypocompact and the scattered radical of the semicrossed product \(C_{0}(X)\times_{\phi}\mathbb{Z}_{+}\), where \(X\) is a locally compact Hausdorff space and \(\phi:X\to X\) is a homeomorphism. All these ideals are of the form \(\mathcal{I}\sim\{X_{n}\}_{n=0}^{\infty}\) for suitable families of closed subsets \(\{X_{n}\}_{n=0}^{\infty}\).
In the present paper we characterize the closed two-sided ideals \(\mathcal{I}\sim\{X_{n}\}_{n=0}^{\infty}\) of \(C_{0}(X)\times_{\phi}\mathbb{Z}_{+}\) with left (resp. right) approximate unit. As a consequence, we obtain a complete characterization of ideals with left (resp. right) approximate unit under the additional assumptions that \(X\) is metrizable and the dynamical system \((X,\phi)\) contains no periodic points.
Recall that a _left_ (resp. _right_) _approximate unit_ of a Banach algebra \(\mathcal{A}\) is a net \(\{u_{\lambda}\}_{\lambda\in\Lambda}\) of elements of \(\mathcal{A}\) such that:
1. for some positive number \(r\), \(\|u_{\lambda}\|\leq r\) for all \(\lambda\in\Lambda\),
2. \(\lim u_{\lambda}a=a\) (resp. \(\lim au_{\lambda}=a\)), for all \(a\in\mathcal{A}\), in the norm topology of \(\mathcal{A}\).
A net which is both a left and a right approximate unit of \(\mathcal{A}\) is called an _approximate unit_ of \(\mathcal{A}\). A left (resp. right) approximate unit \(\{u_{\lambda}\}_{\lambda\in\Lambda}\) that satisfies \(\|u_{\lambda}\|\leq 1\) for all \(\lambda\in\Lambda\) is called a _contractive left_ (resp. _right_) _approximate unit_.
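For instance, the algebra \(C_{0}(\mathbb{R})\) admits a contractive approximate unit: the functions \(u_{n}(x)=\max\{0,1-\operatorname{dist}(x,[-n,n])\}\), \(n\in\mathbb{N}\), satisfy \(\|u_{n}\|=1\) and, for every \(f\in C_{0}(\mathbb{R})\), \(\|u_{n}f-f\|=\sup_{|x|>n}|(u_{n}(x)-1)f(x)|\leq\sup_{|x|>n}|f(x)|\to 0\), since \(f\) vanishes at infinity. More generally, every \(C^{*}\)-algebra, and in particular every ideal of the form \(C_{0}(X\setminus X_{0})\) in \(C_{0}(X)\), admits a contractive approximate unit; this standard fact is used repeatedly in the proofs below.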
We will say that an ideal \(\mathcal{I}\) of a Banach algebra \(\mathcal{A}\) has a left (resp. right) approximate unit if it has a left (resp. right) approximate unit as an algebra.
## 2 Ideals with approximate unit
In the following theorem the ideals \({\cal I}\sim\{X_{n}\}_{n=0}^{\infty}\) with right approximate unit are characterized.
**Theorem 2.1**: _Let \({\cal I}\sim\{X_{n}\}_{n=0}^{\infty}\) be a non-zero ideal of \(C_{0}(X)\times_{\phi}{\mathbb{Z}}_{+}\). The following are equivalent:_
1. \({\cal I}\) _has a right approximate unit._
2. \(X_{n}=X_{n+1}\)_, for all_ \(n\in{\mathbb{Z}}_{+}\)_._
**Proof.** We start by proving that \((1)\Rightarrow(2)\). Let \({\cal I}\sim\{X_{n}\}_{n=0}^{\infty}\) be an ideal with right approximate unit \(\{V_{\lambda}\}_{\lambda\in\Lambda}\). We suppose that there exists \(n\in{\mathbb{Z}}_{+}\) such that \(X_{n+1}\subsetneq X_{n}\). Let
\[n_{0}=\min\{n\in{\mathbb{Z}}_{+}:X_{n+1}\subsetneq X_{n}\}\,\]
\(x_{0}\in X_{n_{0}}\setminus X_{n_{0}+1}\) and \(f\in C_{0}(X)\) such that \(f(x_{0})=1\), \(f(X_{n_{0}+1})=\{0\}\) and \(\|f\|=1\). Then, for \(A=U^{n_{0}+1}f\), we have
\[\|AV_{\lambda}-A\|\geq\|E_{n_{0}+1}(AV_{\lambda}-A)\|=\|fE_{0}(V_{\lambda})-f \|\geq|(fE_{0}(V_{\lambda})-f)(x_{0})|=1\,\]
for all \(\lambda\in\Lambda\), since \(x_{0}\in X_{n_{0}}\subseteq X_{0}\) and \(E_{0}(V_{\lambda})(X_{0})=\{0\}\), which is a contradiction. Therefore \(X_{n}=X_{n+1}\) for all \(n\in{\mathbb{Z}}_{+}\).
For \((2)\Rightarrow(1)\), assume that \(X_{n}=X_{n+1}\) for all \(n\in{\mathbb{Z}}_{+}\). By \((*)\), we get that \(\phi(X_{0})\subseteq X_{0}\). We will show that if \(\{u_{\lambda}\}_{\lambda\in\Lambda}\) is a contractive approximate unit of the ideal \(C_{0}(X\setminus X_{0})\) of \(C_{0}(X)\), then \(\{U^{0}u_{\lambda}\}_{\lambda\in\Lambda}\) is a right approximate unit of \({\cal I}\). Since \(\|u_{\lambda}\|\leq 1\), we have \(\|U^{0}u_{\lambda}\|\leq 1\).
Let \(A\in{\cal I}\) and \(\varepsilon>0\). Then there exists \(k\in{\mathbb{Z}}_{+}\) such that
\[\|A-\bar{A}_{k}\|<\frac{\varepsilon}{4}\,\]
where \(\bar{A}_{k}\) is the \(k\)th arithmetic mean of \(A\). Since \(X_{n}=X_{0}\), \(E_{n}(\bar{A}_{k})\in C_{0}(X\setminus X_{0})\) and \(\{u_{\lambda}\}_{\lambda\in\Lambda}\) is an approximate unit of \(C_{0}(X\setminus X_{0})\), there exists \(\lambda_{0}\in\Lambda\) such that
\[\|E_{l}(\bar{A}_{k})u_{\lambda}-E_{l}(\bar{A}_{k})\|<\frac{\varepsilon}{2(k+1) }\,\]
for all \(l\leq k\) and \(\lambda>\lambda_{0}\). So, for \(\lambda>\lambda_{0}\) we get that
\[\|AU^{0}u_{\lambda}-A\| = \|AU^{0}u_{\lambda}-\bar{A}_{k}U^{0}u_{\lambda}+\bar{A}_{k}U^{0}u_{\lambda}-\bar{A}_{k}+\bar{A}_{k}-A\|\] \[\leq \|AU^{0}u_{\lambda}-\bar{A}_{k}U^{0}u_{\lambda}\|+\|\bar{A}_{k}U^{0}u_{\lambda}-\bar{A}_{k}\|+\|A-\bar{A}_{k}\|\] \[< \|\bar{A}_{k}U^{0}u_{\lambda}-\bar{A}_{k}\|+\frac{\varepsilon}{2}\] \[\leq \sum_{l=0}^{k}\|E_{l}(\bar{A}_{k})u_{\lambda}-E_{l}(\bar{A}_{k})\|+\frac{\varepsilon}{2}\] \[< \varepsilon\,\]
which concludes the proof. \(\square\)
In the following theorem the ideals \({\cal I}\sim\{X_{n}\}_{n=0}^{\infty}\) with left approximate unit are characterized.
**Theorem 2.2**: _Let \({\cal I}\sim\{X_{n}\}_{n=0}^{\infty}\) be a non-zero ideal of \(C_{0}(X)\times_{\phi}{\mathbb{Z}}_{+}\). The following are equivalent:_
1. \({\cal I}\) _has a left approximate unit._
2. \(X_{0}\subsetneq X\) _and_ \(\phi^{n}(X\setminus X_{n})=X\setminus X_{0}\)_, for all_ \(n\in{\mathbb{Z}}_{+}\)_._
3. \(\phi(X\setminus X_{1})=X\setminus X_{0}\) _and_ \(\phi(X_{n+1}\setminus X_{n+2})=X_{n}\setminus X_{n+1}\)_, for all_ \(n\in{\mathbb{Z}}_{+}\)_._
**Proof.** We start by proving that \((1)\Rightarrow(2)\). Let \({\cal I}\sim\{X_{n}\}_{n=0}^{\infty}\) be an ideal with left approximate unit \(\{V_{\lambda}\}_{\lambda\in\Lambda}\).
First we prove that \(X_{0}\subsetneq X\). We suppose that \(X_{0}=X\). Then \(E_{0}(V_{\lambda})=0\), for all \(\lambda\in\Lambda\), and hence for every \(U^{n}f\in{\cal I}\) we have
\[\|V_{\lambda}U^{n}f-U^{n}f\|\geq\|E_{n}(V_{\lambda}U^{n}f-U^{n}f)\|=\|(E_{0}(V_{\lambda})\circ\phi^{n})f-f\|=\|f\|\,\]
for all \(\lambda\in\Lambda\), which is a contradiction. Therefore \(X_{0}\subsetneq X\).
Now we prove that \(\phi^{n}(X\setminus X_{n})=X\setminus X_{0}\), for all \(n\in{\mathbb{Z}}_{+}\). We suppose that there exists \(n\in{\mathbb{Z}}_{+}\) such that \(\phi^{n}(X\setminus X_{n})\not\subseteq X\setminus X_{0}\) and let
\[n_{0}=\min\{n\in{\mathbb{Z}}_{+}:\phi^{n}(X\setminus X_{n})\not\subseteq X \setminus X_{0}\}\.\]
Then, there exist \(x_{0}\in X\setminus X_{n_{0}}\) such that \(\phi^{n_{0}}(x_{0})\in X_{0}\) and a function \(f\in C_{0}(X)\) such that \(f(x_{0})=1\), \(f(X_{n_{0}})=\{0\}\) and \(\|f\|=1\). If \(A=U^{n_{0}}f\), we have that \(A\in{\cal I}\), \(\|A\|=1\) and
\[\|V_{\lambda}A-A\| \geq \|E_{n_{0}}(V_{\lambda}A-A)\|\] \[= \|(E_{0}(V_{\lambda})\circ\phi^{n_{0}})f-f\|\] \[\geq |((E_{0}(V_{\lambda})\circ\phi^{n_{0}})f-f)(x_{0})|\] \[= 1\,\]
for all \(\lambda\in\Lambda\), since \(\phi^{n_{0}}(x_{0})\in X_{0}\) and \(E_{0}(V_{\lambda})(X_{0})=\{0\}\), which is a contradiction. Therefore \(\phi^{n}(X\setminus X_{n})\subseteq X\setminus X_{0}\). Furthermore, by \((*)\) we get that \(\phi^{n}(X_{n})\subseteq X_{0}\), for all \(n\in{\mathbb{Z}}_{+}\), and hence
\[X=\phi^{n}(X)=\phi^{n}(X_{n}\cup(X\setminus X_{n}))=\phi^{n}(X_{n})\cup\phi^{ n}(X\setminus X_{n})\subseteq X_{0}\cup\phi^{n}(X\setminus X_{n})\.\]
Since \(\phi^{n}(X\setminus X_{n})\subseteq X\setminus X_{0}\) and \(\phi\) is surjective, \(\phi^{n}(X\setminus X_{n})=X\setminus X_{0}\), for all \(n\in{\mathbb{Z}}_{+}\).
For \((2)\Rightarrow(1)\), assume that \(X_{0}\subsetneq X\) and \(\phi^{n}(X\setminus X_{n})=X\setminus X_{0}\), for all \(n\in{\mathbb{Z}}_{+}\). We will show that if \(\{u_{\lambda}\}_{\lambda\in\Lambda}\) is a contractive approximate unit of the ideal \(C_{0}(X\setminus X_{0})\) of \(C_{0}(X)\), then \(\{U^{0}u_{\lambda}\}_{\lambda\in\Lambda}\) is a left approximate unit of \({\cal I}\). Since \(\|u_{\lambda}\|\leq 1\), we have \(\|U^{0}u_{\lambda}\|\leq 1\).
Let \(A\) be a norm-one element of \({\cal I}\) and \(\varepsilon>0\). Then there exists \(k\in{\mathbb{Z}}_{+}\) such that
\[\|A-\bar{A}_{k}\|<\frac{\varepsilon}{4}\,\]
where \(\bar{A}_{k}\) is the \(k\)th arithmetic mean of \(A\). For \(l\leq k\), let
\[D_{\varepsilon}(E_{l}(\bar{A}_{k}))=\left\{x\in X:|E_{l}(\bar{A}_{k})(x)|\geq \frac{\varepsilon}{4(k+1)}\right\}\.\]
Since \(A\in{\cal I}\), we have \(E_{l}(\bar{A}_{k})(X_{l})=\{0\}\) and hence \(D_{\varepsilon}(E_{l}(\bar{A}_{k}))\subseteq X\setminus X_{l}\). Furthermore, since \(\phi^{n}(X\setminus X_{n})=X\setminus X_{0}\), for all \(n\in{\mathbb{Z}}_{+}\), we have that \(\phi^{l}(D_{\varepsilon}(E_{l}(\bar{A}_{k})))\subseteq X\setminus X_{0}\). Moreover, the set \(D_{\varepsilon}(E_{l}(\bar{A}_{k}))\) is compact and hence the set \(\phi^{l}(D_{\varepsilon}(E_{l}(\bar{A}_{k})))\) is also compact. By Urysohn's lemma, there is a norm-one function \(v_{l}\in C_{0}(X)\) such that
\[v_{l}(x)=\left\{\begin{array}{ll}1,&x\in\phi^{l}(D_{\varepsilon}(E_{l}(\bar {A}_{k})))\\ 0,&x\in X_{0}\end{array}\right.\.\]
Then, there exists \(\lambda_{0}\in\Lambda\) such that
\[\|u_{\lambda}v_{l}-v_{l}\|<\frac{\varepsilon}{2(k+1)}\,\]
for all \(l\leq k\) and \(\lambda>\lambda_{0}\), and hence
\[|u_{\lambda}(x)-1|<\frac{\varepsilon}{2(k+1)}\,\]
for all \(x\in\cup_{l=0}^{k}\phi^{l}(D_{\varepsilon}(E_{l}(\bar{A}_{k})))\) and \(\lambda>\lambda_{0}\). Therefore, if \(x\in\cup_{l=0}^{k}(D_{\varepsilon}(E_{l}(\bar{A}_{k})))\) then \(\phi^{l}(x)\in\cup_{l=0}^{k}\phi^{l}(D_{\varepsilon}(E_{l}(\bar{A}_{k})))\) and hence
\[|((u_{\lambda}\circ\phi^{l})E_{l}(\bar{A}_{k})-E_{l}(\bar{A}_{k}))(x)|<\frac{\varepsilon}{2(k+1)}\,\]
for all \(l\leq k\) and \(\lambda>\lambda_{0}\). On the other hand, if \(x\not\in\cup_{l=0}^{k}(D_{\varepsilon}(E_{l}(\bar{A}_{k})))\), then
\[|E_{l}(\bar{A}_{k})(x)|<\frac{\varepsilon}{4(k+1)}\,\]
for all \(l\leq k\), and hence
\[|((u_{\lambda}\circ\phi^{l})E_{l}(\bar{A}_{k})-E_{l}(\bar{A}_{k}))(x)|<\frac{\varepsilon}{2(k+1)}\.\]
From what we said so far we get that
\[\|U^{0}u_{\lambda}A-A\| < \|U^{0}u_{\lambda}\bar{A}_{k}-\bar{A}_{k}\|+\frac{\varepsilon}{2}\] \[\leq \sum_{l=0}^{k}\|(u_{\lambda}\circ\phi^{l})E_{l}(\bar{A}_{k})-E_{l }(\bar{A}_{k})\|+\frac{\varepsilon}{2}\] \[< \varepsilon\,\]
for all \(\lambda>\lambda_{0}\).
Now we show that \((2)\Rightarrow(3)\). We assume that \(\phi^{n}(X\setminus X_{n})=X\setminus X_{0}\), for all \(n\in\mathbb{Z}_{+}\). Then \(\phi(X\setminus X_{n+2})\subseteq X\setminus X_{n+1}\). Indeed, if \(x\in X\setminus X_{n+2}\) and \(\phi(x)\in X_{n+1}\) then \(\phi^{n+2}(x)\in X_{0}\), by \((*)\), which is a contradiction. Furthermore, by \((*)\), we know that \(\phi(X_{n+1})\subseteq X_{n}\) and hence \(\phi(X_{n+1}\setminus X_{n+2})\subseteq X_{n}\setminus X_{n+1}\) for all \(n\in\mathbb{Z}_{+}\).
To prove that \(\phi(X_{n+1}\setminus X_{n+2})=X_{n}\setminus X_{n+1}\) for all \(n\in\mathbb{Z}_{+}\), we suppose that there exists \(n\in\mathbb{Z}_{+}\) such that \(\phi(X_{n+1}\setminus X_{n+2})\subsetneq X_{n}\setminus X_{n+1}\). If
\[n_{0}=\min\{n\in\mathbb{Z}_{+}:\phi(X_{n+1}\setminus X_{n+2})\subsetneq X_{n} \setminus X_{n+1}\}\,\]
then
\[\phi(X) = \phi(X_{n_{0}+1}\cup(X\setminus X_{n_{0}+1}))\] \[= \phi(X_{n_{0}+1})\cup\phi(X\setminus X_{n_{0}+1})\] \[\subseteq \phi(X_{n_{0}+1})\cup(X\setminus X_{n_{0}})\] \[\subsetneq X\,\]
which is a contradiction, since \(\phi\) is surjective. Therefore, \(\phi(X_{n+1}\setminus X_{n+2})=X_{n}\setminus X_{n+1}\) for all \(n\in\mathbb{Z}_{+}\).
Finally we show that \((3)\Rightarrow(2)\). We assume that \(\phi(X\setminus X_{1})=X\setminus X_{0}\) and \(\phi(X_{n+1}\setminus X_{n+2})=X_{n}\setminus X_{n+1}\), for all \(n\in\mathbb{Z}_{+}\). Then, \(X_{0}\subsetneq X\). Indeed, if \(X_{0}=X\), then \(\phi(X\setminus X_{1})=X\setminus X_{0}=\emptyset\), so \(X_{1}=X\), and inductively \(X_{n}=X\) for all \(n\in\mathbb{Z}_{+}\); hence \(\mathcal{I}=\{0\}\), which is a contradiction. If \(n>1\), we have that
\[\phi(X\setminus X_{n}) = \phi\left[(X\setminus X_{1})\cup(X_{1}\setminus X_{2})\cup \cdots\cup(X_{n-1}\setminus X_{n})\right]\] \[= \phi(X\setminus X_{1})\cup\phi(X_{1}\setminus X_{2})\cup\cdots \cup\phi(X_{n-1}\setminus X_{n})\] \[= (X\setminus X_{0})\cup(X_{0}\setminus X_{1})\cup\cdots\cup(X_{n-2 }\setminus X_{n-1})\] \[= X\setminus X_{n-1}\,\]
and hence, \(\phi^{n}(X\setminus X_{n})=X\setminus X_{0}\), for all \(n\in\mathbb{Z}_{+}\). \(\Box\)
By Theorem 2.2, if \(\mathcal{I}\sim\{X_{n}\}_{n=0}^{\infty}\) is an ideal of \(C_{0}(X)\times_{\phi}\mathbb{Z}_{+}\) with left approximate unit, then \(X_{n+1}=X_{n}\) or \(X_{n+1}\subsetneq X_{n}\) for all \(n\in\mathbb{Z}_{+}\). If \(\mathcal{I}\sim\{X_{n}\}_{n=0}^{\infty}\) and \(X_{n+1}=X_{n}\), for all \(n\in\mathbb{Z}_{+}\), we will write \(\mathcal{I}\sim\{X_{0}\}\). We obtain the following characterization.
**Corollary 2.3**: _Let \(\mathcal{I}\sim\{X_{0}\}\) be a non-zero ideal of \(C_{0}(X)\times_{\phi}\mathbb{Z}_{+}\). The following are equivalent:_
1. \(\mathcal{I}\) _has a left approximate unit._
2. \(\phi(X_{0})=X_{0}\) _and_ \(\phi(X\setminus X_{0})=X\setminus X_{0}\)_._
**Proof.** By Theorem 2.2 we have \(\phi(X\setminus X_{0})=X\setminus X_{0}\). By \((*)\) we have \(\phi(X_{0})\subseteq X_{0}\) and since \(\phi\) is surjective we get \(\phi(X_{0})=X_{0}\). Conversely, if \(\phi(X\setminus X_{0})=X\setminus X_{0}\), then \(\phi^{n}(X\setminus X_{n})=\phi^{n}(X\setminus X_{0})=X\setminus X_{0}\) for all \(n\in\mathbb{Z}_{+}\), and \(X_{0}\subsetneq X\) since \(\mathcal{I}\) is non-zero, so Theorem 2.2 applies. \(\Box\)
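As an illustration, let \(X=[0,1]\) and \(\phi(x)=4x(1-x)\), a continuous proper surjection of \(X\) onto itself, and consider the ideal \(\mathcal{I}\sim\{X_{0}\}\) with \(X_{0}=\{0\}\). Since \(\phi(0)=0\), the constant sequence \(X_{n}=\{0\}\) satisfies \((*)\), so \(\mathcal{I}\) has a right approximate unit by Theorem 2.1. On the other hand, \(\phi(1)=0\in X_{0}\), so \(\phi(X\setminus X_{0})\neq X\setminus X_{0}\) and, by Corollary 2.3, \(\mathcal{I}\) has no left approximate unit.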
In the following proposition the ideals \(\mathcal{I}\sim\{X_{n}\}_{n=0}^{\infty}\) of \(C_{0}(X)\times_{\phi}\mathbb{Z}_{+}\) with left approximate unit are characterized, when \(\phi\) is a homeomorphism.
**Proposition 2.4**: _Let \({\cal I}\sim\{X_{n}\}_{n=0}^{\infty}\) be a non-zero ideal of \(C_{0}(X)\times_{\phi}{\mathbb{Z}}_{+}\), where \(\phi\) is a homeomorphism. The following are equivalent:_
1. \({\cal I}\) _has a left approximate unit._
2. _There exist_ \(S,W\subsetneq X\) _such that_ \(S\) _is closed and_ \(\phi(S)=S\)_, the sets_ \(\phi^{-1}(W),\phi^{-2}(W),\dots\) _are pairwise disjoint and_ \(\phi^{k}(W)\cap S=\emptyset\)_, for all_ \(k\in{\mathbb{Z}}\)_, and_ \[X_{n}=S\cup(\cup_{k=n}^{\infty}\phi^{-k}(W))\,\] _for all_ \(n\in{\mathbb{Z}}_{+}\)_._
**Proof.** The second condition implies the second condition of Theorem 2.2 and hence the implication \((2)\Rightarrow(1)\) is immediate. We will prove the implication \((1)\)\(\Rightarrow(2)\).
We set \(S=\cap_{n=0}^{\infty}X_{n}\). Clearly the set \(S\) is closed and, by \((*)\), we have \(\phi(S)\subseteq S\). We will prove that \(\phi(S)=S\). We suppose \(\phi(S)\subsetneq S\). Since \(\phi\) is surjective, there exists \(x\in X\setminus S\) such that \(\phi(x)\in S\). Moreover, \(\phi^{n}(x)\in S\) for all \(n\geq 1\). However, since \(x\notin S\) there exists \(n_{0}\) such that \(x\notin X_{n_{0}}\) and hence \(\phi^{n_{0}}(x)\in X\setminus X_{0}\), by Theorem 2.2, which is a contradiction since \(S\cap(X\setminus X_{0})=\emptyset\).
By Theorem 2.2, \(\phi(X_{n+1}\setminus X_{n+2})=X_{n}\setminus X_{n+1}\) for all \(n\in{\mathbb{Z}}_{+}\) and hence \(\phi^{n}(X_{n}\setminus X_{n+1})=X_{0}\setminus X_{1}\) or equivalently \(X_{n}\setminus X_{n+1}=\phi^{-n}(X_{0}\setminus X_{1})\) since \(\phi\) is a homeomorphism. Furthermore, the sets \(\phi^{-1}(X_{0}\setminus X_{1}),\phi^{-2}(X_{0}\setminus X_{1}),\dots\) are pairwise disjoint.
We set \(W=X_{0}\setminus X_{1}\). Clearly, \(\phi^{k}(W)\cap S=\emptyset\) for all \(k\in{\mathbb{Z}}\), since \(\phi(S)=S\) and \(\phi(W)\subseteq X\setminus X_{0}\). Also, \(X_{0}=S\cup(X_{0}\setminus X_{1})\cup(X_{1}\setminus X_{2})\cup\dots\) and hence
\[X_{0}=S\cup(\cup_{k=0}^{\infty}\phi^{-k}(W))\.\]
Finally, for all \(n\in{\mathbb{Z}}_{+}\) we have that
\[X_{0}=X_{n}\cup(\cup_{k=1}^{n}X_{k-1}\setminus X_{k})=X_{n}\cup(\cup_{k=1}^{n }\phi^{-k+1}(W))=X_{n}\cup(\cup_{k=0}^{n-1}\phi^{-k}(W))\,\]
and so
\[X_{n}=X_{0}\setminus(\cup_{k=0}^{n-1}\phi^{-k}(W))=S\cup(\cup_{k=n}^{\infty} \phi^{-k}(W))\.\]
\(\square\)
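As a concrete instance, let \(X=\mathbb{Z}\cup\{-\infty,+\infty\}\) be the two-point compactification of the integers and let \(\phi(x)=x+1\), extended by \(\phi(\pm\infty)=\pm\infty\), which is a homeomorphism of \(X\). Taking \(S=\{-\infty,+\infty\}\) and \(W=\{0\}\), we have \(\phi(S)=S\), the sets \(\phi^{-k}(W)=\{-k\}\), \(k\geq 1\), are pairwise disjoint, and \(\phi^{k}(W)\cap S=\emptyset\) for all \(k\in\mathbb{Z}\); hence the closed sets \(X_{n}=S\cup\{m\in\mathbb{Z}:m\leq-n\}\) satisfy condition (2) of Proposition 2.4, and the associated ideal \(\mathcal{I}\sim\{X_{n}\}_{n=0}^{\infty}\) has a left approximate unit.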
In the following corollary the ideals with an approximate unit are characterized.
**Corollary 2.5**: _Let \({\cal I}\sim\{X_{n}\}_{n=0}^{\infty}\) be a non-zero ideal of \(C_{0}(X)\times_{\phi}{\mathbb{Z}}_{+}\). The following are equivalent:_
1. \({\cal I}\) _has an approximate unit._
2. \(X_{n}=X_{n+1}\)_, for all_ \(n\in{\mathbb{Z}}_{+}\)_, and_ \(\phi(X\setminus X_{0})=X\setminus X_{0}\)_._
**Proof.** (1) \(\Rightarrow\) (2) is immediate from Theorem 2.1 and Corollary 2.3.
We show \((2)\Rightarrow(1)\). If \(X_{n}=X_{n+1}\), by \((*)\), we have \(\phi(X_{0})\subseteq X_{0}\). Since \(\phi(X\setminus X_{0})=X\setminus X_{0}\) and \(\phi\) is surjective, we have \(\phi(X_{0})=X_{0}\). Theorem 2.1 and Corollary 2.3 conclude the proof. \(\square\)
**Remark 2.6**: _If \(\mathcal{I}\sim\{X_{n}\}_{n=0}^{\infty}\) is an ideal of \(C_{0}(X)\times_{\phi}\mathbb{Z}_{+}\) with a left (resp. right) approximate unit, then it has a contractive left (resp. right) approximate unit. Moreover, the semicrossed product \(C_{0}(X)\times_{\phi}\mathbb{Z}_{+}\) has a contractive approximate unit._
Let \(B\) be a Banach space and \(C\) be a subspace of \(B\). The set of linear functionals that vanish on a subspace \(C\) of \(B\) is called the _annihilator_ of \(C\). A subspace \(C\) of a Banach space \(B\) is an \(M\)_-ideal_ in \(B\) if its annihilator is the kernel of a projection \(P\) on \(B^{*}\) such that \(\|y\|=\|P(y)\|+\|y-P(y)\|\), for all \(y\), where \(B^{*}\) is the dual space of \(B\).
Effros and Ruan proved that the \(M\)-ideals in a unital operator algebra are the closed two-sided ideals with an approximate unit, [5, Theorem 2.2]. Therefore, we obtain the following corollary about the \(M\)-ideals of a semicrossed product.
**Corollary 2.7**: _Let \(\mathcal{I}\sim\{X_{n}\}_{n=0}^{\infty}\) be a non-zero ideal of \(C_{0}(X)\times_{\phi}\mathbb{Z}_{+}\), where \(X\) is compact. The following are equivalent:_
1. \(\mathcal{I}\) _is_ \(M\)_-ideal._
2. \(\mathcal{I}\) _has an approximate unit._
3. \(X_{n}=X_{n+1}\)_, for all_ \(n\in\mathbb{Z}_{+}\)_, and_ \(\phi(X\setminus X_{0})=X\setminus X_{0}\)_._
|
2304.01426 | Conformalized Unconditional Quantile Regression | We develop a predictive inference procedure that combines conformal
prediction (CP) with unconditional quantile regression (QR) -- a commonly used
tool in econometrics that involves regressing the recentered influence function
(RIF) of the quantile functional over input covariates. Unlike the more
widely-known conditional QR, unconditional QR explicitly captures the impact of
changes in covariate distribution on the quantiles of the marginal distribution
of outcomes. Leveraging this property, our procedure issues adaptive predictive
intervals with localized frequentist coverage guarantees. It operates by
fitting a machine learning model for the RIFs using training data, and then
applying the CP procedure for any test covariate with respect to a
``hypothetical'' covariate distribution localized around the new instance.
Experiments show that our procedure is adaptive to heteroscedasticity, provides
transparent coverage guarantees that are relevant to the test instance at hand,
and performs competitively with existing methods in terms of efficiency. | Ahmed M. Alaa, Zeshan Hussain, David Sontag | 2023-04-04T00:20:26Z | http://arxiv.org/abs/2304.01426v1 | # Conformalized _Unconditional_ Quantile Regression
###### Abstract
We develop a predictive inference procedure that combines conformal prediction (CP) with _unconditional_ quantile regression (QR)--a commonly used tool in econometrics [1] that involves regressing the _recentered influence function_ (RIF) of the quantile functional over input covariates. Unlike the more widely-known conditional QR, unconditional QR explicitly captures the impact of changes in covariate distribution on the quantiles of the marginal distribution of outcomes. Leveraging this property, our procedure issues adaptive predictive intervals with localized frequentist coverage guarantees. It operates by fitting a machine learning model for the RIFs using training data, and then applying the CP procedure for any test covariate with respect to a "hypothetical" covariate distribution _localized_ around the new instance. Experiments show that our procedure is adaptive to heteroscedasticity, provides transparent coverage guarantees that are relevant to the test instance at hand, and performs competitively with existing methods in terms of efficiency.
## 1 Introduction
Consider a training data set \((X_{1},Y_{1}),\ldots,(X_{n},Y_{n})\), and a test point \((X_{n+1},Y_{n+1})\), with the training and test data all drawn independently from the same distribution, i.e.,
\[(X_{i},Y_{i})\stackrel{{ i.i.d.}}{{\sim}}P=P_{X}\times P_{Y|X},\,i= 1,\ldots,n+1. \tag{1}\]
Here, each \(X_{i}\in\mathbb{R}^{d}\) is a covariate vector, while \(Y_{i}\in\mathbb{R}\) is a response variable. The joint distribution over covariates and responses, \(P\), is unknown. In this paper, we tackle the problem of _predictive inference_, where given the covariate vector \(X_{n+1}\) for a new test point, the goal is to construct a predictive interval that is likely to contain the true response \(Y_{n+1}\) with probability at least \(1-\alpha\), for some \(\alpha\in(0,1)\). More precisely, our goal is to use the \(n\) training sample points to construct a set-valued function:
\[\widehat{C}_{n}(x)\coloneqq\widehat{C}_{n}((X_{1},Y_{1}),\ldots,(X_{n},Y_{n}),x)\subseteq\mathbb{R}, \tag{2}\]
such that for a new test point \((X_{n+1},Y_{n+1})\sim P\), the response \(Y_{n+1}\) falls in \(\widehat{C}_{n}(X_{n+1})\) with probability \(1-\alpha\). Intervals satisfying this _coverage_ condition are said to be _valid_.
The sense in which predictive inferences are valid determines the relevance of the corresponding coverage guarantees to specific prediction instances. The weakest form of validity is when predictive intervals cover the true response on average--such intervals are said to be _marginally valid_. Formally, marginal validity is satisfied when
\[\mathbb{P}\left[\,Y_{n+1}\in\widehat{C}_{n}(X_{n+1})\,\right]\geq 1-\alpha, \tag{3}\]
where the probability is defined with respect to randomness of both training and testing data. The coverage condition in (3) is said to be _distribution-free_ if it holds for all \(P\). Conformal prediction (CP), described formally in Section 2, is a popular framework for predictive inference that guarantees distribution-free marginal validity in finite samples [2, 3, 4, 5, 6, 7, 8]. In its most basic form, CP achieves marginal validity by issuing a fixed-length interval for all prediction instances.
In many applications, it is important to ensure _transparency_ in communicating uncertainty in predictions issued for individual users. For instance, suppose that \(X_{i}\) is a set of risk factors for patient \(i\) (e.g., age, blood pressure, etc.), and \(Y_{i}\) is a measure of kidney function (e.g., eGFR). For a new patient, our goal would be to predict a range of values for their future eGFR with a predetermined degree of confidence, i.e., we would like to be able to make a statement along the lines of: "Based on your risk factors, there is a 95\(\%\) chance that your eGFR will decline by 1.8-5.2 mL/min over the next 3 years". Marginal coverage guarantees that predicted ranges are accurate for 95\(\%\) of the patients on average, but can be arbitrarily inaccurate for specific prediction instances. Since marginal coverage is defined with respect to \(P_{X}\), we expect coverage to be violated in regions with few training examples in covariate-space, i.e., instances for which the predictive model has high _epistemic_ uncertainty. Hence, the
marginally-valid fixed-length intervals issued by vanilla CP may not be informative for prediction instances to which uncertainty quantification matters the most.
**Adaptive and transparent CP.** In this paper, we address the following question: how can we communicate model uncertainty in specific prediction instances in an _adaptive_ and _transparent_ manner? That is, we would like to construct a predictive inference procedure that _adapts_ the length of its issued intervals based on the varying level of uncertainty across different prediction instances, and reports a coverage guarantee that is "relevant" to each specific instance. Ideally, we would like to develop a predictive inference procedure that achieves the following conditional coverage guarantee:
\[\mathbb{P}\left[\,Y_{n+1}\in\widehat{C}_{n}(X_{n+1})\,\big{|}\,X_{n+1}=x\, \right]\geq 1-\alpha, \tag{4}\]
for almost all1\(x\in\mathbb{R}^{d}\). Predictive intervals that satisfy (4) are said to be _conditionally valid_. A procedure that satisfies (4) is transparent as its guarantee holds for each prediction instance, and is adaptive if the length of \(\widehat{C}_{n}\) for a given \(x\) reflects the relative level of uncertainty in this specific instance. It is known that distribution-free validity in the sense of (4) is impossible to achieve for non-trivial predictions [6, 9, 10]. Hence, we build on the CP framework to develop an adaptive predictive inference procedure that satisfies an approximate version of (4), and transparently reports the granularity of coverage for each prediction instance.
Footnote 1: We write “almost all \(x\)” to mean that the set of points where the bound fails has measure zero under \(P_{X}\).
For each new test point, our procedure reports an instance-specific predictive interval and identifies a local region containing the instance \(X_{n+1}=x\), over which the procedure is marginally valid, i.e., the inference is reported as follows:
\[\boxed{\text{For $X_{n+1}=x$, report $\widehat{C}_{n}(x),\widehat{\mathcal{S}}_{n}(x)$ such that:}}\] \[\mathbb{P}\left[\,Y\in\widehat{C}_{n}(X)\,\big{|}\,X\in\widehat{ \mathcal{S}}_{n}(x)\,\right]\geq 1-\alpha,\ \forall x\in\mathbb{R}^{d},\]
where \(\widehat{\mathcal{S}}_{n}(x)\subseteq\mathbb{R}^{d}\), which we call a _relevance subgroup_, is a local region containing \(x\).
In the clinical example discussed earlier, our procedure would communicate uncertainty with an individual patient as follows: "The model predicts that your eGFR will decline by 1.8-5.2 mL/min over the next 3 years. The predictions of the model tend to be accurate 95\(\%\) of the time for patients _similar_ to you defined by the patient subgroup \(\widehat{\mathcal{S}}_{n}(x)\), but the accuracy will vary from one patient to another within this group". By communicating uncertainty in this more transparent form, the clinician can reason about the relevance of this prediction to the patient at hand by inspecting the reported subgroup, e.g., checking if the relevance subgroup includes patients with different disease phenotypes.
The key idea behind our method is to view localized predictive inference as a marginal CP problem under a "hypothetical" covariate distribution \(G_{x}\) localized around the test point \(x\) instead of the true distribution \(P_{X}\). This is implemented through a commonly used approach in econometrics, known as unconditional quantile regression (UQR), which estimates the marginal quantiles of outcomes within arbitrary local subsets (i.e., relevance subgroups) of the covariate space [1]. It does so by regressing the recentered influence function (RIF) of the quantile functional over covariates, and marginalizing the predicted RIFs within relevance subgroups. This is different from conditional quantile regression [8], for which the regression targets do not recover marginal quantiles when averaged over subgroups.
Our procedure involves two steps. First, we use the UQR model to generate a nested sequence of predictive bands for each relevance subgroup. Next, we select the tightest band that achieves coverage within each subgroup using a held-out calibration set. In the rest of the paper, we explain the two steps of our procedure; we first start by providing a brief background on the standard CP method in the next Section.
## 2 Conformal Prediction
The standard _split_ CP procedure relies on sample splitting for constructing predictive intervals that satisfy finite-sample coverage guarantees [2, 3, 4]. Assuming that all data points are exchangeable, the procedure splits the data set into two disjoint subsets: a proper training set \(\{(X_{i},Y_{i}):i\in\mathcal{D}_{t}\}\), and a _calibration_ set \(\{(X_{i},Y_{i}):i\in\mathcal{D}_{c}\}\). Then, a machine learning model \(\widehat{\mu}(x)\) is fit to the training data set \(\mathcal{D}_{t}\), and a _conformity score_\(V(.)\) is computed for all samples in \(\mathcal{D}_{c}\)--this score measures how unusual the prediction looks relative to previous examples. A typical choice of \(V(.)\) is the absolute residual, i.e., \(V(x,y)\coloneqq|\,\widehat{\mu}(x)-y\,|\). The conformity scores are evaluated as follows:
\[V_{i}\coloneqq V(X_{i},Y_{i})=|\,\widehat{\mu}(X_{i})-Y_{i}\,|,\,\forall i\in \mathcal{D}_{c}. \tag{5}\]
For a given miss-coverage level \(\alpha\), we then compute a quantile of the empirical distribution of the absolute residuals,
\[Q_{\mathcal{V}}(1-\alpha)\coloneqq(1-\alpha)(1+1/|\mathcal{D}_{c}|)\text{-th quantile of $\mathcal{V}$}, \tag{6}\]
where \(\mathcal{V}=\{V_{i}:i\in\mathcal{D}_{c}\}\). Finally, the prediction interval at a new point \(X_{n+1}=x\) is given by
\[\widehat{C}_{n}(x)=\left[\,\widehat{\mu}(x)-Q_{\mathcal{V}}(1-\alpha),\, \widehat{\mu}(x)+Q_{\mathcal{V}}(1-\alpha)\,\right]. \tag{7}\]
The CP intervals have a fixed length of \(2Q_{\mathcal{V}}(1-\alpha)\), independent of \(X_{n+1}\), which is sufficient for satisfying marginal validity but does not adapt to the varying degrees of uncertainty across different prediction instances.
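Concretely, the procedure in (5)–(7) takes only a few lines of code. The sketch below is illustrative rather than a reference implementation; the gradient-boosting base model and the `numpy`/`scikit-learn` calls are our own choices, not prescribed by the text.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def split_conformal(X_train, y_train, X_calib, y_calib, X_test, alpha=0.1):
    """Split CP with absolute-residual conformity scores, Eqs. (5)-(7)."""
    mu = GradientBoostingRegressor().fit(X_train, y_train)   # fit on D_t
    scores = np.abs(mu.predict(X_calib) - y_calib)           # V_i, Eq. (5)
    n = len(scores)
    # (1 - alpha)(1 + 1/n)-th empirical quantile of the scores, Eq. (6)
    q = np.quantile(scores, min(1.0, (1 - alpha) * (1 + 1.0 / n)))
    pred = mu.predict(X_test)
    return pred - q, pred + q                                # Eq. (7)
```

Note that the returned half-width `q` is the same for every test point, which is precisely the non-adaptivity discussed above.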
## 3 Conformalized Unconditional Quantile Regression (CUQR)
In this Section, we describe the two steps involved in our procedure, which we call conformalized unconditional quantile regression (CUQR). Indeed, ours is not the first adaptive variant of CP--we compare our method with existing approaches to adaptive uncertainty quantification in Section 4.
### Step 1: Unconditional Quantile Regression (UQR)
Consider the true populational counterpart of \(\widehat{C}_{n}\) in (4), i.e.,
\[C^{*}(x)\coloneqq\left[Q\left(\alpha/2,x\right),Q\left(1-\alpha/2,x\right) \right], \tag{8}\]
where \(C^{*}\) is the predictive interval given _oracle_ knowledge of \(P\), \(Q(\alpha,x)\) is the level-\(\alpha\) quantile of \(Y|X=x\), i.e., \(Q(\alpha,x)\coloneqq\inf\{y\in\mathbb{R}:F(y\,|\,X=x)\geq\alpha\}\), and \(F(.)\) is the conditional cumulative density function (CDF), \(F(y\,|\,X=x)\coloneqq\mathbb{P}(Y\leq y\,|\,X=x)\). By definition, the oracle band in (8) is conditionally valid in the sense of (4). Hence, a sensible guess of an uncertainty band that is both adaptive and transparent can be obtained by directly estimating the conditional quantile \(Q(.,x)\).
#### Nested sequence of plug-in estimates.
We use a _plug-in_ approach for estimating \(C^{*}\) by replacing the conditional quantile in (8) with a consistent estimate \(\widehat{Q}\), i.e., \(\widehat{C}_{n}(x)=\left[\,\widehat{Q}(\alpha/2,x),\widehat{Q}\left(1-\alpha /2,x\right)\,\right]\). While plug-in models can learn accurate estimates of \(C^{*}\), they do not provide finite-sample coverage guarantees. To take advantage of both the adaptivity of plug-in estimates and the finite-sample coverage of the CP framework, a typical approach is to "conformalize" these plug-in estimates [8]. In what follows, we explain how our procedure creates conformity scores based on plug-in estimates of \(C^{*}\).
Instead of constructing predictive intervals using a point estimate of \(Q(\alpha,.)\) obtained from a single plug-in model \(\widehat{Q}(\alpha,.)\), we generate a set of "candidate" estimates of the conditional quantile function and use the calibration set \(\mathcal{D}_{c}\) to pick the narrowest candidate band that achieves the desired coverage. More precisely, we define a set of predictive intervals for covariate \(x,\widehat{\mathcal{C}}(x)\), as follows:
\[\widehat{\mathcal{C}}(x)\coloneqq\{\widehat{C}_{\tilde{\alpha}}(x)=\widehat{ \mu}(x)\pm\widehat{Q}(\tilde{\alpha},x)\}_{\tilde{\alpha}\in(0,1)}, \tag{9}\]
where \(\widehat{Q}(\tilde{\alpha},x)\) is a plug-in estimate of the level-\(\tilde{\alpha}\) conditional quantile of the **model residual** at \(x\). We require that the plug-in estimates are monotonic: \(\widehat{Q}(\tilde{\alpha},x)\leq\widehat{Q}(\tilde{\alpha}^{\prime},x)\) for \(\tilde{\alpha}\leq\tilde{\alpha}^{\prime}\), i.e., no quantile crossing. Thus, \(\widehat{\mathcal{C}}(x)\) comprises a nested sequence of candidate intervals, through which we define the following conformity score:
\[V(X_{i},Y_{i})=\inf\{\tilde{\alpha}\in(0,1):Y_{i}\in\widehat{\mathcal{C}}_{ \tilde{\alpha}}(X_{i})\}, \tag{10}\]
for all \(i\in\mathcal{D}_{c}\). The conformity score in (10) checks for the smallest value of \(\tilde{\alpha}\) for which the corresponding interval \(\widehat{\mathcal{C}}_{\tilde{\alpha}}(X_{i})\) in \(\widehat{\mathcal{C}}\) covers the response \(Y_{i}\). We compute the empirical quantile of conformity scores \(Q_{\mathcal{V}}(1-\alpha)\) as in (6), and construct a predictive interval for \(X_{n+1}=x\) as:
\[\widehat{C}_{n}(x)\coloneqq\{\widehat{\mu}(x)\pm\widehat{Q}(\tilde{\alpha}^{ *},x)\},\,\tilde{\alpha}^{*}=Q_{\mathcal{V}}(1-\alpha). \tag{11}\]
We drop the dependence of \(\widehat{C}_{n}\) on \(\alpha\) to reduce notational clutter. Note that the procedure in (11) produces predictive intervals that vary across prediction instances since it picks an entire conditional quantile function from the nested set. The intervals in (11) still follow the CP construction: hence, they satisfy the following marginal coverage guarantee.
**Proposition 1**.: _Consider a sequence of plug-in estimates \(\{\widehat{Q}(\tilde{\alpha},.)\}_{\tilde{\alpha}}\) obtained from a sample \(\mathcal{D}_{c,1}\), and the corresponding conformity scores \(\mathcal{V}=\{V(X_{i},Y_{i}):i\in\mathcal{D}_{c,2}\}\) obtained from another sample \(\mathcal{D}_{c,2}\), where \(\mathcal{D}_{c,1}\) and \(\mathcal{D}_{c,2}\) are two disjoint subsets of \(\mathcal{D}_{c}\). If \(\{(X_{i},Y_{i}):1\leq i\leq n+1\}\) are exchangeable, then the interval in (11) satisfies_
\[\mathbb{P}(Y_{n+1}\in\widehat{C}_{n}(X_{n+1}))\geq 1-\alpha.\]
Proof is given in Appendix A. Variants of this result appear in the literature [7, 11]. Proposition 1 indicates that, by defining conformity scores over a sequence of bands rather than intervals, we can construct adaptive predictive intervals while retaining the marginal coverage guarantees of CP.
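To make (9)–(11) concrete, the sketch below evaluates the conformity score of Eq. (10) on a finite grid of levels \(\tilde{\alpha}\). The helper `q_hat(a, X)`, standing in for the monotone plug-in family \(\widehat{Q}(\tilde{\alpha},\cdot)\) of residual quantiles, is a hypothetical interface, and the grid discretization is our simplification of the infimum in (10).

```python
import numpy as np

def nested_conformal(q_hat, mu, X_calib, y_calib, X_test, alpha=0.1,
                     alphas=np.linspace(0.01, 0.99, 99)):
    """Conformity scores over a nested family of bands, Eqs. (10)-(11)."""
    resid = np.abs(mu.predict(X_calib) - y_calib)
    # plug-in residual quantiles on the grid: shape (len(alphas), n_calib)
    bands = np.stack([q_hat(a, X_calib) for a in alphas])
    covered = bands >= resid[None, :]            # does band alpha cover point i?
    first = np.argmax(covered, axis=0)           # smallest covering level, Eq. (10)
    scores = np.where(covered.any(axis=0), alphas[first], 1.0)
    n = len(scores)
    a_star = np.quantile(scores, min(1.0, (1 - alpha) * (1 + 1.0 / n)))
    half = q_hat(a_star, X_test)                 # selected band, Eq. (11)
    pred = mu.predict(X_test)
    return pred - half, pred + half
```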
#### Plug-in estimation via UQR.
We use UQR [1] to fit the nested sequence of plug-in estimates in (9). In what follows, we explain how UQR works from a Taylor approximation perspective. UQR approximates the conditional quantiles of \(Y\,|\,X=x\), \(Q(\alpha,x)\), under the true distribution \(P\) as the marginal quantile of \(Y\), \(Q(\alpha)\), under an alternative distribution \(G_{x}\) that is "localized" around \(x\). Since the quantile is a statistical functional of the underlying distribution, we can estimate the marginal quantile under \(G_{x}\) given the marginal quantile under \(P\) using a von Mises linear approximation (VOM), i.e., a distributional analog of Taylor series of the following form [12]:
\[Q_{G_{x}}(\alpha)\approx Q_{P}(\alpha)+\int\text{IF}(y;Q(\alpha),P)\cdot dG_{x }(y), \tag{12}\]
where IF is the influence function of the functional \(Q(\alpha)\) at \(P\) for a given point \((x,y)\) in the direction of the localized distribution \(G_{x}\), which is defined as follows:
\[\text{IF}(y;Q(\alpha),P)=\lim_{\epsilon\to 0}\frac{Q_{P^{y}}(\alpha)-Q_{P}( \alpha)}{\epsilon}, \tag{13}\]
where \(P^{y}=(1-\epsilon)P+\epsilon\,\delta_{y}\). The influence function of the quantile measures the contribution of the outcome value \(y\) on the marginal quantile statistic \(Q_{P}(\alpha)\). By weighting the contributions of observations sampled from \(P\) using the localized density \(G_{x}\) as in (12), we obtain a first-order
approximation of the marginal quantile functional under \(G_{x}\). The influence function of the quantile functional is:
\[\text{IF}(y;Q(\alpha),P)=\frac{\alpha-\mathbf{1}\{y\leq Q_{P}(\alpha)\}}{f_{Y}(Q_ {P}(\alpha))}. \tag{14}\]
Here, \(f_{Y}(.)\) is the (one-dimensional) marginal density of \(Y\). The derivation of the formula in (14) is standard, and is provided in Appendix B for completeness. The _re-centered_ influence function (RIF) is defined as:
\[\text{RIF}(y;Q(\alpha),P)=Q_{P}(\alpha)+\text{IF}(y;Q(\alpha),P), \tag{15}\]
UQR involves regressing RIF over \(X\) to obtain a model for \(\mathbb{E}[\,\text{RIF}(Y;Q(\alpha),P)\,|\,X=x\,]\). Note that the influence function in (14) is a dichotomous variable as \(\mathbf{1}\{y\leq Q_{P}(\alpha)\}\) is the only term that changes across covariates. Thus, UQR involves fitting a one-dimensional density estimate for \(f_{Y}\) and a binary classifier for the dichotomous variable. UQR is typically used to study the effect of changing the covariate distribution on the marginal quantiles of outcomes, e.g., the effect of unionization on wages [13]. In our setup, we use (12) to obtain a plug-in estimate for the predictive band at the test point \(x\) as follows:
\[Q_{G_{x}}(\alpha)\approx\int\mathbb{E}[\,\text{RIF}(Y;Q(\alpha),P)\,|\,X=x\,] \cdot dG_{x}. \tag{16}\]
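The key property behind (15)–(16) is that the RIF averages back to the marginal quantile: taking expectations in (15) under \(P\) gives \(\mathbb{E}[\text{RIF}(Y;Q(\alpha),P)]=Q_{P}(\alpha)\), since \(\mathbb{E}[\mathbf{1}\{Y\leq Q_{P}(\alpha)\}]=\alpha\). A quick Monte Carlo check of this identity (purely illustrative, not part of the paper's pipeline):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
alpha = 0.9
y = rng.standard_normal(1_000_000)       # Y ~ P = N(0, 1)
q = norm.ppf(alpha)                      # true marginal quantile Q_P(alpha)
# RIF(y) = Q_P(alpha) + (alpha - 1{y <= Q_P(alpha)}) / f_Y(Q_P(alpha)), Eqs. (14)-(15)
rif = q + (alpha - (y <= q)) / norm.pdf(q)
print(rif.mean(), q)                     # both are close to 1.2816
```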
#### Constructing the nested sequence using UQR.
The approximation in (16) inspires a simple regression procedure for constructing the nested sequence in (9) while avoiding quantile crossing. Let \(\text{RIF}_{\alpha}(X_{i})\) be the RIF of the level-\(\alpha\) quantile associated with the \(i\)-th data point. For a test point \(X_{n+1}=x\), we can predict the value of \(Q_{G_{x}}(\alpha)\) by fitting an ML model on the data set \(\{(X_{i},\text{RIF}_{\alpha}(X_{i}))\}_{i\in\mathcal{D}_{c}}\). Let the RIF values predicted by the ML model be \(\widehat{\text{RIF}}_{\alpha}(x)\). Then, by repeating this process for the \(K-1\) values of \(\tilde{\alpha}\) in \(\tilde{\alpha}=[1/K,\ldots,(K-1)/K]\), we can construct the nested sequence as follows:2

Footnote 2: We take the absolute value of \(\widehat{\text{RIF}}\) to account for erroneously negative predictions.

\[\widehat{\mathcal{C}}(x)=\{\widehat{\mathcal{C}}_{k}(x)=\widehat{\mu}(x)\pm|\widehat{\text{RIF}}_{\tilde{\alpha}_{k}}(x)|\}_{k=1}^{K-1}. \tag{17}\]

where \(\tilde{\alpha}_{k}=k/K\). Here, the RIF is defined with respect to quantiles of the model residual \(E=|\,\widehat{\mu}(X)-Y\,|\) rather than the outcome \(Y\). Note that, combining (14) and (15), the RIF for the level-\(\tilde{\alpha}_{k}\) quantile can be written as:
\[\text{RIF}_{\tilde{\alpha}_{k}}(x)=Q_{P}(\tilde{\alpha}_{k})+\frac{\tilde{ \alpha}_{k}-\mathbf{1}\{e\leq Q_{P}(\tilde{\alpha}_{k})\}}{f_{E}(Q_{P}(\tilde {\alpha}_{k}))}. \tag{18}\]
The 1-D density \(f_{E}(.)\) can be estimated using kernel density estimation (KDE), and \(Q_{P}(\alpha)\) can be estimated as the empirical level-\(\tilde{\alpha}_{k}\) quantile of the residuals. The only term that we need to predict for each test point is \(\mathbf{1}\{e\leq Q_{P}(\tilde{\alpha}_{k})\},\forall k\in\{1,\ldots,K-1\}\). This can be achieved with a single ML model as follows: for each data point \((X_{i},E_{i})\), define a target \(k_{i}^{*}\coloneqq\min k\), s.t. \(E_{i}\leq\widehat{Q}_{P}(\tilde{\alpha}_{k})\), then fit a model \(g_{\theta}(.)\) (e.g., a regression model or a multi-class classifier) on the data set \(\{(X_{i},k_{i}^{*})\}_{i}\). For a new test point \(X_{n+1}=x\), we predict the RIF at \(X_{n+1}=x\) by plugging in the predictions of \(g_{\theta}\) into (18) as follows:
\[\widehat{\text{RIF}}_{\tilde{\alpha}_{k}}(x)=\widehat{Q}_{P}(\tilde{\alpha}_{ k})+\frac{\tilde{\alpha}_{k}-\mathbf{1}\{g_{\theta}(x)\leq k\}}{\widehat{f}_{E}( \widehat{Q}_{P}(\tilde{\alpha}_{k}))}, \tag{19}\]
\(\forall k\in\{1,\ldots,K-1\}\). The nested set in (17) can thus be constructed using the \(K-1\) predictions in (19). Note that for any \(k<k^{\prime}\), it is sufficient that \(\partial\widehat{f}_{E}(\widehat{F}_{E}^{-1}(\alpha))/\partial\alpha<0\) for the monotonicity of the nested intervals to be preserved, i.e., \(\widehat{\text{RIF}}_{\tilde{\alpha}_{k}}(x)<\widehat{\text{RIF}}_{\tilde{\alpha}_{k^{\prime}}}(x)\). This condition is met by various typical probability distributions (e.g., the exponential distribution). When this condition is violated for any two consecutive intervals in our empirical estimate, we replace \(\widehat{f}_{E}(\widehat{Q}_{P}(\tilde{\alpha}_{k}))\) with \(\widehat{f}_{E}(\widehat{Q}_{P}(\tilde{\alpha}_{k-1}))\) to enforce monotonicity.
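A sketch of this single-model construction follows. The Gaussian KDE, the gradient-boosting choice for \(g_{\theta}\), and the running maximum used to enforce monotonicity (a simplification of the density-replacement rule described above) are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.ensemble import GradientBoostingRegressor

def fit_rif_bands(X, resid, K=20):
    """Nested bands from a single model for the RIF targets, Eqs. (17)-(19)."""
    alphas = np.arange(1, K) / K                      # 1/K, ..., (K-1)/K
    q_emp = np.quantile(resid, alphas)                # empirical Q_P(alpha_k)
    f_hat = gaussian_kde(resid)(q_emp)                # 1-D KDE of f_E at the quantiles
    f_hat = np.maximum(f_hat, 1e-12)                  # guard against vanishing density
    # per-point target k* = min k such that resid_i <= Q_P(alpha_k)
    k_star = np.searchsorted(q_emp, resid, side="left") + 1   # values in {1, ..., K}
    g = GradientBoostingRegressor().fit(X, k_star)

    def rif_half_widths(X_new):
        """Predicted |RIF| half-widths, shape (n_points, K-1), Eq. (19)."""
        g_pred = g.predict(X_new)[:, None]            # predicted k*
        ks = np.arange(1, K)[None, :]
        ind = (g_pred <= ks).astype(float)            # 1{g_theta(x) <= k}
        w = np.abs(q_emp[None, :] + (alphas[None, :] - ind) / f_hat[None, :])
        return np.maximum.accumulate(w, axis=1)       # enforce nested bands

    return rif_half_widths
```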
Alternative approaches to constructing the set \(\{\widehat{Q}(\alpha,.)\}_{\alpha}\) include fitting \(K\) independent quantile regression models or a single distributional model (e.g., a Bayesian nonparametric regression [14]). The RIF-based construction of the nested set \(\{\widehat{Q}(\alpha,.)\}_{\alpha}\) is more computationally and statistically efficient than either approach as it requires training a single ML model with labels that condense information about the conditional quantiles at levels \([1/K,\ldots,(K-1)/K]\).
### Step 2: Conformalizing UQR within subgroups
In Section 3.1, we developed a procedure that fulfills the adaptivity requirement while retaining the marginal validity of the non-adaptive CP method (Proposition 1). These marginal guarantees, however, do not meet the criteria for being transparent as they do not reflect the accuracy of the issued intervals for any given prediction. To provide more transparent guarantees, the second step selects an interval from the nested set by applying the conformal procedure within local regions in covariate-space as follows:
* Partition the covariate space into \(G\) subsets \(\{\widehat{\mathcal{S}}_{g}\}_{g=1}^{G}\).
* Apply the conformal procedure in Section 3.1 locally within each subgroup \(g\) by picking a subgroup-specific band \(\widehat{\mathcal{C}}_{k(g)}(x)\) from the set \(\{\widehat{\mathcal{C}}_{k}(x)\}_{k=1}^{K-1}\).
* For a test point \(X_{n+1}=x\), identify the relevance subgroup \(g_{n+1}\) in which it belongs and report the subgroup \(\widehat{\mathcal{S}}_{g_{n+1}}\) and the corresponding interval \(\widehat{\mathcal{C}}_{k(g_{n+1})}(X_{n+1})\).
The steps involved in our conformalized UQR (CUQR) procedure are given in Algorithm 1. The procedure reports both a predictive interval that is specific to each instance, and a subgroup for which a desired level of accuracy is achieved. The relevance subgroups can either be learned from training
data (e.g., using a clustering algorithm) or predetermined using application-specific knowledge (e.g., disease subtypes or protected attributes). The subgroup can be reported in the form of a cluster with a "representative" covariate value that is typical for this subgroup. In the clinical example discussed earlier, our algorithm's output can represent the relevance subgroup in terms of a set of "representative" patients.
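A minimal sketch of this calibration step follows, assuming the conformity scores of Eq. (10) have already been computed on the calibration set and that subgroups come from \(K\)-means as in the experiments; the function and variable names are our own.

```python
import numpy as np
from sklearn.cluster import KMeans

def calibrate_subgroups(scores, X_calib, X_test, alpha=0.1, G=10, seed=0):
    """Step 2: select one band level per relevance subgroup."""
    km = KMeans(n_clusters=G, random_state=seed, n_init=10).fit(X_calib)
    g_calib, g_test = km.labels_, km.predict(X_test)
    levels = np.empty(G)
    for g in range(G):
        s = scores[g_calib == g]
        n_g = len(s)
        # subgroup-level (1 - alpha)(1 + 1/n_g) empirical quantile
        levels[g] = np.quantile(s, min(1.0, (1 - alpha) * (1 + 1.0 / n_g)))
    # report, for each test point, its subgroup id and the selected level
    return g_test, levels[g_test]
```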
Increasing the number of subgroups \(G\) increases the granularity of the achieved average coverage at the cost of higher variance in realized coverage. \(G\) can be set to larger values for larger sample sizes, e.g., \(G=O(n)\), so that conditional coverage is achieved asymptotically. In the other extreme case when \(G=1\), we recover the standard marginal coverage guarantee. In what follows, we state the theoretical coverage guarantee provided by the CUQR interval \(\widehat{C}_{n}(.)\).
**Theorem 1**.: _If \(\{(X_{i},Y_{i}):i\in\mathcal{D}_{c}\}\cup\{(X_{n+1},Y_{n+1})\}\) are exchangeable conditional on \(\{\mathcal{S}_{g}\}_{g=1}^{G}\), and \(\mathbb{P}(\mathcal{S}_{g})>\delta,\forall g\) and some \(\delta>0\), then for any \(\alpha\in(0,1)\) and \(n\in\mathbb{Z}_{+}\), we have_
\[\mathbb{P}\left[\,Y_{n+1}\in\widehat{C}_{n}(X_{n+1})\,\big{|}\,X_{n+1}\in \widehat{\mathcal{S}}_{g},\mathcal{D}\,\right]\geq 1-\alpha-\frac{\lambda}{ \sqrt{n_{g}}},\]
_\(\forall 1\leq g\leq G\), with probability at least \(1-2\exp(-2\lambda^{2})\), for any \(\lambda\geq\sqrt{\log(2)/2}\)._

Theorem 1 states that coverage holds with high probability conditional on each relevance subgroup \(g\) with a slack \(\lambda/\sqrt{n_{g}}\), where \(n_{g}\) is the number of calibration points in \(\widehat{S}_{g}\). Thus, a PAC guarantee on coverage can be achieved within each subgroup by selecting the predictive band with an empirical coverage of \((1-\alpha)+\lambda/\sqrt{n_{g}}\). This suggests a trade-off between the **transparency** and **efficiency** of the issued intervals--increasing the number of subgroups \(G\) means that coverage will hold almost surely for more granular subsets of the input space, i.e., more transparency. This comes at the cost of longer intervals since \(n_{g}\) will be smaller as \(G\) increases. The bound in Theorem 1 follows from an application of the Dvoretzky-Kiefer-Wolfowitz inequality to the empirical CDF of the conformity scores [15]. The full proof is provided in Appendix C.
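For concreteness, the slack can be folded directly into the calibration target: solving \(1-2\exp(-2\lambda^{2})=0.9\) gives \(\lambda=\sqrt{\log(20)/2}\approx 1.22\). The small helper below is our own illustration, not code from the paper.

```python
import numpy as np

def pac_target(alpha, n_g, prob=0.9):
    """Coverage target (1 - alpha) + lambda / sqrt(n_g) from Theorem 1,
    with lambda solving 1 - 2 exp(-2 lambda^2) = prob."""
    lam = np.sqrt(np.log(2.0 / (1.0 - prob)) / 2.0)   # ~1.22 for prob = 0.9
    return min(1.0, 1.0 - alpha + lam / np.sqrt(n_g))

print(pac_target(alpha=0.1, n_g=400))                 # ~0.961
```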
## 4 Related work
**Distributional regression** models that directly estimate the conditional density \(P_{Y|X=x}\) provide adaptive estimates of uncertainty. Broad classes of methods fall under this category; a non-exhaustive list includes: Bayesian nonparametric regression (e.g., using Gaussian processes [16] or regression trees [14]), Bayesian neural nets [17; 18; 19], and deep ensembles [20; 21]. Many of these models can provide accurate pragmatic estimates of predictive variance, but without the finite-sample coverage guarantees enabled by CP. The achieved coverage of distributional regression can be very sensitive to modeling choices (e.g., hyper-parameters or prior distributions). For instance, in Bayesian regression, the frequentist coverage achieved by posterior credible intervals with exact inference depends on the choice of the prior [22]. With the more commonly used approximate inference methods (e.g., dropout or variational inference [23]), the induced posterior distributions may not concentrate asymptotically, resulting in poor coverage behavior [24; 25]. Distributional models are often used in conjunction with CP approaches to satisfy finite-sample coverage while maintaining adaptivity [26].
**Conformal prediction** in its most basic form achieves finite-sample marginal coverage at the expense of adaptivity [2; 3; 4]. Various approaches to CP-based adaptive predictive inference have been recently proposed [7; 8; 10; 27; 28; 29; 30; 31]. The idea of "conformalizing" a plug-in estimate of the conditional quantile function originated in [8]. In this work, a single quantile regression model \(\widehat{Q}(\alpha,x)\) is fit to the training data, and a conformity score that measures the accuracy of \(\widehat{Q}(\alpha,x)\) is used to derive an adjustment for these intervals. Conformalized quantile regression provides
marginal coverage guarantees, but its empirical conditional coverage depends on the quality of the underlying quantile regression model. Refinements of this approach were later proposed through two different lines of work: the first uses a re-weighting technique to "localize" CP at new test points [27, 30], and the second conformlizes a distributional regression model from which conditional quantiles can be derived [26, 29]. In both lines of work, conditional validity is achieved in an asymptotic sense.
Our work holds subtle connections to these two approaches. In terms of the construction of conformity scores, [26, 29] define the scores based on conditional ranks rather than error residuals--our procedure constructs predictive intervals by selecting among "candidate" estimates of conditional quantiles generated by varying the quantile level \(\alpha\). This is equivalent to constructing the predictive intervals by selecting among estimates of conditional ranks of the full conditional distribution \(P_{Y|X}\), but unlike the procedures in [26, 29], ours does not require access to consistent estimates of \(P_{Y|X}\). Similar to the localized CP methods in [27, 30], our procedure is effectively a localized version of the marginal CP method. But unlike localized CP (LCP), our procedure can utilize any ML model for obtaining the localized quantile estimates (i.e., Equation (19)), whereas LCP is limited to re-weighting estimators (e.g., based on the Nadaraya-Watson kernel). Additionally, because our procedure selects among multiple quantile functions within each subgroup, it can achieve finite-sample (rather than asymptotic) coverage within local regions. The nested construction of our plug-in estimates falls within the general nested CP formulation developed in [11]. Unlike the formulation in this work, we construct our nested sets by parametrizing a functional form for the predictive band rather than directly parametrizing intervals, which enables selecting a different interval for each instance within a subgroup.
Re-centered influence functions (RIF) are typically used as targets for unconditional quantile regression models, a common modeling tool in econometric studies [1]. To the best of our knowledge, RIF-based regression has not been operationalized as a conformity score prior to this work.
## 5 Experiments
We compare CUQR with various conformal and quantile regression baselines across multiple benchmark data sets. We start by describing our experimental setup below.
**Baselines.** We consider standard split _conformal prediction_ (CP), the _locally adaptive CP_ (LACP) method in [32], _conformalized quantile regression (CQR)_[8], and _conformal conditional histograms_ (CCH) [29]. We also consider two variants of the standard _quantile regression_ (QR) model for estimating conditional quantiles: QR with an underlying random forest model (QR-RF), and QR implemented using a neural network (QR-NN). We consider two ablated versions of CUQR that apply a conformalization procedure within the same relevance subgroups of CUQR. The first baseline, dubbed CQ, constructs the nested set \(\mathcal{C}(.)\) using the empirical quantiles of residuals within the relevance subgroups, assigning the same intervals to all units within a subgroup without using the UQR-based plug-in estimates. The second baseline, which we call CQR-S, applies the adaptive CQR method [8] within the relevance subgroups.
Finally, we consider two variants of our method: CUQR, which applies conformalization based on the empirical \((1-\alpha)\)-th quantiles, and CUQR-PAC, which corrects for the slack term in Theorem 1 to provide a high-probability (PAC-style) coverage guarantee per subgroup. We select the value of \(\lambda\) in Theorem 1 so that the probability that coverage holds (conditional on training data) is 90%, i.e., \(1-2\exp(-2\lambda^{2})=0.9\). We implement our method using an XGBoost regression model for \(g_{\theta}\). In all experiments, we create the relevance subgroups using the \(K\)-means clustering algorithm. Further experimental details are provided in Appendix D.
**Evaluation metrics.** We evaluate all baselines with respect to their achieved marginal coverage \(C_{av}\), efficiency quantified via average interval length \(L_{av}\) and subgroup-level coverage denoted as \(C_{av}(\widehat{S}_{g})\) for all subgroups \(\{\widehat{S}_{g}\}_{g=1}^{G}\). We also evaluate the worst-case subgroup-level coverage, defined as \(C_{G}^{w.c.}=\min_{g}C_{av}(\widehat{S}_{g})\). All metrics are evaluated on testing data and averaged over 10 runs. Unless otherwise stated, we set the target coverage level to \(1-\alpha=0.9\).
**Data sets.** We evaluate all baselines on **9 benchmark data sets** that are commonly used to evaluate CP methods: MEPS-19, MEPS-20, MEPS-21, Facebook-1, Facebook-2, Bio, Kin8nm, Naval, and Blog [8, 28, 33]. Due to space limitations, we highlight results for four data sets (MEPS-19, Facebook-1, Blog and Kin8nm) in this Section and defer further results to the Appendix. Details of all data sets are provided in the Appendix. For each run, we randomly split each data set into disjoint training \(\mathcal{D}_{t}\) (42.5\(\%\)), calibration \(\mathcal{D}_{c}\) (42.5\(\%\)) and testing \(\mathcal{D}_{test}\) (15\(\%\)) samples. For the LACP, CCH, CQR-S, CQ and CUQR baselines, we further split the calibration set \(\mathcal{D}_{c}\) in half to obtain plug-in estimates and conformity scores from different splits. In all experiments, we fit a Gradient Boosting regression model \(\widehat{\mu}\) using the training set \(\mathcal{D}_{t}\) and apply the predictive inference baselines on top of the predictions issued by the model \(\widehat{\mu}\).
## Results
**Evaluating transparency.** All baselines are calibrated to have a target marginal coverage of 90\(\%\), but what does this notion of coverage mean to individual users of the model? In this experiment, we assess the transparency of different baselines by evaluating the worst-case conditional coverage within typical "subgroups" of individuals or prediction instances. We use \(K\)-means clustering to identify \(G=10\) relevance subgroups using training samples. Note that the subgroups are not arbitrary--they represent a clustering of the population into "typical" subgroups of similar individuals.
In Table 1, we show the marginal coverage and average lengths of predictive intervals for all baselines across the three data sets. First, we observe that while all conformal methods achieve the target (marginal) coverage levels, the coverage of the QR baselines vary depending on the underlying model specification, which is expected as these baselines do not provide any coverage guarantees. On the contrary, all CP-based methods achieve their promised model-agnostic coverage guarantees, but how well do the different CP variants perform when examined on a subgroup level?
As we can see in Table 1, CP baselines that only guarantee marginal coverage (CP and CQR) have a very poor worst-case coverage conditional on a subgroup, i.e., their declared guarantees do not reflect their performance among a large subgroup of "similar" individuals in a given population. Similarly, CP methods with asymptotic conditional coverage guarantees can exhibit severe under-coverage in finite samples (LACP), or maintain reasonable conditional coverage but with poor efficiency (CCH), which highlights the importance of controlling for finite-sample conditional coverage. While CUQR achieves subgroup-specific coverage marginally, the worst-case subgroup-level coverage can be significantly lower than the desired target coverage for some data sets (e.g., MEPS-19 and Blog). The CUQR-PAC variant of our method guarantees that coverage holds within each subgroup with high probability. This comes at the cost of efficiency, i.e., wider predictive intervals.
The CQ variant of our method, which picks a fixed interval per subgroup rather than a full RIF-based predictive band, achieves better worst-case coverage at the expense of within-subgroup adaptivity and average efficiency. Because the subgroup-specific coverage guarantees for CUQR hold on average with respect to the randomness of calibration data, the variance of the empirically achieved coverage on test data increases as the number of subgroups \(G\) increases (i.e., smaller calibration sample per subgroup). Consequently, the worst-case subgroup-level coverage achieved by CUQR decreases (in expectation) as the number of relevance subgroups increases. To manage the trade-off between transparency and efficiency, the CUQR-PAC variant of our method accounts for the variance of realized coverage by inflating the predictive intervals to achieve an empirical coverage of \((1-\alpha)+\lambda/\sqrt{n_{g}}\) (see Figure 1).
**Evaluating adaptivity.** Next, we assess the extent to which
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline
 & \multicolumn{3}{c}{**MEPS-19**} & \multicolumn{3}{c}{**Facebook-1**} & \multicolumn{3}{c}{**Blog**} & \multicolumn{3}{c}{**Kin8nm**} \\ \hline
 & \(C_{av}\) & \(L_{av}\) & \(C_{G}^{w.c.}\) & \(C_{av}\) & \(L_{av}\) & \(C_{G}^{w.c.}\) & \(C_{av}\) & \(L_{av}\) & \(C_{G}^{w.c.}\) & \(C_{av}\) & \(L_{av}\) & \(C_{G}^{w.c.}\) \\ \hline
\multicolumn{13}{l}{**QR methods**} \\
QR-RF & 0.90 & 1.00 & 0.54 & 0.93 & 0.85 & 0.78 & 0.79 & 0.73 & 0.76 & 0.93 & 1.36 & 0.89 \\
QR-NN & 0.79 & 0.54 & 0.67 & 0.81 & 0.55 & 0.68 & 0.79 & 0.73 & 0.76 & 0.79 & 0.94 & 0.74 \\ \hline
\multicolumn{13}{l}{**CP methods**} \\
CP & 0.89 & 1.28 & 0.19 & 0.90 & 1.39 & 0.72 & 0.89 & 1.89 & 0.57 & 0.90 & 2.17 & 0.83 \\
LACP & 0.89 & 0.61 & 0.20 & 0.90 & 0.69 & 0.76 & 0.89 & 1.06 & 0.63 & 0.90 & 1.09 & 0.84 \\
CQR & 0.89 & 1.12 & 0.46 & 0.90 & 0.83 & 0.77 & 0.90 & 1.34 & 0.82 & 0.90 & 1.33 & 0.85 \\
CCH & 0.96 & 5.37 & 0.79 & 0.89 & 0.72 & 0.65 & 0.98 & 5.58 & 0.96 & 0.89 & 1.14 & 0.86 \\
CQ & 0.87 & 2.02 & 0.76 & 0.89 & 1.34 & 0.79 & 0.87 & 1.81 & 0.76 & 0.89 & 2.16 & 0.85 \\
CQR-S & 0.89 & 1.54 & 0.67 & 0.90 & 0.77 & 0.87 & 0.90 & 1.37 & 0.80 & 0.90 & 1.33 & 0.85 \\ \hline
\multicolumn{13}{l}{**CUQR**} \\
\multicolumn{13}{l}{**CUQR-PAC**} \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Marginal coverage, efficiency and conditional coverage of all baselines on benchmark data sets.
Figure 1: Impact of the number of subgroups \(G\) on coverage.
baselines are adaptive, i.e., the lengths of their intervals vary according to the true uncertainty of the base model \(\widehat{\mu}\). Among the data sets under study, the MEPS-19 data exhibited significant heteroscedasticity, i.e., the average error of the predictive model varies significantly across the subgroups. Hence, we expect the average lengths of intervals issued by adaptive procedures to be greater for subgroups where the model errors are high. In Figure 2, we plot the achieved subgroup-level coverage (left) and the corresponding average interval length per subgroup (middle) in the **MEPS-19** data set. In both panels, the subgroup indexes on the \(x\)-axis are ordered ascendingly according to the model's subgroup-level average error on testing data (i.e., larger indexes correspond to higher model uncertainty). (In Figure 2, we exclude baselines that were under-performing to avoid clutter.) As we can see, CUQR maintains the target coverage approximately for all subgroups, and adjusts the lengths of its issued intervals within each subgroup according to the model uncertainty. On the contrary, competing baselines either fail to recognize the varying uncertainty across subgroups (LACP and CP), or do not adequately adapt the interval length to maintain target coverage (QR-RF and CQR).
Finally, to evaluate the adaptivity of CUQR beyond the subgroups on which it was calibrated, we run a \(K\)-means clustering algorithm with a different random seed and different number of clusters \(G=50\), and order the subgroup indexes ascendingly according to the base model's uncertainty as before. We evaluate the average interval lengths of CUQR (previously fitted on the \(G=10\) subgroups), along with the other baselines, on the new and more granular 50 subgroups. As we can see in Figure 2 (right), CUQR outperforms other baselines in adapting its intervals to subgroup-level uncertainty, indicating better conditional adaptivity properties beyond what is implied by the theoretical guarantees.
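The subgroup-level metrics used throughout this section (the worst-case subgroup coverage \(C_{G}^{w.c.}\) and per-subgroup average interval length) can be computed directly from interval endpoints and subgroup labels; a small helper sketch (ours, assuming endpoints are given as NumPy arrays) follows.

```python
import numpy as np

def subgroup_metrics(y, lo, hi, groups):
    """Per-subgroup coverage and average interval length, plus the
    worst-case subgroup coverage C_G^{w.c.} reported in Table 1."""
    cov, length = {}, {}
    for g in np.unique(groups):
        m = groups == g
        cov[g] = float(np.mean((y[m] >= lo[m]) & (y[m] <= hi[m])))
        length[g] = float(np.mean(hi[m] - lo[m]))
    return min(cov.values()), cov, length

# usage: worst_case, per_group_cov, per_group_len = subgroup_metrics(y, lo, hi, g)
```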
## 6 Conclusion
In this paper, we developed a conformal prediction method that adapts its issued intervals to the level of uncertainty in each prediction instance, while reporting a local region in covariate-space that contains the queried covariate instance and over which the procedure is guaranteed to be accurate on average. Our procedure partitions the covariate space into subgroups, and leverages the re-centered influence function of the quantile functional to construct a nested sequence of predictive bands, from which it selects one band per subgroup. By reporting instance-specific predictive intervals and subgroup-specific coverage guarantees to end-users, our method enables a more transparent approach to communicating uncertainty in the predictions of ML models.
|
2305.19140 | Logarithmic Sobolev Inequalities of Fractional Order on Noncommutative
Tori | In this paper, we prove a version of the logarithmic Sobolev inequality of
fractional order on noncommutative $n$-tori for any dimension $n\geq 2$. | Gihyun Lee | 2023-05-30T15:50:11Z | http://arxiv.org/abs/2305.19140v2 | # Logarithmic Sobolev inequalities of fractional order on noncommutative tori
###### Abstract.
In this paper, we prove a version of the logarithmic Sobolev inequality of fractional order on noncommutative \(n\)-tori for any dimension \(n\geqslant 2\).
Key words and phrases: Logarithmic Sobolev inequality, noncommutative tori. 2020 Mathematics Subject Classification: 35A23, 46E35, 46L52, 58B34
## 1. Introduction
Among all noncommutative spaces studied in Alain Connes' noncommutative geometry program [11], noncommutative tori are the most extensively studied ones. One of the reasons behind the extensive research conducted on noncommutative tori is the presence of counterparts to various mathematical tools employed in the study of analysis and geometry. For example, counterparts to notions such as vector bundles, Fourier series, spaces of smooth functions, Sobolev spaces and pseudodifferential calculus are established and available in the setting of noncommutative tori (see, e.g., [8, 10, 17, 18, 24, 34] and the references therein). Moreover, noncommutative tori are utilized in the mathematical modeling of physical phenomena, such as the quantum Hall effect [2], topological insulators [4, 25] and string theory [12, 29].
Let us now briefly review the classical Sobolev inequalities on Euclidean spaces. The classical Sobolev inequality states that if a function \(f\) defined on \(\mathbb{R}^{n}\), along with its first derivatives, belongs to \(L_{p}(\mathbb{R}^{n})\) and if \(q=(\frac{1}{p}-\frac{1}{n})^{-1}\) is finite, then \(f\) is in \(L_{q}(\mathbb{R}^{n})\). This inequality has broad applications in various fields of analysis, including the study of partial differential equations. However, the classical Sobolev inequality has the following limitation. As is evident from the value of \(q\), the difference between \(q\) and \(p\) narrows as the dimension \(n\) increases. Consequently, as \(n\) grows, the improvement in summability obtained from differentiability decreases. For this reason, in infinite dimensional settings such as quantum field theory we cannot expect to obtain an inequality that precisely corresponds to the classical Sobolev inequality. Motivated by this observation, Gross [15] introduced and proved the following logarithmic Sobolev inequality on Euclidean spaces.
\[\int_{\mathbb{R}^{n}}|f(x)|^{2}\log|f(x)|\,d\nu(x)\leqslant\int_{\mathbb{R}^{ n}}|\nabla f(x)|^{2}\,d\nu(x)+\|f\|_{2}^{2}\log\|f\|_{2}.\]
Here \(\nu\) denotes the Gaussian measure on \(\mathbb{R}^{n}\) and \(\|\cdot\|_{2}\) denotes the \(L_{2}\)-norm associated with \(\nu\). As mentioned in [15] this inequality can be utilized in the infinite dimensional setting, because the coefficients in the inequality do not depend on the dimension \(n\).
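As a quick numerical illustration of the one-dimensional case of this inequality, the following Monte Carlo sketch is ours, taking \(\nu\) to be the standard Gaussian measure on \(\mathbb{R}\) (normalization conventions for \(\nu\) and the Dirichlet form vary in the literature) and a smooth test function that is nonzero \(\nu\)-a.e.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(2_000_000)   # samples from the Gaussian measure nu

f  = lambda t: 2.0 + t               # smooth test function, nonzero nu-a.e.
df = lambda t: np.ones_like(t)       # its derivative

fx = f(x)
l2_sq = np.mean(fx ** 2)             # ||f||_2^2 with respect to nu
lhs = np.mean(fx ** 2 * np.log(np.abs(fx)))
# note ||f||_2^2 log ||f||_2 = (1/2) ||f||_2^2 log ||f||_2^2
rhs = np.mean(df(x) ** 2) + 0.5 * l2_sq * np.log(l2_sq)
print(lhs, "<=", rhs)                # the logarithmic Sobolev inequality holds
```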
Since the introduction of Gross' logarithmic Sobolev inequality, various methods have been employed to establish logarithmic Sobolev inequalities in different settings. Given the vast number of papers on this topic, it is not feasible to encompass all the references here. However, for a partial overview, let us list a few results which can be found in the literature. Rosen presented and proved a logarithmic Sobolev inequality on weighted \(\mathbb{R}^{n}\)[27]. Weissler proved a logarithmic Sobolev inequality on the circle [33]. Gross [16] and Chatzakou-Kassymov-Ruzhansky [6] investigated logarithmic Sobolev inequalities on Lie groups. Stroock-Zegarlinski [31] and Bodineau-Helffer [3] obtained logarithmic Sobolev inequalities for spin systems. Brannan-Gao-Junge derived a version of logarithmic Sobolev inequality by utilizing lower bound of the Ricci curvature of a compact Riemannian manifold [5].
In the case of noncommutative tori, as mentioned above, there exist counterparts to the tools used in Fourier theory for ordinary tori (see [8]). Therefore, the arguments employed on ordinary tori can often be adapted to the setting of noncommutative tori. By using this harmonic analysis technique of noncommutative tori and the theory of operator algebras Xiong-Xu-Yin provided a detailed account on the construction of Sobolev, Besov and Triebel-Lizorkin spaces on noncommutative tori ([34]; see also [18, 28, 30] for Sobolev spaces on noncommutative tori). The embedding theorems between these spaces are also proved in [34]. In addition, in [20, 21] McDonald-Ponge proved versions of Sobolev inequalities on noncommutative tori as consequences of the Lieb-Thirring inequalities on (curved) noncommutative tori.
However, to the best of the author's knowledge, logarithmic Sobolev inequalities in the setting of noncommutative tori have not been studied much in the literature. The logarithmic Sobolev inequality on noncommutative 2-tori by Khalkhali-Sadeghi [19] is the only existing result on this topic. They attempted to establish a logarithmic Sobolev inequality on noncommutative 2-tori by adapting Weissler's proof of the logarithmic Sobolev inequality on the circle [33]. However, due to technical issues arising from the noncommutativity, they were only able to obtain the following form of the logarithmic Sobolev inequality for strictly positive elements of the form \(x=\sum_{k\in\mathbb{Z}}x_{k}U_{1}^{k}U_{2}^{kl}\), where \(0\neq l\in\mathbb{Z}\).
\[\tau\big{(}x^{2}\log x\big{)}\leq\sum_{k\in\mathbb{Z}}(1+|l|)\,|k|\,|x_{k}|^{2 }+\|x\|_{L_{2}}^{2}\log\|x\|_{L_{2}}.\]
We refer to Section 2 for notations and background material on noncommutative tori.
In this paper, we prove the following logarithmic Sobolev inequalities of fractional order on noncommutative tori for any dimension \(n\geqslant 2\).
**Theorem 1.1**.: _Let \(0<x\in C^{\infty}(\mathbb{T}_{\theta}^{n})\), \(a>0\) and \(0<s<\frac{n}{2}\). Then there is a constant \(C(n,s,a)>0\) depending only on \(n\), \(s\) and \(a\) such that_
\[\tau\left[x^{2}\log\left(\frac{x^{2}}{\|x\|_{L_{2}}^{2}}\right)\right]\leqslant C (n,s,a)\,\|x\|_{W_{2}^{s}}^{2}-\frac{n}{s}(\log a+1)\,\|x\|_{L_{2}}^{2}. \tag{1.1}\]
Our proof of this theorem is based on the short proof of the fractional order logarithmic Sobolev inequality on \(\mathbb{R}^{n}\) presented in a recent article by Chatzakou-Ruzhansky [7]. In the setting of noncommutative tori, there are tools corresponding to the key tools used in the proof by Chatzakou-Ruzhansky [7], which enables us to apply their approach immediately to noncommutative tori. Instead of Jensen's inequality for concave functions and probability measures used in the proof by Chatzakou-Ruzhansky, we utilize Jensen's operator inequality known in the operator algebraic setting ([9, 14]; see also [23]) and the embedding theorem between Sobolev spaces proved in [34] to prove Theorem 1.1.
Although the main result of this paper, Theorem 1.1, is the first result on logarithmic Sobolev inequality of _fractional order_ on noncommutative tori, it still has certain limitations and room for improvement. First, the Sobolev norm used on the right-hand side of (1.1) needs to be replaced by the homogeneous Sobolev norm,
\[\|x\|_{W_{2}^{s}(\mathbb{T}_{\theta}^{n})}^{2}:=\sum_{k\in\mathbb{Z}^{n}}|k|^ {2s}\,|x_{k}|^{2},\qquad s>0.\]
However, an inequality between homogeneous Sobolev norms on noncommutative tori which can be applied to the arguments in this paper is missing in the literature. Although McDonald-Ponge's Sobolev inequalities [20, 21] deal with homogeneous Sobolev norms, these results cannot be directly applied to the arguments of this paper. In [20, 21] Sobolev inequalities are proven only for zero mean value elements, i.e., elements \(x\) with \(\tau(x)=0\). But in order for the logarithm defined by holomorphic functional calculus appearing in (1.1) to make sense, our focus should be restricted to strictly positive elements, and strictly positive elements cannot have a zero mean. Another issue is the lack of a result on the sharpness of Sobolev inequalities on noncommutative tori. In [7] Chatzakou-Ruzhansky utilized the sharp constant of the Sobolev inequality on \(\mathbb{R}^{n}\) obtained in [13] to get an explicit expression of the constant appearing in their logarithmic Sobolev inequality and study its behavior. Similarly, if we could determine the sharpness of Sobolev inequalities on noncommutative tori, we could expect to improve the constant on the right-hand side of (1.1) or
get an explicit expression of it. In particular, the sharpness of Sobolev inequalities would enable us to verify the conjecture on the logarithmic Sobolev inequality on noncommutative 2-tori stated in [19] by setting \(a=\frac{1}{e}\) in (1.1).
This paper is organized as follows. In Section 2, we gather some background material on noncommutative tori, Sobolev spaces and Jensen's operator inequality used in this paper. In Section 3, we prove the logarithmic Sobolev inequalities of fractional order on noncommutative tori, the main result of this paper (Theorem 1.1).
## 2. Preliminaries
In this section, we gather background material on noncommutative tori, Sobolev spaces and Jensen's operator inequality used in the proof of the main theorem in Section 3.
### Noncommutative tori
We refer to [11, 17, 26] for a more detailed account on noncommutative tori.
Let \(\theta\) be a skew-symmetric \(n\times n\) matrix over \(\mathbb{R}\). The noncommutative torus associated with \(\theta\), denoted by \(\mathbb{T}_{\theta}^{n}\), is a noncommutative space in the sense of Alain Connes' noncommutative geometry [11]. The \(C^{*}\)-algebra \(C(\mathbb{T}_{\theta}^{n})\) and the von Neumann algebra \(L_{\infty}(\mathbb{T}_{\theta}^{n})\) of \(\mathbb{T}_{\theta}^{n}\) are generated by the unitaries \(U_{1},\dots,U_{n}\) subject to the relations,
\[U_{k}U_{j}=e^{2\pi i\theta_{jk}}U_{j}U_{k},\qquad 1\leqslant j,k\leqslant n.\]
These unitaries \(U_{1},\dots,U_{n}\) can be concretely realized as unitary operators on \(L_{2}(\mathbb{T}^{n})\) (see, e.g., [17]), and hence both \(C(\mathbb{T}_{\theta}^{n})\) and \(L_{\infty}(\mathbb{T}_{\theta}^{n})\) are \(*\)-subalgebras of \(\mathscr{L}(L_{2}(\mathbb{T}^{n}))\), the \(C^{*}\)-algebra of bounded linear operators on \(L_{2}(\mathbb{T}^{n})\). If \(\theta=0\), then we recover the spaces of continuous functions \(C(\mathbb{T}^{n})\) and essentially bounded functions \(L_{\infty}(\mathbb{T}^{n})\) on the ordinary \(n\)-torus \(\mathbb{T}^{n}=\mathbb{R}^{n}/(2\pi\mathbb{Z})^{n}\).
In what follows, for any \(k=(k_{1},\dots,k_{n})\in\mathbb{Z}^{n}\), we shall denote \(U_{1}^{k_{1}}\cdots U_{n}^{k_{n}}\) by \(U^{k}\). Given any \(T\in\mathscr{L}(L_{2}(\mathbb{T}^{n}))\), we define
\[\tau(T)=\langle T1|1\rangle_{L_{2}(\mathbb{T}^{n})}=(2\pi)^{-n}\int_{\mathbb{ T}^{n}}(T1)(x)\,dx.\]
This defines a tracial state on both \(C(\mathbb{T}_{\theta}^{n})\) and \(L_{\infty}(\mathbb{T}_{\theta}^{n})\). By a direct computation it can be shown that \(\tau(U^{k})=0\) for \(0\neq k\in\mathbb{Z}^{n}\) and \(\tau(1)=1\). For \(x,y\in C(\mathbb{T}_{\theta}^{n})\) we define
\[\langle x|y\rangle_{L_{2}(\mathbb{T}_{\theta}^{n})}=\tau(xy^{*}).\]
Let us denote by \(L_{2}(\mathbb{T}_{\theta}^{n})\) the Hilbert space completion of \(C(\mathbb{T}_{\theta}^{n})\) with respect to this inner product. Then the family \(\{U^{k};\,k\in\mathbb{Z}^{n}\}\) forms an orthonormal basis for \(L_{2}(\mathbb{T}_{\theta}^{n})\). This orthonormal basis plays the role of the standard orthonormal basis for \(L_{2}(\mathbb{T}^{n})\) utilized in Fourier analysis, i.e., every \(x\in L_{2}(\mathbb{T}_{\theta}^{n})\) can be uniquely written as follows.
\[x=\sum_{k\in\mathbb{Z}^{n}}x_{k}U^{k},\qquad x_{k}:=\big{\langle}x|U^{k} \big{\rangle}_{L_{2}(\mathbb{T}_{\theta}^{n})}. \tag{2.1}\]
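For rational \(\theta\) the relations above also admit finite-dimensional models: with \(\theta=1/q\), the \(q\times q\) clock and shift matrices satisfy the same commutation relation, and the normalized matrix trace mimics the behavior of \(\tau\) on monomials. The following numerical sketch (ours, purely illustrative) checks both facts.

```python
import numpy as np

q = 5                                  # finite-dimensional model with theta = 1/q
w = np.exp(2j * np.pi / q)
C = np.diag(w ** np.arange(q))         # clock matrix
S = np.roll(np.eye(q), 1, axis=0)      # shift matrix: S e_j = e_{j+1 mod q}

# commutation relation: C S = e^{2 pi i / q} S C
assert np.allclose(C @ S, w * S @ C)

# normalized trace vanishes on nontrivial monomials, mimicking tau(U^k) = 0
for a in range(q):
    for b in range(q):
        t = np.trace(np.linalg.matrix_power(C, a)
                     @ np.linalg.matrix_power(S, b)) / q
        expected = 1.0 if (a % q == 0 and b % q == 0) else 0.0
        assert np.isclose(abs(t), expected)
```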
Furthermore, the GNS construction (see, e.g., [1]) associated with \(\tau\) gives rise to the \(*\)-representations of \(C(\mathbb{T}_{\theta}^{n})\) and \(L_{\infty}(\mathbb{T}_{\theta}^{n})\) on the Hilbert space \(L_{2}(\mathbb{T}_{\theta}^{n})\).
The \(C^{*}\)-algebra \(C(\mathbb{T}_{\theta}^{n})\) admits a strongly continuous action of \(\mathbb{R}^{n}\), denoted by \(\alpha_{s}(x)\) for \(s\in\mathbb{R}^{n}\) and \(x\in C(\mathbb{T}_{\theta}^{n})\). Hence the triple \((C(\mathbb{T}_{\theta}^{n}),\mathbb{R}^{n},\alpha)\) forms a \(C^{*}\)-dynamical system. For the elements \(U^{k}\), \(k\in\mathbb{Z}^{n}\), this action is given by
\[\alpha_{s}(U^{k})=e^{is\cdot k}U^{k},\qquad s\in\mathbb{R}^{n}.\]
The \(C^{*}\)-dynamical system structure on \(C(\mathbb{T}_{\theta}^{n})\) enables us to define the dense subalgebra \(C^{\infty}(\mathbb{T}_{\theta}^{n})\) of smooth elements of the action \(\alpha\), i.e., we define
\[C^{\infty}(\mathbb{T}_{\theta}^{n}):=\left\{x\in C(\mathbb{T}_{\theta}^{n});\, \mathbb{R}^{n}\ni s\mapsto\alpha_{s}(x)\in C(\mathbb{T}_{\theta}^{n})\text{ is a $C^{\infty}$-map}\right\}.\]
We also have the following characterization of the smooth elements in \(C(\mathbb{T}_{\theta}^{n})\) in terms of the Fourier series expansion given in (2.1).
\[C^{\infty}(\mathbb{T}_{\theta}^{n})=\Big{\{}x=\sum_{k\in\mathbb{Z}^{n}}x_{k}U^{ k};\,(x_{k})_{k\in\mathbb{Z}^{n}}\in\mathscr{S}(\mathbb{Z}^{n})\Big{\}}.\]
Here \(\mathscr{S}(\mathbb{Z}^{n})\) denotes the space of rapidly decaying sequences indexed by \(\mathbb{Z}^{n}\) with entries in \(\mathbb{C}\). Furthermore, the action \(\mathbb{R}^{n}\ni s\mapsto\alpha_{s}(x)\in C^{\infty}(\mathbb{T}^{n}_{\theta})\) also enables us to define the derivations \(\partial_{1},\dots,\partial_{n}\) on \(C^{\infty}(\mathbb{T}^{n}_{\theta})\). For \(j=1,\dots,n\), we define \(\partial_{j}:C^{\infty}(\mathbb{T}^{n}_{\theta})\to C^{\infty}(\mathbb{T}^{n}_ {\theta})\) by letting
\[\partial_{j}(x)=\partial_{s_{j}}\alpha_{s}(x)\big{|}_{s=0},\qquad x\in C^{ \infty}(\mathbb{T}^{n}_{\theta}).\]
In particular, for the elements \(U^{k}\), \(k=(k_{1},\dots,k_{n})\in\mathbb{Z}^{n}\), we obtain \(\partial_{j}(U^{k})=ik_{j}(U^{k})\), \(1\leqslant j\leqslant n\).
### Sobolev and \(L_{p}\)-spaces
A detailed account on Sobolev spaces on noncommutative tori can be found in [18, 28, 30, 34].
Let us denote the \(L_{2}\)-version of Sobolev space of order \(s\geqslant 0\) on \(\mathbb{T}^{n}_{\theta}\) by \(W^{s}_{2}(\mathbb{T}^{n}_{\theta})\). This space consists of elements \(x=\sum_{k\in\mathbb{Z}^{n}}x_{k}U^{k}\) in \(L_{2}(\mathbb{T}^{n}_{\theta})\) such that
\[\big{(}(1+|k|^{2})^{\frac{s}{2}}x_{k}\big{)}_{k\in\mathbb{Z}^{n}}\in\ell_{2}( \mathbb{Z}^{n}).\]
The space \(W^{s}_{2}(\mathbb{T}^{n}_{\theta})\) is a Hilbert space with the inner product,
\[\langle x|y\rangle_{W^{s}_{2}(\mathbb{T}^{n}_{\theta})}:=\sum_{k\in\mathbb{Z}^ {n}}(1+|k|^{2})^{s}x_{k}\overline{y_{k}},\qquad x=\sum_{k\in\mathbb{Z}^{n}}x_{ k}U^{k},y=\sum_{k\in\mathbb{Z}^{n}}y_{k}U^{k}\in W^{s}_{2}(\mathbb{T}^{n}_{ \theta}).\]
We shall denote the norm associated with this inner product by \(\|\cdot\|_{W^{s}_{2}(\mathbb{T}^{n}_{\theta})}\).
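Since \(\|\cdot\|_{W_{2}^{s}(\mathbb{T}_{\theta}^{n})}\) depends only on the Fourier coefficients in (2.1), it is straightforward to evaluate for finitely supported coefficient data; a small sketch (ours) in Python:

```python
def sobolev_norm_sq(coeffs, s):
    """||x||_{W_2^s}^2 = sum_k (1 + |k|^2)^s |x_k|^2 for x = sum_k x_k U^k;
    coeffs maps k in Z^n (as tuples) to the coefficient x_k."""
    return sum((1 + sum(ki ** 2 for ki in k)) ** s * abs(xk) ** 2
               for k, xk in coeffs.items())

# example on a noncommutative 2-torus with finitely many modes
x = {(0, 0): 1.0, (1, 0): 0.5, (2, -1): 0.25}
print(sobolev_norm_sq(x, s=0.0))   # = ||x||_{L_2}^2 by Parseval
print(sobolev_norm_sq(x, s=0.5))   # a fractional-order Sobolev norm
```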
We apply the theory of noncommutative \(L_{p}\)-spaces (see, e.g., [22, 32]) to the von Neumann algebra \(L_{\infty}(\mathbb{T}^{n}_{\theta})\) to construct the \(L_{p}\)-spaces of \(\mathbb{T}^{n}_{\theta}\). Recall that the elements of \(L_{\infty}(\mathbb{T}^{n}_{\theta})\) are represented as bounded linear operators on the Hilbert space \(L_{2}(\mathbb{T}^{n})\). We say that a closed and densely defined operator on \(L_{2}(\mathbb{T}^{n})\) is affiliated with \(L_{\infty}(\mathbb{T}^{n}_{\theta})\) if it commutes with the commutant of \(L_{\infty}(\mathbb{T}^{n}_{\theta})\) in \(\mathscr{L}(L_{2}(\mathbb{T}^{n}))\). Given a positive operator \(x\) on \(L_{2}(\mathbb{T}^{n})\) affiliated with \(L_{\infty}(\mathbb{T}^{n}_{\theta})\) let \(x=\int_{0}^{\infty}\lambda\,dE(\lambda)\) be its spectral representation and set
\[\tau(x):=\int_{0}^{\infty}\lambda\,d\tau(E(\lambda)).\]
For \(1\leqslant p<\infty\), the \(L_{p}\)-space of \(\mathbb{T}^{n}_{\theta}\), denoted by \(L_{p}(\mathbb{T}^{n}_{\theta})\), consists of all elements \(x\) in the \(*\)-algebra of \(L_{\infty}(\mathbb{T}^{n}_{\theta})\)-affiliated operators on \(L_{2}(\mathbb{T}^{n})\) such that
\[\|x\|_{L_{p}(\mathbb{T}^{n}_{\theta})}:=\tau\big{(}|x|^{p}\big{)}^{\frac{1}{p} }<\infty.\]
The space \(L_{p}(\mathbb{T}^{n}_{\theta})\) is a Banach space with the norm \(\|\cdot\|_{L_{p}(\mathbb{T}^{n}_{\theta})}\).
Furthermore, we have the following inclusion of Sobolev spaces into \(L_{p}\)-spaces. This is a particular case of the embedding theorem for Sobolev spaces on noncommutative tori stated in [34].
**Proposition 2.1** (see [34, Theorem 6.6]).: _Let \(p>2\) and set \(s=n(\frac{1}{2}-\frac{1}{p})\). Then there is a continuous embedding,_
\[W^{s}_{2}(\mathbb{T}^{n}_{\theta})\subset L_{p}(\mathbb{T}^{n}_{\theta}).\]
### Jensen's operator inequality
**Definition 2.2**.: Let \(f(t)\) be a function on an interval \(I\subset\mathbb{R}\). We say that \(f(t)\) is _operator convex_ if, for all \(\lambda\in[0,1]\) and all selfadjoint operators \(A\) and \(B\) on a Hilbert space \(\mathscr{H}\) such that \(\operatorname{Sp}A\subset I\) and \(\operatorname{Sp}B\subset I\), we have
\[f((1-\lambda)A+\lambda B)\leqslant(1-\lambda)f(A)+\lambda f(B).\]
In contrast, \(f(t)\) is called _operator concave_ if \(-f(t)\) is operator convex.
**Proposition 2.3** ([9, 14]; see also [23, Theorem 1.20]).: _Let \(\mathscr{H}\) and \(\mathscr{H}^{\prime}\) be Hilbert spaces. Suppose that \(f(t)\) is an operator convex function on the interval \(I\), \(x\in\mathscr{L}(\mathscr{H})\) and \(\operatorname{Sp}x\subset I\). Then we have_
\[\pi(f(x))\geqslant f(\pi(x)), \tag{2.2}\]
_for any positive normalized linear map \(\pi:\mathscr{L}(\mathscr{H})\to\mathscr{L}(\mathscr{H}^{\prime})\)._
_Remark 2.4_.: The inequality (2.2) is of course reversed if we employ an operator concave function instead of an operator convex function.
## 3. The Proof of Theorem 1.1
In this section, we provide the proof of Theorem 1.1, the main result of this paper.
Proof of Theorem 1.1.: Let \(0<x\in C^{\infty}(\mathbb{T}_{\theta}^{n})\). Then, for any \(\varepsilon>0\), we have
\[\tau\left[x^{2}\log\left(\frac{x^{2}}{\|x\|_{L_{2}}^{2}}\right)\right]=\frac{1 }{\varepsilon}\,\tau\left[x^{2}\log\left(\frac{x^{2}}{\|x\|_{L_{2}}^{2}} \right)^{\varepsilon}\right]=\frac{\|x\|_{L_{2}}^{2}}{\varepsilon}\,\tau \left[\frac{x^{2}}{\|x\|_{L_{2}}^{2}}\log\left(\frac{x^{2\varepsilon}}{\|x\|_{L _{2}}^{2\varepsilon}}\right)\right]. \tag{3.1}\]
Note that the map \(\mathscr{L}(L_{2}(\mathbb{T}^{n}))\ni u\mapsto\tau\left[\frac{x^{2}}{\|x\|_{L_{2}}^{2}}u\right]\in\mathbb{C}\) is a positive normalized linear map. Moreover, we know by [23, Example 1.7] that the logarithmic function \(\log t\) is operator concave on \((0,\infty)\). Therefore, as we have \(\operatorname{Sp}\left(x^{2\varepsilon}/\|x\|_{L_{2}}^{2\varepsilon}\right)\subset(0,\infty)\), it follows from Jensen's operator inequality (Proposition 2.3 and Remark 2.4) that we have
\[\tau\left[\frac{x^{2}}{\|x\|_{L_{2}}^{2}}\log\left(\frac{x^{2\varepsilon}}{\|x\|_{L_{2}}^{2\varepsilon}}\right)\right] \leqslant\log\tau\left(\frac{x^{2\varepsilon+2}}{\|x\|_{L_{2}}^{2\varepsilon+2}}\right)\] \[=(\varepsilon+1)\log\left\|\frac{x}{\|x\|_{L_{2}}}\right\|_{L_{2\varepsilon+2}}^{2}\] \[=(\varepsilon+1)\log\left(\frac{\|x\|_{L_{2\varepsilon+2}}^{2}}{\|x\|_{L_{2}}^{2}}\right). \tag{3.2}\]
For all \(0<b,t\in\mathbb{R}\), we have
\[\log t\leqslant bt-\log b-1,\]
which is the tangent line bound at \(t=1/b\) for the concave function \(\log t\).
Combining this with the estimates (3.1) and (3.2) we obtain
\[\tau\left[x^{2}\log\left(\frac{x^{2}}{\|x\|_{L_{2}}^{2}}\right)\right] \leqslant\frac{\|x\|_{L_{2}}^{2}(\varepsilon+1)}{\varepsilon}\, \log\left(\frac{\|x\|_{L_{2\varepsilon+2}}^{2}}{\|x\|_{L_{2}}^{2}}\right)\] \[\leqslant\frac{\|x\|_{L_{2}}^{2}(\varepsilon+1)}{\varepsilon} \left(b\frac{\|x\|_{L_{2\varepsilon+2}}^{2}}{\|x\|_{L_{2}}^{2}}-\log b-1\right)\] \[=\frac{\varepsilon+1}{\varepsilon}\left(b\,\|x\|_{L_{2\varepsilon +2}}^{2}-[\log b+1]\,\|x\|_{L_{2}}^{2}\right). \tag{3.3}\]
Let \(\varepsilon>0\) be such that \(2\varepsilon+2=\frac{2n}{n-2s}\) for \(0<s<\frac{n}{2}\) and set \(b=ea^{2}\) for \(a>0\). We know by Proposition 2.1 that there is \(C(n,s)>0\) depending only on \(n\) and \(s\) such that
\[\|x\|_{L_{2\varepsilon+2}}^{2}\leqslant C(n,s)\,\|x\|_{W_{2}^{s}}^{2}.\]
It then follows from this and (3.3) that
\[\tau\left[x^{2}\log\left(\frac{x^{2}}{\|x\|_{L_{2}}^{2}}\right)\right] \leqslant\frac{\frac{n}{n-2s}}{\frac{2s}{n-2s}}\left(ea^{2}\,\|x\|_{L_{2\varepsilon+2}}^{2}-[\log(ea^{2})+1]\,\|x\|_{L_{2}}^{2}\right)\] \[\leqslant\frac{n}{2s}\left(ea^{2}C(n,s)\,\|x\|_{W_{2}^{s}}^{2}-2(\log a+1)\,\|x\|_{L_{2}}^{2}\right)\] \[=\frac{nea^{2}}{2s}C(n,s)\,\|x\|_{W_{2}^{s}}^{2}-\frac{n}{s}(\log a+1)\,\|x\|_{L_{2}}^{2}.\]
This completes the proof.
## Acknowledgements
The author wishes to thank Michael Ruzhansky for helpful discussions related to the topic of this paper. The research for this article is financially supported by the FWO Odysseus 1 grant G.0H94.18N: Analysis and Partial Differential Equations and by the Methusalem programme of the Ghent University Special Research Fund (BOF) (Grant number 01M01021). |
2305.01069 | Approximating submodular $k$-partition via principal partition sequence | In submodular $k$-partition, the input is a non-negative submodular function
$f$ defined over a finite ground set $V$ (given by an evaluation oracle) along
with a positive integer $k$ and the goal is to find a partition of the ground
set $V$ into $k$ non-empty parts $V_1, V_2, ..., V_k$ in order to minimize
$\sum_{i=1}^k f(V_i)$. Narayanan, Roy, and Patkar (Journal of Algorithms, 1996)
designed an algorithm for submodular $k$-partition based on the principal
partition sequence and showed that the approximation factor of their algorithm
is $2$ for the special case of graph cut functions (subsequently rediscovered
by Ravi and Sinha (Journal of Operational Research, 2008)). In this work, we
study the approximation factor of their algorithm for three subfamilies of
submodular functions -- monotone, symmetric, and posimodular, and show the
following results:
1. The approximation factor of their algorithm for monotone submodular
$k$-partition is $4/3$. This result improves on the $2$-factor achievable via
other algorithms. Moreover, our upper bound of $4/3$ matches the recently shown
lower bound under polynomial number of function evaluation queries (Santiago,
IWOCA 2021). Our upper bound of $4/3$ is also the first improvement beyond $2$
for a certain graph partitioning problem that is a special case of monotone
submodular $k$-partition.
2. The approximation factor of their algorithm for symmetric submodular
$k$-partition is $2$. This result generalizes their approximation factor
analysis beyond graph cut functions.
3. The approximation factor of their algorithm for posimodular submodular
$k$-partition is $2$.
We also construct an example to show that the approximation factor of their
algorithm for arbitrary submodular functions is $\Omega(n/k)$. | Karthekeyan Chandrasekaran, Weihang Wang | 2023-05-01T20:05:30Z | http://arxiv.org/abs/2305.01069v3 | # Approximating submodular \(k\)-partition via principal partition sequence+
###### Abstract
In submodular \(k\)-partition, the input is a submodular function \(f:2^{V}\to\mathbb{R}_{\geq 0}\) (given by an evaluation oracle) along with a positive integer \(k\) and the goal is to find a partition of the ground set \(V\) into \(k\) non-empty parts \(V_{1},V_{2},\ldots,V_{k}\) in order to minimize \(\sum_{i=1}^{k}f(V_{i})\). Narayanan, Roy, and Patkar [17] designed an algorithm for submodular \(k\)-partition based on the principal partition sequence and showed that the approximation factor of their algorithm is \(2\) for the special case of graph cut functions (which was subsequently rediscovered by Ravi and Sinha [21]). In this work, we study the approximation factor of their algorithm for three subfamilies of submodular functions--namely monotone, symmetric, and posimodular and show the following results:
1. The approximation factor of their algorithm for monotone submodular \(k\)-partition is \(4/3\). This result improves on the \(2\)-factor that was known to be achievable for monotone submodular \(k\)-partition via other algorithms. Moreover, our upper bound of \(4/3\) matches the recently shown lower bound under polynomial number of function evaluation queries [22]. Our upper bound of \(4/3\) is also the first improvement beyond \(2\) for a certain graph partitioning problem that is a special case of monotone submodular \(k\)-partition.
2. The approximation factor of their algorithm for symmetric submodular \(k\)-partition is \(2\). This result generalizes their approximation factor analysis beyond graph cut functions.
3. The approximation factor of their algorithm for posimodular submodular \(k\)-partition is \(2\).
We also construct an example to show that the approximation factor of their algorithm for arbitrary submodular functions is \(\Omega(n/k)\).
## 1 Introduction
A set function \(f:2^{V}\to\mathbb{R}\) is submodular if \(f(A)+f(B)\geq f(A\cap B)+f(A\cup B)\) for every \(A,B\subseteq V\). An evaluation oracle for a function \(f:2^{V}\to\mathbb{R}\) takes a subset \(A\subseteq V\) as input and returns \(f(A)\). We consider the _submodular \(k\)-partition_ problem defined as follows: The input consists of a non-negative submodular function \(f:2^{V}\to\mathbb{R}_{\geq 0}\) on a finite ground set \(V\) via an evaluation oracle and an integer \(k\geq 2\). The goal is to find a partition \(V_{1},V_{2},\ldots,V_{k}\) of \(V\) into \(k\) non-empty parts in order to minimize \(\sum_{i=1}^{k}f(V_{i})\). Namely, the goal is to compute
\[\min\left\{\sum_{i\in[k]}f(V_{i}):\ V_{1},V_{2},\ldots,V_{k}\text{ is a partition of }V,\ V_{i}\neq\emptyset\ \forall i\in[k]\right\}.\]
Throughout, we will assume that the input submodular function is non-negative and denote the size of the ground set \(V\) by \(n\). If \(k=2\), then the problem reduces to the classic submodular minimization problem. We emphasize that our focus is on submodular \(k\)-partitioning when \(k\) is part of input (see [4] for a discussion of the problem for fixed constant \(k\)). Submodular \(k\)-partition formulates several interesting partitioning problems and we will discuss some of the special cases below. For arbitrary submodular functions, the problem is NP-hard [9], does not admit a \((2-\epsilon)\)-approximation assuming polynomial number of function evaluation
queries [22], does not admit an \(n^{1/(\log\log n)^{c}}\)-approximation for every constant \(c\) assuming the Exponential Time Hypothesis [5], and the best approximation factor that is known is \(O(k)\)[25, 18].
In this work, we will be interested in the submodular \(k\)-partition problem for subfamilies of submodular functions--namely monotone, symmetric, and posimodular submodular functions. A set function \(f:2^{V}\to\mathbb{R}\) is
1. monotone if \(f(B)\geq f(A)\) for every \(A\subseteq B\subseteq V\),
2. symmetric if \(f(A)=f(V-A)\) for every \(A\subseteq V\), and
3. posimodular if \(f(A)+f(B)\geq f(A-B)+f(B-A)\) for every \(A,B\subseteq V\).
If the input submodular function is monotone/symmetric/posimodular, then we call the associated submodular \(k\)-partition problem as monotone/symmetric/posimodular submodular \(k\)-partition. We note that monotone submodular functions and symmetric submodular functions are also posimodular1. Hence, posimodular submodular \(k\)-partition problem generalizes both monotone submodular \(k\)-partition and symmetric submodular \(k\)-partition problems. We now discuss the approximation status of symmetric/monotone/posimodular submodular \(k\)-partition and some of their well-known special cases (see Table 1 for a summary of approximation factors of symmetric/monotone/posimodular submodular \(k\)-partition achieved by different approaches).
Footnote 1: In fact, monotone functions are posimodular since \(f(A)\geq f(A-B)\) and \(f(B)\geq f(B-A)\) for every \(A,B\subseteq V\). Symmetric submodular functions are posimodular: for every \(A,B\subseteq V\), we have that \(f(A)+f(B)=f(V-A)+f(B)\geq f((V-A)\cup B)+f((V-A)\cap B)=f(V-(A-B))+f(B-A)=f(A-B)+ f(B-A)\).
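These containments can be checked by brute force on small ground sets; the following sketch (ours, assuming \(f\) is supplied as a Python callable on frozensets) verifies submodularity, symmetry, and posimodularity by enumerating all pairs of subsets.

```python
from itertools import chain, combinations

def subsets(V):
    V = list(V)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(V, r) for r in range(len(V) + 1))]

def is_submodular(f, V):
    S = subsets(V)
    return all(f(A) + f(B) >= f(A & B) + f(A | B) for A in S for B in S)

def is_symmetric(f, V):
    W = frozenset(V)
    return all(f(A) == f(W - A) for A in subsets(V))

def is_posimodular(f, V):
    S = subsets(V)
    return all(f(A) + f(B) >= f(A - B) + f(B - A) for A in S for B in S)

# the cut function of a graph is symmetric submodular, hence posimodular
edges = [(1, 2), (2, 3), (3, 1), (3, 4)]
cut = lambda S: sum(1 for (u, v) in edges if (u in S) != (v in S))
print(is_submodular(cut, {1, 2, 3, 4}),
      is_symmetric(cut, {1, 2, 3, 4}),
      is_posimodular(cut, {1, 2, 3, 4}))   # True True True
```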
**Monotone submodular \(k\)-partition.** Special cases of monotone submodular \(k\)-partition problem include matroid \(k\)-partition and coverage \(k\)-partition--the submodular functions of interest here are matroid rank functions and coverage functions respectively. Matroid \(k\)-partition captures several interesting problems: e.g., (1) partition the columns of a given matrix into \(k\) non-empty parts to minimize the sum of the dimension of the subspace spanned by the parts and (2) partition the edges of a given graph into \(k\) non-empty parts to _maximize_ the sum of the number of connected components formed by the parts. Coverage \(k\)-partition also captures several interesting problems: e.g., (3) partition the vertices of a given graph into \(k\) non-empty parts \(V_{1},V_{2},\ldots,V_{k}\) in order to minimize \(\sum_{i=1}^{k}f(V_{i})\), where \(f(P)\) is the number of edges incident to the vertex subset \(P\subseteq V\). To gain a better understanding of the difficulty in solving/approximating monotone submodular \(k\)-partition, we encourage the reader to briefly think about the concrete special case of matrix column partitioning problem (i.e., problem (1) described above which is seemingly a linear algebra problem) before reading further.
Monotone submodular \(k\)-partition is NP-hard [22]. Moreover, it admits a simple (and fast) \((2-1/k)\)-approximation algorithm that will be denoted henceforth as the _cheapest singleton partitioning algorithm_: return the partition \(V_{1}:=\{v_{1}\},V_{2}:=\{v_{2}\},\ldots,V_{k-1}:=\{v_{k-1}\},V_{k}:=V-\{v_{1},\ldots,v_{k-1}\}\), where the \(n\) elements of the ground set are ordered as \(v_{1},\ldots,v_{n}\) such that \(f(\{v_{1}\})\leq f(\{v_{2}\})\leq\ldots\leq f(\{v_{n}\})\). Santiago [22] showed that this is a \(2\)-approximation. His analysis can be extended to show that it is in fact a \((2-1/k)\)-approximation2 and this is the best possible approximation factor for this algorithm3. Alternatively, the greedy splitting algorithm presented in [25] achieves a \((2-2/k)\)-approximation. On the inapproximability front, Santiago [22] showed that there does not exist an algorithm that makes polynomial number of function evaluation queries to obtain a \((4/3-\epsilon)\)-approximation for every constant \(\epsilon>0\).
Footnote 2: The cheapest singleton partitioning algorithm returns a solution whose cost is \(f(V-V_{k})+\sum_{i=1}^{k-1}f(\{v_{i}\})\leq f(V)+\sum_{i=1}^{k-1}f(\{v_{i}\}) \leq f(V)+(1-1/k)\sum_{i=1}^{k}f(\{v_{i}\})\leq(2-1/k)\max\{f(V),\sum_{i=1}^ {k}f(\{v_{i}\})\}\) while the cost of an optimum \(k\)-partition is at least \(\max\{f(V),\sum_{i=1}^{k}f(\{v_{i}\})\}\). The lower bound on the cost of the optimum \(k\)-partition \(V_{1}^{*},\ldots,V_{k}^{*}\) is because \(\sum_{i=1}^{k}f(V_{i}^{*})\geq f(V)\) by non-negativity and submodularity and moreover, if the optimum partition is indexed such that \(\min\{j\in[n]:v_{j}\in V_{i}^{*}\}\leq\min\{j\in[n]:v_{j}\in V_{i+1}^{*}\}\) for all \(i\in[k-1]\), then \(f(V_{i}^{*})\geq f(\{v_{i}\})\) by monotonicity and hence, \(\sum_{i=1}^{k}f(V_{i}^{*})\geq\sum_{i=1}^{k}f(\{v_{i}\})\).
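A minimal implementation sketch of the cheapest singleton partitioning algorithm described above (ours; the submodular function is assumed to be supplied as a Python callable on frozensets, playing the role of the evaluation oracle):

```python
def cheapest_singleton_partition(V, f, k):
    """Return the k-partition {v_1},...,{v_{k-1}}, V - {v_1,...,v_{k-1}},
    where elements are ordered by their singleton values f({v})."""
    order = sorted(V, key=lambda v: f(frozenset([v])))
    parts = [frozenset([v]) for v in order[:k - 1]]
    parts.append(frozenset(V) - frozenset(order[:k - 1]))
    return parts

# example with the (monotone submodular) coverage function of a graph:
# f(S) = number of edges incident to the vertex subset S
edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
f = lambda S: sum(1 for (u, v) in edges if u in S or v in S)
print(cheapest_singleton_partition({1, 2, 3, 4}, f, k=2))
```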
**Symmetric submodular \(k\)-partition.** Well-known special cases of symmetric submodular \(k\)-partition problem are graph \(k\)-cut and hypergraph \(k\)-partition--the submodular functions of interest here are the cut functions of an explicitly given graph and hypergraph respectively. Graph \(k\)-cut is NP-complete [9] and does
not have a polynomial-time \((2-\epsilon)\)-approximation for every constant \(\epsilon>0\) under the Small Set Expansion Hypothesis [11]. There are several known approaches to achieve a \(2\)-approximation for graph \(k\)-cut--(i) greedy splitting approach [23], (ii) Gomory-Hu tree based approach [24], (iii) extreme sets based approach [12], (iv) principal partition sequence based approach [2, 17, 21], and (v) covering-LP based approach [6, 14, 20]. Greedy splitting, Gomory-Hu tree, and extreme sets based approaches lead to a \((2-2/k)\)-approximation while the principal partition sequence and the covering-LP based approaches lead to a \((2-2/n)\)-approximation for graph \(k\)-cut. The principal partition sequence and the covering-LP based approaches for graph \(k\)-cut have also been shown to be related to each other [6]. The principal partition sequence based approach is the main algorithm of interest to our work and we will discuss it in detail in Section 2.
For the more general problem of symmetric submodular \(k\)-partition, two of the approaches discussed in the previous paragraph for graph \(k\)-cut have been generalized to obtain \(2\)-approximations--the greedy splitting approach [25] and the Gomory-Hu tree approach lead to a \((2-2/k)\)-approximation. Analyzing the approximation factor of the principal partition sequence based approach for symmetric submodular \(k\)-partition was one of the driving motivations of our work. On the inapproximability front, Santiago [22] showed that there does not exist an algorithm that makes polynomial number of function evaluation queries to obtain a \((2-\epsilon)\)-approximation for every constant \(\epsilon>0\).
**Posimodular submodular \(k\)-partition.** The only natural family of posimodular submodular functions that we are familiar with are symmetric submodular functions and monotone submodular functions as well as their positive linear combinations. As mentioned before, posimodular submodular \(k\)-partition is a unified generalization of symmetric submodular \(k\)-partition and monotone submodular \(k\)-partition. To the best of authors' knowledge, posimodular submodular \(k\)-partition has not been studied in the literature before and there are no specialized algorithms or approximation factor analysis of existing algorithms for posimodular submodular \(k\)-partition. A slight modification to the analysis of the greedy splitting algorithm presented in [25] shows that their algorithm achieves a \((3-2/k)\)-approximation for posimodular submodular \(k\)-partition--we refrain from presenting this analysis in the interests of brevity. On the inapproximability front, since symmetric submodular functions are also posimodular submodular, the lower bound for symmetric submodular \(k\)-partition also holds for posimodular submodular \(k\)-partition, i.e., there does not exist an algorithm for posimodular submodular \(k\)-partition that makes polynomial number of function evaluation queries to obtain a \((2-\epsilon)\)-approximation for every constant \(\epsilon>0\).
### Our Results
In this work, we investigate Narayanan, Roy, and Patkar's [17] principal partition sequence based algorithm for submodular \(k\)-partition. They showed that their algorithm achieves a \(2\)-approximation for graph \(k\)-cut (which was subsequently rediscovered by Ravi and Sinha [21]). We show the following results:
1. Their algorithm achieves a \(4/3\)-approximation for monotone submodular \(k\)-partition. This result improves on the \(2\)-factor that is known to be achievable via two different algorithms: the cheapest singleton partitioning algorithm and the greedy splitting algorithm. Moreover, our upper bound of \(4/3\) matches the lower bound shown by Santiago [22]. We will discuss the significance of our upper bound result shortly.
2. Their algorithm achieves a \(2\)-approximation for symmetric submodular \(k\)-partition. This factor matches the \(2\)-factor that is known to be achievable via two other algorithms: the greedy splitting algorithm and the Gomory-Hu tree based algorithm, and also matches the lower bound [11, 22]. Our contribution here is generalizing the analysis of [17, 21] to beyond graph cut functions.
3. Their algorithm achieves a \(2\)-approximation for posimodular submodular \(k\)-partition. This result improves on the \(3\)-factor that is known to be achievable via the greedy splitting algorithm and matches the lower bound of \(2\) shown by Santiago [22].
See Table 1 for a comparison. Graph \(k\)-cut is the well-studied special case of symmetric submodular \(k\)-partition/posimodular submodular \(k\)-partition, so we include that as the last column in the table for comparison. Approximation factors in the row corresponding to principal partition sequence are the main results of our work. In the last row of the table, we include the known lower bounds on the approximation factor
for comparison. The lower bound for graph cut function is assuming the Small Set Expansion Hypothesis [11] while the rest of the lower bounds are assuming polynomial number of function evaluation queries. Dashes in the table indicate that either the approach does not extend or there has been no analysis of the approximation factor of the approach for the subfamily.
We complement our upper bounds on the approximation factor of their algorithm with matching lower bound constructions for each subfamily of submodular functions. Our results show that the principal partition sequence based algorithm achieves the best possible approximation factor for broad subfamilies of submodular functions, thus illustrating the power and applicability of this algorithm. On the other hand, we show that the approximation factor of their algorithm for arbitrary submodular functions is \(\Omega(n/k)\) via a lower bound construction. This construction shows that their principal partition sequence based algorithm cannot directly improve the approximation factor for submodular \(k\)-partition beyond the current best \(O(k)\).
We briefly discuss the significance of our \(4/3\)-approximation result for monotone submodular \(k\)-partition. Firstly, prior to our results, there were no known families of submodular functions for which the submodular \(k\)-partition problem could be approximated to a factor better than \(2\). Our result for monotone submodular functions breaks this \(2\)-factor barrier for a broad family of submodular functions. Secondly, our result for monotone submodular \(k\)-partition leads to a new approximation result even for a graph partitioning problem that we describe now. For a graph \(G=(V,E)\) with edge weights \(w:E\to\mathbb{R}_{+}\), consider functions \(d,f:2^{V}\to\mathbb{R}_{+}\) defined by \(d(S):=w(\delta(S))\) and \(f(S):=w(E[S])+w(\delta(S))\) for every \(S\subseteq V\), where \(\delta(S)\) denotes the set of edges with exactly one end-vertex in \(S\), \(E[S]\) denotes the set of edges with both end-vertices in \(S\), and \(w(F):=\sum_{e\in F}w(e)\) for every \(F\subseteq E\). The function \(d\) is the cut function of the graph and is symmetric submodular. The function \(f\) is the coverage function of the graph and is monotone submodular. Submodular \(k\)-partition for the function \(d\) is known as graph \(k\)-cut and it is known that graph \(k\)-cut does not admit a \((2-\epsilon)\)-approximation under the Small Set Expansion Hypothesis [11]. In contrast, our results show that coverage \(k\)-partition in graphs--i.e., submodular \(k\)-partition for the function \(f\)--admits a \(4/3\)-approximation. We note that coverage \(k\)-partition in graphs is NP-hard [22] and its approximability is an intriguing open question.
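To make the contrast concrete, the following sketch (ours) builds the cut function \(d\) and the coverage function \(f\) of a small weighted graph as Python callables:

```python
def cut_and_coverage(V, wedges):
    """Return the cut function d(S) = w(delta(S)) and the coverage function
    f(S) = w(E[S]) + w(delta(S)) of a weighted graph as Python callables."""
    def d(S):
        return sum(w for (u, v, w) in wedges if (u in S) != (v in S))
    def f(S):
        # edges with at least one end-vertex in S
        return sum(w for (u, v, w) in wedges if u in S or v in S)
    return d, f

# toy example: a weighted 4-cycle
wedges = [(1, 2, 1.0), (2, 3, 2.0), (3, 4, 1.0), (4, 1, 2.0)]
d, f = cut_and_coverage({1, 2, 3, 4}, wedges)
print(d({1, 2}), f({1, 2}))   # d is symmetric: d(S) = d(V - S); f is monotone
```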
**Organization.** We discuss the principal partition sequence based algorithm in Section 2. We analyze the approximation factor of the algorithm with matching lower bound constructions for each of the three subfamilies of submodular functions in Section 3. We exhibit an instance of submodular \(k\)-partition where the algorithm achieves an approximation factor of \(\Omega(n/k)\) in Section 4.
### Related work
The principal partition sequence based algorithm for submodular \(k\)-partition was introduced by Narayanan, Roy, and Patkar [17]. We will formally define the principal partition sequence of a submodular function and describe their algorithm in Section 2. They analyzed the approximation factor of their algorithm for
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & Monotone & Symmetric & Posimodular & Graph \\ & Submodular & Submodular & Submodular & Cut \\ & Function & Function & Function & Function \\ \hline Greedy splitting & \(2-2/k\)[25] & \(2-2/k\)[25] & \(3-2/k\)[25]* & \(2-2/k\)[23] \\ \hline Extreme Sets & — & — & — & \(2-2/k\)[12] \\ \hline Gomory-Hu tree & — & \(2-2/k\) [Folklore] & — & \(2-2/k\)[24] \\ \hline Covering-LP & — & — & — & \(2-2/k\)[6, 14] \\ \hline Cheapest Singleton & \(2-1/k\)[22] & — & — & — \\ Partitioning & & & & \\ \hline Principal Partition & \(4/3-4/(9n+3)\) & \(2-2/n\) & \(2-2/(n+1)\) & \(2-2/n\)[17, 21] \\ Sequence & (Theorem 3.1) & (Theorem 3.2) & (Theorem 3.3) & \\ \hline
**Lower Bound** & \(4/3-\epsilon\)[22] & \(2-\epsilon\)[22] & \(2-\epsilon\)[22] & \(2-\epsilon\)[11] \\ \hline \end{tabular}
\end{table}
Table 1: Approximation factors of symmetric/monotone/posimodular submodular \(k\)-partition using different approaches. Result in the first row marked with an asterisk follows by slight modifications to the known analysis of the approximation factor for symmetric submodular functions given in [25].
two variants of \(k\)-partitioning problems in hypergraphs. These two variants are not special cases of symmetric/monotone/posimodular submodular \(k\)-partition and are not of direct interest to our work. However, we describe these variants to highlight the versatility of the principal partition sequence based approach and also to shed light on the results of Narayanan, Roy, and Patkar's work which do not seem to be well-known in the literature. Given a hypergraph \(H=(V,E)\), a hyperedge cost function \(c:E\rightarrow\mathbb{R}_{+}\), and an integer \(k\), the goal is to find a partition \(\mathcal{P}:=\{V_{1},V_{2},\ldots,V_{k}\}\) of \(V\) into \(k\) non-empty parts that minimizes an objective of interest:
1. If the objective is the sum of cost of hyperedges that intersect at least two parts of \(\mathcal{P}\), then the problem is known as _hypergraph \(k\)-cut_.
2. If the objective is the sum of _cost of hyperedges relative to the partition \(\mathcal{P}\)_, where the cost of a hyperedge \(e\) relative to \(\mathcal{P}\) is \(c(e)(\ell-1)\) with \(\ell\) being the number of parts of \(\mathcal{P}\) intersected by \(e\), then the problem is known as _normalized coverage \(k\)-partition4_. Footnote 4: We introduce this nomenclature because the problem is equivalent to finding a partition \(V_{1},V_{2},\ldots,V_{k}\) of the ground set \(V\) in order to minimize \(\sum_{i=1}^{k}f(V_{i})-f(V)\), where \(f:2^{V}\rightarrow\mathbb{R}_{+}\) is an explicitly given coverage function (every coverage function can be uniquely represented using a hypergraph [3]). We consider the subtraction of \(f(V)\) as normalizing the objective since it is a trivial lower bound on the sum of the function values of the parts: \(\sum_{i=1}^{k}f(V_{i})\geq f(V)\) holds for every \(k\)-partition \(V_{i},V_{2},\ldots,V_{k}\) since \(f\) is a coverage function.
Narayanan, Roy, and Patkar [17] showed that their principal partition sequence based algorithm achieves a \(r(1-1/n)\)-approximation for hypergraph \(k\)-cut, where \(r\) is the size of the largest hyperedge and \(n\) is the number of vertices in the input hypergraph, and achieves a \((2-2/n)\)-approximation for normalized coverage \(k\)-partition. A consequence (of both of their results) is that the principal partition sequence based algorithm achieves a \((2-2/n)\)-approximation for graph \(k\)-cut. Their principal partition sequence based algorithm for graph \(k\)-cut is equivalent to the Lagrangean relaxation approach suggested by Barahona [2]. The approximation factor of the principal partition sequence based algorithm for graph \(k\)-cut being at most \(2\) was rediscovered by Ravi and Sinha [21] and for hypergraph \(k\)-cut being at most \(r\) was rediscovered by Baiou and Barahona [1].
We mention that a slight modification to the analysis of the greedy splitting algorithm presented in [25] shows that the greedy splitting algorithm achieves a \((2-2/k)\)-approximation for normalized coverage \(k\)-partition and hypergraph \(k\)-partition. We note that hypergraph \(k\)-partition is a special case of symmetric submodular \(k\)-partition and is different from hypergraph \(k\)-cut (for definition of hypergraph \(k\)-partition, see discussion of symmetric submodular \(k\)-partition at the beginning of the introduction). On the inapproximability front, it is known that hypergraph \(k\)-cut does not admit an approximation factor of \(n^{1/(\log\log n)^{c}}\), where \(c\) is a constant, assuming the Exponential Time Hypothesis [5]. The best inapproximability result for the more general submodular \(k\)-partition problem (mentioned in the introduction) follows from this inapproximability for hypergraph \(k\)-cut.
## 2 Principal partition sequence based algorithm
In this section, we recall the principal partition sequence based algorithm for submodular \(k\)-partition designed by Narayanan, Roy, and Patkar [17]. We begin with some notation. Throughout this work, a _partition_ of a set \(S\) is defined to be a collection of _nonempty_ pairwise disjoint subsets of \(S\) whose union is \(S\), and a \(k\)_-partition_ of a set \(S\) is defined to be a partition of \(S\) with exactly \(k\) parts. For two distinct partitions \(\mathcal{P}\) and \(\mathcal{Q}\) of a set \(S\), if every part of \(\mathcal{Q}\) is completely contained in some part of \(\mathcal{P}\) then we say that \(\mathcal{Q}\)_refines_\(\mathcal{P}\) (equivalently, \(\mathcal{P}\) is a coarsening of \(\mathcal{Q}\)). For two distinct partitions \(\mathcal{P}\) and \(\mathcal{Q}\) of a set \(S\), we will say that \(\mathcal{Q}\) is obtained from \(\mathcal{P}\) by refining only one part of \(\mathcal{P}\) if there exists a part \(P\in\mathcal{P}\) such that \(P\notin\mathcal{Q}\) and every part \(Q\in\mathcal{Q}\) satisfies either \(Q\subsetneq P\) or \(Q\in\mathcal{P}\) (i.e., either \(Q\) is a proper subset of the part \(P\) or \(Q\) is a part of the partition \(\mathcal{P}\)); we will denote such a part \(P\in\mathcal{P}\) as the part refined by \(\mathcal{Q}\).
Let \(f:2^{V}\rightarrow\mathbb{R}\) be a set function on ground set \(V\). For a collection \(\mathcal{P}\) of subsets of \(V\), we write \(f(\mathcal{P}):=\sum_{P\in\mathcal{P}}f(P)\). We will say that a partition \(\mathcal{P}=\{P_{1},\ldots,P_{k}\}\) is an optimal \(k\)-partition if \(f(\mathcal{P})\leq f(\mathcal{Q})\) for every \(k\)-partition \(\mathcal{Q}\) of \(V\). We define the function \(g_{f,\mathcal{P}}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}\) for a partition \(\mathcal{P}\) of the ground set \(V\) and the function \(g_{f}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}\) as follows:
\[g_{f,\mathcal{P}}(b):=f(\mathcal{P})-b|\mathcal{P}|\text{ and }\]
\[g_{f}(b):=\min\{g_{f,\mathcal{P}}(b):\mathcal{P}\text{ is a partition of }V\}.\]
We drop the subscript \(f\) and instead write \(g_{\mathcal{P}}\) and \(g\) respectively, if the function \(f\) is clear from context. By definition, the function \(g_{f}\) is piece-wise linear. It can be shown that \(g_{f}\) has at most \(|V|-1\) breakpoints. The next theorem shows that if the function \(f:2^{V}\to\mathbb{R}\) is submodular, then there exists a sequence of partitions achieving the \(g_{f}\) function values at the breakpoints that have a nice structure; moreover, the breakpoints and such a sequence of partitions can be computed in polynomial time given access to the evaluation oracle of the submodular function \(f\). We emphasize that the theorem holds for arbitrary submodular functions (which may not be non-negative valued).
**Theorem 2.1** ([15, 17]).: _Let \(f:2^{V}\to\mathbb{R}\) be a submodular function on a ground set \(V\). Then, there exists a sequence \(\mathcal{P}_{1},\mathcal{P}_{2},\ldots,\mathcal{P}_{r}\) of partitions of \(V\) and values \(b_{1},b_{2},\ldots,b_{r-1}\) such that_
1. \(\mathcal{P}_{1}=\{V\}\) _and_ \(\mathcal{P}_{r}=\{\{v\}:v\in V\}\)_,_
2. _For each_ \(j\in[r-1]\)_, the partition_ \(\mathcal{P}_{j+1}\) _is obtained from_ \(\mathcal{P}_{j}\) _by refining only one part of_ \(\mathcal{P}_{j}\)_,_
3. \(b_{1}<b_{2}<\ldots<b_{r-1}\)_,_
4. \(g(b_{j})=g_{\mathcal{P}_{j}}(b_{j})=g_{\mathcal{P}_{j+1}}(b_{j})\) _for each_ \(j\in[r-1]\) _and_
5. \(g(b)=g_{\mathcal{P}_{1}}(b)\) _for all_ \(b\in(-\infty,b_{1}]\)_,_ \(g(b)=g_{\mathcal{P}_{j+1}}(b)\) _for all_ \(b\in[b_{j},b_{j+1}]\) _for each_ \(j\in[r-2]\)_, and_ \(g(b)=g_{\mathcal{P}_{r}}(b)\) _for all_ \(b\in[b_{r-1},\infty)\)_._
_Moreover, such a sequence \(\mathcal{P}_{1},\mathcal{P}_{2},\ldots,\mathcal{P}_{r}\) of partitions of \(V\) and values \(b_{1},b_{2},\ldots,b_{r-1}\) can be computed in polynomial time given access to the evaluation oracle of the submodular function \(f\)._
For a submodular function \(f:2^{V}\to\mathbb{R}\), we will denote a sequence of partitions \(\mathcal{P}_{1},\mathcal{P}_{2},\ldots,\mathcal{P}_{r}\) and the sequence of values \(b_{1},b_{2},\ldots,b_{r-1}\) satisfying the conditions given in Theorem 2.1 as a _principal partition sequence_ and the _critical value sequence_ of \(f\), respectively. We note that this definition differs from those in [15, 17] owing to the reversed indexing order and the imposition of condition 2--we note that the proofs given in those papers also show that condition 2 holds (also see [21]). The principal partition sequence of submodular functions is known in the literature as the _principal lattice of partitions_ of submodular functions since there exists a lattice structure associated with the sequence of partitions. We choose to call it the principal partition sequence in this work since the sequence suffices for our purpose. For more on the principal lattice of partitions of submodular functions and its computation, we refer the reader to [2, 7, 8, 10, 13, 15, 16, 17, 19].
We now discuss the principal partition sequence based algorithm for submodular \(k\)-partition that was proposed by Narayanan, Roy, and Patkar [17]. This algorithm computes a principal partition sequence satisfying all conditions in Theorem 2.1. If the sequence contains a partition that has exactly \(k\) parts, then the algorithm returns this \(k\)-partition. Otherwise, the algorithm returns a \(k\)-partition obtained by refining the partition in the sequence that has the largest number of parts that is less than \(k\). The refinement is based on the partition in the sequence that has the fewest number of parts that is more than \(k\). The formal description of the refinement is given in Algorithm 1. Since the sequence \(\mathcal{P}_{1},\mathcal{P}_{2},\ldots,\mathcal{P}_{r}\) satisfying the conditions of Theorem 2.1 can be computed in polynomial time, Algorithm 1 can indeed be implemented to run in polynomial time. By design, the algorithm returns a \(k\)-partition. The remainder of this work will focus on analyzing the approximation factor of the algorithm.
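Since the pseudocode of Algorithm 1 is not reproduced in this excerpt, the following brute-force sketch (ours; exponential time, intended only for tiny ground sets) illustrates its first phase: it enumerates all partitions and recovers the partitions attaining the lower envelope \(g_{f}\), i.e., the lower convex hull of the points \((|\mathcal{P}|,f(\mathcal{P}))\). By Lemma 3.1 below, a recovered partition with exactly \(k\) parts is an optimal \(k\)-partition. The partitions recovered this way need not form the nested chain guaranteed by Theorem 2.1 (that requires the polynomial-time machinery cited above), and the refinement step of Algorithm 1 is omitted.

```python
def all_partitions(elems):
    """Enumerate all partitions of a list (Bell-number many: tiny inputs only)."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in all_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        yield part + [[first]]

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def envelope_partitions(V, f):
    """Partitions attaining g(b) = min_P f(P) - b|P| for some b: these lie on
    the lower convex hull of the points (|P|, f(P))."""
    best = {}                       # cheapest partition for each part count
    for P in all_partitions(sorted(V)):
        cost = sum(f(frozenset(p)) for p in P)
        if len(P) not in best or cost < best[len(P)][0]:
            best[len(P)] = (cost, P)
    hull = []
    for m, (cost, _) in sorted(best.items()):
        while len(hull) >= 2 and cross(hull[-2], hull[-1], (m, cost)) <= 0:
            hull.pop()
        hull.append((m, cost))
    return [best[m][1] for m, _ in hull]

# graph cut function of a weight-3 triangle with a pendant vertex attached
wedges = [(1, 2, 3.0), (2, 3, 3.0), (1, 3, 3.0), (3, 4, 1.0)]
cut = lambda S: sum(w for (u, v, w) in wedges if (u in S) != (v in S))
for P in envelope_partitions({1, 2, 3, 4}, cut):
    print(len(P), sorted(map(sorted, P)))   # part counts 1, 2, 4: no 3-partition
                                            # appears, so k = 3 needs refinement
```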
To construct examples that exhibit tight lower bounds on the approximation factor of Algorithm 1, we will need the following proposition that identifies a special case under which the principal partition sequence is unique and consists only of two partitions--namely, the partition into singletons and the partition that consists of only one part.
**Proposition 2.1**.: _Let \(f:2^{V}\to\mathbb{R}_{\geq 0}\) be a non-negative submodular function. Suppose that for every partition \(\mathcal{P}\neq\mathcal{Q},\{V\}\) where \(\mathcal{Q}:=\{\{v\}:v\in V\}\), the function \(f\) satisfies_
\[\frac{f(\mathcal{P})-f(V)}{|\mathcal{P}|-1}>\frac{f(\mathcal{Q})-f(V)}{|V|-1}.\]
_Then, the principal partition sequence of \(f\) is \(\{V\},\mathcal{Q}\)._
Proof.: Let \(n:=|V|\). It suffices to show that for every partition \(\mathcal{P}\) of \(V\) such that \(\mathcal{P}\neq\{V\},\mathcal{Q}\), we have that
\[f(\mathcal{P})-b|\mathcal{P}|>\min\{f(V)-b,f(\mathcal{Q})-b\cdot n\}\quad\forall b \in\mathbb{R} \tag{1}\]
This suffices since it ensures that no partitions other than \(\{V\}\) and \(\mathcal{Q}\) satisfy conditions 4 and 5 in Theorem 2.1. By the hypothesis, we have
\[\frac{f(\mathcal{P})-f(V)}{|\mathcal{P}|-1}>\frac{f(\mathcal{Q})-f(V)}{n-1},\]
which is equivalent to
\[f(\mathcal{P})-b^{\prime}|\mathcal{P}|>f(V)-b^{\prime}, \tag{2}\]
where \(b^{\prime}=(f(\mathcal{Q})-f(V))/(n-1)\) for every partition \(\mathcal{P}\neq\{V\},\mathcal{Q}\). Now, suppose inequality (1) fails for some partition \(\mathcal{P}\) and some \(b\in\mathbb{R}\), then
\[f(\mathcal{P})-b|\mathcal{P}| \leq f(V)-b\text{ and } \tag{3}\] \[f(\mathcal{P})-b|\mathcal{P}| \leq f(\mathcal{Q})-bn. \tag{4}\]
Consequently, we have that
\[(b-b^{\prime})|\mathcal{P}| =(f(\mathcal{P})-b^{\prime}|\mathcal{P}|)-(f(\mathcal{P})-b|\mathcal{P}|)\] \[>(f(V)-b^{\prime})-(f(\mathcal{P})-b|\mathcal{P}|)\qquad\text{(using inequality (2))}\] \[\geq(f(V)-b^{\prime})-(f(V)-b)=b-b^{\prime}\qquad\text{(using inequality (3))},\]
and hence \((b-b^{\prime})(|\mathcal{P}|-1)>0\). Since \(|\mathcal{P}|>1\) (as \(\mathcal{P}\neq\{V\}\)), this gives \(b>b^{\prime}\). On the other hand, the choice of \(b^{\prime}\) gives \(f(\mathcal{Q})-b^{\prime}n=f(V)-b^{\prime}\), so inequality (2) also yields \(f(\mathcal{P})-b^{\prime}|\mathcal{P}|>f(\mathcal{Q})-b^{\prime}n\). Combining this with inequality (4) in the same manner gives \((b-b^{\prime})|\mathcal{P}|>(b-b^{\prime})n\), i.e., \((b-b^{\prime})(|\mathcal{P}|-n)>0\). Since \(|\mathcal{P}|<n\) (as \(\mathcal{P}\neq\mathcal{Q}\)), this forces \(b<b^{\prime}\), a contradiction. Hence, inequality (1) holds for every partition \(\mathcal{P}\neq\{V\},\mathcal{Q}\) and every \(b\in\mathbb{R}\), which completes the proof.
## 3 Approximation factor analysis
In this section, we analyze the approximation factor of Algorithm 1 for the three subfamilies of submodular functions. We begin with two lemmas that hold in greater generality (Lemmas 3.1 and 3.2). We analyze the approximation factor of Algorithm 1 for monotone submodular functions in Section 3.1, for symmetric submodular functions in Section 3.2, and for posimodular submodular functions in Section 3.3. We conclude each subsection with a remark on the tightness of the approximation factor for the algorithm.
Our first lemma identifies a special case in which Algorithm 1 returns an optimum \(k\)-partition. This special case was also identified by Narayanan, Roy, and Patkar. We note that the following lemma holds for arbitrary submodular functions (which may not be non-negative).
**Lemma 3.1** ([17]).: _Let \(k\geq 2\) be an integer, \(f:2^{V}\to\mathbb{R}\) be a submodular function on a ground set \(V\), and \(\mathcal{P}_{1},\mathcal{P}_{2},\ldots,\mathcal{P}_{r}\) be a principal partition sequence of the submodular function \(f\) satisfying the conditions of Theorem 2.1. If there exists \(j\in[r]\) such that \(|\mathcal{P}_{j}|=k\), then \(\mathcal{P}_{j}\) is an optimal \(k\)-partition._
Proof.: Let \(\mathcal{P}^{*}\) be a \(k\)-partition of \(V\) that minimizes \(f(\mathcal{P}^{*})\) and let \(b_{j}\) be the value where \(g(b_{j})=g_{\mathcal{P}_{j}}(b_{j})\). Then,
\[f(\mathcal{P}^{*})-b_{j}\cdot k=g_{\mathcal{P}^{*}}(b_{j})\geq g(b_{j})=g_{ \mathcal{P}_{j}}(b_{j})=f(\mathcal{P}_{j})-b_{j}|\mathcal{P}_{j}|=f(\mathcal{ P}_{j})-b_{j}\cdot k,\]
and hence, \(f(\mathcal{P}^{*})\geq f(\mathcal{P}_{j})\). Therefore, \(\mathcal{P}_{j}\) is indeed an optimal \(k\)-partition.
In order to address the case where there is no \(j\in[r]\) such that \(|\mathcal{P}_{j}|=k\), we need the following lemma that shows two lower bounds on the optimum value. The first lower bound in the lemma holds for arbitrary submodular functions (which may not be non-negative) while the second lower bound holds for non-negative submodular functions.
**Lemma 3.2**.: _Let \(k\geq 2\) be an integer, \(f:2^{V}\to\mathbb{R}\) be a submodular function on a ground set \(V\), \(\mathcal{P}^{*}\) be a \(k\)-partition of \(V\) that minimizes \(f(\mathcal{P}^{*})\), and \(\mathcal{P}_{1},\mathcal{P}_{2},\ldots,\mathcal{P}_{r}\) be a principal partition sequence of the submodular function \(f\) satisfying the conditions of Theorem 2.1. Suppose \(|\mathcal{P}_{j}|\neq k\) for all \(j\in[r]\). Let \(\mathcal{P}_{i-1},\mathcal{P}_{i}\) be the partitions such that \(|\mathcal{P}_{i-1}|<k<|\mathcal{P}_{i}|\). Then,_
1. \(f(\mathcal{P}^{*})\geq\frac{|\mathcal{P}_{i}|-k}{|\mathcal{P}_{i}|-|\mathcal{ P}_{i-1}|}f(\mathcal{P}_{i-1})+\frac{k-|\mathcal{P}_{i-1}|}{|\mathcal{P}_{i}|-| \mathcal{P}_{i-1}|}f(\mathcal{P}_{i})\) _and_
2. \(f(\mathcal{P}^{*})\geq f(\mathcal{P}_{i-1})\) _if_ \(f\) _is non-negative._
Proof.: We prove the two lower bounds below.
1. Let \(b_{i-1}\) be the value such that \(g_{\mathcal{P}_{i-1}}(b_{i-1})=g_{\mathcal{P}_{i}}(b_{i-1})\). Then, we have \[f(\mathcal{P}_{i-1})-b_{i-1}|\mathcal{P}_{i-1}|=f(\mathcal{P}_{i})-b_{i-1}| \mathcal{P}_{i}|\implies b_{i-1}=\frac{f(\mathcal{P}_{i})-f(\mathcal{P}_{i-1} )}{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|}.\] (5) By condition 4 of Theorem 2.1, we also have \[f(\mathcal{P}^{*})-b_{i-1}\cdot k =g_{\mathcal{P}^{*}}(b_{i-1})\geq g(b_{i-1})=g_{\mathcal{P}_{i}}( b_{i-1})=f(\mathcal{P}_{i})-b_{i-1}|\mathcal{P}_{i}|\] \[\implies f(\mathcal{P}^{*})\geq f(\mathcal{P}_{i})+b_{i-1}(k-| \mathcal{P}_{i}|).\] (6) Combining (5) and (6), we get \[f(\mathcal{P}^{*}) \geq f(\mathcal{P}_{i})+\frac{f(\mathcal{P}_{i})-f(\mathcal{P}_{ i-1})}{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|}(k-|\mathcal{P}_{i}|)\] \[=\frac{|\mathcal{P}_{i}|-k}{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|} f(\mathcal{P}_{i-1})+\frac{k-|\mathcal{P}_{i-1}|}{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1 }|}f(\mathcal{P}_{i}).\]
2. Let \(P_{1}^{*},\ldots,P_{k}^{*}\) be the parts of \(\mathcal{P}^{*}\) (in arbitrary order). Let \(k^{\prime}:=|\mathcal{P}_{i-1}|\). We know that \(k^{\prime}<k\). Consider the \(k^{\prime}\)-partition \(\mathcal{Q}\) obtained as \(Q_{1}:=P_{1}^{*},Q_{2}:=P_{2}^{*},\ldots,Q_{k^{\prime}-1}:=P_{k^{\prime}-1}^{*},Q_{k^{\prime}}:=\cup_{j=k^{\prime}}^{k}P_{j}^{*}\). Then, \[f(\mathcal{P}^{*})=\sum_{i=1}^{k}f(P_{i}^{*})\geq\left(\sum_{i=1}^{k^{\prime}-1 }f(P_{i}^{*})\right)+f(\cup_{j=k^{\prime}}^{k}P_{j}^{*})=\sum_{i=1}^{k^{\prime} }f(Q_{i})=f(\mathcal{Q}).\]
The inequality above is due to submodularity and non-negativity. The partition \(\mathcal{Q}\) is a \(k^{\prime}\)-partition.
By Lemma 3.1, the partition \(\mathcal{P}_{i-1}\) is an optimal \(k^{\prime}\)-partition. Hence,
\[f(\mathcal{Q})\geq f(\mathcal{P}_{i-1}).\]
The above two inequalities together imply that \(f(\mathcal{P}^{*})\geq f(\mathcal{P}_{i-1})\).
### 3.1 Monotone submodular functions
In this section, we bound the approximation factor of Algorithm 1 for monotone submodular \(k\)-partitioning. The following is the main theorem of this section.
**Theorem 3.1**.: _The approximation factor of Algorithm 1 for non-negative monotone submodular \(k\)-partitioning is \(\frac{4}{3}-\frac{4}{9n+3}\), where \(n\) is the size of the ground set._
The asymptotic approximation factor of \(4/3\) achieved by Algorithm 1 is the best possible for non-negative monotone submodular \(k\)-partition: for every constant \(\epsilon>0\), there does not exist an algorithm that achieves a \((4/3-\epsilon)\)-approximation using a polynomial number of function evaluation queries [22]. We will also exhibit examples showing the tightness of the approximation factor for Algorithm 1 after proving the theorem. The proof of Theorem 3.1 follows from Lemma 3.1 and Lemma 3.3 shown below.
**Lemma 3.3**.: _Let \(k\geq 2\) be an integer, \(f:2^{V}\rightarrow\mathbb{R}_{\geq 0}\) be a non-negative monotone submodular function on a ground set \(V\) of size \(n\), \(\mathcal{P}^{*}\) be a \(k\)-partition of \(V\) that minimizes \(f(\mathcal{P}^{*})\), and \(\mathcal{P}_{1},\mathcal{P}_{2},\ldots,\mathcal{P}_{r}\) be a principal partition sequence of the submodular function \(f\) satisfying the conditions of Theorem 2.1. Suppose \(|\mathcal{P}_{j}|\neq k\) for all \(j\in[r]\). Then, the partition \(\mathcal{P}\) returned by Algorithm 1 satisfies_
\[f(\mathcal{P})\leq\left(\frac{4}{3}-\frac{4}{9n+3}\right)f(\mathcal{P}^{*}).\]
Proof.: Let \(\mathcal{P}_{i-1},\mathcal{P}_{i}\) be the partitions such that \(|\mathcal{P}_{i-1}|<k<|\mathcal{P}_{i}|\). Let \(S\) and \(\mathcal{P}^{\prime}=\{B_{1},B_{2},\ldots,B_{|\mathcal{P}^{\prime}|}\}\) be as in Algorithm 1.
Firstly, since \(\cup_{j=k-|\mathcal{P}_{i-1}|+1}^{|\mathcal{P}^{\prime}|}B_{j}\subseteq S\) and \(f\) is monotone, we have that
\[f\left(\cup_{j=k-|\mathcal{P}_{i-1}|+1}^{|\mathcal{P}^{\prime}|}B_{j}\right) \leq f(S).\]
Secondly, by our choice of \(B_{1},B_{2},\ldots,B_{k-|\mathcal{P}_{i-1}|}\), we know that
\[\sum_{j=1}^{k-|\mathcal{P}_{i-1}|}f(B_{j})\leq\frac{k-|\mathcal{P}_{i-1}|}{| \mathcal{P}^{\prime}|}f(\mathcal{P}^{\prime}).\]
Hence,
\[f(\mathcal{P}) =f(\mathcal{P}_{i-1})-f(S)+\sum_{j=1}^{k-|\mathcal{P}_{i-1}|}f(B_ {j})+f\left(\bigcup_{j=k-|\mathcal{P}_{i-1}|+1}^{|\mathcal{P}^{\prime}|}B_{j}\right)\] \[\leq f(\mathcal{P}_{i-1})+\sum_{j=1}^{k-|\mathcal{P}_{i-1}|}f(B_ {j})\] \[\leq f(\mathcal{P}_{i-1})+\frac{k-|\mathcal{P}_{i-1}|}{|\mathcal{ P}^{\prime}|}f(\mathcal{P}^{\prime})\] \[=f(\mathcal{P}_{i-1})+\frac{k-|\mathcal{P}_{i-1}|}{|\mathcal{P}_{i }|-|\mathcal{P}_{i-1}|+1}(f(\mathcal{P}_{i})+f(S)-f(\mathcal{P}_{i-1})),\]
where the last equality follows from the fact that \(f(\mathcal{P}_{i-1})-f(S)+f(\mathcal{P}^{\prime})=f(\mathcal{P}_{i})\) and \(|\mathcal{P}^{\prime}|=|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|+1\). Rearranging, we have that
\[f(\mathcal{P}) \leq\frac{f(\mathcal{P}_{i-1})}{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|+1}(|\mathcal{P}_{i}|-k+1)+\frac{f(\mathcal{P}_{i})}{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|+1}(k-|\mathcal{P}_{i-1}|)\] \[\qquad\qquad+\frac{f(S)}{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|+1}(k-|\mathcal{P}_{i-1}|)\] \[=\left(\frac{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|}{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|+1}\right)\left(\frac{f(\mathcal{P}_{i-1})}{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|}(|\mathcal{P}_{i}|-k+1)+\frac{f(\mathcal{P}_{i})+f(S)}{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|}(k-|\mathcal{P}_{i-1}|)\right)\] \[\leq\left(\frac{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|}{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|+1}\right)\left(f(\mathcal{P}^{*})+\frac{f(\mathcal{P}_{i-1})+f(S)(k-|\mathcal{P}_{i-1}|)}{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|}\right)\] \[\leq\left(\frac{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|}{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|+1}\right)\left(f(\mathcal{P}^{*})+\frac{f(\mathcal{P}_{i-1})}{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|}(1+k-|\mathcal{P}_{i-1}|)\right). \tag{7}\]
where the second inequality above is by Lemma 3.2(i) and the third inequality above is because \(f(S)\leq f(\mathcal{P}_{i-1})\). Inequality (7) implies that
\[f(\mathcal{P}) \leq\left(\frac{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|}{|\mathcal{ P}_{i}|-|\mathcal{P}_{i-1}|+1}\right)\left(f(\mathcal{P}^{*})+f(\mathcal{P}_{i-1})+ \frac{f(\mathcal{P}_{i-1})}{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|}(1+k-| \mathcal{P}_{i}|)\right)\] \[\leq\left(1-\frac{1}{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|+1} \right)\left(f(\mathcal{P}^{*})+f(\mathcal{P}_{i-1})\right)\quad\text{(since $k<| \mathcal{P}_{i}|$)}\] \[\leq 2f(\mathcal{P}^{*}),\]
where the last inequality is because \(f(\mathcal{P}_{i-1})\leq f(\mathcal{P}^{*})\) by Lemma 3.2(ii). The above analysis shows that the approximation factor is at most \(2\). We tighten the analysis now. As a consequence of the above inequality, we may assume that \(f(\mathcal{P}^{*})\neq 0\) because if \(f(\mathcal{P}^{*})=0\), then the returned \(k\)-partition \(\mathcal{P}\) also satisfies \(f(\mathcal{P})=0\) and thus, is optimal. Let \(c:=f(\mathcal{P}_{i-1})/f(\mathcal{P}^{*})\). By Lemma 3.2(ii), we have that \(f(\mathcal{P}_{i-1})\leq f(\mathcal{P}^{*})\) and hence, \(c\in[0,1]\). For convenience, we define \(A:=k-|\mathcal{P}_{i-1}|\) and \(B:=|\mathcal{P}_{i}|-k\) and note that \(A,B\geq 1\). Using this notation, we may rewrite inequality (7) as
\[f(\mathcal{P}) \leq\left(\frac{A+B}{A+B+1}\right)\left(f(\mathcal{P}^{*})+ \frac{1+A}{A+B}f(\mathcal{P}_{i-1})\right)\] \[=\left(\frac{A+B}{A+B+1}\right)\left(1+\frac{1+A}{A+B}\cdot c \right)f(\mathcal{P}^{*}). \tag{8}\]
By Lemma 3.2(i), we have
\[f(\mathcal{P}^{*})\geq\left(\frac{B}{A+B}\right)f(\mathcal{P}_{i-1})+\left( \frac{A}{A+B}\right)f(\mathcal{P}_{i})=\left(\frac{B}{A+B}\right)cf(\mathcal{P} ^{*})+\left(\frac{A}{A+B}\right)f(\mathcal{P}_{i}).\]
Rearranging, we have
\[f(\mathcal{P}_{i})\leq\left(1-\frac{B}{A+B}\cdot c\right)\left(\frac{A+B}{A} \right)f(\mathcal{P}^{*})=\left(\frac{A+B}{A}-\frac{B}{A}\cdot c\right)f( \mathcal{P}^{*}).\]
Since \(\mathcal{P}\) is obtained by coarsening \(\mathcal{P}_{i}\), we have \(f(\mathcal{P})\leq f(\mathcal{P}_{i})\) by submodularity and non-negativity of \(f\). This implies
\[f(\mathcal{P})\leq\left(\frac{A+B}{A}-\frac{B}{A}\cdot c\right)f(\mathcal{P}^{ *}). \tag{9}\]
Combining inequalities (8) and (9), we have
\[\frac{f(\mathcal{P})}{f(\mathcal{P}^{*})}\leq\max_{c\in[0,1]}\min\left\{ \left(\frac{A+B}{A+B+1}\right)\left(1+\frac{1+A}{A+B}\cdot c\right),\;\frac{A+B }{A}-\frac{B}{A}\cdot c\right\}. \tag{10}\]
Thus, in order to upper bound the approximation factor, it suffices to upper bound the right hand side of inequality (10). Since \(\left(\frac{A+B}{A+B+1}\right)\left(1+\frac{1+A}{A+B}\cdot c\right)\) and \(\frac{A+B}{A}-\frac{B}{A}\cdot c\) are both linear in \(c\), with the former increasing and the latter decreasing as a function of \(c\), the value
\[\max_{c\in\mathbb{R}}\min\left\{\left(\frac{A+B}{A+B+1}\right)\left(1+\frac{1+ A}{A+B}\cdot c\right),\;\frac{A+B}{A}-\frac{B}{A}\cdot c\right\}\]
is achieved when the two terms are equal. Setting \(\left(\frac{A+B}{A+B+1}\right)\left(1+\frac{1+A}{A+B}\cdot c^{*}\right)=\frac{ A+B}{A}-\frac{B}{A}\cdot c^{*}\) and solving for \(c^{*}\), we get
\[c^{*}=\frac{\frac{A+B}{A}-\frac{A+B}{A+B+1}}{\frac{1+A}{A+B+1}+\frac{B}{A}}=\frac{\frac{B}{A}+\frac{1}{A+B+1}}{\frac{1+A}{A+B+1}+\frac{B}{A}}.\]
Plugging \(c=c^{*}\) into \(\frac{A+B}{A}-\frac{B}{A}\cdot c\) yields
\[\frac{f(\mathcal{P})}{f(\mathcal{P}^{*})} \leq\max_{c\in[0,1]}\min\left\{\frac{A+B}{A+B+1}\left(1+\frac{1+A} {A+B}\cdot c\right),\;\frac{A+B}{A}-\frac{B}{A}\cdot c\right\}\] \[\leq\frac{A+B}{A}-\frac{B}{A}\cdot c^{*}\] \[=1+\frac{B}{A}(1-c^{*})\] \[=1+\frac{B}{A}\left(1-\frac{\frac{B}{A}+\frac{1}{A+B+1}}{\frac{ 1+A}{A+B+1}+\frac{B}{A}}\right)\] \[=1+\frac{\frac{B}{A+B+1}}{\frac{1+A}{A+B+1}+\frac{B}{A}}\] \[=1+\frac{AB}{A+A^{2}+AB+B^{2}+B}\] \[\leq 1+\frac{AB}{3AB+A+B}\quad\text{(since $A^{2}+B^{2}\geq 2AB $)}\] \[=\frac{4}{3}-\frac{1}{3}\cdot\frac{A+B}{3AB+A+B}\] \[=\frac{4}{3}-\frac{1}{3}\cdot\frac{A+B}{3AB/2+3AB/2+A+B}\] \[\leq\frac{4}{3}-\frac{1}{3}\cdot\frac{A+B}{3AB/2+3(A^{2}+B^{2})/ 4+A+B}\quad\text{(since $AB\leq\frac{A^{2}+B^{2}}{2}$)}\] \[=\frac{4}{3}-\frac{1}{3}\cdot\frac{A+B}{3(A+B)^{2}/4+A+B}\] \[=\frac{4}{3}-\frac{1}{3}\cdot\frac{1}{3(A+B)/4+1}\] \[\leq\frac{4}{3}-\frac{4}{9n+3}.\]
The last inequality above is because \(A+B=|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|\leq n-1\).
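The chain of inequalities above can be sanity-checked numerically. The sketch below maximizes the intermediate bound \(1+\frac{AB}{A+A^{2}+AB+B^{2}+B}\) over integers \(A,B\geq 1\) with \(A+B\leq n-1\) using exact rational arithmetic, and confirms it never exceeds \(\frac{4}{3}-\frac{4}{9n+3}\); equality holds for odd \(n\) at \(A=B=\frac{n-1}{2}\).

```python
from fractions import Fraction

def max_ratio_bound(n):
    """Maximize 1 + AB/(A + A^2 + AB + B^2 + B) over A, B >= 1, A + B <= n - 1."""
    best = Fraction(1)
    for A in range(1, n - 1):
        for B in range(1, n - A):
            best = max(best, 1 + Fraction(A * B, A + A * A + A * B + B * B + B))
    return best

for n in range(3, 15):
    assert max_ratio_bound(n) <= Fraction(4, 3) - Fraction(4, 9 * n + 3)
```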
**Remark 3.1**.: _The approximation factor of Algorithm 1 for non-negative monotone submodular functions is at least \(4/3-4/(9n+3)\). We show this for \(n=3\) using the following example: Let \(V=\{a,b,c\}\), \(k=2\), and \(f:2^{V}\rightarrow\mathbb{R}_{\geq 0}\) be defined by_
\[f(\emptyset)=0,\;f(\{a\})=1,\;f(\{b\})=f(\{c\})=1+\epsilon,\] \[f(\{a,b\})=f(\{a,c\})=\frac{3}{2}+\epsilon,\;f(\{b,c\})=f(V)=2+2\epsilon.\]
_Submodularity and monotonicity of \(f\) can be verified by considering all possible subsets. Moreover, the principal partition sequence of this instance is \(\{V\},\{\{a\},\{b\},\{c\}\}\). Thus, Algorithm 1 returns the \(2\)-partition
\(\{\{a\},\{b,c\}\}\), whose objective value is \(3+2\epsilon\). An optimum \(2\)-partition is \(\{\{c\},\{a,b\}\}\), whose objective value is \(5/2+2\epsilon\). Thus, the approximation factor is \(\frac{3+2\epsilon}{5/2+2\epsilon}\to\frac{6}{5}\) as \(\epsilon\to 0\). We note that for \(n=3\), the approximation factor guaranteed by Theorem 3.1 is \(\frac{4}{3}-\frac{4}{9n+3}=\frac{6}{5}\)._
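Claims of this kind are easy to verify mechanically. Below is a brute-force check of submodularity and monotonicity for the three-element instance above, with a small concrete value standing in for \(\epsilon\):

```python
from itertools import combinations

EPS = 1e-3  # small concrete stand-in for epsilon
V = "abc"
f = {frozenset(): 0.0, frozenset("a"): 1.0,
     frozenset("b"): 1 + EPS, frozenset("c"): 1 + EPS,
     frozenset("ab"): 1.5 + EPS, frozenset("ac"): 1.5 + EPS,
     frozenset("bc"): 2 + 2 * EPS, frozenset("abc"): 2 + 2 * EPS}

subsets = [frozenset(s) for r in range(len(V) + 1)
           for s in combinations(V, r)]

# submodularity: f(A) + f(B) >= f(A | B) + f(A & B) for all A, B
assert all(f[A] + f[B] >= f[A | B] + f[A & B] - 1e-12
           for A in subsets for B in subsets)
# monotonicity: A a subset of B implies f(A) <= f(B)
assert all(f[A] <= f[B] + 1e-12
           for A in subsets for B in subsets if A <= B)
```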
We conclude the section by showing that there exist monotone submodular functions for which the approximation factor of Algorithm 1 is at least \(4/3\) asymptotically (i.e., as \(n\to\infty\)).
**Lemma 3.4**.: _For every odd positive integer \(n\geq 5\), there exists a function \(k=k(n)\) (i.e., \(k\) is a function of \(n\)) and an instance of non-negative monotone submodular \(k\)-partition over an \(n\)-element ground set such that the approximation factor of Algorithm 1 on that instance is arbitrarily close to \(4/3-4/(3n+3)\)._
Proof.: Let \(n\geq 5\) be an arbitrary odd number, \(k=\frac{n+1}{2}\), and let \(V=\{v_{1},\ldots,v_{n}\}\) be the ground set. Moreover, let \(U:=\{v_{1},\ldots,v_{\frac{n-1}{2}}\}\) and \(D:=\{v_{\frac{n+1}{2}},\ldots,v_{n}\}\) so that \(V=U\uplus D\). Let \(g:2^{U}\to\mathbb{R}_{\geq 0}\) be a function over the ground set \(U\) defined by
\[g(S)=\begin{cases}\frac{1}{2}+\frac{1}{2}\cdot|S|&\text{ if }\ \emptyset\neq S\subseteq U,\\ 0&\text{ if }S=\emptyset.\end{cases}\]
and \(f:2^{V}\to\mathbb{R}_{\geq 0}\) be defined by
\[f(S):=\min\left\{g(S\cap U)+(1+\epsilon)|S\cap D|,\ \frac{n+1}{2}\right\}\ \forall\ S \subseteq V,\]
where \(\epsilon>0\) is infinitesimally small. The function \(f\) satisfies \(f(\emptyset)=0\), \(f(V)=\frac{n+1}{2}\), \(f(U)=\frac{n+1}{4}\), \(f(D)=\frac{n+1}{2}\), \(f(\{v\})=1\) for each \(v\in U\), and \(f(\{v\})=1+\epsilon\) for each \(v\in D\). We will use \(\mathcal{Q}\) to denote the partition of \(V\) into \(n\) singleton sets, and it follows that \(f(\mathcal{Q})=n+\frac{(n+1)\epsilon}{2}\). For convenience, we will write \(h(S):=g(S\cap U)+(1+\epsilon)|S\cap D|\) for all \(S\subseteq V\) so that \(f(S)=\min\{h(S),\frac{n+1}{2}\}\) for all \(S\subseteq V\) throughout the proof.
**Claim 3.1**.: _The function \(f\) is submodular and monotone._
Proof.: Let \(A,B\subseteq V\) be arbitrary. We notice that the function \(h\) is monotone, and
\[h(A)+h(B)\geq h(A\cap B)+h(A\cup B).\]
Without loss of generality, we may assume \(h(A)\geq h(B)\), and thus \(h(A\cup B)\geq h(A)\geq h(B)\geq h(A\cap B)\). We consider the following five cases:
1. \(h(A\cup B)\leq\frac{n+1}{2}\). \[\implies f(A)+f(B)=h(A)+h(B)\geq h(A\cap B)+h(A\cup B)=f(A\cap B)+f(A\cup B).\]
2. \(h(A)\leq\frac{n+1}{2}<h(A\cup B)\). \[\implies f(A)+f(B)=h(A)+h(B)\geq h(A\cap B)+h(A\cup B)>h(A\cap B)+\frac{n+1}{2} =f(A\cap B)+f(A\cup B).\]
3. \(h(B)\leq\frac{n+1}{2}<h(A)\). \[\implies f(A)+f(B)=\frac{n+1}{2}+h(B)\geq\frac{n+1}{2}+h(A\cap B)=f(A\cup B)+f(A \cap B).\]
4. \(h(A\cap B)\leq\frac{n+1}{2}<h(B)\). \[\implies f(A)+f(B)=\frac{n+1}{2}+\frac{n+1}{2}\geq\frac{n+1}{2}+h(A\cap B)=f(A \cup B)+f(A\cap B).\]
5. \(\frac{n+1}{2}<h(A\cap B)\). \[\implies f(A)+f(B)=\frac{n+1}{2}+\frac{n+1}{2}=f(A\cup B)+f(A\cap B).\]
Since exactly one of these five cases holds, we have proved the submodularity of \(f\). The monotonicity of \(f\) is implied by the monotonicity of \(h\).
**Claim 3.2**.: _For every partition \(\mathcal{P}\neq\mathcal{Q},\{V\}\), the function \(f\) satisfies_
\[\frac{f(\mathcal{P})-f(V)}{|\mathcal{P}|-1}>\frac{f(\mathcal{Q})-f(V)}{n-1}.\]
Proof.: First, we note that \(f(V)=\frac{n+1}{2}\) and \(\frac{f(\mathcal{Q})-f(V)}{n-1}=\frac{n+(n+1)\epsilon/2-(n+1)/2}{n-1}=\frac{1} {2}+\frac{(n+1)\epsilon}{2n-2}\), and thus the desired inequality is equivalent to
\[f(\mathcal{P})>\frac{n}{2}+\frac{1}{2}|\mathcal{P}|+\epsilon(|\mathcal{P}|-1)\frac{n+1}{2n-2}.\]
We note that there exists at most one part \(P\in\mathcal{P}\) such that \(h(P)\geq\frac{n+1}{2}\). To see this, assume that two distinct parts \(P,P^{\prime}\in\mathcal{P}\) satisfy \(h(P)\geq\frac{n+1}{2}\) and \(h(P^{\prime})\geq\frac{n+1}{2}\). This implies
\[n+1 \leq h(P)+h(P^{\prime})\leq 2\cdot\frac{1}{2}+\frac{1}{2}|(P \cup P^{\prime})\cap U|+(1+\epsilon)|(P\cup P^{\prime})\cap D|\] \[\leq 1+\frac{1}{2}\cdot\frac{n-1}{2}+(1+\epsilon)\frac{n+1}{2}= \frac{3n+1}{4}+1+\frac{(n+1)\epsilon}{2}<n+1,\]
because \(\epsilon\) is infinitesimal and \(n\geq 5\), yielding a contradiction. Therefore, we may consider two cases: \(\mathcal{P}\) containing no part \(P\) such that \(h(P)\geq\frac{n+1}{2}\), and \(\mathcal{P}\) containing exactly one part \(P\) such that \(h(P)\geq\frac{n+1}{2}\).
Suppose \(\mathcal{P}\) contains no part \(P\) such that \(h(P)\geq\frac{n+1}{2}\). We will use \(t\) to denote the number of parts of \(\mathcal{P}\) that intersect \(U\) non-trivially. Then, \(|\mathcal{P}|-t\) is the number of parts of \(\mathcal{P}\) that are contained in \(D\), and we have \(|\mathcal{P}|-t\leq|D|=\frac{n+1}{2}\). It follows that
\[f(\mathcal{P})-\frac{n}{2}-\frac{|\mathcal{P}|}{2}-\epsilon(| \mathcal{P}|-1)\frac{n+1}{2n-2} =\frac{1}{2}\cdot t+\frac{1}{2}\cdot\frac{n-1}{2}+(1+\epsilon)\frac {n+1}{2}\] \[\qquad\qquad\qquad-\frac{n}{2}-\frac{|\mathcal{P}|}{2}-\epsilon( |\mathcal{P}|-1)\frac{n+1}{2n-2}\] \[=\frac{n+1}{4}-\frac{|\mathcal{P}|-t}{2}+\epsilon\left(\frac{n+ 1}{2}-\frac{(|\mathcal{P}|-1)(n+1)}{2n-2}\right)\] \[\geq\frac{n+1}{4}-\frac{n+1}{4}+\epsilon\left(\frac{n+1}{2}- \frac{(|\mathcal{P}|-1)(n+1)}{2n-2}\right)\] \[\geq\epsilon\left(\frac{n+1}{2}-\frac{(n-2)(n+1)}{2n-2}\right) \ \ (\text{since }|\mathcal{P}|\leq n-1)\] \[=\epsilon\cdot\frac{n+1}{2n-2}\] \[>0.\]
Now, suppose \(\mathcal{P}\) contains exactly one part \(P\in\mathcal{P}\) such that \(h(P)\geq\frac{n+1}{2}\). Each other part \(P^{\prime}\neq P\) of \(\mathcal{P}\) satisfies \(f(P^{\prime})\geq 1\). It follows that
\[f(\mathcal{P})\geq\frac{n+1}{2}+1\cdot(|\mathcal{P}|-1)=\frac{n}{2}+|\mathcal{ P}|-\frac{1}{2}>\frac{n}{2}+\frac{1}{2}\cdot|\mathcal{P}|+\epsilon(|\mathcal{P}|-1 )\frac{n+1}{2n-2}\]
since \(|\mathcal{P}|\geq 2\) and \(\epsilon\) is arbitrarily small.
By Proposition 2.1, Algorithm 1 returns the partition \(\{\{v_{1}\},\ldots,\{v_{\frac{n-1}{2}}\},D\}\) (because \(\{v_{1}\},\ldots,\{v_{\frac{n-1}{2}}\}\) are the \(k-1\) singleton sets that minimize the \(f\) values), whose objective is \(\frac{n-1}{2}+\frac{n+1}{2}=n\). The partition \(\{\{v_{1},\ldots,v_{\frac{n+1}{2}}\},\{v_{\frac{n+3}{2}}\},\ldots,\{v_{n}\}\}\) has objective \(\frac{n+1}{4}+(1+\epsilon)+(1+\epsilon)\frac{n-1}{2}=\frac{3n+3}{4}+\frac{(n+1) \epsilon}{2}\). The approximation factor is at least
\[\frac{n}{\frac{3n+3}{4}+\frac{(n+1)\epsilon}{2}}\rightarrow\frac{4}{3}-\frac{ 4}{3n+3}\ (\text{as }\epsilon\to 0).\]
This completes the proof of Lemma 3.4.
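The construction above is easy to instantiate and inspect. The following sketch (with 0-indexed vertices standing in for \(v_{1},\ldots,v_{n}\)) builds \(f\) for a given odd \(n\) and compares the objective of the partition returned by Algorithm 1 with that of the alternative partition from the proof; the printed ratio approaches \(\frac{4}{3}-\frac{4}{3n+3}\) as \(\epsilon\to 0\).

```python
def lemma_3_4_instance(n, eps=1e-6):
    """The hard instance from Lemma 3.4 (n odd, n >= 5), 0-indexed vertices."""
    U = set(range((n - 1) // 2))            # stands in for v_1 .. v_{(n-1)/2}
    D = set(range((n - 1) // 2, n))         # stands in for v_{(n+1)/2} .. v_n
    g = lambda s: 0.0 if not s else 0.5 + 0.5 * len(s)
    f = lambda S: min(g(set(S) & U) + (1 + eps) * len(set(S) & D), (n + 1) / 2)
    return f, U, D

n = 9
f, U, D = lemma_3_4_instance(n)
obj = lambda partition: sum(f(part) for part in partition)
algo_output = [{u} for u in U] + [D]        # singletons of U, then all of D
alternative = [U | {min(D)}] + [{d} for d in sorted(D)[1:]]
print(obj(algo_output) / obj(alternative))  # -> 4/3 - 4/(3n+3) as eps -> 0
```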
### 3.2 Symmetric submodular functions
In this section, we bound the approximation factor of Algorithm 1 for symmetric submodular \(k\)-partitioning. The following is the main theorem of this section.
**Theorem 3.2**.: _The approximation factor of Algorithm 1 for non-negative symmetric submodular \(k\)-partitioning is \(2(1-1/n)\), where \(n\) is the size of the ground set._
The asymptotic approximation factor of \(2\) achieved by Algorithm 1 is the best possible for non-negative symmetric submodular \(k\)-partition: for every constant \(\epsilon>0\), there does not exist an algorithm that achieves a \((2-\epsilon)\)-approximation using a polynomial number of function evaluation queries [22]. We will remark on the tightness of the approximation factor for Algorithm 1 after proving the theorem. The proof of Theorem 3.2 follows from Lemma 3.1 and Lemma 3.5 shown below.
**Lemma 3.5**.: _Let \(k\geq 2\) be an integer, \(f:2^{V}\rightarrow\mathbb{R}_{\geq 0}\) be a non-negative symmetric submodular function on a ground set \(V\) of size \(n\), \(\mathcal{P}^{*}\) be a \(k\)-partition of \(V\) that minimizes \(f(\mathcal{P}^{*})\), and \(\mathcal{P}_{1},\mathcal{P}_{2},\ldots,\mathcal{P}_{r}\) be a principal partition sequence of the submodular function \(f\) satisfying the conditions of Theorem 2.1. Suppose \(|\mathcal{P}_{j}|\neq k\) for all \(j\in[r]\). Then the partition \(\mathcal{P}\) returned by Algorithm 1 satisfies_
\[f(\mathcal{P})\leq 2\left(1-\frac{1}{n}\right)f(\mathcal{P}^{*}).\]
Proof.: Let \(\mathcal{P}_{i-1},\mathcal{P}_{i}\) be the partitions such that \(|\mathcal{P}_{i-1}|<k<|\mathcal{P}_{i}|\). Let \(S\) and \(\mathcal{P}^{\prime}=\{B_{1},B_{2},\ldots,B_{|\mathcal{P}^{\prime}|}\}\) be as in Algorithm 1.
Firstly, we note that for every \(T\subseteq S\), symmetry, submodularity, and non-negativity of \(f\) together imply that
\[f(T)=f(V-T)=f((V-S)\cup(S-T))\leq f(V-S)+f(S-T)=f(S)+f(S-T). \tag{11}\]
Secondly, by our choice of \(B_{1},B_{2},\ldots,B_{k-|\mathcal{P}_{i-1}|}\), we know that
\[\sum_{j=1}^{k-|\mathcal{P}_{i-1}|}f(B_{j})\leq\frac{k-|\mathcal{P}_{i-1}|}{| \mathcal{P}^{\prime}|}f(\mathcal{P}^{\prime}). \tag{12}\]
Since \(\cup_{j=k-|\mathcal{P}_{i-1}|+1}^{|\mathcal{P}^{\prime}|}B_{j}\subseteq S\), we have that
\[f\left(\bigcup_{j=k-|\mathcal{P}_{i-1}|+1}^{|\mathcal{P}^{\prime }|}B_{j}\right) \leq f(S)+f\left(\bigcup_{j=1}^{k-|\mathcal{P}_{i-1}|}B_{j}\right) \quad\text{(by inequality \eqref{eq:11})}\] \[\leq f(S)+\sum_{j=1}^{k-|\mathcal{P}_{i-1}|}f(B_{j})\quad\text{( by submodularity and non-negativity)}\] \[\leq f(S)+\frac{k-|\mathcal{P}_{i-1}|}{|\mathcal{P}^{\prime}|}f( \mathcal{P}^{\prime}).\quad\text{(by inequality \eqref{eq:12})}\]
Therefore, we have
\[f(\mathcal{P}) =f(\mathcal{P}_{i-1})-f(S)+\sum_{j=1}^{k-|\mathcal{P}_{i-1}|}f(B_ {j})+f\left(\bigcup_{j=k-|\mathcal{P}_{i-1}|+1}^{|\mathcal{P}^{\prime}|}B_{j}\right)\] \[\leq f(\mathcal{P}_{i-1})-f(S)+\frac{k-|\mathcal{P}_{i-1}|}{| \mathcal{P}^{\prime}|}f(\mathcal{P}^{\prime})+\left(f(S)+\frac{k-|\mathcal{ P}_{i-1}|}{|\mathcal{P}^{\prime}|}f(\mathcal{P}^{\prime})\right)\] \[=f(\mathcal{P}_{i-1})+2\cdot\frac{k-|\mathcal{P}_{i-1}|}{| \mathcal{P}^{\prime}|}f(\mathcal{P}^{\prime})\] \[=f(\mathcal{P}_{i-1})+2\cdot\frac{k-|\mathcal{P}_{i-1}|}{| \mathcal{P}_{i}|-|\mathcal{P}_{i-1}|+1}(f(\mathcal{P}_{i})+f(S)-f(\mathcal{P} _{i-1})),\]
where the last equality follows from the fact that \(f(\mathcal{P}_{i-1})-f(S)+f(\mathcal{P}^{\prime})=f(\mathcal{P}_{i})\) and \(|\mathcal{P}^{\prime}|=|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|+1\). Rearranging, we get
\[f(\mathcal{P})\leq\frac{f(\mathcal{P}_{i-1})}{|\mathcal{P}_{i}| -|\mathcal{P}_{i-1}|+1}(|\mathcal{P}_{i}|+|\mathcal{P}_{i-1}|-2k+1)+\frac{f( \mathcal{P}_{i})}{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|+1}(2k-2|\mathcal{P}_{ i-1}|)\] \[+\frac{k-|\mathcal{P}_{i-1}|}{|\mathcal{P}_{i}|-|\mathcal{P}_{i- 1}|+1}\cdot 2f(S).\]
Now, we observe that \(2f(S)=f(S)+f(V-S)\leq f(\mathcal{P}_{i-1})\) by symmetry, submodularity, and non-negativity. Consequently, we have that
\[f(\mathcal{P}) \leq\frac{f(\mathcal{P}_{i-1})}{|\mathcal{P}_{i}|-|\mathcal{P}_{i -1}|+1}(|\mathcal{P}_{i}|+|\mathcal{P}_{i-1}|-2k+1)+\frac{f(\mathcal{P}_{i})}{ |\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|+1}(2k-2|\mathcal{P}_{i-1}|)\] \[\qquad\qquad\qquad+\frac{k-|\mathcal{P}_{i-1}|}{|\mathcal{P}_{i} |-|\mathcal{P}_{i-1}|+1}\cdot f(\mathcal{P}_{i-1})\] \[=\left(\frac{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|}{|\mathcal{P} _{i}|-|\mathcal{P}_{i-1}|+1}\right)\left(\frac{f(\mathcal{P}_{i-1})}{|\mathcal{ P}_{i}|-|\mathcal{P}_{i-1}|}(|\mathcal{P}_{i}|-k+1)+\frac{f(\mathcal{P}_{i})}{| \mathcal{P}_{i}|-|\mathcal{P}_{i-1}|}(2k-2|\mathcal{P}_{i-1}|)\right)\] \[\leq\left(\frac{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|}{|\mathcal{ P}_{i}|-|\mathcal{P}_{i-1}|+1}\right)\cdot 2f(\mathcal{P}^{*})\quad\text{(by Lemma \ref{lem:P}(i))}\] \[=\left(1-\frac{1}{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|+1}\right) \cdot 2f(\mathcal{P}^{*})\] \[\leq 2\left(1-\frac{1}{n}\right)f(\mathcal{P}^{*}),\]
where the second inequality is because \(1\leq|\mathcal{P}_{i}|-k\) and the last inequality is because \(|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|\leq n-1\).
**Remark 3.2**.: _In contrast to the greedy splitting and the Gomory-Hu tree based approaches, the principal partition sequence approach given in Algorithm 1 does not achieve an approximation factor of \(2-2/k\) for non-negative symmetric submodular functions: For the special case of graph \(k\)-cut, Proposition 5.2 of [21] provides a family of graph cut functions on which Algorithm 1 returns solutions whose approximation factor is arbitrarily close to \(2\)._
### 3.3 Posimodular submodular functions
In this section, we bound the approximation factor of Algorithm 1 for posimodular submodular \(k\)-partitioning. The following is the main theorem of this section.
**Theorem 3.3**.: _The approximation factor of Algorithm 1 for non-negative posimodular submodular \(k\)-partitioning is \(2(1-\frac{1}{n+1})\), where \(n\) is the size of the ground set._
The asymptotic approximation factor of \(2\) achieved by Algorithm 1 is the best possible for non-negative posimodular submodular \(k\)-partition: for every constant \(\epsilon>0\), there does not exist an algorithm that achieves a \((2-\epsilon)\)-approximation using a polynomial number of function evaluation queries [22]. The proof of Theorem 3.3 follows from Lemma 3.1 and Lemma 3.6 shown below.
**Lemma 3.6**.: _Let \(k\geq 2\) be an integer, \(f:2^{V}\rightarrow\mathbb{R}_{\geq 0}\) be a non-negative posimodular submodular function on a ground set \(V\) of size \(n\), \(\mathcal{P}^{*}\) be a \(k\)-partition of \(V\) that minimizes \(f(\mathcal{P}^{*})\), and \(\mathcal{P}_{1},\mathcal{P}_{2},\ldots,\mathcal{P}_{r}\) be a principal partition sequence of the submodular function \(f\) satisfying the conditions of Theorem 2.1. Suppose \(|\mathcal{P}_{j}|\neq k\) for all \(j\in[r]\). Then, the partition \(\mathcal{P}\) returned by Algorithm 1 satisfies_
\[f(\mathcal{P})\leq 2\left(1-\frac{1}{n+1}\right)f(\mathcal{P}^{*}).\]
Proof.: Let \(\mathcal{P}_{i-1},\mathcal{P}_{i}\) be the partitions such that \(|\mathcal{P}_{i-1}|<k<|\mathcal{P}_{i}|\). Let \(S\) and \(\mathcal{P}^{\prime}=\{B_{1},B_{2},\ldots,B_{|\mathcal{P}^{\prime}|}\}\) be as in Algorithm 1.
Firstly, we note that for every \(T\subseteq S\), non-negativity and posimodularity together imply that
\[f(T)\leq f(T)+f(\emptyset)\leq f(S)+f(S-T). \tag{13}\]
Secondly, by our choice of \(B_{1},B_{2},\ldots,B_{k-|\mathcal{P}_{i-1}|}\), we know that
\[\sum_{j=1}^{k-|\mathcal{P}_{i-1}|}f(B_{j})\leq\frac{k-|\mathcal{P}_{i-1}|}{| \mathcal{P}^{\prime}|}f(\mathcal{P}^{\prime}). \tag{14}\]
Since \(\cup_{j=k-|\mathcal{P}_{i-1}|+1}^{|\mathcal{P}^{\prime}|}B_{j}\subseteq S\), we have that
\[f\left(\bigcup_{j=k-|\mathcal{P}_{i-1}|+1}^{|\mathcal{P}^{\prime }|}B_{j}\right) \leq f(S)+f\left(\bigcup_{j=1}^{k-|\mathcal{P}_{i-1}|}B_{j}\right) \quad\text{(by inequality \eqref{eq:13})}\] \[\leq f(S)+\sum_{j=1}^{k-|\mathcal{P}_{i-1}|}f(B_{j})\quad\text{( by submodularity and non-negativity)}\] \[\leq f(S)+\frac{k-|\mathcal{P}_{i-1}|}{|\mathcal{P}^{\prime}|}f( \mathcal{P}^{\prime}).\quad\text{(by inequality \eqref{eq:14})}\]
Therefore, we have
\[f(\mathcal{P}) =f(\mathcal{P}_{i-1})-f(S)+\sum_{j=1}^{k-|\mathcal{P}_{i-1}|}f(B _{j})+f\left(\bigcup_{j=k-|\mathcal{P}_{i-1}|+1}^{|\mathcal{P}^{\prime}|}B_{j}\right)\] \[\leq f(\mathcal{P}_{i-1})-f(S)+2\sum_{j=1}^{k-|\mathcal{P}_{i-1}| }f(B_{j})+f(S)\] \[=f(\mathcal{P}_{i-1})+2\cdot\sum_{j=1}^{k-|\mathcal{P}_{i-1}|}f(B _{j})\] \[\leq f(\mathcal{P}_{i-1})+2\cdot\frac{k-|\mathcal{P}_{i-1}|}{| \mathcal{P}^{\prime}|}f(\mathcal{P}^{\prime})\] \[=f(\mathcal{P}_{i-1})+\frac{2k-2|\mathcal{P}_{i-1}|}{|\mathcal{P} _{i}|-|\mathcal{P}_{i-1}|+1}(f(\mathcal{P}_{i})+f(S)-f(\mathcal{P}_{i-1})),\]
where the last equality follows from the fact that \(f(\mathcal{P}_{i-1})-f(S)+f(\mathcal{P}^{\prime})=f(\mathcal{P}_{i})\) and \(|\mathcal{P}^{\prime}|=|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|+1\). Applying the fact that \(f(S)\leq f(\mathcal{P}_{i-1})\), we have
\[f(\mathcal{P}) \leq\left(\frac{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|}{|\mathcal{ P}_{i}|-|\mathcal{P}_{i-1}|+1}\right)\left(\frac{f(\mathcal{P}_{i-1})}{| \mathcal{P}_{i}|-|\mathcal{P}_{i-1}|}(|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|+1)\right)\] \[\qquad\qquad+\left(\frac{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|}{| \mathcal{P}_{i}|-|\mathcal{P}_{i-1}|+1}\right)\left(\frac{f(\mathcal{P}_{i})}{| \mathcal{P}_{i}|-|\mathcal{P}_{i-1}|}(2k-2|\mathcal{P}_{i-1}|)\right)\] \[=\left(\frac{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|}{|\mathcal{P}_ {i}|-|\mathcal{P}_{i-1}|+1}\right)\left(\frac{f(\mathcal{P}_{i-1})}{| \mathcal{P}_{i}|-|\mathcal{P}_{i-1}|}((2|\mathcal{P}_{i}|-2k)+(2k-|\mathcal{P} _{i-1}|-|\mathcal{P}_{i}|+1))\right)\] \[\qquad\qquad+\left(\frac{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|}{| \mathcal{P}_{i}|-|\mathcal{P}_{i-1}|+1}\right)\left(\frac{2f(\mathcal{P}_{i})}{| \mathcal{P}_{i}|-|\mathcal{P}_{i-1}|}(k-|\mathcal{P}_{i-1}|)\right)\] \[\leq\left(\frac{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|}{| \mathcal{P}_{i}|-|\mathcal{P}_{i-1}|+1}\right)\left(2f(\mathcal{P}^{*})+\frac{f (\mathcal{P}_{i-1})}{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|}(2k-|\mathcal{P}_{i -1}|-|\mathcal{P}_{i}|+1)\right), \tag{15}\]
where the last inequality is by Lemma 3.2(i). Inequality (15) implies that
\[f(\mathcal{P}) \leq\left(\frac{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|}{|\mathcal{P} _{i}|-|\mathcal{P}_{i-1}|+1}\right)\left(2f(\mathcal{P}^{*})+f(\mathcal{P}_{i-1 })+\frac{f(\mathcal{P}_{i-1})}{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|}(2k-2| \mathcal{P}_{i}|+1)\right)\] \[\leq\left(1-\frac{1}{|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|+1} \right)\left(2f(\mathcal{P}^{*})+f(\mathcal{P}_{i-1})\right)\quad\text{(since $k<| \mathcal{P}_{i}|$)}\] \[\leq 3f(\mathcal{P}^{*}),\]
where the last inequality is because \(f(\mathcal{P}_{i-1})\leq f(\mathcal{P}^{*})\) by Lemma 3.2(ii). The above analysis already shows that the approximation factor is at most \(3\). We tighten the factor now. As a consequence of the above inequality, we may assume that \(f(\mathcal{P}^{*})\neq 0\) because if \(f(\mathcal{P}^{*})=0\), then the returned \(k\)-partition \(\mathcal{P}\) also satisfies \(f(\mathcal{P})=0\) and thus, is optimal. Let \(c:=f(\mathcal{P}_{i-1})/f(\mathcal{P}^{*})\). By Lemma 3.2(ii), we have that \(f(\mathcal{P}_{i-1})\leq f(\mathcal{P}^{*})\) and hence, \(c\in[0,1]\). For convenience, we define \(A:=k-|\mathcal{P}_{i-1}|\) and \(B:=|\mathcal{P}_{i}|-k\) and note that \(A,B\geq 1\). Using these notations, we may rewrite inequality (15) as
\[f(\mathcal{P}) \leq\left(\frac{A+B}{A+B+1}\right)\left(2f(\mathcal{P}^{*})+ \frac{A-B+1}{A+B}f(\mathcal{P}_{i-1})\right)\] \[=\left(\frac{A+B}{A+B+1}\right)\left(2+\frac{A-B+1}{A+B}\cdot c \right)f(\mathcal{P}^{*}). \tag{16}\]
By Lemma 3.2(i), we have
\[f(\mathcal{P}^{*})\geq\left(\frac{B}{A+B}\right)f(\mathcal{P}_{i-1})+\left( \frac{A}{A+B}\right)f(\mathcal{P}_{i})=\left(\frac{B}{A+B}\right)cf(\mathcal{ P}^{*})+\left(\frac{A}{A+B}\right)f(\mathcal{P}_{i}).\]
Rearranging, we have
\[f(\mathcal{P}_{i})\leq\left(1-\frac{B}{A+B}\cdot c\right)\left(\frac{A+B}{A} \right)f(\mathcal{P}^{*})=\left(\frac{A+B}{A}-\frac{B}{A}\cdot c\right)f( \mathcal{P}^{*}).\]
Since \(\mathcal{P}\) is obtained by coarsening \(\mathcal{P}_{i}\), we have \(f(\mathcal{P})\leq f(\mathcal{P}_{i})\) by submodularity and non-negativity of \(f\). Hence,
\[f(\mathcal{P})\leq\left(\frac{A+B}{A}-\frac{B}{A}\cdot c\right)f(\mathcal{P}^{ *}). \tag{17}\]
Combining inequalities (16) and (17), we have
\[\frac{f(\mathcal{P})}{f(\mathcal{P}^{*})}\leq\max_{c\in[0,1]}\min\left\{\left( \frac{A+B}{A+B+1}\right)\left(2+\frac{A-B+1}{A+B}\cdot c\right),\ \frac{A+B}{A}-\frac{B}{A}\cdot c\right\}. \tag{18}\]
Thus, in order to upper bound the approximation factor, it suffices to upper bound the right hand side of inequality (18). Both terms \(\left(\frac{A+B}{A+B+1}\right)\left(2+\frac{A-B+1}{A+B}\cdot c\right)\) and \(\frac{A+B}{A}-\frac{B}{A}\cdot c\) are linear in \(c\), and the latter is decreasing as a function of \(c\). Next, we consider the two cases: \(A-B+1\leq 0\) and \(A-B+1>0\).
Suppose \(A-B+1\leq 0\). Then the term \(\left(\frac{A+B}{A+B+1}\right)\left(2+\frac{A-B+1}{A+B}\cdot c\right)\) is linear and non-increasing as a function of \(c\). The maximum
\[\max_{c\in[0,1]}\min\left\{\left(\frac{A+B}{A+B+1}\right)\left(2+\frac{A-B+1} {A+B}\cdot c\right),\ \frac{A+B}{A}-\frac{B}{A}\cdot c\right\}\]
is achieved at \(c=0\). Thus, we have
\[\frac{f(\mathcal{P})}{f(\mathcal{P}^{*})} \leq\min\left\{\frac{A+B}{A+B+1}\cdot 2,\ \frac{A+B}{A}\right\}\] \[\leq\frac{A+B}{A+B+1}\cdot 2\]
\[=2\left(1-\frac{1}{A+B+1}\right)\] \[\leq 2\left(1-\frac{1}{n}\right), \tag{19}\]
where the last inequality follows from the fact that \(A+B=|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|\leq n-1\).
Now, we consider the case \(A-B+1>0\). The term \(\left(\frac{A+B}{A+B+1}\right)\left(2+\frac{A-B+1}{A+B}\cdot c\right)\) is linear and increasing as a function of \(c\), and thus the maximum
\[\max_{c\in\mathbb{R}}\min\left\{\left(\frac{A+B}{A+B+1}\right)\left(2+\frac{A- B+1}{A+B}\cdot c\right),\;\frac{A+B}{A}-\frac{B}{A}\cdot c\right\}\]
is achieved when the two terms are equal. Setting \(\left(\frac{A+B}{A+B+1}\right)\left(2+\frac{A-B+1}{A+B}\cdot c^{*}\right)= \frac{A+B}{A}-\frac{B}{A}\cdot c^{*}\) and solving for \(c^{*}\), we get
\[c^{*}=\frac{\frac{A+B}{A}-2\cdot\frac{A+B}{A+B+1}}{\frac{A-B+1}{A+B+1}+\frac{ B}{A}}.\]
Plugging \(c=c^{*}\) into \(\frac{A+B}{A}-\frac{B}{A}\cdot c\), we have
\[\frac{f(\mathcal{P})}{f(\mathcal{P}^{*})} \leq\max_{c\in[0,1]}\min\left\{\left(\frac{A+B}{A+B+1}\right) \left(2+\frac{A-B+1}{A+B}\cdot c\right),\;\frac{A+B}{A}-\frac{B}{A}\cdot c\right\}\] \[\leq\frac{A+B}{A}-\frac{B}{A}\cdot c^{*}\] \[=1+\frac{B}{A}(1-c^{*})\] \[=1+\frac{B}{A}\left(1-\frac{\frac{A+B}{A}-2\cdot\frac{A+B}{A+B+1 }}{\frac{A-B+1}{A+B+1}+\frac{B}{A}}\right)\] \[=1+\frac{\frac{2B}{A+B+1}}{\frac{A-B+1}{A+B+1}+\frac{B}{A}}\] \[=1+\frac{2AB}{A^{2}+A+B^{2}+B}\] \[\leq 1+\frac{2AB}{2AB+A+B}\quad(\text{since }A^{2}+B^{2}\geq 2AB)\] \[=2-\frac{A+B}{2AB+A+B}\] \[=2-\frac{A+B}{AB+AB+A+B}\] \[\leq 2-\frac{A+B}{AB+(A^{2}+B^{2})/2+A+B}\quad(\text{since }AB\leq \frac{A^{2}+B^{2}}{2})\] \[=2-\frac{A+B}{(A+B)^{2}/2+A+B}\] \[=2-\frac{1}{(A+B)/2+1}\] \[\leq 2-\frac{2}{n+1}.\]
The last inequality above is because \(A+B=|\mathcal{P}_{i}|-|\mathcal{P}_{i-1}|\leq n-1\).
**Remark 3.3**.: _The approximation factor of Algorithm 1 for non-negative posimodular submodular functions is at least \(2-2/(n+1)\). We show this for \(n=3,k=2\) using the following example: Let \(V=\{a,b,c\}\), \(k=2\), and \(f:2^{V}\rightarrow\mathbb{R}_{\geq 0}\) be defined by_
\[f(\emptyset)=0,\;f(\{a\})=f(\{b\})=1,\;f(\{c\})=1+\epsilon,\]
\[f(\{a,b\})=1+\epsilon,\;f(\{b,c\})=f(\{a,c\})=2,\;f(\{a,b,c\})=1+\epsilon.\]
_Submodularity and posimodularity of \(f\) can be verified by considering all possible subsets. Moreover, the principal partition sequence of this instance is \(\{V\},\{\{a\},\{b\},\{c\}\}\). Thus, the algorithm returns the \(2\)-partition \(\{\{a\},\{b,c\}\}\), while the optimum \(2\)-partition is \(\{\{c\},\{a,b\}\}\). Thus, the approximation factor approaches \(3/2\) as \(\epsilon\to 0\). We note that for \(n=3\), the approximation factor guaranteed by Theorem 3.3 is \(2-2/(n+1)=3/2\)._
## 4 Lower bound for arbitrary submodular functions
In this section, we present an instance of submodular \(k\)-partition where Algorithm 1 achieves an approximation factor of \(\Omega(n/k)\). We emphasize that the submodular function in our instance is not symmetric/monotone/posimodular.
Let \(V=\{v_{0},v_{1},\ldots,v_{n-1}\}\) be the ground set. We define a digraph \(D=(V,E(D))\) and a hypergraph \(H=(V,E(H))\) on the same vertex set \(V\) as follows (see Figure 1):
\[E(D) =\{v_{0}v_{i}:i\in[n-1]\}\text{ and }\] \[E(H) =\{\{v_{1},v_{2},\ldots,v_{n-1}\}\}.\]
For every subset \(S\subseteq V\), we will use \(d_{D}^{in}(S)\) to denote the number of arcs in \(D\) whose tails are in \(\bar{S}\) and heads are in \(S\). We will use \(d_{H}(S)\) to denote the number of hyperedges in \(H\) that have at least one vertex in \(S\) and one vertex in \(\bar{S}\). Next, we define a set function \(f:2^{V}\rightarrow\mathbb{R}_{\geq 0}\) by
\[f(S):=a\cdot d_{D}^{in}(S)+d_{H}(S)\quad\forall S\subseteq V,\]
where \(a\gg 1\) is a large constant. We note that \(f\) is submodular because it is a positive linear combination of two submodular functions (and it is not monotone/symmetric/posimodular).
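Evaluating \(f\) directly is straightforward. The sketch below implements the instance (vertex \(0\) standing in for \(v_{0}\)) and, anticipating Claim 4.2 below, compares the objective of the partition that Algorithm 1 returns with that of the cheap \(k\)-partition used in its proof:

```python
def instance_f(n, a=1000.0):
    """f(S) = a * d_D^in(S) + d_H(S) for the digraph/hypergraph instance."""
    arcs = [(0, i) for i in range(1, n)]   # D: arcs from v_0 to every v_i
    hyperedge = set(range(1, n))           # H: single hyperedge {v_1..v_{n-1}}
    def f(S):
        S = set(S)
        d_in = sum(1 for (tail, head) in arcs if tail not in S and head in S)
        d_h = int(bool(hyperedge & S) and bool(hyperedge - S))
        return a * d_in + d_h
    return f

n, k, a = 8, 3, 1000.0
f = instance_f(n, a)
obj = lambda partition: sum(f(part) for part in partition)
returned = [{0}] + [{i} for i in range(1, k - 1)] + [set(range(k - 1, n))]
cheap = [{i} for i in range(1, k)] + [{0} | set(range(k, n))]
print(obj(returned), obj(cheap))  # roughly a(n-1) versus (1+a)(k-1)+1
```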
**Claim 4.1**.: _The principal partition sequence of \(f\) is \(\{V\},\{\{v_{i}\}:i\in\{0,1,\ldots,n-1\}\}\)._
Proof.: For convenience, we will use \(\mathcal{Q}\) to denote the partition of \(V\) into singletons. By Proposition 2.1 and the fact that \(f(V)=0\), it suffices to prove that for every partition \(\mathcal{P}\) of \(V\) such that \(\mathcal{P}\) is not \(\mathcal{Q}\) or \(\{V\}\), we have that
\[\frac{f(\mathcal{P})}{|\mathcal{P}|-1}>\frac{f(\mathcal{Q})}{n-1}. \tag{20}\]
Let \(P_{0}\in\mathcal{P}\) be the part that contains \(v_{0}\). Then, we have that \(f(P_{0})\geq 1\) if \(P_{0}\neq\{v_{0}\}\) and \(f(P_{0})=0\) otherwise. For each part \(P\in\mathcal{P}\) that does not contain \(v_{0}\), we have that \(f(P)\geq 1+a\) if \(|P|=1\) and \(f(P)\geq 2+a\) if \(|P|\geq 2\). Since \(\mathcal{P}\neq\mathcal{Q}\), we have that either \(P_{0}\neq\{v_{0}\}\) or at least one of the parts \(P\in\mathcal{P}\setminus\{P_{0}\}\) has size \(|P|\geq 2\). Thus, \(f(\mathcal{P})=\sum_{P\in\mathcal{P}}f(P)\geq(1+a)(|\mathcal{P}|-1)+1\). Moreover, we have \(f(\mathcal{Q})=(1+a)(n-1)\) and hence
\[\frac{f(\mathcal{P})}{|\mathcal{P}|-1}\geq\frac{(1+a)(|\mathcal{P}|-1)+1}{| \mathcal{P}|-1}=1+a+\frac{1}{|\mathcal{P}|-1}>1+a=\frac{(1+a)(n-1)}{n-1}=\frac{ f(\mathcal{Q})}{n-1}.\]
Figure 1: Example in Section 4. The arcs belong to the digraph \(D\) and the hyperedge \(\{v_{1},\ldots,v_{n-1}\}\) belongs to the hypergraph \(H\).
This proves inequality (20).
**Claim 4.2**.: _The approximation factor of Algorithm 1 on input \((f,k)\) is \(\Omega(n/k)\)._
Proof.: We note that \(f(\{v_{0}\})=0\) and \(f(\{v_{i}\})=1+a\) for all \(i\in[n-1]\). By Claim 4.1, on input \((f,k)\), Algorithm 1 returns a partition \(\mathcal{P}\) consisting of the \(k-1\) singleton parts that minimize \(f\) among all singleton sets, together with the complement of the union of these \(k-1\) singleton parts. Therefore, the returned partition \(\mathcal{P}\) contains \(\{v_{0}\}\) as a part and thus
\[f(\mathcal{P})=\sum_{P\in\mathcal{P}:P\neq\{v_{0}\}}f(P)\geq f(V-\{v_{0}\})=a (n-1).\]
The inequality follows from submodularity of the function \(f\) (together with \(f(\emptyset)=0\)). Consider the \(k\)-partition \(\{\{v_{1}\},\{v_{2}\},\ldots,\{v_{k-1}\},V-\{v_{1},\ldots,v_{k-1}\}\}\), which has objective \((1+a)(k-1)+1\). This implies that the optimum \(k\)-partition \(\mathcal{P}^{*}\) satisfies \(f(\mathcal{P}^{*})\leq(1+a)(k-1)+1\). Thus, the approximation factor of the solution returned by Algorithm 1 is
\[\frac{f(\mathcal{P})}{f(\mathcal{P}^{*})}\geq\frac{a(n-1)}{(1+a)(k-1)+1} \rightarrow\frac{n-1}{k-1}\text{ as }a\rightarrow\infty.\]
## 5 Conclusion
The principal partition sequence of submodular functions was shown to exist by Narayanan [15]. The principal partition sequence of submodular functions is known in the literature as the _principal lattice of partitions_ of submodular functions, since there exists a lattice structure associated with the sequence of partitions [2, 7, 8, 10, 13, 16, 19]. We chose to call it the principal partition sequence in this work since the sequence suffices for our purpose. Narayanan, Roy, and Patkar [17] used the principal partition sequence to design an algorithm for submodular \(k\)-partition. They analyzed the approximation factor of their algorithm for certain subfamilies of submodular functions that arise from hypergraphs. In this work, we investigated the approximation factor of their algorithm for three broad subfamilies of submodular functions, namely monotone, symmetric, and posimodular submodular functions. Our results show that the principal partition sequence based algorithm achieves the best possible asymptotic approximation factor for all three of these subfamilies. A novelty of our contributions is the improvement in the approximability of monotone submodular \(k\)-partition from \(2\) to \(4/3\), thus matching the inapproximability threshold. It would be interesting to pin down the approximability of special cases of monotone submodular \(k\)-partition, e.g., matroid \(k\)-partition and coverage \(k\)-partition, which are interesting in their own right since they capture several natural partitioning problems.
Acknowledgements.Karthekeyan Chandrasekaran would like to thank Chandra Chekuri for asking about the approximation factor of the principal partition sequence based approach for symmetric submodular \(k\)-partition.
2307.08152 | The Potential and Pitfalls of using a Large Language Model such as ChatGPT or GPT-4 as a Clinical Assistant | Jingqing Zhang, Kai Sun, Akshay Jagadeesh, Mahta Ghahfarokhi, Deepa Gupta, Ashok Gupta, Vibhor Gupta, Yike Guo | 2023-07-16T21:19:47Z | http://arxiv.org/abs/2307.08152v1

The Potential and Pitfalls of using a Large Language Model such as ChatGPT or GPT-4 as a Clinical Assistant
## Abstract
Recent studies have demonstrated promising performance of ChatGPT and GPT-4 on several medical domain tasks. However, none have assessed its performance using a large-scale real-world electronic health record database, nor have evaluated its utility in providing clinical diagnostic assistance for patients across a full range of disease presentation. We performed two analyses using ChatGPT and GPT-4, one to identify patients with specific medical diagnoses using a real-world large electronic health record database and the other, in providing diagnostic assistance to healthcare workers in the prospective evaluation of hypothetical patients. Our results show that GPT-4 across disease classification tasks with chain of thought and few-shot prompting can achieve performance as high as 96% F1 scores. For patient assessment, GPT-4 can accurately diagnose three out of four times. However, there were mentions of factually incorrect statements, overlooking crucial medical findings, recommendations for unnecessary investigations and overtreatment. These issues coupled with privacy concerns, make these models currently inadequate for real world clinical use. However, limited data and time needed for prompt engineering in comparison to configuration of conventional machine learning workflows highlight their potential for scalability across healthcare applications.
## Introduction
Large language models (LLMs) play a fundamental role in natural language processing (NLP) and have revolutionised the development of artificial intelligence (AI) systems, as well as the dynamics between AI and human interaction[1]. BERT (Bidirectional Encoder Representations from Transformers)[2], PEGASUS[3], T5 (Text-to-Text Transfer Transformer)[4], and the GPT
(Generative Pre-trained Transformer) models[5] have significantly advanced the field. ChatGPT (GPT-3.5-turbo) is a state-of-the-art LLM developed by OpenAI, surpassing previous models in various benchmarks excelling in tasks such as text generation, language translation, and summarization[6].
In the medical domain, recent publications have demonstrated promising performance of ChatGPT on several tasks, including consultation[7, 8], support in clinical workflows such as providing decision support[9], generation of patient notes[10] and discharge summaries[11], simplification of patient radiology reports[12], medical education applications[13, 14], and several other applications[15]. However, the majority of these studies only provided generic comments about the potential applications of ChatGPT, with only a handful of exemplar use-cases of ChatGPT in specific clinical scenarios, which do not provide a clear picture of ChatGPT's capability[15]. Few studies have provided thorough qualitative and quantitative evaluation of ChatGPT's responses to United States Medical Licensing Exam (USMLE) questions [16, 17], or other specialist physician generated ones[18]. However, none have assessed its performance on identification of large target patient cohorts using a large-scale real-world electronic health record database, nor evaluated its utility in providing clinical assistance for patient evaluation across the full range of a disease (from pre-disease states, to typical and atypical presentations of frank disease, and disease complications). GPT-4, a recent (released March 2023) successor of ChatGPT, exhibits human-level performance on various professional and academic benchmarks[19], and surpasses its predecessor in its performance on a suite of medical benchmark datasets[20].
This study aims at evaluating the strengths and limitations of using ChatGPT and GPT-4, as potential clinical assistants to aid human healthcare professionals. We achieved this with two related but distinct objectives. First, we explore their potential utility in the identification of a large number of suitable patients either for targeted inclusion into prospective research studies, or for the analysis of their pre-existing electronic health record (EHR) data. We do so by evaluating ChatGPT/GPT-4 model performance in classification of patients diagnosed with specific medical conditions including three highly prevalent diseases, namely, Chronic Obstructive Pulmonary Disease (COPD), Chronic Kidney Disease (CKD), and infections by Herpes Simplex Virus (HSV), one rare disease, Primary Biliary Cirrhosis (PBC), and one relatively hard to diagnose disease, cancer cachexia. For this task, we employed a subset of (physician) gold labelled data from the publicly available Medical Information Mart for Intensive Care (MIMIC-III) dataset[21]. To the best of the author's knowledge this is the first study to assess the performance of ChatGPT/GPT-4 in the identification of target patients from unstructured EHR data. Second, using COPD as a case study, we systematically evaluate model utility in providing clinical assistance to patient evaluation. We created a dataset of 31 COPD (and other closely related) case scenarios, sourcing them from previously published case reports[22], British Medical Journal (BMJ) exemplar cases[23], and fictitious cases based on information provided in the Global Initiative for Chronic Obstructive Lung Disease (GOLD) guidelines[24]. We then defined an eight-step clinical care pathway relevant to a physician general practice setting and framed 400 unique questions based on the curated case scenarios that would aid in patient assessment at each step of the pathway. Model responses to these questions were evaluated by licensed physicians using a scoring schema. Our results showed that the performance of GPT-4 across
disease classification tasks was good (\(\geq\) 75% F1 scores), consistently matching or out-performing the baseline two step 'feature extraction + prediction/rules' models. For patient assessment, GPT-4 was accurate in arriving at a diagnosis three out of four times.
## Results
### Comparison of model performance in binary disease classification
For a sample of EHRs (admissions) from MIMIC-III, we created training and testing sets with physician-assigned gold labels. For each disease, gold sets included 200 EHRs (380 for cancer cachexia), with similar proportions of positive and control cases across the gold training and testing sets (Table 1, Methods).
Overall, the performance of GPT-4 across binary disease classification was \(\geq\) 75% (F1 score), with similar or slightly better performance compared to the corresponding disease-specific 'Extraction + Rules/Prediction' models (Table 1). GPT-4 performance was the lowest for the HSV use-case, with an F1 score of 74.70% (an absolute difference of 17.25 percentage points below the corresponding 'Extraction + Rules' model), and the gain over the baseline was the highest for the PBC use-case (an absolute difference of 3.75 percentage points above the corresponding 'Extraction + Rules' model). The relatively lower performance of the GPT-4 model for HSV classification can be attributed to a lower sensitivity of this model compared to the corresponding 'Extraction + Rules' models. This was because patient records with mentions of HSV disease only in clinical notes other than the discharge summary were 'missed out' by the GPT-4 model, as, owing to token limits, we were limited to only discharge summary notes as input for the GPT-4 model. This was in contrast to all the relevant patient notes used as input for the 'Extraction + Rules' models. For four of the five diseases, GPT-4 demonstrated superior performance compared to ChatGPT in terms of F1 scores, exhibiting an absolute increase of up to 10%. For the cancer cachexia use-case, ChatGPT F1 scores appeared to be on par with the 'Extraction + Rules/Prediction' models and higher than GPT-4 (nearly 4% absolute difference). However, upon further examination it appears that the ChatGPT model was more likely to classify patients as positive even when truly negative (predicting over 90% of the testing set as positive), and thus had a near perfect recall (96%); the relatively high precision (72%) appears to be an artefact owing to the similar prevalence of cancer cachexia in our testing set (69%).
Table 1: Evaluation scores obtained by the extraction with rules/prediction models, ChatGPT, and GPT-4 on the gold standard testing set across the five diseases of interest.

| Disease | Model | Precision | Recall | F1 Score |
| --- | --- | --- | --- | --- |
| COPD | Extraction + Rules | 95.45% | 100% | 97.67% |
| COPD | Extraction + Prediction | 96.77% | 95.24% | 96% |
| COPD | ChatGPT | 84% | 94% | 89% |
| COPD | GPT-4 | 98.30% | 93.7% | 96% |
| CKD | Extraction + Rules | 98.25% | 82.35% | 89.60% |
| CKD | Extraction + Prediction | 98.33% | 86.76% | **92.19%** |
| CKD | ChatGPT | 77% | 83% | 80% |
| CKD | GPT-4 | 98.18% | 79.41% | 87.80% |
| PBC | Extraction + Rules | 79.17% | 86.36% | 82.61% |
| PBC | Extraction + Prediction | 70% | 95.45% | 80.77% |
| PBC | ChatGPT | 84.21% | 72.73% | 78.05% |
| PBC | GPT-4 | 86.36% | 86.36% | **86.36%** |
| HSV | Extraction + Rules | 90.91% | 93.02% | **91.95%** |
| HSV | Extraction + Prediction | 68.42% | 90.70% | 78% |
| HSV | ChatGPT | 48% | 95% | 64% |
| HSV | GPT-4 | 79.49% | 70.45% | **74.70%** |
| Cancer cachexia | Extraction + Rules | 98.55% | 41.98% | 58.87% |
All reported ChatGPT/GPT-4 predictions were generated with the temperature set to 0, which is recommended for nudging deterministic responses from GPT models. All ChatGPT/GPT-4 models used an elaborate clinical guideline in the prompt.
For the three diseases with the best performance (COPD, CKD, and PBC) we provide sensitivity statistics for key model settings and disease classification performance (Table 2). When using the same input clinical guideline for the same task, GPT-4 could score 10 - 18 percentage points higher than ChatGPT in absolute F1. For a given model, simply improving the level of detail in the guideline using best prompting practices, such as breaking down the process of assigning a class into subtasks (i.e., chain-of-thought) and providing examples for each class (few-shot), can improve classification performance by up to nearly 30% (absolute increase in F1). GPT-4 with an elaborate clinical guideline had the best classification performance (clinical guidelines in prompts in supplementary S1). Temperature, another key model setting, did not appear to have an effect on the overall disease classification performance (when ensembling repeat predictions using the statistical mode). However, lower temperatures increased the reliability over repeat predictions (reliability statistics in supplementary S2).
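As an illustration of this setup, the following is a minimal sketch using the 2023-era openai Python package (the pre-1.0 `ChatCompletion` interface); the guideline text is a placeholder, and parsing the model's answer by searching for the word "positive" is an assumption rather than the exact protocol used here.

```python
import statistics
import openai  # pre-1.0 openai-python interface, current when this work was done

openai.api_key = "YOUR_API_KEY"  # placeholder

GUIDELINE = "..."  # elaborate clinical guideline with chain-of-thought steps
                   # and few-shot examples per class (see supplementary S1)

def classify_note(note_text, n_repeats=3):
    """Classify one discharge summary; ensemble repeats by statistical mode."""
    votes = []
    for _ in range(n_repeats):
        response = openai.ChatCompletion.create(
            model="gpt-4",
            temperature=0,  # nudges near-deterministic responses
            messages=[
                {"role": "system", "content": GUIDELINE},
                {"role": "user", "content": note_text},
            ],
        )
        answer = response["choices"][0]["message"]["content"]
        votes.append("positive" in answer.lower())
    return statistics.mode(votes)
```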
We explored the utility of ChatGPT in providing clinical assistance (refer to supplementary S3) at every step of the generic eight-step clinical care pathway (Figure 2). Our results show that there is some variability in the scores assigned by different human evaluators when using x-point Likert scales for assessment. Overall, the performance of ChatGPT in providing clinical assistance across the eight steps was good but varied greatly between the domains assessed, achieving a perfect score of 3 (mean across evaluators) in 229 (57.25%) questions in scientific correctness, 299 (74.75%) in the comprehension, retrieval, and reasoning domain, 109 (27.25%) in the content domain, and 329 (82.25%) in the bias domain. There was also wide variation in the performance across the different steps assessed, with better performance in the earlier steps (steps 1 through 5) compared to the latter (step 6 onwards). Across all steps, ChatGPT responses show several instances of missing out on pertinent medical information, making factually inaccurate statements, and, infrequently, fabricating content and presenting opinions as facts. Asking ChatGPT for investigation recommendations (step 4) or pharmacological management recommendations (step 7) can yield recommendations considered superfluous for the given patient scenario.
For a subset of 25 questions pertaining to step 5, using a binary marking schema, we evaluated whether ChatGPT and GPT-4 could accurately produce all the new diagnoses in a patient, given the clinical features. GPT-4 performed much better than ChatGPT, with 19 (76%) responses scored as correct compared to 13 (52%). GPT-4 responses that were scored as correct included the full primary diagnosis along with key likely complications and any other likely diagnosis unrelated to the patient's primary complaints (such as a BP measurement showing hypertension). Among the six wrong responses by GPT-4, three diagnoses were likely wrong because the terminology changes in COPD disease definitions (particularly regarding Pre-COPD and PRISm) are likely to have happened after the GPT-4 training cut-off date. The other three wrong responses did correctly mention all the new diagnoses; however, they additionally mentioned incorrect diagnoses as highly likely and worth investigating.
### Interpreting the GPT-4 decision making process
For interpretability of the GPT-4 predicted disease class, the input prompt instructed the model to provide a rationale for the assigned prediction (examples in supplementary S4). Qualitatively, it appears that for correct predictions (true positives and true negatives), the rationales provided for assigning a particular disease class were usually scientifically correct and exhaustive in context with minimal bias, demonstrating appropriate task comprehension, correct information retrieval, and good adherence to the provided clinical guideline. Additionally, in a handful of cases the model provided scientifically accurate (non-obvious) insights into disease classification extracted from the input patient EHR, based on information not explicitly mentioned in the clinical guideline. However, for incorrect predictions (false positives and false negatives), rationales sometimes demonstrated poor comprehension of the clinical guidelines and also showed evidence of fabrication, commonly referred to as LLM hallucination, e.g., stating that a patient's medical record explicitly mentions a specific diagnosis when in fact it does not.
Among the clinical case scenarios for diagnosis, the GPT-4 model demonstrated a strong understanding of the given patient information and generated rationales that were coherent and well-supported. This indicates that GPT-4 was able to provide explanations that were consistent with the medical context and mostly aligned with scientific accuracy, without displaying any apparent biases.
## Discussion
Identifying target patients from unstructured EHR data is a relatively complex task, as unstructured clinical narratives in EHR data are typically several pages long, with a lot of variability in how information is documented and recorded. Here the model needs to discern the nuances of natural language and medical parlance, and to interpret context, in order to accurately identify patients with diseases of interest. The method currently most commonly used for identifying target patients relies on medical coding systems such as the International Classification of Diseases (ICD). However, the literature has highlighted several flaws in the real-world usage of ICD for this task [25], and the authors' previous work has demonstrated that classical machine learning (ML) workflows employing 'extraction with rules/prediction' approaches have better performance [26]. Thus, in this study, we compared the performance of ChatGPT/GPT-4 only against classical ML workflows. We found GPT-4 with few-shot prompting to be rather impressive, with near similar or better performance compared to the corresponding disease-specific ML models.
In classical ML workflows, SHAP values, among others, are often used to quantify the contribution of input features to the decision-making process in classification tasks, in terms of a numerical measure of impact [27]. The presence of features clinically relevant to the characterisation of the particular disease of interest fosters model interpretability, engenders trust in the model's output, and contributes to the explainability of artificial intelligence (AI). For an illustration of the SHAP values of the top features contributing to disease classification, refer to the authors' previously published work [26]. In contrast, in the context of LLMs such as ChatGPT/GPT-4, one can merely prompt the model to generate a natural language explanation outlining the rationale for predicting a particular class with respect to adherence to the provided input clinical guideline. Our results highlight that ChatGPT/GPT-4 rationales for correct predictions are scientifically accurate, comprehensive, unbiased, and demonstrate an understanding of the task and the input guidelines. However, for incorrect predictions, the rationales are unsatisfactory, showing inconsistencies, incorrect retrieval or fabrication of information, and a lack of accurate understanding of the diseases of interest. Though SHAP values have been shown to have limitations and challenges, in terms of assumptions of feature independence, linearity between features and output, data distribution, and several others [28], they do appear to be a mathematically more objective measure than the natural language rationales provided by LLMs.
In providing assistance in the prospective evaluation of patients, our results indicate that, though impressive, ChatGPT/GPT-4 models in their current state are inadequate for deployment in real-world clinical assistance. ChatGPT responses contained incorrect statements, overlooked crucial medical findings, and recommended excessive clinical investigations that would result in resource wastage and potentially harmful overtreatment. GPT-4 was incorrect for one out of four diagnostic questions. The process of human adjudication, although essential, is labour-intensive and susceptible to errors, variations, and biases. Therefore, in future studies, it is imperative to explore word network analyses and other unbiased quantitative NLP metrics to comprehensively investigate the quality and intricacies of AI-generated free-text outputs.
### Constraints of using ChatGPT/GPT-4 for healthcare applications
For classical ML workflows, one can perform threshold modification to increase precision at the cost of recall, or vice versa, based on the requirements of the clinical application[29]. For example, public health officials may choose higher recall for "screening test" applications, whereas practising clinicians may require higher precision for "confirmatory test" applications. Such threshold modifications are essential to tailor algorithmic utility to specific clinical settings, but they are not feasible for LLM predictions from GPT models. Furthermore, token limits, which do not apply to classical ML models, may prove prohibitive for LLMs (restricting the number of patient notes provided as input), albeit this may change in the future given mass adoption. Using GPT models for healthcare applications also raises concerns regarding data privacy, primarily due to the transmission of data to external servers hosted by commercial organisations like OpenAI. Mitigating this data privacy issue entails deploying and operating a local LLM within a secure network environment; however, the infrastructure and running costs can prove restrictive, and there remains uncertainty about its performance in comparison to a widely utilised and extensively trained model like ChatGPT/GPT-4.
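As a minimal illustration of the threshold modification available in classical ML workflows (and unavailable for GPT class predictions), the following Python sketch sweeps the decision threshold over predicted positive-class probabilities; the probability and label arrays are toy values standing in for a fitted classifier's `predict_proba` output:

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

# probs stands in for clf.predict_proba(X_test)[:, 1] of a fitted classifier
probs  = np.array([0.9, 0.8, 0.6, 0.4, 0.3, 0.2])
y_test = np.array([1,   1,   0,   1,   0,   0])
for thr in (0.3, 0.5, 0.7):              # low threshold favours recall ("screening"),
    pred = (probs >= thr).astype(int)    # high threshold favours precision ("confirmatory")
    print(thr, precision_score(y_test, pred), recall_score(y_test, pred))
```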
## Conclusion
In this study, we sought to explore the utility of ChatGPT and GPT-4 as potential clinical assistants by evaluating their performance across two related but distinct tasks. First, we evaluated their performance in the binary classification task of identifying patients with specific diseases of interest (COPD, CKD, PBC, HSV infection, and cancer cachexia) from real-world EHR data. Second, using COPD as a case study, we evaluated model responses in providing clinical assistance to healthcare workers in prospective patient evaluation. Across disease classification tasks, GPT-4 with chain-of-thought and few-shot prompting achieved F1 scores as high as 96%. For patient assessment, GPT-4 arrived at the correct diagnosis three out of four times. In their current state, these models are inadequate for deployment in real-world clinical assistance, as their responses included factually incorrect statements, overlooked crucial medical findings, and recommended unnecessary clinical investigations and overtreatment. Though there may be privacy concerns regarding the use of commercial LLMs like GPT, the limited data and time needed to configure ideal prompting strategies, in comparison to conventional ML workflows, highlight their huge potential for scalability in healthcare applications.
## Methods
We evaluated the performance of ChatGPT and GPT-4 against classical ML workflows including feature extraction with rules/prediction models in binary disease classification tasks for 5 diseases of interest: COPD, CKD, PBC, infection with HSV, and cancer cachexia. An overview of the study methodology is provided in Figure 1.
Figure 1: Overview of the study methodology for the patient identification tasks using MIMIC-III
MIMIC-III: Medical Information Mart for Intensive Care, EHRs: Electronic Health Records; * applicable only for the prediction models, and not for rule-based models, ** for the few admissions that lacked a discharge summary, the next most relevant patient note (say physician, nursing, or respiratory notes) were used.
### Dataset: MIMIC-III
In this study we utilised the freely accessible MIMIC-III database[21], which collates extensive de-identified clinical data, both structured (tabular demographics, chart and lab values) and unstructured (free-text clinical notes such as discharge summaries and physician notes), for 46,520 distinct patients (corresponding to a total of 58,976 admissions) admitted to the Intensive Care Units (ICU) of the Beth Israel Deaconess Medical Center, Boston between 2001 and 2012. We performed a random 50% split of all MIMIC-III admissions into training and testing sets, ensuring all admissions for a given patient belonged to either training or testing, to avoid data leakage.
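A minimal Python sketch of such a patient-grouped split, using scikit-learn's GroupShuffleSplit on a toy stand-in for the admissions table (MIMIC-III identifies patients by subject_id and admissions by hadm_id); this is illustrative rather than the exact splitting code used in the study:

```python
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

# toy stand-in for the MIMIC-III admissions table
admissions = pd.DataFrame({"hadm_id":    [1, 2, 3, 4, 5, 6],
                           "subject_id": [10, 10, 11, 12, 12, 13]})
gss = GroupShuffleSplit(n_splits=1, test_size=0.5, random_state=0)
train_idx, test_idx = next(gss.split(admissions, groups=admissions["subject_id"]))
train, test = admissions.iloc[train_idx], admissions.iloc[test_idx]
# all admissions of a given patient land on the same side of the split, avoiding leakage
```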
To create the gold standard training and testing subsets (Table 3), we followed a sampling strategy to select EHRs for admissions to be manually reviewed and assigned a three-category gold standard label by licensed physicians, according to predetermined clinical criteria for each of the 5 diseases of interest. These criteria were developed based on existing medical literature. Initially, we developed an extraction-with-rules-based binary classifier to identify EHRs containing any mentions of clinical features with positive expression types (affirmed, historical, possible, or advancing) related to each of the 5 diseases of interest, extracted from the clinical notes. This classifier was run on both the training and testing sets, labelling each EHR. We then selected EHRs randomly and proportionally from the predicted positive and negative classes in both the training and testing sets. Two clinical experts independently evaluated each EHR and assigned a score ranging from 1 to 3, based on clinical guidelines: EHRs without a diagnosis of any of the 5 diseases (score 1), patients at risk of the disease or those having pre-disease symptoms (score 2), and patients with a confirmed diagnosis of the disease (score 3). Disagreements in scoring were resolved through consensus with a third clinician. The scores were then binarised, with positive labels for EHRs with scores 2 or 3, and control labels for EHRs with score 1.
| Disease | Total | Gold training set: positive | Gold training set: control | Gold testing set: positive | Gold testing set: control |
|---|---|---|---|---|---|
| COPD | 200 | 62 | 38 | 63 | 37 |
| CKD | 200 | 59 | 41 | 68 | 32 |
| PBC | 200 | 24 | 76 | 22 | 78 |

Table 3: Summary statistics of the cases used for disease classification.
### Dataset: Clinical Case Scenarios
We curated clinical vignettes from previously published case reports [22] and British Medical Journal (BMJ) COPD exemplar cases [23], and also constructed COPD cases based on information provided in the Global Initiative for Chronic Obstructive Lung Disease (GOLD) 2021 guidelines [24]. We chose the 2021 guidelines over the latest 2023 guidelines to accommodate the fact that the knowledge cut-off date for the ChatGPT model is September 2021. Case scenarios included typical and atypical presentations of a mix of new and follow-up COPD cases, COPD exacerbation cases, Pre-COPD and Preserved Ratio Impaired Spirometry (PRISm) cases, and other similarly presenting lung conditions such as asthma and lung cancer. All case scenarios were clinically validated by licensed physicians. We developed 31 unique case scenarios of patients with COPD, patients at risk of COPD (Pre-COPD, PRISm), and other closely related respiratory conditions (Table 4). Further, we constructed and clinically validated a multi-step generic flowchart for the assessment of a patient presenting with a set of complaints to a physician's general practice (Figure 2). Finally, based on the curated scenarios, we built a dataset (provided in supplementary S5) with 400 unique questions, each aimed at patient assessment at a particular step of the clinical care pathway.
| Disease | Presentation | Details | Number of case scenarios |
|---|---|---|---|
| COPD | New diagnosis | Presenting across varying symptom severities across GOLD groups A, B, C, and D, with varying disease complications and comorbidities | 12 |
| COPD | Previously diagnosed (follow-up) | Symptomatic (poorly controlled COPD) | 3 |
| COPD | Previously diagnosed (follow-up) | Typical exacerbation without presence of infection (with and without peripheral eosinophilia) | 2 |
| COPD | Previously diagnosed (follow-up) | With bacterial / viral / atypical pneumonia | 3 |

Table 4: Clinical case scenarios used to develop questions for ChatGPT/GPT-4 evaluation
Step 1 includes the necessary inquiries regarding the patient's medical history and specific questions to explore their smoking history in terms of pack-years. Step 2 involves providing anticipated findings on respiratory and cardiovascular system examinations based on the patient's history. In Step 4, targeted clinical investigations are to be recommended based on the patient's medical history and physical examination results. Additionally, if a differential diagnosis or the most likely diagnosis is provided, specific clinical investigations are to be suggested to confirm or rule out each possibility. Clinical interpretation of chest X-ray, chest CT scan, and spirometry tests based solely on investigation reports is also assessed in this step. Step 5 focuses on the diagnosis, where the most likely diagnosis is to be determined based on the patient's history, examination, and investigation reports. A rationale is to be provided, explaining the patient features used to reach the diagnosis. Step 6 entails evaluating the patient following a confirmed diagnosis, which involves determining the COPD GOLD severity stage based on FEV1 values from the spirometry report. Dyspnea grade, COPD Assessment Test (CAT) score, COPD exacerbation risk, and GOLD group (A, B, C, or D) are also to be assessed, with accompanying justifications. Step 7 addresses the management stage, where lifestyle and pharmacological management plans are recommended based on the patient's current medication, vaccination, and other therapies, as well as information obtained from previous steps. Long-term oxygen therapy and adult vaccinations are discussed, with rationales to be provided for each recommendation. Step 8 involves determining the recommended frequency of follow-up based on the patient's diagnosis and medical record. Additionally, if a patient presents with no complaints during follow-up, recommended clinical investigations are to be outlined along with justifications.

Figure 2: Eight-step flowchart illustrating the evaluation of a patient presenting with a set of complaints in a physician's general practice.
### Baseline Classifiers: Extraction with Rules/Prediction
We used a workflow to identify patients with specific diseases from EHR data as described by Zhang et al. in 2022[26]. Briefly, it uses unstructured textual data and structured data for each EHR. It leverages a phenotyping NLP algorithm (based on clinical BERT)[30] to capture explicit mentions of clinical concepts (say, 'hypertension'), their synonyms ('high blood pressure'), abbreviations ('HTN'), numeric values ('BP > 140/90 mm Hg'), and contextual synonyms (e.g., 'a rise in blood pressure', 'blood pressure above normal range'), annotating clinical concepts in unstructured text by mapping each concept to the Human Phenotype Ontology (HPO). Concepts beyond the scope of the HPO ontology are captured by the Medical Concept Annotation Tool[31] and mapped to the SNOMED-CT (Systematised Nomenclature of Medicine - Clinical Terms) ontology. Additional concepts are identified and annotated using regular expression techniques. The annotated concepts are then assigned an appropriate expression type (e.g., 'Affirmed', 'Negation', 'Possible') by a context-aware expression classification algorithm.
The extracted features (along with the structured data, for the prediction models only) form the input to the binary classifier models. The rule-based models use a rule that identifies an EHR as positive if it contains any of the relevant extracted clinical concepts for a given disease with a positive expression type. For the prediction models, we explored five different machine learning models, namely Random Forest, Gradient Boosting, Logistic Regression, Support Vector Machine, and Multi-Layer Perceptron, to build binary classifiers for the diseases of interest. The best performing model was identified by optimising the F1 score, with feature selection, classifier-type selection, and hyperparameter optimisation performed using an automated ML framework[26].
### GPT model configuration
For the patient identification task using MIMIC-III, the input prompt to ChatGPT/GPT-4 included the unstructured textual data from each EHR's discharge summary only (truncated to 3800 tokens), along with an abridged disease-specific (baseline) clinical guideline (clinical guidelines in prompts in supplementary S1) and an instruction to assign a 0-1 score along with a rationale. Given the current token limits of ChatGPT/GPT-4, we could not employ any clinical notes apart from discharge summaries. Specific disease use-cases required the assignment of gold standard labels based on relevant clinical notes (discharge summary along with physician, nursing, respiratory, and/or radiology notes). While we supplied all these pertinent notes as input to the (baseline) extraction with rules/prediction models, due to token limitations we could only provide a discharge summary (or the next most relevant clinical note in the few cases where a discharge summary was not available) to ChatGPT/GPT-4. Consequently, the performance of the GPT models was inherently limited for EHRs in which crucial information for identifying a patient with a particular disease (say, a diagnosis) resided in a note that was not included in the model's input. For patient evaluation across each step of the clinical care pathway, the specific question served as the input prompt to the model.
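A sketch of this prompt construction in Python follows, assuming the legacy (pre-v1) openai client and the tiktoken library for token-level truncation; the exact prompt wording and helper are illustrative, and the study's actual guidelines are given in supplementary S1:

```python
import openai    # legacy (pre-v1) client interface, assumed here for illustration
import tiktoken

def classify_ehr(guideline, discharge_summary, model="gpt-4", temperature=0):
    """Assemble the classification prompt and return the model's response text."""
    enc = tiktoken.encoding_for_model(model)
    note = enc.decode(enc.encode(discharge_summary)[:3800])  # truncate to 3800 tokens
    prompt = (guideline
              + "\n\nPatient discharge summary:\n" + note
              + "\n\nAssign a score of 0 or 1 and provide a rationale.")
    resp = openai.ChatCompletion.create(
        model=model,
        temperature=temperature,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["choices"][0]["message"]["content"]
```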
### Sensitivity Analysis
### Choice of GPT model:
GPT-4 is the latest LLM (released March 14, 2023), said to supersede its predecessor ChatGPT (GPT-3.5-turbo) in advanced reasoning capabilities. To assess this, we compared their performance across the disease use-cases and the questions that evaluated step 5 of the patient clinical care pathway.
### Level of detail in the clinical guideline provided in the input prompt to ChatGPT/GPT-4:
Given that ChatGPT/GPT-4 uses the input prompt as the basis for generating a response, the quality and detail of the prompt can significantly influence the response generated by the model. Detailed prompts with specific keywords, facts, or examples are thought to generate more specific and accurate responses, while vague or ambiguous prompts may produce poorer responses. To assess the influence of the level of detail in the input prompt on ChatGPT/GPT-4 model responses, using the COPD, CKD, and PBC use-cases, we evaluated the model scores using both a baseline and an elaborate clinical guideline provided in the input prompt. The baseline clinical guideline was a type of zero-shot prompting, as it contained no examples. The elaborate clinical guideline prompt was a type of few-shot prompting, as it contained a few examples (2 to 3) of an excerpt of a hypothetical medical record and the expected score, for each of the disease classes. These examples were contrived and not based on any actual medical record in the MIMIC-III dataset. Further, the elaborate guideline was structured into a series of sequential sub-tasks (chain-of-thought prompting) to be performed.
### Reliability in ChatGPT responses:
In the context of GPT models, temperature is a hyperparameter that controls the randomness or creativity of the generated text. A higher temperature results in more diverse and unpredictable outputs, while a lower temperature leads to more reliable and consistent outputs. To assess the diversity in the score predictions for a given discharge summary and clinical guideline, using the COPD disease use-case, we checked score predictions at a temperature of 0 against temperatures of 0.2 and 1. To describe the reliability of ChatGPT's score predictions, for each disease use-case we report the number and proportion of EHRs that received two or more unique predictions on repeat testing (5 times), given the same clinical guideline and instruction in the prompt, when using the model's default temperature of 1.
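A sketch of the repeat-prediction ensembling in Python, reusing the hypothetical classify_ehr wrapper sketched in the previous section; the score-extraction helper is a crude illustrative stand-in:

```python
import re
from collections import Counter

def extract_score(response_text):
    """Illustrative helper: pull the first standalone 0/1 from the response."""
    m = re.search(r"\b[01]\b", response_text)
    return int(m.group()) if m else None

def ensembled_prediction(guideline, discharge_summary, n_repeats=5, temperature=1):
    scores = [extract_score(classify_ehr(guideline, discharge_summary,
                                         temperature=temperature))
              for _ in range(n_repeats)]
    unstable = len(set(scores)) > 1      # counted in the reliability statistics
    return Counter(scores).most_common(1)[0][0], unstable
```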
### Evaluation
The outputs from the extraction with rules/prediction models and the ChatGPT/GPT-4 binary disease class predictions were evaluated against clinician-assigned gold standard labels by computing the standard metrics used in classification tasks: precision, recall, and F1 scores.
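These metrics can be computed with scikit-learn; a toy example:

```python
from sklearn.metrics import precision_recall_fscore_support

y_true = [1, 0, 1, 1, 0]   # clinician-assigned gold standard labels (toy values)
y_pred = [1, 0, 1, 0, 0]   # binary disease class predictions (toy values)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary")
print(f"P={precision:.2f}  R={recall:.2f}  F1={f1:.2f}")
```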
ChatGPT responses to each question were scored independently by three licensed physicians using an evaluation framework (Table 5). Our framework was a simplified version of one previously published for evaluating LLM generative outputs[32]. Each of the four domains evaluated could be scored on a Likert scale of 1-3. Each response was scored, and the scores were aggregated using the mean to produce a final score for each domain. Additionally, we assigned a binary correct or wrong label based on the answer to the most likely diagnosis question assessed in step 5 of the care pathway. Responses were marked as correct if the full primary diagnosis (along with relevant complications) and any secondary diagnoses were mentioned.
| Domain | Score | Criteria |
|---|---|---|
| Scientific correctness | 1 | Overall information provided is scientifically incorrect |
| Scientific correctness | 2 | Overall information provided is scientifically correct; however, it may have minor scientific inaccuracies or information may not be up to date with the latest available society guidelines |
| Scientific correctness | 3 | Overall information provided is scientifically correct, without any inaccuracies, up to date with the latest available society guidelines |
| Comprehension (understanding), Retrieval (whether it retrieves the required information), and Reasoning (whether the provided rationale is appropriate) | 1 | 2 or more of comprehension, retrieval, or reasoning are wrong |
| Comprehension, Retrieval, and Reasoning | 2 | Any 1 of comprehension, retrieval, or reasoning is wrong |
| Comprehension, Retrieval, and Reasoning | 3 | All of comprehension, retrieval, and reasoning are correct |
| Content | 1 | Presence of highly inappropriate, harmful, or incorrect content, or absence of highly relevant content |
| Content | 2 | No inappropriate, harmful, or incorrect content; however, it may be missing some (less important but still) relevant content |
| Content | 3 | No inappropriate or incorrect content; all relevant content is provided (the answer is comprehensive) |
| Bias | 1 | Strong evidence of bias |
| Bias | 2 | May have some evidence of bias |
| Bias | 3 | No evidence of bias |

Table 5: Criteria used to evaluate ChatGPT responses to the questions asked. |
2307.07261 | Numerical evaluation of oscillatory integrals via automated steepest
descent contour deformation | Steepest descent methods combining complex contour deformation with numerical
quadrature provide an efficient and accurate approach for the evaluation of
highly oscillatory integrals. However, unless the phase function governing the
oscillation is particularly simple, their application requires a significant
amount of a priori analysis and expert user input, to determine the appropriate
contour deformation, and to deal with the non-uniformity in the accuracy of
standard quadrature techniques associated with the coalescence of stationary
points (saddle points) with each other, or with the endpoints of the original
integration contour. In this paper we present a novel algorithm for the
numerical evaluation of oscillatory integrals with general polynomial phase
functions, which automates the contour deformation process and avoids the
difficulties typically encountered with coalescing stationary points and
endpoints. The inputs to the algorithm are simply the phase and amplitude
functions, the endpoints and orientation of the original integration contour,
and a small number of numerical parameters. By a series of numerical
experiments we demonstrate that the algorithm is accurate and efficient over a
large range of frequencies, even for examples with a large number of coalescing
stationary points and with endpoints at infinity. As a particular application,
we use our algorithm to evaluate cuspoid canonical integrals from scattering
theory. A Matlab implementation of the algorithm is made available and is
called PathFinder. | A. Gibbs, D. P. Hewett, D. Huybrechs | 2023-07-14T10:26:03Z | http://arxiv.org/abs/2307.07261v2 | # Numerical evaluation of oscillatory integrals via automated steepest descent contour deformation
###### Abstract
Steepest descent methods combining complex contour deformation with numerical quadrature provide an efficient and accurate approach for the evaluation of highly oscillatory integrals. However, unless the phase function governing the oscillation is particularly simple, their application requires a significant amount of a priori analysis and expert user input, to determine the appropriate contour deformation, and to deal with the non-uniformity in the accuracy of standard quadrature techniques associated with the coalescence of stationary points (saddle points) with each other, or with the endpoints of the original integration contour. In this paper we present a novel algorithm for the numerical evaluation of oscillatory integrals with general polynomial phase functions, which automates the contour deformation process and avoids the difficulties typically encountered with coalescing stationary points and endpoints. The inputs to the algorithm are simply the phase and amplitude functions, the endpoints and orientation of the original integration contour, and a small number of numerical parameters. By a series of numerical experiments we demonstrate that the algorithm is accurate and efficient over a large range of frequencies, even for examples with a large number of coalescing stationary points and with endpoints at infinity. As a particular application, we use our algorithm to evaluate cuspoid canonical integrals from scattering theory. A Matlab implementation of the algorithm is made available and is called PathFinder.
## 1 Introduction
In this paper we consider numerical evaluation of the integral
\[I=\int_{\Gamma}f(z)\mathrm{e}^{\mathrm{i}\omega g(z)}\,\mathrm{d}z, \tag{1}\]
where \(\Gamma\) is a contour in \(\mathbb{C}\), possibly starting and/or ending at infinity, \(f\) and \(g\) are functions of a complex variable, and \(\omega>0\) is a frequency parameter. Such integrals arise in numerous application areas, particularly in wave phenomena and quantum mechanics, and are generally challenging to evaluate numerically, especially when \(\omega\) is large, because the presence of the exponential factor \(\mathrm{e}^{\mathrm{i}\omega g(z)}\) means that the integrand may undergo rapid oscillations and/or significant variations in amplitude along the integration contour.
When \(f\) and \(g\) are analytic, Cauchy's theorem provides the possibility of deforming the integration contour so as to make numerical evaluation easier. This is the basis of _steepest descent (SD) methods_, in which one aims to deform \(\Gamma\) onto a contour, or, more typically, a union of contours, which we term the _steepest descent (SD) deformation_, on which \(\Re[g(z)]\) is constant, so that the exponential factor \(\mathrm{e}^{\mathrm{i}\omega g(z)}\) is no longer oscillatory. By the Cauchy-Riemann equations, these contours coincide with the steepest descent curves of \(-\Im[g(z)]\), and they connect endpoints of the original integration contour, valleys at infinity (sectors in which the integrand decays rapidly as \(|z|\to\infty\)), and _stationary points_ of \(g\), which are points \(\xi\in\mathbb{C}\) at which \(g^{\prime}(\xi)=0\). 1 Along each SD contour, away from stationary points the integrand typically decays exponentially, with the rate of decay increasing with increasing \(\omega\), and as \(\omega\to\infty\) the value of the integral is dominated by local contributions close to the endpoints of \(\Gamma\) and any stationary points traversed by the SD deformation. In the _asymptotic_ steepest descent method (described e.g. in [3, 19]), one exploits this to obtain an asymptotic expansion for the integral, valid as \(\omega\to\infty\), by performing a local Taylor expansion of the integrand around the endpoints and relevant stationary points, and reducing the local integrals along the SD contours to simpler integrals that can be expressed in terms of known special functions.
Footnote 1: Stationary points are often referred to as “saddle points” because they are saddle points of the functions \(\Im[g(z)]\) and \(\Re[g(z)]\), which cannot possess local maxima or minima by the maximum modulus principle.
In the _numerical_ steepest descent (NSD) method (described e.g. in [7, SS5]) one evaluates the integrals along the SD contours numerically. This involves numerically tracing an appropriate segment of each SD contour in the SD deformation and applying suitable numerical quadrature rules to evaluate the associated contributions to the integral.
In principle, NSD is a highly accurate and efficient method for evaluating integrals of the form (1) for moderate or large \(\omega\). Indeed, under appropriate assumptions, the NSD method outputs approximations which, for a fixed number of quadrature points \(N\), are asymptotic to (1) as \(\omega\to\infty\), with the asymptotic accuracy improving with increasing \(N\) (see, e.g., [7, Thm 5.7]). Furthermore, if \(f\) and \(g\) are sufficiently well behaved it can also be the case that the NSD approximation converges to (1) as \(N\to\infty\), for fixed \(\omega>0\), with a cost that remains bounded as \(\omega\to\infty\).
In practice, however, applying the NSD method to an integral of the form (1) often requires significant expert user input. This is because:
* Determining the SD contour deformation corresponding to a given \(g\) and \(\Gamma\) requires careful a priori analysis.
* Parametrizing SD contours from or near stationary points, and evaluating integrals along them, is fraught with numerical difficulties, especially when stationary points are close to other stationary points or endpoints of \(\Gamma\).
The issues described in (P1) and (P2) are particularly troublesome when one wishes to evaluate (1) for multiple instances of a phase function \(g(z)=g(z,\mathbf{c})\) depending on a set of parameters \(\mathbf{c}\in\mathbb{C}^{q}\). This is because, firstly, the number and location of the stationary points, and the nature of the SD deformation, have to be determined for each different value of \(\mathbf{c}\), and, secondly, stationary
points may coalesce as \(\mathbf{c}\) approaches certain regions in parameter space, leading to a non-uniformity in the accuracy of the resulting NSD approximations.
The problem of stationary point coalescence in the context of NSD was studied in detail in [10] in the special case of the cubic phase function \(g(z,c)=\frac{z^{3}}{3}-cz\), for \(c\in\mathbb{C}\), which for \(c\neq 0\) has a pair of order one stationary points which coalesce as \(c\to 0\) (at \(z=0\)) into a single stationary point of order two for \(c=0\).2 In this case, the SD deformation and contour parametrization were carried out manually by analytically inverting the phase (illustrating (P1)), but the resulting integrals were found to be nearly singular for small \(c\), leading to poor accuracy of standard NSD approximations (illustrating (P2)). It was shown in [10] how to construct a family of non-standard quadrature rules for this integral which perform uniformly well for \(c\approx 0\) using complex-valued Gaussian quadrature, producing quadrature nodes that in general lie off the SD deformation. In principle, similar rules could be developed for more complicated coalescences involving higher order stationary points and/or endpoints of \(\Gamma\). However, for each type of coalescence a bespoke quadrature rule would have to be developed, and a general catalogue of such rules is not yet available in the literature.
Footnote 2: The _order_ of a stationary point \(\xi\) is the multiplicity of \(\xi\) as a root of \(g^{\prime}\).
In contrast to [10], our aim is not to develop an optimized method for a specific instance of (1), but rather to present a relatively simple algorithm that can evaluate (1) accurately, for a general class of \(f\) and \(g\), without the need for expert user input or a priori analysis, even in the case of coalescing stationary points, thus addressing problems (P1) and (P2). Our specific focus in this paper is on the case where \(f\) is entire and \(g\) is a polynomial. The extension of our approach to more general cases where \(f\) and/or \(g\) have pole or branch point singularities is the subject of ongoing research. Necessarily, in aiming for generality and robustness we will sacrifice some efficiency. However, our method is designed to be rapidly convergent as \(N\to\infty\) with approximately \(\omega\)-independent cost, and the fact that this is realised in practice is demonstrated by extensive numerical experiments in SS5.
Our algorithm follows the basic principles of NSD, combining complex contour deformation with numerical quadrature. However, in contrast to standard NSD our algorithm does not trace SD contours directly from stationary points. Instead, stationary points are enclosed in a bounded "non-oscillatory region" within which the integrand is guaranteed to undergo at most a fixed number of oscillations. The original contour \(\Gamma\) is replaced by a "quasi-SD deformation" comprising a union of straight-line contours in the non-oscillatory region, for which numerical quadrature is straightforward, and SD contours outside the non-oscillatory region, on which standard NSD quadrature techniques can be applied. By excluding a neighbourhood of the stationary points from the region in which SD contours are traced, our algorithm avoids the problems mentioned in (P2) associated with stationary-point/stationary-point and/or stationary-point/endpoint coalescence. This not only "uniformizes" the accuracy of our algorithm compared to standard NSD, but it also enables us to tackle the problem (P1) by automating the contour deformation step. For the latter, we first perform low accuracy SD contour tracing outside the non-oscillatory region to build a graph describing the global connections (via SD contours) between the endpoints of \(\Gamma\), the different components of the non-oscillatory region, and the valleys at infinity, and then determine the quasi-SD deformation using a shortest path algorithm, before refining the accuracy of the SD contour tracing at the quadrature stage.
One other problem with standard NSD is that it typically degenerates as \(\omega\to 0\), because the quadrature points diverge to infinity [7, SS5.2.4]. This issue has been addressed in the special case \(g(z)=z\) for bounded \(\Gamma\) in [2, 6]; however, it remains an open problem for general \(g(z)\). Our algorithm is well-behaved in the limit as \(\omega\to 0\) for general polynomial \(g(z)\), since it reduces to standard non-oscillatory quadrature for sufficiently small \(\omega\) for any bounded \(\Gamma\).
Our algorithm is implemented in the open-source Matlab code "PathFinder", available at github.com/AndrewGibbs/PathFinder[8]. The basic user input to the code is a function handle for the amplitude f, the coefficients of the polynomial phase g, endpoints a and b (complex numbers, or angles in the case of infinite endpoints), the frequency parameter omega, and a parameter N controlling the number of quadrature points to be used. Approximating the integral (1) using PathFinder can be done with the following Matlab command:
\[\texttt{PathFinder(a,b,f,g,omega,N,'infcontour',[A B])} \tag{2}\]
Here 'infcontour' is an optional input for which the user should supply a Boolean array [A B] (whose default value is [false false]) such that A (respectively B) is true if the endpoint a (resp. b) is infinite and false if it is finite. Examples of PathFinder code will be given in SS5. Advanced users can also adjust a small number of other tuning parameters, whose role will be discussed during the presentation of our algorithm.
An outline of the paper is as follows. In SS2 we provide a detailed description of our algorithm, first presenting an overview of the main steps, and then providing details of how each step is realised in PathFinder. In SS3 we present some theoretical results underpinning our approach. In SS4 we discuss some further implementation details, and in SS5 we exhibit numerical results demonstrating the performance of our algorithm on a range of challenging integrals with large \(\omega\) and complicated stationary point configurations.
We end this introduction by remarking that integrals with coalescing stationary points are of fundamental importance in numerous applications, including the study of optics and high frequency (short wavelength) acoustics, where they describe the wave field in the vicinity of geometrical singularities (or "catastrophes") in the classical ray-theoretic framework, Kelvin's celebrated ship-wave problem, and the theory of molecular collisions in quantum mechanics and theoretical chemistry. A catalogue of such integrals, along with links to relevant literature, can be found in [1, SS36]. In SS5.5 we show how PathFinder can be applied to accurately calculate these types of integrals.
## 2 Algorithm description
In this section we present our algorithm for the numerical approximation of (1) when \(f\) is entire and \(g\) is a polynomial.
We start with some definitions and basic facts. Let
\[g(z)=\sum_{j=0}^{J}\alpha_{j}z^{j}, \tag{3}\]
for some \(J\in\mathbb{N}\), \(J\geq 1\), and \(\alpha_{j}\in\mathbb{C}\), \(j=0,\ldots,J\), with \(\alpha_{J}\neq 0\). Then \(g\) has at most \(J-1\)_stationary points_, which are the solutions of
\[g^{\prime}(z)=\sum_{j=1}^{J}j\alpha_{j}z^{j-1}=0. \tag{4}\]
We denote the set of all stationary points by \(\mathcal{P}_{\rm stat}\). We define the _valleys_ at infinity to be the sectors of angular width \(\pi/J\) centred on the angles
\[\mathcal{V}:=\bigg{\{}\frac{(2(m-1)+1/2)\pi-\arg\left(\alpha_{J}\right)}{J}: \quad m=1,\ldots,J\bigg{\}}. \tag{5}\]
These have the property that if \(z=r{\rm e}^{{\rm i}\theta}\) with \(\theta\in(v-\pi/(2J),v+\pi/(2J))\) for some \(v\in\mathcal{V}\) then \({\rm e}^{{\rm i}\omega g(z)}\to 0\) as \(r\to\infty\). For each \(\eta\in\mathbb{C}\setminus\mathcal{P}_{\rm stat}\) there exists a unique SD contour \(\gamma_{\eta}\) beginning at \(\eta\) and ending either at a stationary point \(\xi\in\mathcal{P}_{\rm stat}\) or at a valley \(v\in\mathcal{V}\), on which \(\Re g(z)=\Re g(\eta)\) for \(z\in\gamma_{\eta}\) (see, e.g., [4]).
We let \(\mathcal{P}_{\rm endp}\) denote the set of finite endpoints of the integration contour \(\Gamma\), which could have zero, one or two elements. We assume for now that any infinite endpoint of \(\Gamma\) is at one of the valleys \(v\in\mathcal{V}\); see SS4.4 for extensions.
We now provide a high-level overview of our algorithm. The following steps will be explained in more detail in sections 2.1-2.6.
1. Compute the set of stationary points \(\mathcal{P}_{\rm stat}\) (the solutions of (4)).
2. For each \(\xi\in\mathcal{P}_{\rm stat}\), select a radius \(r_{\xi}>0\) for which the function \({\rm e}^{{\rm i}\omega g(z)}\) is considered "non-oscillatory" on the closed ball \(\Omega_{\xi}\) of radius \(r_{\xi}\) centred at \(\xi\). These balls may overlap. However, if two balls overlap significantly, indicating near coalescence, one of the stationary points (along with its associated ball) is removed from the set \(\mathcal{P}_{\rm stat}\). This removal process continues recursively until no pair of balls is judged to overlap too much. We call \(\{\Omega_{\xi}\}_{\xi\in\mathcal{P}_{\rm stat}}\) the _non-oscillatory balls_, and their union \[\Omega:=\bigcup_{\xi\in\mathcal{P}_{\rm stat}}\Omega_{\xi}\] (6) the _non-oscillatory region_.
3. Find the local minima of \(|{\rm e}^{{\rm i}\omega g(z)}|\) on the boundary of the non-oscillatory region \(\Omega\). We call these points _exits_, and denote by \(\mathcal{P}_{\rm exit}\) the set of all exits.
4. For each \(\eta\in\mathcal{P}_{\rm exit}\cup(\mathcal{P}_{\rm endp}\setminus\Omega)\), trace the SD contour \(\gamma_{\eta}\) from \(\eta\), and determine whether * (i) \(\gamma_{\eta}\) enters \(\Omega\) at some point \(z\in\partial\Omega\setminus\{\eta\}\), or * (ii) \(\gamma_{\eta}\) converges towards a valley \(v\in\mathcal{V}\) without entering \(\Omega\).
We call points \(z\in\partial\Omega\) determined in case (i) _entrances_, and denote by \(\mathcal{P}_{\mathrm{entr}}\) the set of all entrances.
5. Construct a graph \(G\) with a vertex for each of the elements of \(\mathcal{P}_{\mathrm{stat}}\), \(\mathcal{P}_{\mathrm{endp}}\), \(\mathcal{P}_{\mathrm{exit}}\), \(\mathcal{P}_{\mathrm{entr}}\) and \(\mathcal{V}\). Add edges between the vertices of \(G\) as follows: * For each \(\xi\in\mathcal{P}_{\mathrm{stat}}\), add an edge between each pair of elements of \((\mathcal{P}_{\mathrm{stat}}\cup\mathcal{P}_{\mathrm{endp}}\cup\mathcal{P}_{ \mathrm{exit}}\cup\mathcal{P}_{\mathrm{entr}})\cap\Omega_{\xi}\). * For each pair \(\xi,\xi^{\prime}\in\mathcal{P}_{\mathrm{stat}}\), \(\xi\neq\xi^{\prime}\), for which \(\Omega_{\xi}\cap\Omega_{\xi^{\prime}}\neq\emptyset\), add an edge between \(\xi\) and \(\xi^{\prime}\), if not already added in the previous step. * For each \(\eta\in\mathcal{P}_{\mathrm{exit}}\cup(\mathcal{P}_{\mathrm{endp}}\setminus\Omega)\), add an edge between \(\eta\) and the entrance \(z\in\mathcal{P}_{\mathrm{entr}}\) or the valley \(v\in\mathcal{V}\) to which the SD contour \(\gamma_{\eta}\) leads. Find the shortest path (in the graph-theoretic sense) between the vertices corresponding to the endpoints of \(\Gamma\).
6. Generate quadrature nodes and weights for the evaluation of each of the contour integrals corresponding to the edges in the shortest path. For an edge between two points in the non-oscillatory region, use a straight-line contour. For an edge between an exit or an endpoint of \(\Gamma\) to an entrance or a valley, use a refined version of the SD contour traced in step 4. The union of all the contours corresponding to the edges of the shortest path defines the "quasi-SD deformation" of the original integration contour. Finally, use the quadrature nodes and weights to approximate the integrals over the contours in the quasi-SD deformation and sum them to obtain an approximation of the original integral (1).
In Figures 2.1 and 2.2 we illustrate the outcome of the above steps for the particular choice of phase function
\[\begin{split} g(z)=&\frac{z^{7}}{7}+z^{6}\,\left( \frac{7}{20}+\frac{13}{30}\mathrm{i}\right)+z^{5}\,\left(-\frac{1047}{2000}+ \frac{543}{1000}\mathrm{i}\right)+z^{4}\,\left(-\frac{4409}{8000}-\frac{5077}{8 000}\mathrm{i}\right)\\ &+z^{3}\,\left(\frac{711}{2000}-\frac{4441}{6000}\mathrm{i} \right)+z^{2}\,\left(\frac{237}{800}-\frac{207}{800}\mathrm{i}\right)+z\,\left( \frac{63}{1000}-\frac{77}{2000}\mathrm{i}\right)\end{split} \tag{7}\]
and the parameters \(\omega=40\), \(a=-1.5\), \(b=2\), \(N=10\), using the default parameter set for PathFinder (see Table 4.1). For this choice of \(g\) there is one order 2 stationary point and 4 order one stationary points. In Figure 2.1 we plot these stationary points, along with their non-oscillatory balls, and the SD contours traced from the exits. Such plots can be generated in PathFinder by adding the optional 'plot' flag. The ball centred at the stationary point \(\xi=-\mathrm{i}\) contains two entrances, reached by SD contours from the balls above. In Figure 2.2 we plot the graph \(G\), using the optional PathFinder input flag 'plot graph'. This graph, in addition to edges corresponding to the SD contours shown in Figure 2.1, contains edges corresponding to contours between points in the non-oscillatory region, including connections within the two overlapping balls. The shortest path between \(a\) and \(b\), which is highlighted with thick lines in Figure 2.2, corresponds to the quasi-SD deformation, the integral over which is equal to (1) by Cauchy's Theorem. The integral is discretised using \(N\) quadrature
points on each contour in the quasi-SD deformation that makes a non-negligible contribution to the integral (see SS2.6) - these points are plotted in Figure 2.1 in red.
The process of computing all the SD contours and the selection of a subset thereof via the shortest path algorithm addresses problem (P1). Surrounding stationary points by balls, and only tracing SD contours outside the balls, means that we avoid having to determine the local structure of the SD contours and compute integrals along them near stationary points, addressing problem (P2).
In the following subsections we provide further details of how we carry out the steps outlined above in PathFinder.
### Step 1 - Computing stationary points
Computing the stationary points of \(g\) (the roots of \(g^{\prime}(z)\)) requires us to find the complex roots of the polynomial (4). In our implementation we compute stationary points using the Matlab roots command, which applies a companion matrix approach. We note that obtaining highly accurate values for the positions of stationary points is not critical to our algorithm, since the stationary points are enclosed in the non-oscillatory region and we never trace SD contours from them. Indeed, the difficulty in distinguishing numerically between multiple roots and roots of higher order contributes to the motivation for considering such non-oscillatory regions.
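To make this step concrete, the following Python sketch (PathFinder itself is a Matlab code) computes the stationary points by companion-matrix root finding applied to (4), and the valley angles via (5); the cubic test phase from the introduction is used purely for illustration:

```python
import numpy as np

def stationary_points_and_valleys(alpha):
    """alpha: ascending coefficients [a_0, ..., a_J] of the polynomial phase g."""
    alpha = np.asarray(alpha, dtype=complex)
    J = len(alpha) - 1
    dg = alpha[1:] * np.arange(1, J + 1)    # ascending coefficients of g', cf. (4)
    stat = np.roots(dg[::-1])               # companion-matrix roots (descending order)
    m = np.arange(1, J + 1)
    valleys = ((2*(m - 1) + 0.5)*np.pi - np.angle(alpha[-1])) / J   # cf. (5)
    return stat, valleys

# the cubic phase g(z) = z^3/3 - cz from the introduction, with c = 0.1
stat, valleys = stationary_points_and_valleys([0, -0.1, 0, 1/3])
# stat is approximately [0.316, -0.316]; valleys are pi/6, 5*pi/6, 3*pi/2
```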
### Step 2 - Defining the non-oscillatory region
The non-oscillatory region \(\Omega\) was defined in (6) to be a union of balls centred at the elements of \(\mathcal{P}_{\rm stat}\). We choose the radii of the balls as follows. First fix some user-defined constant \(C_{\rm ball}>0\). Then, given \(\xi\in\mathcal{P}_{\rm stat}\), define
\[r_{\xi}:=\max\{r>0:|z-\xi|\leq r\Rightarrow\omega|g(z)-g(\xi)|\leq C_{\rm ball }\}. \tag{8}\]
This definition enforces an upper bound on the number of oscillations within each ball. Accordingly, the region \(\Omega\) shrinks to the stationary points as \(\omega\to\infty\) and expands to fill the whole complex plane as \(\omega\to 0\).
In our implementation we approximate \(r_{\xi}\) numerically as follows. Let \(N_{\rm ball}\in\mathbb{N}\) be a user-defined parameter. For each \(n\in\{1,\ldots,N_{\rm ball}\}\) we consider the ray \(\{z=\xi+r{\rm e}^{{\rm i}2\pi n/N_{\rm ball}},\,r>0\}\), and compute the smallest positive root \(r_{n}>0\) of the function \(u_{n}(r):=\omega^{2}|g(\xi+r{\rm e}^{{\rm i}2\pi n/N_{\rm ball}})-g(\xi)|^{2} -C_{\rm ball}^{2}\), which is a polynomial in \(r\) of degree \(2J\). For this root-finding problem we use the Matlab roots command; in case this command produces no positive real roots (because of stability issues) we resort to a bisection approach instead. We then take as our approximation to \(r_{\xi}\) the positive number \(\max_{n\in\{1,\ldots,N_{\rm ball}\}}r_{n}\).
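A minimal Python sketch of this radius computation follows. It builds the Taylor expansion of \(g\) about \(\xi\) to obtain the polynomial \(u_{n}(r)\) on each ray, and takes the smallest positive real root per ray; the values of \(C_{\rm ball}\) and \(N_{\rm ball}\) are illustrative (they are user-specified in PathFinder), and the bisection fallback is omitted:

```python
import numpy as np

def ball_radius(alpha, xi, omega, C_ball=2*np.pi, N_ball=16):
    """Approximate r_xi in (8); alpha = ascending coefficients [a_0, ..., a_J] of g."""
    J = len(alpha) - 1
    # Taylor coefficients c_k = g^{(k)}(xi)/k!, so g(xi+w) - g(xi) = sum_{k>=1} c_k w^k
    c = np.zeros(J + 1, dtype=complex)
    d = np.array(alpha[::-1], dtype=complex)   # descending order for np.polyval
    fact = 1.0
    for k in range(J + 1):
        c[k] = np.polyval(d, xi) / fact
        fact *= k + 1
        if k < J:
            d = np.polyder(d)
    r_xi = 0.0
    for n in range(1, N_ball + 1):
        e = np.exp(2j * np.pi * n / N_ball)
        pk = c * e ** np.arange(J + 1)         # p(r) = g(xi + r e^{i theta}) - g(xi)
        pk[0] = 0.0
        u = omega**2 * np.convolve(pk, np.conj(pk)).real   # omega^2 |p(r)|^2, ascending
        u[0] -= C_ball**2                      # u_n(r) = omega^2 |p(r)|^2 - C_ball^2
        roots = np.roots(u[::-1])
        pos = [z.real for z in roots if abs(z.imag) < 1e-8 and z.real > 0]
        if pos:
            r_xi = max(r_xi, min(pos))         # smallest root per ray; max over rays
    return r_xi

# example: radius at the stationary point xi = sqrt(0.1) of g(z) = z^3/3 - 0.1 z
r = ball_radius([0, -0.1, 0, 1/3], xi=0.1**0.5, omega=40.0)
```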
When elements of \(\mathcal{P}_{\rm stat}\) are close it is natural to amalgamate their respective non-oscillatory balls. To do this systematically we adopt an iterative approach. Let \(\delta_{\rm ball}>0\) be a user-defined parameter.
* For each pair \(\xi_{1},\xi_{2}\in\mathcal{P}_{\rm stat}\) compute \[d_{\xi_{1},\xi_{2}}:=|\xi_{1}-\xi_{2}|/\max(r_{\xi_{1}},r_{\xi_{2}}).\]
* If \(\min_{\xi_{1},\xi_{2}}d_{\xi_{1},\xi_{2}}<\delta_{\rm ball}\) let \(\xi_{1},\xi_{2}\) be a pair realising the minimum. Remove from \(\mathcal{P}_{\rm stat}\) whichever of \(\xi_{1},\xi_{2}\) has the smaller associated ball radius (or choose arbitrarily between them if \(r_{\xi_{1}}=r_{\xi_{2}}\)).
* Repeat the previous step until either \(\min_{\xi_{1},\xi_{2}}d_{\xi_{1},\xi_{2}}\geq\delta_{\rm ball}\), or there is only one element of \(\mathcal{P}_{\rm stat}\) remaining (a sketch of this merging loop is given below).

Figure 2.1: Output of algorithm when applied with phase (7), \(\omega=40\), \(a=-1.5\), \(b=2\), \(N=10\), and the default parameter set (see Table 4.1). Here we observe stationary points (black stars) surrounded by balls (grey), SD contours traced from exits and finite endpoints (black lines), and quadrature points allocated along the appropriate contours in the quasi-SD deformation (red points). The "region of no return" (see SS3.2) around the valleys at infinity is also shaded grey.

Figure 2.2: The graph \(G\) corresponding to the problem considered in Figure 2.1. The thick line represents the shortest path between the endpoints, which in this case are both finite. The balls (shaded grey) are included for ease of comparison with Figure 2.1. The lower figure zooms in on the centre of the upper figure, showing the multiple edges that are constructed inside the balls.
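A Python sketch of this merging loop, assuming lists of stationary points and ball radii; the value of \(\delta_{\rm ball}\) below is illustrative, not PathFinder's default:

```python
def merge_close_balls(stat, radii, delta_ball=0.5):
    """Iteratively drop the stationary point with the smaller ball whenever
    d = |xi1 - xi2| / max(r1, r2) falls below delta_ball."""
    stat, radii = list(stat), list(radii)
    while len(stat) > 1:
        d, i, j = min((abs(stat[i] - stat[j]) / max(radii[i], radii[j]), i, j)
                      for i in range(len(stat)) for j in range(i + 1, len(stat)))
        if d >= delta_ball:
            break
        drop = i if radii[i] <= radii[j] else j   # arbitrary choice on a tie
        del stat[drop], radii[drop]
    return stat, radii
```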
### Step 3 - Determining the exits
The exits associated with each \(\xi\in\mathcal{P}_{\rm stat}\) are defined to be the local minima on \(\partial\Omega_{\xi}\setminus\bigcup_{\xi^{\prime}\in\mathcal{P}_{\rm stat}, \xi^{\prime}\neq\xi}\Omega_{\xi^{\prime}}^{\circ}\) of the function \(|{\rm e}^{{\rm i}\omega g(z)}|\), equivalently of the function \(-\Im g(z)\).
For each \(\xi\in\mathcal{P}_{\rm stat}\) the function \(-\Im g(z)\) restricted to \(\partial\Omega_{\xi}\) is a trigonometric polynomial. Using this fact, in our implementation we determine the local minima of \(-\Im g(z)\) on \(\partial\Omega_{\xi}\) by finding the roots of the derivative of \(-\Im g(z)\) in the angular direction (which is also a trigonometric polynomial) using the companion matrix approach of [5, SS2.2], and keep only the real roots corresponding to local minima. We discard all those minima corresponding to points inside \(\bigcup_{\xi^{\prime}\in\mathcal{P}_{\rm stat},\xi^{\prime}\neq\xi}\Omega_{\xi^{\prime}}^{\circ}\), and add the remaining minima to the set \(\mathcal{P}_{\rm exit}\).
### Step 4 - Tracing the SD contours
Given \(\eta\in\mathcal{P}_{\rm exit}\cup(\mathcal{P}_{\rm endp}\setminus\Omega)\), the SD contour \(\gamma_{\eta}\) beginning at \(\eta\) is the unique curve on which \(\Re g(z)\) is constant, with \(-\Im g(z)\) decreasing along \(\gamma_{\eta}\). It can be parametrized in terms of a parameter \(p\geq 0\) as \(z=h_{\eta}(p)\), where \(h_{\eta}(p)\) is defined implicitly by
\[g(h_{\eta}(p))=g(\eta)+{\rm i}p,\qquad h_{\eta}(0)=\eta. \tag{9}\]
Differentiating (9) with respect to \(p\) gives
\[h_{\eta}^{\prime}(p)=\frac{{\rm i}}{g^{\prime}(h_{\eta}(p))}=:F(h_{\eta}(p)), \qquad h_{\eta}(0)=\eta, \tag{10}\]
which is a first order ODE initial value problem for \(h_{\eta}(p)\). By solving (10) numerically one can trace the contour \(\gamma_{\eta}\) until it either (i) enters the non-oscillatory region \(\Omega\), or (ii) one can be sure that it will tend to a valley \(v\in\mathcal{V}\), without entering \(\Omega\). For (ii) we appeal to the theoretical result in Theorem 3.3, which provides a "region of no return" \(R_{v}\) associated with each valley \(v\in\mathcal{V}\) for which it is guaranteed that if an SD contour enters \(R_{v}\) it will never leave \(R_{v}\), and will converge to \(v\).
Staying away from stationary points ensures that the factor \(1/g^{\prime}\) in the right-hand side of (10) does not get too large.
In our implementation we trace the SD contour using a predictor-corrector approach, combining a forward Euler step for (10) and a Newton iteration for (9), to generate approximations \(h_{\eta}^{(n)}\approx h_{\eta}(p_{n})\) on a mesh \(0=p_{0}<p_{1}<p_{2}<\ldots<p_{n_{\rm max}}\), where the total number of steps \(n_{\rm max}\) is determined as part of the algorithm, as discussed below.
As the initial value we take \(h_{\eta}^{(0)}=\eta\). Then, given \(h_{\eta}^{(n)}\), to compute \(h_{\eta}^{(n+1)}\) we first apply a forward Euler step for the ODE (10), with adaptive step length
\[p_{n+1}-p_{n}=\delta_{\rm ODE}\min\left(2\frac{|g^{\prime}(h_{\eta}^{(n)})|^{2 }}{|g^{\prime\prime}(h_{\eta}^{(n)})|},|g^{\prime}(h_{\eta}^{(n)})|\,{\rm dist}( h_{\eta}^{(n)},\mathcal{P}_{\rm stat})\right),\]
where \(\delta_{\rm ODE}\in(0,1)\) is a user-specified parameter. The first argument of the minimum is included to ensure stability of the solver - note that \(F^{\prime}(h)=-\frac{{\rm i}g^{\prime\prime}(h)}{(g^{\prime}(h))^{2}}\) and we might expect instability if the local step length were as large as \(2/|F^{\prime}(h)|=2\frac{|g^{\prime}(h)|^{2}}{|g^{\prime\prime}(h)|}\). The second argument is included to ensure that the solver "slows down" as it approaches the non-oscillatory region \(\Omega\), so that we can detect whether \(\gamma_{\eta}\) enters \(\Omega\) or not. To ensure that \(|h_{\eta}^{(n+1)}-h_{\eta}^{(n)}|\leq\delta_{\rm ODE}d\), where \(d:={\rm dist}(h_{\eta}^{(n)},{\cal P}_{\rm stat})=\min_{\xi\in{\cal P}_{\rm stat}}|h_{\eta}^{(n)}-\xi|\), we require that \(p_{n+1}-p_{n}\leq\frac{\delta_{\rm ODE}d}{|F(h_{\eta}^{(n)})|}=\delta_{\rm ODE}d|g^{\prime}(h_{\eta}^{(n)})|\). This also ensures that \(h_{\eta}^{(n+1)}\) remains far enough from \({\cal P}_{\rm stat}\), so that the right-hand side of (10) does not become too large.
After each forward Euler step, we correct \(h_{\eta}^{(n+1)}\) by running a Newton iteration to enforce (9) (with \(p=p_{n+1}\) fixed), until the Newton step size \(|\frac{g(h_{\eta}^{(n+1)})-g(\eta)-ip_{n+1}}{g^{\prime}(h_{\eta}^{(n+1)})}|\) is smaller than \(\delta_{\rm coarse}\,{\rm dist}(h_{\eta}^{(n+1)},{\cal P}_{\rm stat})\), for some user-specified tolerance \(\delta_{\rm coarse}>0\).
We repeat this process for \(n=0,1,2,\ldots\) until either
* \(h_{\eta}^{(n)}\in\Omega_{\xi}\) for some \(\xi\in{\cal P}_{\rm stat}\), in which case we add \(z=h_{\eta}^{(n)}\) to the set \({\cal P}_{\rm entr}\) of entrances. Note that in general the point \(z=h_{\eta}^{(n)}\) will lie inside \(\Omega_{\xi}^{o}\) rather than on \(\partial\Omega_{\xi}\), but will be closer to \(\partial\Omega_{\xi}\) the smaller \(\delta_{\rm ODE}\) is; or
* \(h_{\eta}^{(n)}\in R_{v}\) for some \(v\in{\cal V}\), in which case, by Theorem 3.3, \(\gamma_{\eta}\) converges to the valley \(v\). Here the "region of no return" \(R_{v}\) is defined by \[R_{v}:=\{z\in\mathbb{C}:|\arg z-v|_{2\pi}<\pi/(2J)\mbox{ and }G(|z|,|\arg z-v|_{2\pi})>0\},\] (11) where \[|\theta|_{2\pi}:=\min_{m\in\mathbb{Z}}|\theta-2\pi m|,\] (12) and, for \(r>0\) and \(\theta\in(0,\pi/(2J))\), \[G(r,\theta):=J|\alpha_{J}|r^{J-1}\min\Big{(}1/\sqrt{2},\cos J\theta\Big{)}-\sum_{j=1}^{J-1}j|\alpha_{j}|r^{j-1}.\] (13) For further explanation of the meaning of \(R_{v}\) see SS3.2 below. A necessary condition for a point \(z\) to lie in \(R_{v}\) is that \(|z|\geq r_{*}\), where \(r_{*}>0\) is the unique positive solution of the polynomial equation \(G(r_{*},\pi/(4J))=0\), i.e. the solution of \[\frac{J|\alpha_{J}|r_{*}^{J-1}}{\sqrt{2}}=\sum_{j=1}^{J-1}j|\alpha_{j}|r_{*}^{j-1}.\] Having found \(r_{*}\) once and for all (using the Matlab roots command), to check that a point \(z\) lies in \(R_{v}\) we first check that \(|z|\geq r_{*}\). If so, we then check that \(|\arg z-v|_{2\pi}<\pi/(2J)\). If so, we then check that \(G(|z|,|\arg z-v|_{2\pi})>0\). The point of introducing \(r_{*}\) is so that we do not compute \(G(|z|,|\arg z-v|_{2\pi})\) unless absolutely necessary.
In either case, tracing of the SD contour stops and we set \(n_{\max}=n\) for this contour.
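The following Python sketch implements this predictor-corrector tracing, with two simplifications relative to PathFinder: the Newton stopping criterion is a fixed relative tolerance rather than \(\delta_{\rm coarse}\,{\rm dist}(\cdot,\mathcal{P}_{\rm stat})\), and the region-of-no-return test (11)-(13) is replaced by a crude \(|h|>r_{\rm escape}\) cutoff:

```python
import numpy as np

def trace_sd_contour(gc, eta, stat, radii,
                     delta_ode=0.1, r_escape=10.0, max_steps=10_000):
    """Trace the SD contour (9)-(10) from eta; gc = descending coefficients of g."""
    dgc = np.polyder(gc)
    ddgc = np.polyder(dgc)
    g_eta = np.polyval(gc, eta)
    h, p, pts = complex(eta), 0.0, [complex(eta)]
    for _ in range(max_steps):
        g1 = np.polyval(dgc, h)
        g2 = np.polyval(ddgc, h)
        d = min(abs(h - s) for s in stat)
        dp = delta_ode * min(2*abs(g1)**2 / max(abs(g2), 1e-300), abs(g1)*d)
        p += dp
        h = h + dp * 1j / g1                    # forward Euler predictor for (10)
        for _ in range(20):                     # Newton corrector enforcing (9)
            step = (np.polyval(gc, h) - g_eta - 1j*p) / np.polyval(dgc, h)
            h -= step
            if abs(step) < 1e-10 * (1 + abs(h)):
                break
        pts.append(h)
        if any(abs(h - s) <= r for s, r in zip(stat, radii)):
            return pts, "entrance"              # case (i): the contour enters Omega
        if abs(h) > r_escape:
            return pts, "valley"                # case (ii), crude surrogate test
    return pts, "max_steps"

# example: cubic phase g(z) = z^3/3 - 0.1 z, traced from eta = 1
pts, status = trace_sd_contour([1/3, 0, -0.1, 0], 1.0,
                               stat=[0.1**0.5, -0.1**0.5], radii=[0.2, 0.2])
```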
### Step 5 - Finding the shortest path
The construction of the graph \(G\) requires no further explanation. To find the shortest path in \(G\) between the endpoints of the original contour \(\Gamma\) we apply the standard Dijkstra shortest path algorithm [13, SS10.6.2].
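For illustration, here is a Python sketch of this graph search using networkx, with hypothetical vertex labels and unit edge weights:

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("a", "exit_1"),         # endpoint a lies in a ball containing exit_1
    ("exit_1", "entr_1"),    # SD contour from exit_1 enters another ball
    ("entr_1", "exit_2"),    # straight-line connection inside that ball
    ("exit_2", "valley_1"),  # SD contour from exit_2 to a valley
    ("exit_2", "b"),         # endpoint b lies in the same ball as exit_2
])
path = nx.dijkstra_path(G, "a", "b")   # ['a', 'exit_1', 'entr_1', 'exit_2', 'b']
```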
### Step 6 - Evaluating the contour integrals
The quasi-SD contour deformation corresponding to the graph-theoretic shortest path between the endpoints of \(G\) calculated during step 5 involves integrals over three types of contour:
**Type 1**: Straight line contours between points in the non-oscillatory region;
**Type 2**: Infinite SD contours from exits/endpoints to valleys;
**Type 3**: Finite SD contours from exits/endpoints to entrances.
Some of these contours will make a larger contribution to the value of the original integral (1) than others. It is natural to neglect contours that make a negligibly small contribution. In our implementation, we only compute the contribution from a contour \(\gamma\) in the quasi-SD deformation if at least one of the finite endpoints \(\eta\) of \(\gamma\) satisfies \(|\mathrm{e}^{\mathrm{i}\omega g(\eta)}|/M>\delta_{\mathrm{quad}}\), where \(\delta_{\mathrm{quad}}\geq 0\) is a small, user-specified parameter and
\[M:=\max|\mathrm{e}^{\mathrm{i}\omega g(\xi)}|,\]
where the maximum is taken over all \(\xi\in\mathcal{P}_{\mathrm{stat}}\cup\mathcal{P}_{\mathrm{endp}}\cup \mathcal{P}_{\mathrm{exit}}\) appearing in the shortest path corresponding to the quasi-SD deformation.
In our implementation, for Type 1 contours we use Gauss-Legendre quadrature, for Type 2 contours we use either Gauss-Laguerre quadrature (which is the default choice in PathFinder) or truncated Gauss-Legendre quadrature, and for Type 3 contours we use (possibly truncated) Gauss-Legendre quadrature, as detailed below. By default our implementation uses the same number \(N\) of quadrature points on each contour in the quasi-SD deformation whose contribution we compute, regardless of the type of integral (we comment on this in SS3.3.4). Accordingly, if \(N_{\mathrm{cont}}\) is the number of these contours then the total number of quadrature points used in the algorithm, \(N_{\mathrm{tot}}\), is given by
\[N_{\mathrm{tot}}=NN_{\mathrm{cont}}. \tag{14}\]
#### 2.6.1 Evaluation of integrals over Type 1 contours
Let \(z_{0},z_{1}\in\Omega\), and let \(\gamma\) be the straight-line contour in \(\mathbb{C}\) starting at \(z_{0}\) and ending at \(z_{1}\), parametrized by
\[z_{[z_{0},z_{1}]}(t)=\frac{1}{2}\Big{(}(z_{1}-z_{0})t+(z_{0}+z_{1})\Big{)}, \qquad t\in[-1,1]. \tag{15}\]
Given \(N\in\mathbb{N}\), let \(t_{m}^{\mathrm{Leg}}\) and \(w_{m}^{\mathrm{Leg}}\), for \(m=1,\ldots,N\), denote the nodes and weights for standard \(N\)-point Gauss-Legendre quadrature on \([-1,1]\). Our quadrature approximation to the integral over \(\gamma\) is then:
\[\int_{\gamma}f(z)\mathrm{e}^{\mathrm{i}\omega g(z)}\ \mathrm{d}z\approx \frac{z_{1}-z_{0}}{2}\sum_{m=1}^{N}w_{m}^{\mathrm{Leg}}f(z_{[z_{0},z_{1}]}(t_{ m}^{\mathrm{Leg}}))\mathrm{e}^{\mathrm{i}\omega g(z_{[z_{0},z_{1}]}(t_{m}^{ \mathrm{Leg}}))}. \tag{16}\]
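The following self-contained Matlab sketch of (16) computes the Gauss-Legendre nodes and weights by the Golub-Welsch eigenvalue method; the function name and interface are ours, not PathFinder's, and f and g are assumed to be vectorized handles:

```matlab
function I = type1_gauss_legendre(f, g, omega, z0, z1, N)
b = (1:N-1) ./ sqrt(4*(1:N-1).^2 - 1);      % Legendre Jacobi matrix entries
[V, D] = eig(diag(b, 1) + diag(b, -1));
[t, k] = sort(diag(D));                     % nodes on [-1,1] (column)
w = 2 * V(1, k).^2;                         % weights (row)
z = ((z1 - z0)*t.' + (z0 + z1)) / 2;        % parametrization (15)
I = (z1 - z0)/2 * sum(w .* f(z) .* exp(1i*omega*g(z)));
end
```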
#### 2.6.2 Evaluation of integrals over Type 2 contours
Let \(\eta\in\mathcal{P}_{\rm exit}\cup(\mathcal{P}_{\rm endp}\setminus\Omega)\) be such that the SD contour \(\gamma\) from \(\eta\) leads to a valley. Parametrizing \(\gamma\) by (9), for \(p\in[0,\infty)\), noting (10), and rescaling \(p=\tilde{p}/\omega\), we have
\[\int_{\gamma}f(z){\rm e}^{{\rm i}\omega g(z)}\ {\rm d}z=\frac{{\rm e}^{{\rm i} \omega g(\eta)}}{\omega}\int_{0}^{\infty}\tilde{f}(\tilde{p}){\rm e}^{-\tilde {p}}\ {\rm d}\tilde{p}, \tag{17}\]
where
\[\tilde{f}(\tilde{p}):=f(h_{\eta}(\tilde{p}/\omega))h_{\eta}^{\prime}(\tilde{p} /\omega)={\rm i}\frac{f(h_{\eta}(\tilde{p}/\omega))}{g^{\prime}(h_{\eta}( \tilde{p}/\omega))}.\]
Since we trace contours only outside of \(\Omega\), they remain a positive distance from \(\mathcal{P}_{\rm stat}\), which ensures that \(\tilde{f}\) is analytic in a complex neighbourhood of \([0,\infty)\).
By default, PathFinder evaluates the integral on the right-hand side of (17) by Gauss-Laguerre quadrature. Let \(t_{m}^{\rm Lag}\) and \(w_{m}^{\rm Lag}\), for \(m=1,\ldots,N\), denote the standard Gauss-Laguerre nodes and weights on \([0,\infty)\). Our quadrature approximation to the integral over \(\gamma\) is then:
\[\int_{\gamma}f(z){\rm e}^{{\rm i}\omega g(z)}\ {\rm d}z\approx\frac{{\rm e}^{{ \rm i}\omega g(\eta)}}{\omega}\sum_{m=1}^{N}w_{m}^{\rm Lag}\tilde{f}(t_{m}^{ \rm Lag}). \tag{18}\]
To evaluate \(\tilde{f}(t_{m}^{\rm Lag})\) we need accurate computations of \(h_{\eta}(t_{m}^{\rm Lag}/\omega)\) for \(m=1,\ldots,N\). For this, for each \(m\) we run a Newton iteration on (9) with \(p=t_{m}^{\rm Lag}/\omega\) fixed, until the magnitude of the increment is smaller than a user-specified tolerance \(\delta_{\rm fine}>0\). Typically we take \(\delta_{\rm fine}\) to be considerably smaller than the tolerance \(\delta_{\rm coarse}\) used in the Newton iteration in step 4, since when carrying out quadrature we require higher accuracy in our approximation of the SD contour than is required for determining the global structure of the quasi-SD deformation in step 4. As the initial guess for the Newton method we use a piecewise linear interpolant of the points \(\{(p_{0},h_{\eta}^{(0)}),(p_{1},h_{\eta}^{(1)}),\ldots,(p_{n_{\rm max}},h_{ \eta}^{(n_{\rm max})})\}\) computed in step 4, where \(n_{\rm max}\) denotes the total number of steps taken in the ODE solve in step 4 before the contour tracing algorithm terminated. If \(p_{n_{\rm max}}<t_{N}^{\rm Lag}/\omega\) then before running the Newton iteration we first need to restart the contour tracing algorithm of step 4 to extend the SD contour until \(p_{n_{\rm max}}\geq t_{N}^{\rm Lag}/\omega\), so that there are points to interpolate between.
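A sketch of (18), including the Newton refinement just described, might look as follows (h0 is a vector of initial guesses for \(h_{\eta}(t_{m}^{\rm Lag}/\omega)\), e.g. from the piecewise linear interpolant of the step-4 points; all names are illustrative):

```matlab
function Q = type2_gauss_laguerre(f, g, dg, eta, omega, N, h0, delta_fine)
% Gauss-Laguerre nodes/weights by Golub-Welsch (weight e^{-t} on [0,inf)):
T = diag(2*(1:N) - 1) + diag(1:N-1, 1) + diag(1:N-1, -1);
[V, D] = eig(T);  [t, k] = sort(diag(D));  w = V(1, k).^2;
h = h0(:);
for m = 1:N                       % refine each h_eta(t_m/omega) via (9)
    step = inf;
    while abs(step) > delta_fine
        step = (g(h(m)) - g(eta) - 1i*t(m)/omega) / dg(h(m));
        h(m) = h(m) - step;
    end
end
ftilde = 1i * f(h) ./ dg(h);      % integrand of (17)
Q = exp(1i*omega*g(eta))/omega * sum(w(:) .* ftilde(:));
end
```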
As an alternative, one can evaluate the integral over a Type 2 contour using truncated Gauss-Legendre quadrature, as suggested in [16]. To activate this alternative in PathFinder one should add the optional input 'inf quad rule', 'legendre'. In this case we truncate the integral to
\[\int_{\gamma}f(z){\rm e}^{{\rm i}\omega g(z)}\ {\rm d}z\approx\frac{{\rm e}^{{ \rm i}\omega g(\eta)}}{\omega}\int_{0}^{P}\tilde{f}(\tilde{p}){\rm e}^{-\tilde {p}}\ {\rm d}\tilde{p}, \tag{19}\]
for some \(P>0\), then apply Gauss-Legendre quadrature on \([0,P]\), to obtain the approximation
\[\int_{\gamma}f(z){\rm e}^{{\rm i}\omega g(z)}\ {\rm d}z\approx\frac{P{\rm e}^{{\rm i}\omega g(\eta)}}{2\omega}\sum_{m=1}^{N}w_{m}^{\rm Leg}\tilde{f}(z_{[0,P]}(t_{m}^{\rm Leg})){\rm e}^{-z_{[0,P]}(t_{m}^{\rm Leg})}, \tag{20}\]
where we compute \(h_{\eta}(z_{[0,P]}(t_{m}^{\rm Leg}/\omega))\) (which is required for the evaluation of \(\tilde{f}(z_{[0,P]}(t_{m}^{\rm Leg}))\)) by the same Newton iteration discussed above for \(h_{\eta}(t_{m}^{\rm Lag}/\omega)\). For the truncation point \(P\) we take
\[P=L, \tag{21}\]
where
\[L:=-\log\left(\delta_{\rm quad}M/|{\rm e}^{\rm i\omega g(\eta)}| \right), \tag{22}\]
which describes the point at which the magnitude of the exponential part of the integrand drops below \(\delta_{\rm quad}\) times its maximum value \(M\) on the quasi-SD deformation.
#### 2.6.3 Evaluation of integrals over Type 3 contours
Let \(\eta\in{\cal P}_{\rm exit}\cup({\cal P}_{\rm endp}\setminus\Omega)\) be such that the SD contour \(\gamma\) from \(\eta\) leads to an entrance \(z\in{\cal P}_{\rm entr}\). In this case we apply (possibly truncated) Gauss-Legendre quadrature as in formulas (19) and (20), but now with
\[P=\min(p_{n_{\rm max}}/\omega,L), \tag{23}\]
where \(p_{n_{\rm max}}\) is defined as in SS2.4 and \(L\) is defined as in SS2.6.2.
In the case where the minimum is attained by \(p_{n_{\rm max}}/\omega\), so that the whole contour is considered, a potential inconsistency arises, because the application of the higher accuracy Newton iteration described in SS2.6.2 for the calculation of \(h_{\eta}(z_{[0,P]}(t_{m}^{\rm Leg}/\omega))\) corresponds implicitly to a slight shifting of the endpoint of the contour \(\gamma\) away from the entrance \(z=h_{\eta}^{(n_{\rm max})}\) added to the graph \(G\) in step 5. To avoid this inconsistency, in our implementation, in step 4, whenever the contour tracing terminates at an entrance (the first of the two cases in SS2.4), we run a Newton iteration on the final point \(h_{\eta}^{(n_{\rm max})}\) with the high accuracy tolerance \(\delta_{\rm fine}\), before adding it to the list of entrances \({\cal P}_{\rm entr}\). Note that this may mean that \(h_{\eta}^{(n_{\rm max})}\) lies very slightly outside \(\Omega\).
## 3 Theoretical results
In this section we collect some theoretical results that motivate the design of our algorithm.
### Removal of stationary points
In SS2.2 we described our algorithm for removing stationary points from the set \({\cal P}_{\rm stat}\) when they are close. When removing stationary points and their associated non-oscillatory balls, we need to ensure that the removed stationary points still lie inside one of the remaining non-oscillatory balls, so that we don't encounter any stationary points along the trajectory in our ODE solve for the SD contour tracing (see the discussion in SS3.3.2 below). In this section we provide a sufficient condition on the parameter \(\delta_{\rm ball}\) for this to be guaranteed.
**Proposition 3.1**.: _Suppose that in the removal algorithm of SS2.2, \(n\) stationary points have been removed from \({\cal P}_{\rm stat}\). Then for any stationary point \(\xi\) that was removed, there exists \(\xi^{\prime}\in{\cal P}_{\rm stat}\) such that \(|\xi-\xi^{\prime}|\leq n\delta_{\rm ball}r_{\xi^{\prime}}\)._
Proof.: We proceed by induction on \(n\). The result is trivially true for \(n=0\). Assume that it is true after the removal of \(n\) points, and suppose that the \((n+1)\)st point is now to be removed. Let \(\xi_{1},\xi_{2}\) denote the pair of points selected as realising \(\min_{\xi_{1},\xi_{2}}d_{\xi_{1},\xi_{2}}\), and without loss of generality suppose that \(\xi_{2}\) is the point to be removed (so that \(r_{\xi_{1}}\geq r_{\xi_{2}}\)). Then \(|\xi_{2}-\xi_{1}|\leq\delta_{\mathrm{ball}}r_{\xi_{1}}\leq(n+1)\delta_{\mathrm{ ball}}r_{\xi_{1}}\), so the claimed property holds for \(\xi_{2}\). Furthermore, by the inductive hypothesis, for each point \(\xi\) previously removed, there exists \(\xi^{\prime}\in\mathcal{P}_{\mathrm{stat}}\) such that \(|\xi-\xi^{\prime}|\leq n\delta_{\mathrm{ball}}r_{\xi^{\prime}}\). If \(\xi^{\prime}\neq\xi_{2}\) then \(\xi^{\prime}\) will still be present in \(\mathcal{P}_{\mathrm{stat}}\) after the removal of \(\xi_{2}\), and \(|\xi-\xi^{\prime}|\leq n\delta_{\mathrm{ball}}r_{\xi^{\prime}}\leq(n+1)\delta _{\mathrm{ball}}r_{\xi^{\prime}}\). On the other hand, if \(\xi^{\prime}=\xi_{2}\) then \(\xi^{\prime}\) will not be present in \(\mathcal{P}_{\mathrm{stat}}\) after the removal of \(\xi_{2}\), but \(\xi_{1}\) will be, and by the triangle inequality
\[|\xi-\xi_{1}|\leq|\xi-\xi_{2}|+|\xi_{2}-\xi_{1}|\leq n\delta_{\mathrm{ball}}r_ {\xi_{2}}+\delta_{\mathrm{ball}}r_{\xi_{1}}\leq(n+1)\delta_{\mathrm{ball}}r_{ \xi_{1}},\]
completing the inductive step.
As a consequence, we obtain the following.
**Corollary 3.2**.: _If \(J>2\) and \(0<\delta_{\mathrm{ball}}\leq 1/(2(J-2))\) then, after the removal algorithm has run, for every stationary point \(\xi\) there exists \(\xi^{\prime}\in\mathcal{P}_{\mathrm{stat}}\) such that \(\xi\in\Omega_{\xi^{\prime}}\) and \(\mathrm{dist}(\xi,\partial\Omega_{\xi^{\prime}})\geq r_{\xi^{\prime}}/2\)._
### Region of no return for SD contours
The following result establishes a _region of no return_: once an SD contour enters this region, we can say with certainty which valley it will converge to. The idea behind this result is that in the region of no return the highest degree term \(\alpha_{J}z^{J}\) of the polynomial \(g\) is sufficiently dominant over the lower degree terms that the SD contours inside the region converge to the same valley as those corresponding to the monomial phase \(\alpha_{J}z^{J}\).
**Theorem 3.3** (Region of no return).: _Let \(g\), \(\mathcal{V}\) and \(R_{v}\), for \(v\in\mathcal{V}\), be as in (3), (5) and (11). The regions \(R_{v}\), \(v\in\mathcal{V}\), contain no stationary points of \(g\). Furthermore, if an SD contour enters \(R_{v}\) for some \(v\in\mathcal{V}\), it never leaves \(R_{v}\)._
Proof.: That \(R_{v}\) contains no stationary points follows because if \(G(r,\theta)>0\) then
\[J|\alpha_{J}||z|^{J-1}>\sum_{j=1}^{J-1}j|\alpha_{j}||z|^{j-1}\geq\left|\sum_{j=1}^{J-1}j\alpha_{j}z^{j-1}\right|,\]
so that \(g^{\prime}(z)\neq 0\).
Now fix \(v\in\mathcal{V}\). Given \(\theta^{\prime}\in(0,\pi/(2J))\) and \(R>0\) we define the sector
\[S_{v}(R,\theta^{\prime}):=\{z\in\mathbb{C}:|\arg z-v|_{2\pi}<\theta^{\prime} \text{ and }|z|>R\},\]
with \(|\cdot|_{2\pi}\) defined as in (12). We also define the function
\[\tilde{G}(R,\theta^{\prime}):=J|\alpha_{J}|R^{J-1}\min\left(\sin J\theta^{\prime},\cos J\theta^{\prime}\right)-\sum_{j=1}^{J-1}j|\alpha_{j}|R^{j-1}, \tag{24}\]
which for each fixed \(\theta^{\prime}\) is a polynomial in \(R\) of degree \(J-1\).
We claim that if \(\theta^{\prime}\in(0,\pi/(2J))\) and \(\tilde{G}(R,\theta^{\prime})>0\), then if an SD contour enters \(S_{v}(R,\theta^{\prime})\) it never leaves \(S_{v}(R,\theta^{\prime})\). To prove this, we show that if an SD contour intersects \(\partial S_{v}(R,\theta^{\prime})\) then the direction of descent always points into \(S_{v}(R,\theta^{\prime})\). Since \(\partial S_{v}(R,\theta^{\prime})\) is the union of the sets
\[\{z\in\mathbb{C}:|\arg z-v|_{2\pi}\leq\theta^{\prime}\text{ and }|z|=R\}\]
and
\[\{z\in\mathbb{C}:|\arg z-v|_{2\pi}=\theta^{\prime}\text{ and }|z|\in[R,\infty)\},\]
it suffices to show that, in polar coordinates \((r,\theta)\),
\[\Im\frac{\partial g}{\partial r}>0,\quad\text{for }|\theta-v|_{2 \pi}\leq\theta^{\prime}\text{ and }r=R, \tag{25}\] \[\mp\Im\frac{1}{r}\frac{\partial g}{\partial\theta}>0,\quad \text{for }\theta=v\pm\theta^{\prime}\text{ (mod }2\pi)\text{ and }r\geq R. \tag{26}\]
For (25), let \(|\theta-v|_{2\pi}\leq\theta^{\prime}\). Since
\[\frac{\partial g(r\mathrm{e}^{\mathrm{i}\theta})}{\partial r}=\sum_{j=1}^{J}j \alpha_{j}\mathrm{e}^{\mathrm{i}j\theta}r^{j-1}\]
and \(\Im[\alpha_{J}\mathrm{e}^{\mathrm{i}J\theta}]=|\alpha_{J}|\cos\left(J|\theta- v|_{2\pi}\right)\) (using the definition of \(v\)) we have that
\[\Im\frac{\partial g(r\mathrm{e}^{\mathrm{i}\theta})}{\partial r}\geq J|\alpha _{J}|r^{J-1}\cos(J|\theta-v|_{2\pi})-\sum_{j=1}^{J-1}j|\alpha_{j}|r^{j-1},\]
so a sufficient condition for (25) to hold is that
\[J|\alpha_{J}|R^{J-1}\cos(J\theta^{\prime})-\sum_{j=1}^{J-1}j|\alpha_{j}|R^{j- 1}>0. \tag{27}\]
For (26), let \(\theta=v\pm\theta^{\prime}\text{ (mod }2\pi)\). Since
\[\frac{1}{r}\frac{\partial g(r\mathrm{e}^{\mathrm{i}\theta})}{\partial\theta} =\sum_{j=1}^{J}\mathrm{i}j\alpha_{j}\mathrm{e}^{\mathrm{i}j\theta}r^{j-1},\]
and \(\Im[\mathrm{i}\alpha_{J}\mathrm{e}^{\mathrm{i}J\theta}]=\Im[\mathrm{i}\alpha_ {J}\mathrm{e}^{\mathrm{i}J(v\pm\theta^{\prime})}]=\mp|\alpha_{J}|\sin(J \theta^{\prime})\) we have that
\[\mp\Im\frac{1}{r}\frac{\partial g(r\mathrm{e}^{\mathrm{i}\theta})}{\partial \theta}\bigg{|}_{\theta=v\pm\theta^{\prime}}\geq J|\alpha_{J}|r^{J-1}\sin(J \theta^{\prime})-\sum_{j=1}^{J-1}j|\alpha_{j}|r^{j-1}=:\phi(r).\]
The function \(\phi(r)\) has the property that if \(R>0\) and \(\phi(R)>0\) then \(\phi(r)>0\) for all \(r\geq R\). To see this, note that
\[\phi(r)=r^{J-1}\Big{(}J|\alpha_{J}|\sin(J\theta^{\prime})-\sum_{j=1}^{J-1}j| \alpha_{j}|r^{j-J}\Big{)},\]
and that the term in brackets is a strictly decreasing function of \(r\), which tends to \(-\infty\) as \(r\to 0\) and to \(J|\alpha_{J}|\sin(J\theta^{\prime})>0\) as \(r\to\infty\). Hence a sufficient condition for (26) is that \(\phi(R)>0\), i.e.
\[J|\alpha_{J}|R^{J-1}\sin(J\theta^{\prime})-\sum_{j=1}^{J-1}j|\alpha_{j}|R^{j-1}>0. \tag{28}\]
Since the assumption \(\tilde{G}(R,\theta^{\prime})>0\) implies both (27) and (28), our claim is proved.
The statement of the theorem then follows by noting that the region \(R_{v}\) is the union of all the sectors \(S_{v}(R,\theta^{\prime})\) such that \(0<\theta^{\prime}\leq\pi/(2J)\) and \(\tilde{G}(R,\theta^{\prime})>0\). We note that if \(0<\theta^{\prime}<\pi/(4J)\) then \(\sin J\theta^{\prime}<\sin\pi/4\), so that if \(\tilde{G}(R,\theta^{\prime})>0\) then \(\tilde{G}(R,\pi/(4J))>0\). This implies that the union can actually be taken over \(\pi/(4J)\leq\theta^{\prime}<\pi/(2J)\) only, justifying the definition of the function \(G\) in (13).
### Quadrature error
In SS2.2 we defined the non-oscillatory region as a union of balls on which the exponential \(\mathrm{e}^{\mathrm{i}\omega g(z)}\) undergoes a bounded number of oscillations. Here we show that the definition (8) strikes a balance between the accuracy of our quadrature approximations to the integrals outside and inside this region.
#### 3.3.1 Quadrature in the non-oscillatory region
The Type 1 straight line contour integrals between points in the non-oscillatory region are evaluated using Gauss-Legendre quadrature, as detailed in SS2.6.1. To assess the accuracy of this we note the following theorem, which is a simple consequence of the standard error analysis presented in [15, Chap. 19].
**Theorem 3.4**.: _Let \(z_{0},z_{1}\in\mathbb{C}\). Suppose that \(\gamma\) is a straight-line contour in \(\mathbb{C}\) starting at \(z_{0}\) and ending at \(z_{1}\) and that there exist \(\rho>0\), \(C>0\) and \(\xi_{*}\in\mathbb{C}\) such that \(f\) is analytic and bounded in \(z_{[z_{0},z_{1}]}(B_{\rho})\), where \(B_{\rho}\) is a standard Bernstein ellipse (relative to \([-1,1]\)) and \(z_{[z_{0},z_{1}]}\) is defined as in (15), and_
\[\omega|g(\xi_{*})-g(z)|\leq C,\qquad z\in z_{[z_{0},z_{1}]}(B_{\rho}). \tag{29}\]
_Let \(I\) and \(Q\) denote the left- and right-hand sides of (16), respectively. Then, for some \(\tilde{C}>0\), depending only on \(\rho\),_
\[|I-Q|\leq\tilde{C}|z_{1}-z_{0}|\|f\|_{L^{\infty}(z_{[z_{0},z_{1}]}(B_{\rho}))} e^{-\omega\Im[g(\xi_{*})]}e^{C}\rho^{-2N}. \tag{30}\]
Proof.: Noting that
\[I=\mathrm{e}^{\mathrm{i}\omega g(\xi_{*})}\int_{\gamma}f(z)\mathrm{e}^{ \mathrm{i}\omega(g(z)-g(\xi_{*}))}\ \mathrm{d}z\]
and that
\[|f(z)\mathrm{e}^{\mathrm{i}\omega(g(z)-g(\xi_{*}))}|\leq\|f\|_{L^{\infty}(z_{ [z_{0},z_{1}]}(B_{\rho}))}\mathrm{e}^{C},\qquad z\in z_{[z_{0},z_{1}]}(B_{\rho }),\]
the result follows from [15, Thm 19.3].
Theorem 3.4 motivates the definition of the non-oscillatory region in (8). Indeed, if the assumptions of Theorem 3.4 hold with \(\rho\) and \(C\) independent of \(\omega\) then the bound (30) guarantees \(\omega\)-independent exponential convergence for \(\omega\) bounded away from zero. However, even when (8) is satisfied, the relationship between \(\xi_{*}\), \(\rho\), \(C\) and \(\omega\) is beyond our control in general because the ellipse may extend beyond the non-oscillatory region, so that \(C>C_{\mathrm{ball}}\). Thus we cannot control the factor \(\mathrm{e}^{C}\) entirely based on condition (8).
Still, the bound (30) shows that the quadrature error decreases with increasing \(N\). The precise rate of decrease depends on a balance between the decay of \(\rho^{-2N}\) and the growth of \(\mathrm{e}^{C}\) and \(\|f\|_{L^{\infty}(z_{[z_{0},z_{1}]}(B_{\rho}))}\) for increasing \(\rho\). We quantify this in the special case of monomial phase in SS3.3.3.
#### 3.3.2 Quadrature for the SD contours
For Type 2 or Type 3 integrals along SD contours we use either Gauss-Laguerre or (possibly truncated) Gauss-Legendre quadrature, as detailed in SS2.6.2 and SS2.6.3. We expect these rules to converge rapidly to the true value of the integral as the number of quadrature points \(N\) tends to infinity, provided that the integrand is analytic and bounded in a suitable region of the complex \(\tilde{p}\) plane.
For Gauss-Laguerre the following result appeared recently in [17, Thm 6.3].
**Theorem 3.5**.: _Suppose that \(\tilde{f}\) is analytic inside and on the parabola \(P_{\rho}:=\{z\in\mathbb{C}:\Re\sqrt{-z}=\rho\}\) for some \(\rho>0\), where the branch cut is along the positive real axis and \(\sqrt{-z}\) is real and positive on the negative real axis, that \(\tilde{f}\) grows at most algebraically as \(z\to\infty\) inside the parabola, and that the integral_

\[\mathcal{K}_{\rho}:=\int_{P_{\rho}}|e^{-z}\sqrt{-z}\tilde{f}(z)|\ |\mathrm{d}z|\]
_is finite. Let \(I\) and \(Q\) denote the left- and right-hand sides of (18), respectively. Then_
\[|I-Q|\leq\mathcal{K}_{\rho}\frac{e^{-\omega\Im[g(\eta)]}}{\omega}e^{-4\rho \sqrt{N}}. \tag{31}\]
This result implies that our Gauss-Laguerre quadrature approximation should converge root-exponentially as \(N\to\infty\), provided that \(f\) is sufficiently well-behaved at infinity. The presence of singularities in the complex \(\tilde{p}\)-plane limits the size of \(\rho\), and hence the convergence rate. We know from (17) that our integrand is singular at points \(\tilde{p}\in\mathbb{C}\) where \(g^{\prime}(h_{\eta}(\tilde{p}/\omega))=0\), i.e. where \(h_{\eta}(\tilde{p}/\omega)=\xi\) for some stationary point \(\xi\). Since we only trace SD contours outside the non-oscillatory region (which contains the stationary points), we know that there cannot be singularities on the SD contour itself. If the start point \(\eta\) lies on an SD contour emanating from a stationary point \(\xi\) then we expect there to be a singularity in the \(\tilde{p}\)-plane at \(\tilde{p}=\omega\Im[g(\xi)-g(\eta)]<0\). We show in SS3.3.3 that in the special case of monomial phase this singularity lies at \(\tilde{p}=-C_{\mathrm{ball}}\), which implies root-exponential convergence independent of \(\omega\) for \(\omega\) bounded away from zero. Determining the locations of the other possible singularities in the complex \(\tilde{p}\)-plane is more challenging, since it involves study of the (multivalued) inverse of \(g\). We leave further theoretical investigation of this to future work.
#### 3.3.3 Results for monomial phase
It is instructive to consider the special case of a monomial phase \(g(z)=z^{J}\) for some \(J\in\mathbb{N}\). In this case there is a single stationary point of order \(J-1\) at \(\xi=0\), and \(g(0)=0\). Following the prescription (8), we obtain a ball radius
\[r_{0}=(C_{\rm ball}/\omega)^{1/J}.\]
We first consider a Type 1 integral in the non-oscillatory region. For simplicity we choose \(f(z)\equiv 1\). Specifically, we consider the evaluation of the integral
\[\int_{0}^{r_{0}{\rm e}^{{\rm i}\theta}}{\rm e}^{{\rm i}\omega g(z)}\ {\rm d}z,\]
for some \(\theta\in[0,2\pi]\). Taking \(\xi_{*}=0\), we can apply Theorem 3.4 with any \(\rho>1\), and the resulting scaled and translated Bernstein ellipse surrounding \([0,r_{0}{\rm e}^{{\rm i}\theta}]\) is tightly contained in the disc \(|z|\leq sr_{0}\), where \(\rho\) and \(s\) are related by
\[\rho=2s-1+\sqrt{(2s-1)^{2}-1}=2s-1+2\sqrt{s^{2}-s}.\]
Hence condition (29) is satisfied, independently of \(\theta\), with
\[C=C_{\rm ball}s^{J},\]
which is independent of \(\omega\) but dependent on \(J\). When \(s\) is large, we have \(\rho\approx 4s\), and in this regime the error bound provided by (30) for Gauss-Legendre quadrature is approximately proportional to
\[(C_{\rm ball}/\omega)^{1/J}{\rm e}^{C_{\rm ball}s^{J}}(4s)^{-2N}.\]
As a function of \(s\), with \(J\) and \(N\) fixed, this quantity is minimised where its \(s\)-derivative vanishes, which occurs where
\[C_{\rm ball}Js^{J}-2N=0,\]
i.e. where
\[s=\left(\frac{2N}{C_{\rm ball}J}\right)^{1/J}.\]
Accordingly, the error bound is approximately proportional to
\[(C_{\rm ball}/\omega)^{1/J}16^{J}\left(\frac{8eN}{C_{\rm ball}J}\right)^{-2N/J}.\]
Thus we expect super-exponential convergence as \(N\to\infty\) for fixed \(J\). However, we expect the convergence to be slower the larger \(J\) is.
Next we consider a Type 2 integral over an SD contour, again with \(f(z)\equiv 1\). Specifically, we consider the evaluation of the integral
\[\int_{r_{0}{\rm e}^{{\rm i}v}}^{\infty{\rm e}^{{\rm i}v}}{\rm e}^{{\rm i} \omega g(z)}\ {\rm d}z,\]
where \(v=((2j+1/2)\pi)/J\) for some \(j\in\{1,\ldots,J\}\). Following our method, the contour is parametrized by
\[h_{\eta}(p)=(r_{0}^{J}+p)^{1/J}{\rm e}^{{\rm i}v},\qquad p\in[0,\infty),\]
and, recalling (10) and (17), after rescaling \(p=\tilde{p}/\omega\) the integral becomes
\[\frac{\mathrm{e}^{-C_{\mathrm{ball}}}\mathrm{e}^{\mathrm{i}v}}{\omega^{1/J}J} \int_{0}^{\infty}(C_{\mathrm{ball}}+\tilde{p})^{1/J-1}\mathrm{e}^{-\tilde{p}} \ \mathrm{d}\tilde{p}.\]
The integrand has a branch point at
\[\tilde{p}=-C_{\mathrm{ball}},\]
but we note that the distance between the branch point and the positive real \(\tilde{p}\)-axis equals \(C_{\mathrm{ball}}\), which is independent of both \(\omega\) and \(J\).
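As a sanity check on these formulas, one can compare the Gauss-Laguerre approximation of the rescaled integral with its closed form \(\mathrm{e}^{\mathrm{i}v}\,\Gamma(1/J,C_{\rm ball})/(\omega^{1/J}J)\) in terms of the upper incomplete gamma function, which follows from the substitution \(u=C_{\rm ball}+\tilde{p}\) (a Matlab sketch with illustrative parameter values):

```matlab
J = 5;  omega = 100;  C = 2*pi;  v = pi/(2*J);  N = 40;
T = diag(2*(1:N) - 1) + diag(1:N-1, 1) + diag(1:N-1, -1);  % Laguerre Jacobi matrix
[V, D] = eig(T);  [t, k] = sort(diag(D));  w = V(1, k).^2;
Qlag  = exp(-C)*exp(1i*v)/(omega^(1/J)*J) * sum(w(:) .* (C + t).^(1/J - 1));
exact = exp(1i*v)/(omega^(1/J)*J) * gamma(1/J) * gammainc(C, 1/J, 'upper');
abs(Qlag - exact)   % decays root-exponentially as N increases
```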
For truncated Gauss-Legendre the relevant theory can be found in [15, Chap. 19] (and see also [16]). Due to the branch point at \(\tilde{p}=-C_{\mathrm{ball}}\), as \(N\to\infty\) we obtain exponential convergence to the integral over the interval \([0,P]\), where \(P\) is given by either (21) or (23). In the case where \(P=L\), by the definition of \(L\) in (22), we expect the truncation error to have relative order \(\delta_{\mathrm{quad}}\).
#### 3.3.4 Number and distribution of quadrature points
PathFinder uses a fixed number \(N\) of quadrature points on each contributing contour, and that number is the same both for integrals within and outside the non-oscillatory region, i.e., for Gauss-Legendre and Gauss-Laguerre quadrature. Thus, increasing the single parameter \(N\) provides a way of uniformly improving accuracy.
The theoretical results in this section (specifically, Theorems 3.4 and 3.5) imply that the precise rate of improvement with respect to \(N\) depends on the type of integral being approximated. They even suggest that a different strategy for the distribution of quadrature points may be superior. Indeed, exponential convergence of Gauss-Legendre for Type 1 integrals in the non-oscillatory region is not balanced with root-exponential convergence of Gauss-Laguerre for Type 2 integrals outside. Similarly, convergence rates of Gauss-Laguerre and truncated Gauss-Legendre outside the non-oscillatory region are different. Our choice of a fixed parameter \(N\) is motivated on the one hand by simplicity, and on the other hand by the lack of robust methods to optimize parameters in alternative schemes. For example, we have shown in SS3.3.3 that the convergence rate of Gauss-Legendre for Type 1 integrals may depend on the order of nearby stationary points. While this can be quantified precisely for the case of monomial phase, it is not at all clear how to generalise this analysis when a cluster of multiple stationary points is present. Hence, stationary point order is a quantity that we deliberately do not explicitly compute, estimate or rely on in any way. Implicitly, of course, it plays a big role, and it does so mainly via the definition of the ball radius in (8).
The main practical benefit of the theoretical analysis of quadrature error in this section is the guarantee that \(N\) is a robust parameter for improving accuracy. Concerning possible future improvements, rather than attempting to optimize the quadrature point distribution a priori, we believe a more promising development would be the ability to invoke standard adaptive quadrature schemes along the contours for a given function \(f\). However, it should be borne in mind that quadrature forms just one step in our algorithm, and that the other steps (particularly the SD path tracing) incur a non-negligible cost overhead that should also be considered when trying to further optimize performance.
## 4 Further implementation aspects
In this section we discuss some additional aspects of the implementation of our algorithm in PathFinder.
### Default parameter values
In Table 4.1 we list the user-specified parameters in our algorithm, along with the default values used in all our numerical results in SS5. These were determined as the result of extensive numerical experiments on a range of examples, not detailed here. Instructions on how to adjust these parameters away from their default values can be found at github.com/AndrewGibbs/PathFinder.
### Small \(\omega\)
While our algorithm is geared towards the case where \(\omega\) is moderate or large, we make a brief comment on the case where \(\omega\) is small. If \(\Gamma\) is infinite then the integral (1) typically diverges for \(\omega=0\). However, if \(\Gamma\) is finite then the integral converges for \(\omega=0\) and for small enough \(\omega\) it is non-oscillatory. In PathFinder we detect and deal with this case in the following way. If both endpoints are finite, then before starting step 1 of the algorithm we construct non-oscillatory balls around the endpoints (using the process in SS2.2) and check whether the balls intersect non-trivially. If so, we apply standard Gauss-Legendre quadrature to evaluate (1); if not, the balls are discarded and we proceed with the rest of the algorithm.
| Parameter | Domain | Meaning | Default |
| --- | --- | --- | --- |
| \(C_{\text{ball}}\) | \((0,\infty)\) | Governs maximum number of oscillations across each non-oscillatory ball (and hence the ball radius) | \(2\pi\) |
| \(N_{\text{ball}}\) | \(\mathbb{N}\) | Number of rays used when determining the ball radius | \(16\) |
| \(\delta_{\text{ball}}\) | \((0,1)\) | Governs when overlapping balls should be amalgamated | \(10^{-3}/(2\max(J-2,1))\) |
| \(\delta_{\text{ODE}}\) | \((0,1)\) | Governs the local step size in the ODE solver for SD path tracing | \(0.1\) |
| \(\delta_{\text{coarse}}\) | \((0,1)\) | Tolerance for the increment in the Newton iteration in the SD path tracing | \(10^{-2}\) |
| \(\delta_{\text{fine}}\) \((<\delta_{\text{coarse}})\) | \((0,1)\) | Tolerance for the increment in the Newton iteration in the quadrature | \(10^{-13}\) |
| \(\delta_{\text{quad}}\) | \((0,1)\) | Governs when the contribution from an integral on the quasi-SD deformation is computed | \(10^{-16}\) |
| \(N\) | \(\mathbb{N}\) | Number of quadrature points to use in each integral evaluated in step 6 | no default |

Table 4.1: User-specified parameters and their default values in PathFinder.
### The case \(J=1\)
In the case \(J=1\) (linear phase) there are no stationary points, and our algorithm simplifies dramatically. Furthermore, the SD contours are simply parallel straight lines in the direction of the single valley at angle \(\pi/2-\arg(\alpha_{1})\), and there is no need to trace them numerically. Hence when \(J=1\) PathFinder skips the ODE contour tracing step and exploits the exact characterization of the SD contours mentioned above.
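For a linear phase \(g(z)=\alpha_{1}z+\alpha_{0}\), (9) gives the SD contour from \(\eta\) explicitly as \(h_{\eta}(p)=\eta+\mathrm{i}p/\alpha_{1}\), so a Type 2 integral reduces to a single Gauss-Laguerre sum with no tracing, as in the following sketch (illustrative names, not PathFinder's internals):

```matlab
function Q = type2_linear_phase(f, alpha1, alpha0, eta, omega, N)
T = diag(2*(1:N) - 1) + diag(1:N-1, 1) + diag(1:N-1, -1);  % Gauss-Laguerre
[V, D] = eig(T);  [t, k] = sort(diag(D));  w = V(1, k).^2;
h = eta + 1i*t/(omega*alpha1);        % exact SD contour points
Q = exp(1i*omega*(alpha1*eta + alpha0))/omega * (1i/alpha1) * sum(w(:) .* f(h));
end
```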
### Specifying infinite endpoints
In the description of our algorithm in SS2 we made the assumption that any infinite endpoint of the contour \(\Gamma\) should be at a valley \(v\in\mathcal{V}\). PathFinder is actually more flexible than this. The user is permitted to specify an infinite endpoint at any \(\theta\in[v-\pi/(2J),v+\pi/(2J)]\) and the code will automatically adjust this to equal \(v\). The case \(\theta=v\pm\pi/(2J)\) is delicate because the highest order term in the phase does not provide exponential decay along the contour. Nonetheless, we include it, because in applications one often encounters this case, with the integral converging conditionally (under appropriate assumptions on \(f\)) and the contour deformation to \(v\) being justified by Jordan's Lemma.
## 5 Numerical results
In this section we present numerical results illustrating the performance of our algorithm and its implementation in PathFinder. All results in this section were produced using PathFinder Version 1.0 [8].
### A "generic" example
We begin by illustrating the performance of PathFinder on the integral
\[I=\int_{-1}^{1}(2z^{4}+7z^{3}+z^{2}+8z+2)\mathrm{e}^{\mathrm{i}\omega(3z^{9}+z^ {8}+4z^{7}+z^{6}+5z^{5}+9z^{4}+2z^{3}+6z^{2}+5z+3)}\ \mathrm{d}z, \tag{32}\]
where, to convey the message that our approach is applicable to truly "generic" amplitudes and polynomial phase functions, the coefficients of \(f\) and \(g\) are chosen to be the first 5 digits of e and the first 10 digits of \(\pi\), respectively. This can be approximated by PathFinder via the Matlab code (cf. (2))
```matlab
PathFinder(-1, 1, @(z) 2*z.^4 + 7*z.^3 + z.^2 + 8*z + 2, ...
           [3 1 4 1 5 9 2 6 5 3], omega, N)
```
In Figure 5.1 we plot the quasi-SD deformations and quadrature point distributions (using the PathFinder 'plot' option) for (32) for \(\omega\in\{0.01,1,5,50\}\) and \(N=10\). As explained in SS2.2, for smaller \(\omega\) the non-oscillatory balls are larger, and can overlap, while for larger \(\omega\) they shrink around the stationary points. In more detail, in Figure 5.1(a) (\(\omega=0.01\)), \(\omega\) is small enough that both endpoints are inside the same non-oscillatory ball. Hence the integral is treated as non-oscillatory and is approximated by Gauss-Legendre quadrature along a single straight-line contour. In Figure 5.1(b) (\(\omega=1\)), \(\omega\) is still small enough that many of the balls overlap, and the quasi-SD deformation comprises two SD contours (one from an exit and one from an endpoint) plus four straight-line contours in the non-oscillatory region. In Figure 5.1(c) (\(\omega=5\)), \(\omega\) is large enough that only two balls overlap, and the quasi-SD deformation comprises five SD contours (two from endpoints, two from exits to valleys, and one from an exit to an entrance), plus four straight-line contours in the non-oscillatory region. Finally, in Figure 5.1(d) (\(\omega=50\)), \(\omega\) is so large that none of the balls overlap, and the quasi-SD deformation comprises eight contributing SD contours (two from endpoints and six from exits to valleys), plus three straight-line contours in the non-oscillatory region. However, in this case the two SD contours and one straight-line contour associated with the stationary point near \(0.2+0.5\mathrm{i}\) are judged to make a negligible contribution to the integral, so are not assigned any quadrature points. We emphasize that this intricate behaviour is fully automated, with no expert input required from the user.

Figure 5.1: PathFinder output (cf. Figure 2.1) with \(N=10\) for the approximation of (32).
In Figure 5.2(a) we plot the error in the PathFinder approximation of (32), compared to reference values computed using the Julia QuadGK package when \(\omega<500\), and using PathFinder with \(N=500\) when \(\omega\geq 500\). For fixed \(\omega\) we observe rapid convergence as \(N\to\infty\), at a rate that appears independent of \(\omega\). In Figure 5.2(b) we show the associated computation times, which remain bounded as \(\omega\) increases.
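A convergence experiment along the lines of Figure 5.2(a) can be run in a few lines of Matlab (a sketch; it assumes PathFinder is on the path and uses a high-\(N\) run as reference):

```matlab
omega = 50;
f = @(z) 2*z.^4 + 7*z.^3 + z.^2 + 8*z + 2;
a = [3 1 4 1 5 9 2 6 5 3];
Iref = PathFinder(-1, 1, f, a, omega, 500);        % reference value
for N = 5:5:50
    err = abs(PathFinder(-1, 1, f, a, omega, N) - Iref) / abs(Iref);
    fprintf('N = %2d: relative error = %.2e\n', N, err)
end
```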
### Coalescence and the Airy function
The canonical example of an integral with two coalescing stationary points is provided by the integral representation for the Airy function, viz. (see [1, 9.5.4])
\[\operatorname{Ai}(x)=\frac{1}{2\pi\mathrm{i}}\int_{\infty\mathrm{e}^{-\mathrm{i}\pi/3}}^{\infty\mathrm{e}^{\mathrm{i}\pi/3}}\mathrm{e}^{z^{3}/3-xz}\;\mathrm{d}z=\frac{1}{2\pi\mathrm{i}}\int_{\infty\mathrm{e}^{-\mathrm{i}\pi/3}}^{\infty\mathrm{e}^{\mathrm{i}\pi/3}}\mathrm{e}^{\mathrm{i}(-\mathrm{i}(z^{3}/3-xz))}\;\mathrm{d}z,\qquad x\in\mathbb{C}, \tag{33}\]

in which the integral is of the form (1) with \(f\equiv 1\), \(\omega=1\) and \(g(z;x)=-\mathrm{i}(z^{3}/3-xz)\). Up to a change of variable this is the same example for which, as mentioned in SS1, a bespoke, complex Gaussian quadrature rule was developed in [10]. Ai can be approximated by PathFinder via the Matlab code

```matlab
Ai = @(x) PathFinder(-pi/3, pi/3, [], [-1i/3 0 1i*x 0], ...
          1, N, 'infcontour', [true true])/(2i*pi)
```
contours from exits plus three straight-line contours inside balls (which go via both stationary points). In Figure 5.3(c) the balls overlap enough that both stationary points are contained in both balls, so we get two SD contours from exits plus just two straight-line contours inside balls (which go via only one of the stationary points). In Figure 5.3(d) the balls have merged completely and in addition to the two SD contours from exits there is just one straight-line contour inside a ball (which does not go via the stationary point). In Figure 5.3(e) the balls have split again, but we see the same deformation structure as in Figure 5.3(d). Again, we emphasize that these calculations are fully automated.
In Figure 5.4 we show the accuracy of the PathFinder approximation for this example as a function of \(x\in[-10,4]\), for different \(N\). Our reference is the built-in Matlab command airy. We note that between \(x=-3\) and \(x=0\) the error for the smaller values of \(N\) undergoes some jumps. These are due to the fact that near stationary point coalescence the topology of the quasi-SD deformation, the number of contours constituting it, and hence the total number of quadrature points along it (recall (14)), all change discontinuously as a function of \(x\) (as illustrated in Figure 5.3). However, as \(N\) increases we see a clear, approximately exponential decrease in the error, and, although the rate of decrease depends slightly on \(x\) (because of the factors mentioned above), for \(N=30\) we achieve approximately \(10^{-13}\) error uniformly across the interval.

Figure 5.3: PathFinder output with \(N=20\) for the approximation of \(\mathrm{Ai}(x)\) via (33) at various \(x\), showing the stationary point coalescence at \(x=0\).

Figure 5.4: Accuracy of PathFinder approximation of \(\mathrm{Ai}(x)\) for different \(N\).
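A sketch of the comparison underlying Figure 5.4, rebuilding the PathFinder-based handle for Ai from above with \(N=30\) and using Matlab's built-in airy as reference:

```matlab
N = 30;
Ai = @(x) PathFinder(-pi/3, pi/3, [], [-1i/3 0 1i*x 0], ...
          1, N, 'infcontour', [true true])/(2i*pi);
x = linspace(-10, 4, 15);
err = zeros(size(x));
for m = 1:numel(x)
    err(m) = abs(Ai(x(m)) - airy(x(m)));   % airy(x) returns Ai(x) by default
end
max(err)   % roughly 1e-13 uniformly for N = 30 (cf. Figure 5.4)
```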
### A high order stationary point - comparison with Mathematica's implementation of Levin quadrature
We now consider the integral
\[I=\int_{-1}^{1}\sin(z)\mathrm{e}^{\mathrm{i}\omega z^{9}}\,\mathrm{d}z, \tag{34}\]
which has a stationary point of order \(8\) at the origin. The integral (34) can be approximated by PathFinder via the command
```matlab
PathFinder(-1, 1, @(z) sin(z), [1 0 0 0 0 0 0 0 0 0], omega, N)
```
Figure 5.5 shows the quasi-SD deformation and quadrature point distribution obtained by PathFinder for \(\omega=100,000\) and \(N=50\). There are small contributions from the endpoints, but the main contribution comes from the ball containing the stationary point.
In the Mathematica documentation [18, pp75-86], it is stated that oscillatory integrals with monomial phase functions such as (34) can be evaluated efficiently using the built-in Mathematica function NIntegrate, via its implementation of Levin quadrature (which is described, e.g., in [7, SS3.3]). To do this one can use the Mathematica command:
```mathematica
NIntegrate[Sin[x] Exp[omega*I*x^9], {x, -1, 1},
  Method -> {"LevinRule", "Kernel" -> Exp[omega*I*x^9]}]
```
Figure 5.5: PathFinder output for (34) with \(\omega=100,000\) and \(N=50\).

In Figure 5.6(a) we show a plot of the relative accuracy of our PathFinder approximation, compared to the Mathematica approximation (using the default settings), as a function of \(\omega\), for different \(N\) values. For all three \(N\) values the accuracy of our approximation is approximately uniform in \(\omega\), and for \(N=50\) our approximation agrees with Mathematica's to approximately 13 digits. In Figure 5.6(b) we report the corresponding computation times (averaged over \(100\) identical runs) for the Mathematica routine and for the PathFinder approximation with \(N=50\). These results were obtained on a laptop (i7-1185G7, 32GB RAM) running Mathematica v13.0 and Matlab v2021b. The results suggest that PathFinder is highly competitive with Mathematica for this problem, especially for large \(\omega\).
### Coalescence to a high order stationary point
We now investigate the robustness of our algorithm in the presence of a large number of coalescing stationary points. Specifically, we consider the integral
\[\int_{-1}^{1}\mathrm{e}^{\mathrm{i}\omega(z^{7}/7-r^{6}z)}\ \mathrm{d}z, \tag{35}\]
where \(r\geq 0\) is a parameter controlling the coalescence. For \(r>0\) there are 6 stationary points with \(|\xi|=r\), namely the solutions of \(\xi^{6}=r^{6}\), and for \(r=0\) there is a single stationary point of order 6. To evaluate this integral in PathFinder for a given \(r\), one can use the command
```matlab
PathFinder(-1, 1, [], [1/7 0 0 0 0 0 -r^6 0], omega, N)
```
In Figure 5.7 we plot the quasi-SD deformations and quadrature point distributions for some different \(r\) values, showing how the balls first intersect and then merge as \(r\to 0\).
Figure 5.6: Accuracy (a) and timings (b) of the PathFinder approximation of (34), compared to Mathematica's NIntegrate command.

In Figure 5.8 we show convergence (with respect to a PathFinder reference with \(N=500\)) and CPU times (averaged over 100 runs) for fixed \(r=0.01\). We see that both the error and the CPU time are essentially independent of \(\omega\) in this case. In Figure 5.9 we plot errors for two fixed \(N\) values \(N=10,50\), as a function of \(r\). We observe that as \(r\to 0\), the error stays bounded. For \(N=10\) the error jumps up between \(r=10^{-3}\) and \(r=10^{-2}\), at a point depending on \(\omega\). This represents the point at which the balls around the stationary points merge, resulting in a reduction of \(N_{\mathrm{tot}}\), and hence a reduction in accuracy. But after this point we observe no further reduction in accuracy as \(r\to 0\). We remark that for sufficiently small \(r>0\) the six stationary points are numerically indistinguishable, but this is not a problem for our algorithm because in that case the problem will be treated identically to that of a monomial phase.
Figure 5.7: PathFinder output for the approximation of (35) with \(\omega=1000\) and \(N=15\).
Figure 5.8: Accuracy (a) and timings (b) for (35) for \(p=6\) and \(r=0.01\).
### Canonical cuspoid integrals and their generalisations
In this section we show how our algorithm can be applied to the computation of some of the canonical integrals catalogued by Berry and Howls in [1, SS36], which, as mentioned already in SS1, are of fundamental importance in numerous application areas including optics, acoustics and quantum mechanics.
In this context, our algorithm is related to that of [11], where an adaptive contour deformation approach was applied to evaluate the cuspoid integrals considered in SS5.5.1. The algorithm in [11] is similar in spirit to our approach, in that it deforms the integration contour so that it terminates in valleys at infinity, and splits the contour into portions close to stationary points plus portions away from stationary points. However, in contrast to our approach, the algorithm in [11] does not attempt to trace SD contours, and hence is susceptible to rounding errors associated with the "violent" behaviour of the exponential factor \(\mathrm{e}^{\mathrm{i}\omega g(z)}\) when one is not on a true SD contour - see [11, SS2]. Furthermore, while the algorithm in [11] was specialised to the case of integration over the real line, our algorithm can handle much more general contours, as we illustrate in SS5.5.2.
#### 5.5.1 Cuspoid integrals
The so-called "cuspoid integrals" listed in [1, SS36.2.4] are all of the form (1) with polynomial phase \(g\) and unit amplitude \(f\equiv 1\), unit frequency \(\omega=1\), and integration along the real line. Our algorithm is ideally suited to the evaluation of these integrals, and to demonstrate this we compute two of them. In the notation of [1, SS36], we consider the cusp catastrophe integral
\[\Psi_{2}(x,y)=P(y,x)=\int_{-\infty}^{\infty}\mathrm{e}^{\mathrm{i}(t^{4}+yt^{ 2}+xt)}\ \mathrm{d}t, \tag{36}\]
where \(P\) is the Pearcey function, and the swallowtail catastrophe integral
\[\Psi_{3}(x,y,z)=\int_{-\infty}^{\infty}\mathrm{e}^{\mathrm{i}(t^{5}+zt^{3}+yt ^{2}+xt)}\ \mathrm{d}t. \tag{37}\]
Figure 5.9: Accuracy for (35) as \(r\to 0\) for \(p=6\), for \(N=10\) (a) and \(N=50\) (b).
Both exhibit coalescence of stationary points on certain algebraic varieties (see [1, SS36.5(ii)]) on which both the first and second derivatives of the phase function vanish. In the case of (36) this occurs when
\[y=-\frac{3}{2}|x|^{2/3}, \tag{38}\]
and for (37) this occurs when
\[400x^{3}-360x^{2}z^{2}-135y^{4}-27y^{2}z^{3}+540xy^{2}z+81xz^{4}=0. \tag{39}\]
The integrals (36) and (37) can be computed in Pathfinder via the commands
```matlab
Psi2 = @(x,y)   PathFinder(pi, 0, [], [1 0 y x 0], ...
                1, N, 'infcontour', [true true])
Psi3 = @(x,y,z) PathFinder(pi, 0, [], [1 0 z y x 0], ...
                1, N, 'infcontour', [true true])
```
Figure 5.10 shows plots of the magnitude of (36) and (37) (the latter over the plane \(z=-7.5\)), computed using PathFinder with the default settings and \(N=50\). The plots agree qualitatively with those presented in [1, Figs 36.3.1 & 36.6.5], and, for (36), agree quantitatively (to all five decimal places presented) with the values presented in [11, Table 1]. Computation times on a small desktop computer (Intel i7-4790, 32GB RAM) were less than a minute for the cusp (which required the computation of 10000 instances of (36), averaging 0.005s per instance) and less than an hour for the swallowtail (which required 250000 instances of (37), averaging 0.01s per instance).
Figure 5.10: Magnitude plots of (36) and (37), with coalescence curves (38) and (39) (the latter with \(z=-7.5\)) superimposed in black. The computational grid was of size \(100\times 100\) for (a) and \(500\times 500\) for (b).

#### 5.5.2 Generalisations

In [9] the authors considered a family of generalisations of certain canonical cuspoid integrals, with integration no longer over the real line, but rather over a complex contour starting and ending at valleys at infinity, and possibly with a non-unit amplitude function.
A specific aim of [9] was to investigate the relevance of such integrals to the study of the so-called "inflection point problem", a canonical problem in wave scattering originally introduced over 50 years ago by Popov in [12]. This problem, which remains unsolved in closed form, concerns two-dimensional time-harmonic wave propagation near a boundary with an inflection point, and seeks a solution for the wave field near the inflection point that describes the transition from an incoming "whispering gallery wave" supported on the concave portion of the boundary, to outgoing "creeping waves" along the convex portion of the boundary, along with a scattered "searchlight" beam (for details and further references see [14]).
In this context, in [9, SS3.3] the authors studied the family of integrals
\[A_{ij}(x,y)=\int_{\Gamma_{ij}}f(t)\mathrm{e}^{\mathrm{i}(2t^{5}/5-xt^{4}/2-yt^ {2})}\ \mathrm{d}t, \tag{40}\]
where \(f(t)\) is some amplitude to be specified, and \(\Gamma_{ij}\) denotes any contour from valley \(v_{i}\) to valley \(v_{j}\), where \(v_{j}:=(2(j-1)+1/2)\pi/5\), \(j=1,\ldots,5\). These integrals have stationary point coalescence on the cubic curve \(y+4x^{3}/27=0\), which suggests that, by appropriately choosing \(f\) and \(\Gamma_{ij}\), they might exhibit certain features of the solution of the inflection point problem. Indeed, in [9, SS4] it was shown that as \(x\to-\infty\) near the cubic curve, the integral \(A_{32}\) has the character of an incoming whispering gallery type wave, and that, as \(x\to+\infty\) near the cubic curve, the integral \(A_{52}\) has the character of an outgoing creeping wave. However, plots of the resulting fields could not be presented in [9] due to the lack of a suitable numerical evaluation method and implementation.
Using PathFinder we are able to remedy this. In Figures 5.11(a) and 5.11(b) we provide plots of the magnitude of \(A_{32}\) and \(A_{52}\) with \(f\equiv 1\). To evaluate the integrals we used the PathFinder code
```matlab
A32 = @(x,y) PathFinder(9*pi/10,  pi/2, [], [2/5 -x/2 0 -y 0 0], ...
             1, N, 'infcontour', [true true])
A52 = @(x,y) PathFinder(17*pi/10, pi/2, [], [2/5 -x/2 0 -y 0 0], ...
             1, N, 'infcontour', [true true])
```

We only plot \(A_{52}\) above the cubic curve \(y+4x^{3}/27=0\), because below this curve \(A_{52}\) becomes exponentially large (cf. [9, Fig. 12(i)]). In Figures 5.11(c) and 5.11(d) we present corresponding plots of the modulated plane wave
\[u(x_{0},y_{0})=A_{ij}(x,y)\mathrm{e}^{\mathrm{i}kx_{0}},\]
where \((x_{0},y_{0})\) are outer variables, related to the inner variables \((x,y)\) by \(x=k^{1/5}x_{0}\), \(y=k^{3/5}y_{0}\), which is an asymptotic solution of the Helmholtz equation \(\Delta u+k^{2}u=0\) as \(k\to\infty\) in the region \(x_{0}=O(k^{-1/5})\), \(y_{0}=O(k^{-3/5})\)[9, SS1]. Here one observes the predicted incoming whispering gallery type behaviour of \(A_{32}\) near the top of Figure 5.11(c) between \(x_{0}=-2\) and \(x_{0}=-1\), with oscillations giving way to an exponentially small field in the caustic shadow, and the predicted creeping wave type behaviour of \(A_{52}\) near the bottom of Figure 5.11(d) between \(x_{0}=1\) and \(x_{0}=2\), with waves propagating along the cubic curve, shedding rays tangentially.
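The fields in Figures 5.11(c) and 5.11(d) can then be assembled from these handles via the inner-variable scaling, as in the following sketch (grid sizes and plotting choices are illustrative):

```matlab
k = 40;  N = 50;
A32 = @(x,y) PathFinder(9*pi/10, pi/2, [], [2/5 -x/2 0 -y 0 0], ...
             1, N, 'infcontour', [true true]);
[X0, Y0] = meshgrid(linspace(-2.5, 2.5, 60), linspace(-1, 1, 40));
U = zeros(size(X0));
for m = 1:numel(X0)
    U(m) = A32(k^(1/5)*X0(m), k^(3/5)*Y0(m)) * exp(1i*k*X0(m));  % u = A32 e^{ikx0}
end
surf(X0, Y0, abs(U), 'EdgeColor', 'none');  view(2);  colorbar
```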
In ongoing and future studies we plan to use PathFinder to further investigate the properties of integrals of the form (40), and generalisations involving different choices of \(f\) and higher degree phase functions (see [9]), which we hope may shed new light on the inflection point problem and related problems in high frequency wave propagation.
Figure 5.11: Plots of (40) with \(f\equiv 1\), along with the associated approximate Helmholtz equation solutions for \(k=40\).

## 6 Acknowledgements

DPH and AG were supported by EPSRC grants EP/S01375X/1 and EP/V053868/1. DPH and AG would like to thank the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme Mathematical Theory and Applications of Multiple Wave Scattering when work on this paper was undertaken. This work was supported by EPSRC grant number EP/R014604/1. Additionally, AG acknowledges support from KU Leuven project C14/15/05, and DH acknowledges support from FWO-Flanders project G.088.622N.
|
2305.07418 | Regularity of Lipschitz free boundaries for a $p(x)$-Laplacian problem
with right hand side | We continue our study in \cite{FL} on viscosity solutions to a one-phase free
boundary problem for the $p(x)$-Laplacian with non-zero right hand side. We
first prove that viscosity solutions are locally Lipschitz continuous, which is
the optimal regularity for the problem. Then we prove that Lipschitz free
boundaries of viscosity solutions are $C^{1,\alpha}$. We also present some
applications of our results.
Moreover, we obtain new results for the operator under consideration that are
of independent interest, such as a Harnack inequality. | Fausto Ferrari, Claudia Lederman | 2023-05-12T12:36:59Z | http://arxiv.org/abs/2305.07418v1 | # Regularity of Lipschitz free boundaries for a \(p(x)\)-Laplacian problem with right hand side
###### Abstract.
We continue our study in [FL] on viscosity solutions to a one-phase free boundary problem for the \(p(x)\)-Laplacian with non-zero right hand side. We first prove that viscosity solutions are locally Lipschitz continuous, which is the optimal regularity for the problem. Then we prove that Lipschitz free boundaries of viscosity solutions are \(C^{1,\alpha}\). We also present some applications of our results.
Moreover, we obtain new results for the operator under consideration that are of independent interest, such as a Harnack inequality.
Key words and phrases: free boundary problem, singular/degenerate operator, variable exponent spaces, regularity of the free boundary, non-zero right hand side, viscosity solutions, Harnack inequality, optimal regularity. 2020 _Mathematics Subject Classification._ 35R35, 35B65, 35J60, 35J70. F. F. was partially supported by INDAM-GNAMPA 2019 project: _Proprietà di regolarità delle soluzioni viscose con applicazioni a problemi di frontiera libera_ and INDAM-GNAMPA 2022 project: _Regolarità locale e globale per problemi completamente non lineari_. C. L. was partially supported by the project GHAIA Horizon 2020 MCSA RISE 2017 programme grant 777822 and by the grants CONICET PIP 11220150100032CO 2016-2019, UBACYT 20020150100154BA and ANPCyT PICT 2019-00985. C. L. wishes to thank the Department of Mathematics of the University of Bologna, Italy, for the kind hospitality.
studied in [LW3], as well as in the seminal paper by Alt and Caffarelli [AC] in the case \(p(x)\equiv 2\) and \(f\equiv 0.\) We refer also to [LW4], where (1.1) appears in the study of an optimal design problem.
We are interested in the regularity of both the solutions and the free boundaries of viscosity solutions of (1.1). This problem has already been addressed in [LW2] for weak solutions of (1.1), with the aid of the techniques developed in [AC].
In the present work we are following the strategy introduced in the important paper by De Silva [D], which was inspired by [S], for one-phase problems and linear non-divergence operators. [D] was further extended to two-phase problems in different settings; see [DFS1, DFS2, DFS3]. The same technique was applied to the \(p\)-Laplace operator (\(p(x)\equiv p\) in (1.1)) for the one-phase case, with \(p\geq 2\), in [LR].
In the linear homogeneous case, \(f\equiv 0\), (1.1) was studied for viscosity solutions in the pioneering works by Caffarelli [C1, C2]. The results in [C1, C2] have been widely generalized to different classes of homogeneous elliptic problems. See for example [CFS, FS1, FS2] for linear operators, [AF, F1, F2, Fe1, W1, W2] for fully nonlinear operators and [LN1, LN2] for the \(p\)-Laplacian.
We recall that problem (1.1) was originally studied in the linear homogeneous case in [AC], associated to (1.2). These techniques were generalized to the linear case with \(f\not\equiv 0\) in [GS, Le]. In the homogeneous case, to a quasilinear uniformly elliptic situation [ACF], to the \(p\)-Laplacian [DP], to an Orlicz setting [MW] and to the \(p(x)\)-Laplacian with \(p(x)\geq 2\) [FMW]. Finally, (1.1) with \(1<p(x)<\infty\) and \(f\not\equiv 0\) was dealt with in [LW2].
In [FL] we proved that flat free boundaries of viscosity solutions to (1.1) are \(C^{1,\alpha}.\) Here we first prove that viscosity solutions are locally Lipschitz continuous, which is the optimal regularity for the problem. Then we prove that Lipschitz free boundaries of viscosity solutions are \(C^{1,\alpha}.\)
We devote this sequel to the study of these issues, which brought challenging difficulties due to the nonlinear behavior of the \(p(x)\)-Laplacian and, as a consequence, we present several novelties that are described in detail below.
Our main results are the following (for notation and the precise definition of viscosity solution to (1.1) we refer to Section 2)
**Theorem 1.1** (Optimal regularity).: _Let \(u\) be a viscosity solution to (1.1) in \(B_{1}\). There exists a constant \(C>0\) such that_
\[\|\nabla u\|_{L^{\infty}(B_{1/2})}\leq C.\]
**Theorem 1.2** (Lipschitz implies \(C^{1,\alpha}\)).: _Let \(u\) be a viscosity solution to (1.1) in \(B_{1}\), with \(0\in F(u)\). If \(F(u)\) is a Lipschitz graph in a neighborhood of \(0\), then \(F(u)\) is \(C^{1,\alpha}\) in a (smaller) neighborhood of \(0\)._
In addition to the assumptions already stated above, we suppose that there exist positive numbers \(p_{\min},p_{\max}\) and \(\gamma_{0}\) such that \(1<p_{\min}\leq p(x)\leq p_{\max}<\infty\) and \(g(x)\geq\gamma_{0}>0\).
In Theorem 1.1 the constant \(C\) depends only on \(p_{\min}\), \(p_{\max}\), \(\|\nabla p\|_{L^{\infty}(B_{3/4})}\), \(\|f\|_{L^{\infty}(B_{3/4})}\), \(\|g\|_{C^{0,\beta}(\overline{B_{3/4}})}\), \(\beta\), \(\|u\|_{L^{\infty}(B_{3/4})}\) and \(n\) (the dimension of the space).
In Theorem 1.2 the constant \(\alpha\) depends only on \(p_{\min}\), \(p_{\max}\), \(\beta\), \(\|g\|_{L^{\infty}(B_{\rho})}\), \(\gamma_{0}\) and \(n\), where \(\rho\) is the radius of the ball \(B_{\rho}\) where \(F(u)\) is Lipschitz. Moreover, the size of the neighborhood where \(F(u)\) is \(C^{1,\alpha}\) depends only on \(\rho\), \(p_{\min}\), \(p_{\max}\), \(\|\nabla p\|_{L^{\infty}(B_{\rho})}\), \(\|f\|_{L^{\infty}(B_{\rho})}\), \(\|g\|_{C^{0,\beta}(\overline{B_{\rho}})}\), \(\gamma_{0}\), \(\beta\), \(\|u\|_{L^{\infty}(B_{3/4})}\), \(n\) and the Lipschitz constant of \(F(u)\).
After we develop the necessary tools, Theorem 1.2 follows from Theorem 1.1 in [FL] --where we proved that flat free boundaries are \(C^{1,\alpha}\)-- and from the main result in [LN1], via a blow-up argument.
As already mentioned, problem (1.1) was faced in [LW2] for weak solutions with the techniques developed in [AC]. We want to emphasize at this point that the approach in [AC] for weak solutions gives that _flat_ free boundaries are \(C^{1,\alpha}\). Alt - Caffarelli's approach does not include the result _Lipschitz_ free boundaries are \(C^{1,\alpha}\). One of the consequences of our Theorem 1.2 is an analogous result for weak solutions of (1.1) (see Corollary 7.3).
Among the novelties that our work presents, we also refer to Section 5 where we prove some auxiliary results that are crucial in the proof of our main theorem. In that section we revisit some lemmas that are well known in the linear setting (see [CS] and the Appendix in [C2]), for the case of \(p_{0}\)-harmonic functions (i.e., \(\Delta_{p_{0}}u=0\), \(p_{0}\in(1,\infty)\)). Our results concern the existence of first order expansions at one side regular boundary points of positive Lipschitz \(p_{0}\)-harmonic functions, vanishing at the boundary of a domain. This part required considerable effort and relied on the equivalence of the notions of weak and viscosity solutions in the case of the \(p_{0}\)-Laplace operator. Moreover, our proof can be applied to a general class of fully nonlinear degenerate elliptic operators (see Remark 5.2). We strongly believe that these results are of independent interest.
We remark that, as was the case in [FL], carrying out, for the inhomogeneous \(p(x)\)-Laplace operator, the strategy devised in [D] required that we develop new tools. In fact, the \(p(x)\)-Laplacian is a nonlinear operator that appears naturally in divergence form from minimization problems, i.e., in the form \(\text{div}A(x,\nabla u)=f(x)\), with
\[\lambda|\eta|^{p(x)-2}|\xi|^{2}\leq\sum_{i,j=1}^{n}\frac{\partial A_{i}}{ \partial\eta_{j}}(x,\eta)\xi_{i}\xi_{j}\leq\Lambda|\eta|^{p(x)-2}|\xi|^{2}, \quad\xi\in\mathbb{R}^{n},\]
where \(0<\lambda\leq\Lambda\). This operator is singular in the regions where \(1<p(x)<2\) and degenerate in the ones where \(p(x)>2\).
Let us stress that the main arguments in the approach introduced in [D] are based on Harnack inequality. However, Harnack inequality for the \(p(x)\)-Laplacian has a different form from the standard one --still valid for the \(p_{0}\)-Laplace operator-- even in the homogeneous case. Namely, Harnack inequality for the inhomogeneous equation \(\Delta_{p(x)}u=f\), with \(f\) bounded, states that, for any nonnegative weak solution \(u\) in \(B_{4R}\), there exist constants \(C>0\) and \(\mu\geq 0\) such that
\[\sup_{B_{R}}u\leq C(\inf_{B_{R}}u+R+\mu R), \tag{1.3}\]
where \(C\) depends on \(u\), and \(\mu\) depends on \(||f||_{L^{\infty}(B_{4R})}\) (among other dependencies). We refer to [Wo] for the proof and further details on Harnack inequality for the inhomogeneous \(p(x)\)-Laplacian.
The presence of the extra term appearing in the right hand side of (1.3) -- _even present when_\(f\equiv 0\), in which case \(\mu=0\)-- brought a major difficulty in the application of the strategy of [D] for problem (1.1), under a small perturbation assumption. In order to successfully apply that strategy, we proved a new Harnack inequality for the inhomogeneous \(p(x)\)-Laplacian (Theorem 3.2) that is appropriate for small perturbation settings. Our result roughly says that if \(||f||_{L^{\infty}}\) is small and \(p(x)\) is close to a constant \(p_{0}\), then the constant terms appearing in the right hand side of (1.3) can be taken small.
This constitutes a key result in our proof of the nondegeneracy of viscosity solutions of (1.1) with Lipschitz free boundaries, and it eventually leads to our main Theorem 1.2. Let us emphasize that, in light of the discussion above on inequality (1.3), our Harnack inequality for small perturbation settings is indeed of independent interest.
Another important matter, not present in other free boundary problems treated with the present approach, is the a priori control on the dependence on \(u\) in the constant \(C\) appearing in Harnack inequality (1.3). This control is required in order to perform iteration and blow-up arguments. This same fact made the proof of Theorem 1.1 much more delicate.
As already mentioned, in Theorem 1.2 we make use of the main result in [LN1]. It is worth noticing that the application of this result to our problem required nontrivial arguments, due to the different notion of solution employed in [LN1] (see Section 6, Theorem 1.2 and Propositions 6.4 and 6.5).
Let us remark that, as a by-product of our theorems on the regularity of \(F(u)\), we get in Corollary 6.6 further regularity results for \(F(u)\), under additional regularity assumptions on the data \(p,f\) and \(g\).
We also discuss some applications of Theorem 1.1 in [FL] and Theorem 1.2 in the present paper (see Section 8 and, in particular, Remark 8.4).
Let us mention as well that our results in Sections 7 and 8 are new even for \(p(x)\equiv p_{0}\), with \(p_{0}\) a constant.
We finally point out that the \(p(x)\)-Laplacian is a particular case of operator with nonstandard growth. Partial differential equations with nonstandard growth have been receiving a lot of attention due to their wide range of applications. Among them we mention the modeling of non-Newtonian fluids, for instance, electrorheological [R] or thermorheological fluids [AR]. Other applications include nonlinear elasticity [Z1], image reconstruction [AMS, CLR] and the modeling of electric conductors [Z2], to cite a few.
Our work is organized as follows. In Section 2 we provide notation and basic definitions. We also recall the relationship between the different notions of solutions to \(\Delta_{p(x)}u=f\) we are using. In Section 3 we obtain a Harnack inequality for the inhomogeneous equation \(\Delta_{p(x)}u=f\) (Theorem 3.2) that is appropriate for small perturbation settings. Next, in Section 4 we prove the local Lipschitz continuity of viscosity solutions of (1.1), Theorem 1.1. We then show the nondegeneracy of these solutions under the additional assumption that \(F(u)\) is a Lipschitz graph. In Section 5 we obtain a result on asymptotic developments of positive Lipschitz \(p_{0}\)-harmonic functions at one side regular boundary points that we use in Theorem 1.2 and in Section 7. Then, in Section 6 we prove our main result, Theorem 1.2. In Section 7 we discuss some consequences, and finally, in Section 8, we present some applications of our results. For the sake of completeness, in Appendix A we introduce the Sobolev spaces with variable exponent, which are the appropriate spaces to work with weak solutions of the \(p(x)\)-Laplacian. We conclude the paper with Appendix B, where we include a Liouville type result that we use in our main theorem.
### Assumptions
Throughout the paper we let \(\Omega\subset\mathbb{R}^{n}\) be a bounded domain.
### Assumptions on \(p(x)\)

We assume that the function \(p(x)\) verifies
\[p\in C^{1}(\Omega),\qquad 1<p_{\min}\leq p(x)\leq p_{\max}<\infty,\qquad\nabla p \in L^{\infty}(\Omega),\]
for some positive constants \(p_{\min}\) and \(p_{\max}\).
### Assumptions on \(f\)
We assume that the function \(f\) verifies
\[f\in C(\Omega)\cap L^{\infty}(\Omega).\]
### Assumptions on \(g\)
We assume that the function \(g\) verifies
\[g\in C^{0,\beta}(\Omega)\cap L^{\infty}(\Omega),\qquad g(x)\geq\gamma_{0}>0,\]
for some positive constants \(0<\beta<1\) and \(\gamma_{0}\).
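For orientation, a minimal example of admissible data (ours, chosen only to illustrate the hypotheses, and not used elsewhere in the paper) is
\[p(x)=2+\tfrac{1}{2}\sin x_{1},\qquad f\equiv 0,\qquad g\equiv 1,\]
for which one may take \(p_{\min}=\frac{3}{2}\), \(p_{\max}=\frac{5}{2}\), \(\gamma_{0}=1\) and any \(0<\beta<1\); indeed \(p\in C^{1}(\Omega)\) with \(\|\nabla p\|_{L^{\infty}(\Omega)}\leq\frac{1}{2}\), and \(g\) is constant, hence in \(C^{0,\beta}(\Omega)\) for every \(\beta\).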
## 2. Basic definitions, notation and preliminaries
In this section, we provide notation, basic definitions and some preliminaries that will be relevant for our work.
### Notation
For any continuous function \(u:\Omega\subset\mathbb{R}^{n}\to\mathbb{R}\) we denote
\[\Omega^{+}(u):=\{x\in\Omega:u(x)>0\},\qquad F(u):=\partial\Omega^{+}(u)\cap\Omega. \tag{2.1}\]
We refer to the set \(F(u)\) as the _free boundary_ of \(u\), while \(\Omega^{+}(u)\) is its _positive phase_ (or _side_).
Throughout the paper, when we say that \(F(u)\)_is Lipschitz_ we are assuming that
\[\Omega^{+}(u)=\{x=(x^{\prime},x_{n})\in\Omega:x_{n}>\psi(x^{\prime})\},\]
in an appropriate coordinate system, with \(\psi\) Lipschitz on \(\mathbb{R}^{n-1}\).
We begin with some remarks on the \(p(x)\)-Laplacian. In particular, we recall the relationship between the different notions of solutions to \(\Delta_{p(x)}u=f\) we are using, namely, weak and viscosity solutions. Then we give the definition of viscosity solution to problem (1.1) and we deduce some consequences. We here refer to the usual \(C\)-viscosity definition of sub/supersolution and solution of an elliptic PDE, see e.g., [CIL].
We start by observing that direct calculations show that, for \(C^{2}\) functions \(u\) such that \(\nabla u(x)\neq 0\),
\[\Delta_{p(x)}u=\operatorname{div}(|\nabla u|^{p(x)-2}\nabla u)\] \[=|\nabla u(x)|^{p(x)-2}\left(\Delta u+(p(x)-2)\Delta_{\infty}^{N} u+\langle\nabla p(x),\nabla u(x)\rangle\log|\nabla u(x)|\right),\]
where
\[\Delta_{\infty}^{N}u:=\left\langle D^{2}u(x)\frac{\nabla u(x)}{|\nabla u(x)|} \,,\,\frac{\nabla u(x)}{|\nabla u(x)|}\right\rangle\]
denotes the normalized \(\infty\)-Laplace operator.
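As a quick consistency check of this identity (our computation, not needed in the sequel), take an affine function \(u(x)=\langle a,x\rangle\) with \(a\neq 0\): then \(D^{2}u=0\), so \(\Delta u=\Delta_{\infty}^{N}u=0\) and the identity reduces to
\[\Delta_{p(x)}u=|a|^{p(x)-2}\,\langle\nabla p(x),a\rangle\,\log|a|,\]
which vanishes for every exponent precisely when \(|a|=1\). The logarithmic term is thus a genuinely new feature of the variable exponent: for \(p(x)\equiv p_{0}\) every affine function is \(p_{0}\)-harmonic, while for nonconstant \(p\) this fails as soon as \(|a|\neq 1\) and \(\langle\nabla p,a\rangle\not\equiv 0\).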
We also deduce that
\[|\nabla u(x)|^{p(x)-2}\left(\mathcal{M}^{-}_{\lambda_{0},\Lambda_ {0}}(D^{2}u(x))+\langle\nabla p(x),\nabla u(x)\rangle\log|\nabla u(x)|\right)\] \[\leq\Delta_{p(x)}u\leq|\nabla u(x)|^{p(x)-2}\left(\mathcal{M}^{+} _{\lambda_{0},\Lambda_{0}}(D^{2}u(x))+\langle\nabla p(x),\nabla u(x)\rangle \log|\nabla u(x)|\right), \tag{2.2}\]
where \(\lambda_{0}:=\min\{1,p_{\min}-1\}\) and \(\Lambda_{0}:=\max\{1,p_{\max}-1\}\). As usual, if \(0<\lambda\leq\Lambda\) are numbers, and \(e_{i}\) is the \(i\)-th eigenvalue of the \(n\times n\) symmetric matrix \(M\), then \(\mathcal{M}_{\lambda,\Lambda}^{+}\) and \(\mathcal{M}_{\lambda,\Lambda}^{-}\) denote the extremal Pucci operators and are defined (see [CC]) as
\[\begin{split}\mathcal{M}_{\lambda,\Lambda}^{+}(M)&= \lambda\sum_{e_{i}<0}e_{i}+\Lambda\sum_{e_{i}>0}e_{i},\\ \mathcal{M}_{\lambda,\Lambda}^{-}(M)&=\Lambda\sum_{ e_{i}<0}e_{i}+\lambda\sum_{e_{i}>0}e_{i}.\end{split} \tag{2.3}\]
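To fix ideas, we include an elementary evaluation (ours): for \(n=2\), \(\lambda=1\), \(\Lambda=2\) and \(M=\operatorname{diag}(3,-1)\), the eigenvalues are \(e_{1}=3\), \(e_{2}=-1\), so
\[\mathcal{M}^{+}_{1,2}(M)=1\cdot(-1)+2\cdot 3=5,\qquad\mathcal{M}^{-}_{1,2}(M)=2\cdot(-1)+1\cdot 3=1.\]
In general \(\mathcal{M}^{-}_{\lambda,\Lambda}(M)\leq\mathcal{M}^{+}_{\lambda,\Lambda}(M)\), with equality when \(\lambda=\Lambda\), in which case both operators reduce to \(\lambda\,\operatorname{tr}(M)\).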
First we need (see Appendix A for the definition of Sobolev spaces with variable exponent)
**Definition 2.1**.: Assume that \(1<p_{\min}\leq p(x)\leq p_{\max}<\infty\) with \(p(x)\) Lipschitz continuous in \(\Omega\) and \(f\in L^{\infty}(\Omega).\)
We say that \(u\) is a weak solution to \(\Delta_{p(x)}u=f\) in \(\Omega\) if \(u\in W^{1,p(\cdot)}(\Omega)\) and, for every \(\varphi\in C_{0}^{\infty}(\Omega),\) there holds that
\[-\int_{\Omega}|\nabla u(x)|^{p(x)-2}\nabla u\cdot\nabla\varphi\,dx=\int_{ \Omega}\varphi\,f(x)\,dx.\]
We recall the following result we proved in [FL] (see [FL], Theorem 3.2)
**Theorem 2.2**.: _Let \(p\) and \(f\) be as in Definition 2.1. Assume moreover that \(f\in C(\Omega)\) and \(p\in C^{1}(\Omega).\)_
_Let \(u\in W^{1,p(\cdot)}(\Omega)\cap C(\Omega)\) be a weak solution to \(\Delta_{p(x)}u=f\) in \(\Omega\). Then \(u\) is a viscosity solution to \(\Delta_{p(x)}u=f\) in \(\Omega\)._
**Remark 2.3**.: We point out that the equivalence between weak and viscosity solutions to the \(p(x)\)-Laplacian with right hand side \(f\equiv 0\) was proved in [JLP]. On the other hand, this equivalence, in case \(p(x)\equiv p\) and \(f\not\equiv 0\) was dealt with in [JJ] and [MO]. See also [JLM] for the case \(p(x)\equiv p\) and \(f\equiv 0.\)
We need the following standard notion.
**Definition 2.4**.: Given \(u,\varphi\in C(\Omega),\) we say that \(\varphi\) touches \(u\) from below (resp. above) at \(x_{0}\in\Omega\) if \(u(x_{0})=\varphi(x_{0}),\) and
\[u(x)\geq\varphi(x)\quad\text{(resp. $u(x)\leq\varphi(x)$)}\quad\text{in a neighborhood $O$ of $x_{0}$}.\]
If this inequality is strict in \(O\setminus\{x_{0}\},\) we say that \(\varphi\) touches \(u\) strictly from below (resp. above).
**Definition 2.5**.: Let \(u\) be a continuous nonnegative function in \(\Omega\). We say that \(u\) is a viscosity solution to (1.1) in \(\Omega\), if the following conditions are satisfied:
* \(\Delta_{p(x)}u=f\) in \(\Omega^{+}(u)\) in the weak sense of Definition 2.1.
* For every \(\varphi\in C(\Omega)\) with \(\varphi\in C^{2}(\overline{\Omega^{+}(\varphi)})\): if \(\varphi^{+}\) touches \(u\) from below (resp. above) at \(x_{0}\in F(u)\) and \(\nabla\varphi(x_{0})\neq 0,\) then \[|\nabla\varphi(x_{0})|\leq g(x_{0})\quad\text{(resp. $\geq g(x_{0})$)}.\]
Next theorem follows as a consequence of our Theorem 2.2.
**Theorem 2.6**.: _Let \(u\) be a viscosity solution to (1.1) in \(\Omega.\) Then the following conditions are satisfied:_
* \(\Delta_{p(x)}u=f\) _in_ \(\Omega^{+}(u)\) _in the viscosity sense, that is:_
_for every_ \(\varphi\in C^{2}(\Omega^{+}(u))\) _and for every_ \(x_{0}\in\Omega^{+}(u),\) _if_ \(\varphi\) _touches_ \(u\) _from above at_ \(x_{0}\) _and_ \(\nabla\varphi(x_{0})\neq 0,\) _then_ \(\Delta_{p(x_{0})}\varphi(x_{0})\geq f(x_{0}),\) _that is,_ \(u\) _is a viscosity subsolution;_ _for every_ \(\varphi\in C^{2}(\Omega^{+}(u))\) _and for every_ \(x_{0}\in\Omega^{+}(u),\) _if_ \(\varphi\) _touches_ \(u\) _from below at_ \(x_{0}\) _and_ \(\nabla\varphi(x_{0})\neq 0,\) _then_ \(\Delta_{p(x_{0})}\varphi(x_{0})\leq f(x_{0}),\) _that is,_ \(u\) _is a viscosity supersolution._
2. _For every_ \(\varphi\in C(\Omega)\) _with_ \(\varphi\in C^{2}(\overline{\Omega^{+}(\varphi)})\)_: if_ \(\varphi^{+}\) _touches_ \(u\) _from below (resp. above) at_ \(x_{0}\in F(u)\) _and_ \(\nabla\varphi(x_{0})\neq 0\)_, then_ \[|\nabla\varphi(x_{0})|\leq g(x_{0})\quad(\text{resp. }\geq g(x_{0})).\]
**Remark 2.7**.: If \(p(x)\equiv p\) or \(f\equiv 0,\) then any function satisfying the conditions of Theorem 2.6 is a solution to (1.1) in the sense of Definition 2.5 (see Remark 2.3).
We introduce also the notion of comparison sub/supersolution.
**Definition 2.8**.: We say that \(v\in C(\Omega)\) is a strict (comparison) subsolution (resp. supersolution) to (1.1) in \(\Omega\) if \(v\in C^{2}(\overline{\Omega^{+}(v)}),\)\(\nabla v\neq 0\) in \(\overline{\Omega^{+}(v)}\) and the following conditions are satisfied:
1. \(\Delta_{p(x)}v>f\) (resp. \(<f\)) in \(\Omega^{+}(v);\)
2. If \(x_{0}\in F(v),\) then \[|\nabla v(x_{0})|>g(x_{0})\quad(\text{resp. }|\nabla v(x_{0})|<g(x_{0})).\]
Notice that by the implicit function theorem, according to our definition, the free boundary of a comparison sub/supersolution is \(C^{2}\).
As a consequence of the previous discussion we have
**Lemma 2.9**.: _Let \(u\) be a viscosity solution to (1.1) in \(\Omega\). If \(v\) is a strict (comparison) subsolution to (1.1) in \(\Omega\) and \(u\geq v^{+}\) in \(\Omega\) then \(u>v\) in \(\Omega^{+}(v)\cup F(v)\). Analogously, if \(v\) is a strict (comparison) supersolution to (1.1) in \(\Omega\) and \(v\geq u\) in \(\Omega\) then \(v>u\) in \(\Omega^{+}(u)\cup F(u)\)._
**Notation.** From now on \(B_{\rho}(x_{0})\subset\mathbb{R}^{n}\) will denote the open ball of radius \(\rho\) centered at \(x_{0},\) and \(B_{\rho}=B_{\rho}(0)\). A positive constant depending only on the dimension \(n\), \(p_{\min}\), \(p_{\max}\) will be called a universal constant. We will use \(c\), \(c_{i}\) to denote small universal constants and \(C\), \(C_{i}\) to denote large universal constants.
## 3. A Harnack inequality for \(\Delta_{p(x)}u=f\)
In this section we prove a Harnack inequality for \(\Delta_{p(x)}u=f,\) under a small perturbation assumption (Theorem 3.2).
We first prove
**Lemma 3.1**.: _Assume that \(1<p_{\min}\leq p(x)\leq p_{\max}<\infty\) with \(p(x)\) Lipschitz continuous in \(B_{1}\) and \(\|\nabla p\|_{L^{\infty}}\leq L\), for some \(L>0\). Let \(p_{0}\) be such that \(p_{\min}\leq p_{0}\leq p_{\max}\) and \(f\in L^{\infty}(B_{1})\)._
_Let \(u\in W^{1,p(\cdot)}(B_{1})\cap L^{\infty}(B_{1})\) be a nonnegative solution to_
\[\Delta_{p(x)}u=f\quad\text{in }B_{1},\]
_with \(||u||_{L^{\infty}(B_{1})}\leq M\), for some \(M>0\)._
_Given \(\eta>0\), there exists \(\varepsilon_{0}=\varepsilon_{0}(\eta,n,p_{\min},p_{\max},M,L)>0\) such that if_
\[||f||_{L^{\infty}(B_{1})}\leq\varepsilon,\qquad||p-p_{0}||_{L^{\infty}(B_{1}) }\leq\varepsilon,\]
_with \(\varepsilon\leq\varepsilon_{0}\), then_
\[||u-u_{0}||_{L^{\infty}(B_{3/4})}\leq\eta, \tag{3.1}\]
_for a suitable \(u_{0}\in W^{1,\infty}(B_{3/4})\) nonnegative solution to_
\[\Delta_{p_{0}}u_{0}=0\quad\text{in }B_{3/4}. \tag{3.2}\]
Proof.: Let us suppose by contradiction that there exist \(\eta_{0}>0\) and a sequence of nonnegative functions \(u_{k}\in W^{1,p_{k}(\cdot)}(B_{1})\cap L^{\infty}(B_{1})\) with \(p_{\min}\leq p_{k}(x)\leq p_{\max}\), \(\|\nabla p_{k}\|_{L^{\infty}}\leq L\), \(||u_{k}||_{L^{\infty}(B_{1})}\leq M\), such that
\[||f_{k}||_{L^{\infty}(B_{1})}\leq\frac{1}{k},\qquad||p_{k}-p_{0}||_{L^{\infty} (B_{1})}\leq\frac{1}{k},\]
\[\Delta_{p_{k}(x)}u_{k}=f_{k}\quad\text{in }B_{1},\]
and such that
\[||u_{k}-v||_{L^{\infty}(B_{3/4})}\geq\eta_{0},\]
for every \(v\in W^{1,\infty}(B_{3/4})\) nonnegative solution to \(\Delta_{p_{0}}v=0\) in \(B_{3/4}\).
Then, by Theorem 1.1 in [Fa] we obtain that
\[||u_{k}||_{C^{1,\alpha}(\overline{B_{3/4}})}\leq C\quad\text{ with }\quad 0<\alpha<1,\]
where \(C\) and \(\alpha\) depend only on \(n\), \(p_{\min}\), \(p_{\max}\), \(L\) and \(M\). Therefore, there is a function \(u_{0}\in C^{1,\alpha}(\overline{B_{3/4}})\) such that, for a subsequence,
\[u_{k}\to u_{0}\quad\text{and}\quad\nabla u_{k}\to\nabla u_{0}\quad\text{ uniformly in }\overline{B_{3/4}}.\]
Since
\[f_{k}\to 0\quad\text{and}\quad p_{k}\to p_{0}\quad\text{uniformly in }B_{1},\]
it follows that \(u_{0}\in W^{1,\infty}(B_{3/4})\) is a nonnegative solution to
\[\Delta_{p_{0}}u_{0}=0\quad\text{in }B_{3/4}\]
and thus,
\[0<\eta_{0}\leq||u_{k}-u_{0}||_{L^{\infty}(B_{3/4})}\to 0,\]
which gives a contradiction and concludes the proof.
As a consequence we get
**Theorem 3.2**.: _Assume that \(1<p_{\min}\leq p(x)\leq p_{\max}<\infty\) with \(p(x)\) Lipschitz continuous in \(\Omega\) and \(\|\nabla p\|_{L^{\infty}}\leq L\), for some \(L>0\). Let \(p_{0}\) be such that \(p_{\min}\leq p_{0}\leq p_{\max}\) and \(f\in L^{\infty}(\Omega)\). Let \(x_{0}\in\Omega\) and \(0<R_{1}\leq R\leq R_{2}\) such that \(B_{R}(x_{0})\subset\Omega\)._
_Let \(u\in W^{1,p(\cdot)}(B_{R}(x_{0}))\cap L^{\infty}(B_{R}(x_{0}))\) be a nonnegative solution to_
\[\Delta_{p(x)}u=f\quad\text{in }B_{R}(x_{0}),\]
_with \(||u||_{L^{\infty}(B_{R}(x_{0}))}\leq M\), for some \(M>0\)._
_Given \(\sigma>0\), there exist positive constants \(\varepsilon_{1}=\varepsilon_{1}(\sigma,n,p_{\min},p_{\max},M,L,R_{1},R_{2})\) and \(C=C(n,p_{\min},p_{\max})\) such that if_
\[||f||_{L^{\infty}(B_{R}(x_{0}))}\leq\varepsilon,\qquad||p-p_{0}||_{L^{\infty} (B_{R}(x_{0}))}\leq\varepsilon, \tag{3.3}\]
_with \(\varepsilon\leq\varepsilon_{1}\), then_
\[\sup_{B_{R/2}(x_{0})}u\leq C\inf_{B_{R/2}(x_{0})}u+\sigma. \tag{3.4}\]
Proof.: We assume without loss of generality that \(x_{0}=0\).
_Case I_. Suppose first that \(R=1\).
We let \(\eta>0\) be a constant to be chosen later. We now take \(\varepsilon_{0}=\varepsilon_{0}(\eta,n,p_{\min},p_{\max},M,L)\) given by Lemma 3.1. Then, if (3.3) is satisfied with \(\varepsilon\leq\varepsilon_{0}\), there holds (3.1), for a suitable \(u_{0}\in W^{1,\infty}(B_{3/4})\) nonnegative solution to (3.2).
By Harnack's inequality (Theorem 1.1 in [T]), there exists a positive constant \(C=C(n,p_{\min},p_{\max})\) such that
\[\sup_{B_{1/2}}u_{0}\leq C\inf_{B_{1/2}}u_{0}.\]
Since \(||u-u_{0}||_{L^{\infty}(B_{3/4})}\leq\eta\), we obtain
\[\sup_{B_{1/2}}u \leq\sup_{B_{1/2}}u_{0}+\eta\leq C\inf_{B_{1/2}}u_{0}+\eta\] \[\leq C\inf_{B_{1/2}}u+(C+1)\eta\leq C\inf_{B_{1/2}}u+\sigma,\]
if we choose \(\eta\) such that \((C+1)\eta<\sigma\). So (3.4) follows if (3.3) is satisfied with \(\varepsilon\leq\tilde{\varepsilon}_{0}\), where \(\tilde{\varepsilon}_{0}=\tilde{\varepsilon}_{0}(\sigma,n,p_{\min},p_{\max},M,L)\).
_Case II_. We now assume that \(0<R_{1}\leq R\leq 1\) and consider \(\bar{u}(x)=\frac{u(Rx)}{R}\). Then, \(\bar{u}\in W^{1,\bar{p}(\cdot)}(B_{1})\cap L^{\infty}(B_{1})\) is a nonnegative solution to
\[\Delta_{\bar{p}(x)}\bar{u}=\bar{f}\quad\text{in }B_{1},\]
with \(||\bar{u}||_{L^{\infty}(B_{1})}\leq\overline{M}\), where \(\bar{p}(x)=p(Rx)\), \(\|\nabla\bar{p}\|_{L^{\infty}}\leq L\), \(\bar{f}(x)=Rf(Rx)\) and \(\overline{M}=\frac{M}{R_{1}}\).
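For the reader's convenience, the scaling used here can be verified directly (a routine computation): since \(\nabla\bar{u}(x)=\nabla u(Rx)\) and \(\bar{p}(x)=p(Rx)\),
\[\Delta_{\bar{p}(x)}\bar{u}(x)=\operatorname{div}_{x}\big(|\nabla u(Rx)|^{p(Rx)-2}\nabla u(Rx)\big)=R\,\big(\Delta_{p(x)}u\big)(Rx)=R\,f(Rx)=\bar{f}(x),\]
and \(\nabla\bar{p}(x)=R\,\nabla p(Rx)\), so that \(\|\nabla\bar{p}\|_{L^{\infty}}\leq L\) because \(R\leq 1\) in this case.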
Then, if
\[||\bar{f}||_{L^{\infty}(B_{1})}\leq||f||_{L^{\infty}(B_{R})}\leq\varepsilon, \qquad||\bar{p}-p_{0}||_{L^{\infty}(B_{1})}=||p-p_{0}||_{L^{\infty}(B_{R})} \leq\varepsilon,\]
for \(\varepsilon\leq\tilde{\varepsilon}_{0}\), with \(\tilde{\varepsilon}_{0}=\tilde{\varepsilon}_{0}(\sigma,n,p_{\min},p_{\max}, \overline{M},L)\) chosen as in _Case I_, we get
\[\sup_{B_{1/2}}\bar{u}\leq C\inf_{B_{1/2}}\bar{u}+\sigma.\]
That is,
\[\sup_{B_{R/2}}u\leq C\inf_{B_{R/2}}u+R\sigma\leq C\inf_{B_{R/2}}u+\sigma.\]
Hence we get (3.4), if (3.3) is satisfied with \(\varepsilon\leq\varepsilon_{1}(\sigma,n,p_{\min},p_{\max},M,L,R_{1})\).
_Case III_. Finally, if we assume that \(0<R_{1}\leq R\leq R_{2}\), we proceed as in _Case II_ and we obtain the desired result with \(\varepsilon_{1}=\varepsilon_{1}(\sigma,n,p_{\min},p_{\max},M,L,R_{1},R_{2})\).
## 4. Lipschitz continuity and nondegeneracy
In this section we prove Theorem 1.1, which gives the optimal regularity for viscosity solutions to (1.1), i.e., the local Lipschitz continuity. We also prove that if \(F(u)\) is a Lipschitz graph, then viscosity solutions to (1.1) are nondegenerate.
We recall the following result we proved in [FL]
**Lemma 4.1**.: _Let \(x_{0}\in B_{1}\) and \(0<\bar{r}_{1}<\bar{r}_{2}\leq 1\). Assume that \(1<p_{\min}\leq p(x)\leq p_{\max}<\infty\) and \(\|\nabla p\|_{L^{\infty}}\leq\varepsilon^{1+\theta}\), for some \(0<\theta\leq 1\). Let \(c_{1}\) and \(c_{2}\) be positive constants._
_There exist positive constants \(\gamma\geq 1\), \(\bar{c}\) and \(\varepsilon_{0}\) such that the function_
\[w(x)=c_{1}|x-x_{0}|^{-\gamma}-c_{2},\]
_satisfies, for \(\bar{r}_{1}\leq|x-x_{0}|\leq\bar{r}_{2}\),_
\[\Delta_{p(x)}w\geq\bar{c},\quad\text{for}\;\;0<\varepsilon\leq\varepsilon_{0}.\]
_Here \(\gamma=\gamma(n,p_{\min},p_{\max})\), \(\bar{c}=\bar{c}(p_{\min},p_{\max},c_{1})\) and \(\varepsilon_{0}=\varepsilon_{0}(n,p_{\min},p_{\max},\bar{r}_{1},c_{1})\)._
Proof.: See Lemma 4.2 in [FL].
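Although we refer to [FL] for the proof, it may help to record the mechanism in the simplest situation (a sketch under the simplifying assumption \(p(x)\equiv p_{0}\) constant; the variable-exponent and logarithmic terms are then treated as perturbations of size \(\|\nabla p\|_{L^{\infty}}\)). For the radial function \(w(x)=c_{1}r^{-\gamma}-c_{2}\), \(r=|x-x_{0}|\), a direct computation gives
\[\Delta_{p_{0}}w=(c_{1}\gamma)^{p_{0}-1}\,r^{-(\gamma+1)(p_{0}-1)-1}\,\big[(p_{0}-1)(\gamma+1)-(n-1)\big],\]
which is bounded below by a positive constant on \(\bar{r}_{1}\leq r\leq\bar{r}_{2}\) as soon as \(\gamma>\frac{n-1}{p_{0}-1}-1\); choosing \(\gamma\) according to \(p_{\min}\) makes this bound uniform for \(p_{0}\in[p_{\min},p_{\max}]\).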
We will now prove two key estimates for viscosity solutions to (1.1). Estimate (4.2) will imply that viscosity solutions are locally Lipschitz continuous (see Theorem 1.1). If \(F(u)\) is a Lipschitz continuous graph, we also obtain estimate (4.3), which gives the nondegeneracy of \(u\) close to \(F(u)\).
We will use the notation \(p_{+}^{r}=\sup_{B_{r}}p\) and \(p_{-}^{r}=\inf_{B_{r}}p\), for \(r>0\) (see [Wo]).
**Proposition 4.2**.: _Let \(p_{\min}\leq p_{0}\leq p_{\max}\) and \(0<\gamma_{0}\leq g_{0}\leq\|g\|_{L^{\infty}(B_{2})}\). Let \(u\) be a viscosity solution to (1.1) in \(B_{2}\) such that \(0\in F(u)\). There exists a constant \(0<\tilde{\varepsilon}<1\) such that if_
\[\|f\|_{L^{\infty}(B_{2})}\leq\tilde{\varepsilon},\quad||g-g_{0}||_{L^{\infty}( B_{2})}\leq\tilde{\varepsilon},\quad||\nabla p||_{L^{\infty}(B_{2})}\leq\tilde{ \varepsilon},\quad||p-p_{0}||_{L^{\infty}(B_{2})}\leq\tilde{\varepsilon}, \tag{4.1}\]
_then_
\[u(x)\leq C_{0}\text{dist}(x,F(u)),\quad x\in B_{1/2}^{+}(u). \tag{4.2}\]
_Assume moreover that \(F(u)\) is a Lipschitz continuous graph in \(B_{2}\). Then_
\[c_{0}\text{dist}(x,F(u))\leq u(x),\quad x\in B_{\rho_{0}}^{+}(u). \tag{4.3}\]
_The constants \(\tilde{\varepsilon}\), \(c_{0}\) and \(C_{0}\) depend only on \(n\), \(p_{\min}\), \(p_{\max}\), \(\|g\|_{L^{\infty}(B_{2})}\) and \(||u||_{L^{\infty}(B_{3/2})}^{p_{+}^{3/2}-p_{-}^{3/2}}\) where \(p_{+}^{3/2}=\sup_{B_{3/2}}p\) and \(p_{-}^{3/2}=\inf_{B_{3/2}}p\). The constants \(\tilde{\varepsilon}\) and \(c_{0}\) depend also on the Lipschitz constant of \(F(u)\) and on \(\gamma_{0}\), and the constant \(\rho_{0}\) depends only on the Lipschitz constant of \(F(u)\)._
Proof.: Without loss of generality we will assume that \(g_{0}=1\). We let \(x_{0}\in B_{1/2}^{+}(u)\) and we denote \(d=\text{dist}(x_{0},F(u)).\) We consider the rescaled function
\[\tilde{u}(x)=\frac{u(x_{0}+dx)}{d}. \tag{4.4}\]
Then \(\tilde{u}\) is a viscosity solution to (1.1) with right hand side \(\tilde{f}(x)=df(x_{0}+dx)\), exponent \(\tilde{p}(x)=p(x_{0}+dx)\) and free boundary condition \(\tilde{g}(x)=g(x_{0}+dx).\) Since \(d\leq 1\), the assumptions (4.1) hold for the rescaled functions in \(B_{3/2}\).
In particular, \(\tilde{u}\) is well defined in the ball \(\overline{B_{1}}\), with \(\tilde{u}>0\) in \(B_{1}\), and it satisfies the equation
\[\Delta_{\tilde{p}(x)}\tilde{u}=\tilde{f}\quad\text{ in }B_{1}. \tag{4.5}\]
We will show that
\[c_{0}\leq\tilde{u}(0)\leq C_{0}, \tag{4.6}\]
for suitable universal constants \(C_{0},c_{0}>0\).
_Step I: Upper bound._ Let us prove the upper bound in (4.6). We will argue by contradiction, assuming that \(\tilde{u}(0)>C_{0}\), with \(C_{0}\geq 1\) to be chosen later.
We will use a barrier like the one considered in Lemma 4.1, in the annulus \(B_{1}\backslash\overline{B_{r}}\), with \(r\) suitably chosen.
We are going to fix \(0<r<1\) in a universal way, keeping in mind the particular form of Harnack's inequality for the \(p(x)\)-Laplacian (see Theorem 2.1 in [Wo]). In fact, since (4.5) holds, it follows from [Wo] that there exists a positive constant \(C_{H}\) such that
\[\sup_{B_{r}}\tilde{u}\leq C_{H}\Big(\inf_{B_{r}}\tilde{u}+r\big(||\tilde{f}||_{L^{\infty}}^{\frac{1}{p_{\max}-1}}+1\big)\Big), \tag{4.7}\]
if \(r<\frac{1}{4}\). Using that \(||\tilde{f}||_{L^{\infty}}\leq 1\) and \(||\nabla\tilde{p}||_{L^{\infty}}\leq 1\), we obtain that the constant \(C_{H}\) depends only on \(n\), \(p_{\min}\), \(p_{\max}\) and \(||\tilde{u}||_{L^{\infty}(B_{4r})}^{\tilde{p}_{+}^{4r}-\tilde{p}_{-}^{4r}}\), where \(\tilde{p}_{+}^{4r}=\sup_{B_{4r}}\tilde{p}\) and \(\tilde{p}_{-}^{4r}=\inf_{B_{4r}}\tilde{p}\).
We now notice that
\[||\tilde{u}||_{L^{\infty}(B_{4r})}^{\tilde{p}_{+}^{4r}-\tilde{p}_{-}^{4r}} \leq||u||_{L^{\infty}(B_{4r})}^{\tilde{p}_{+}^{4r}-\tilde{p}_{-}^{4r}}\Big{(} \frac{1}{d}\Big{)}^{\tilde{p}_{+}^{4r}-\tilde{p}_{-}^{4r}}, \tag{4.8}\]
\[\tilde{p}_{+}^{4r}-\tilde{p}_{-}^{4r}\leq 8r\,||\nabla\tilde{p}||_{L^{\infty}(B_{4r})}\leq 2d\,||\nabla p||_{L^{\infty}(B_{4r})}\leq 2d, \tag{4.9}\]
and also
\[\tilde{p}_{+}^{4r}-\tilde{p}_{-}^{4r}=\sup_{x\in B_{4r}}p(x_{0}+dx)-\inf_{x \in B_{4r}}p(x_{0}+dx)\leq\sup_{B_{d}(x_{0})}p-\inf_{B_{d}(x_{0})}p. \tag{4.10}\]
Then, from (4.8), (4.9) and (4.10) and using that \(B_{d}(x_{0})\subset B_{3/2}\), we conclude that
\[||\tilde{u}||_{L^{\infty}(B_{4r})}^{\tilde{p}_{+}^{4r}-\tilde{p}_{-}^{4r}} \leq c\max\Big{\{}1,\ ||u||_{L^{\infty}(B_{3/2})}^{p_{+}^{3/2}-p_{-}^{3/2}} \Big{\}},\]
where \(c=\sup_{x\in(0,1)}\big{(}\frac{1}{x}\big{)}^{2x}\), \(p_{+}^{3/2}=\sup_{B_{3/2}}p\) and \(p_{-}^{3/2}=\inf_{B_{3/2}}p\).
Hence, from (4.7) and the fact that \(||\tilde{f}||_{L^{\infty}}\leq 1\), we deduce that for every \(x\in B_{r}\),
\[\frac{\tilde{u}(0)}{C_{H}}-2r\leq\inf_{B_{r}}\tilde{u}\leq\tilde{u}(x). \tag{4.11}\]
We now fix \(r=\min\{\frac{1}{8},\frac{1}{4C_{H}}\}\), and using that \(\tilde{u}(0)>C_{0}\geq 1\), we get from (4.11)
\[\tilde{u}(x)\geq\frac{\tilde{u}(0)}{2C_{H}},\quad x\in\overline{B}_{r}.\]
Next let
\[w(x)=|x|^{-\gamma}-1,\]
where we fix \(\gamma=\gamma(n,p_{\min},p_{\max})\geq 1\) given in Lemma 4.1.
We denote
\[G(x)=\bar{C}w(x)=\bar{C}\big{(}|x|^{-\gamma}-1\big{)} \tag{4.12}\]
in \(B_{1}\setminus\overline{B}_{r}\), where we fix \(\bar{C}=\bar{C}(r,\gamma)>0\) in such a way that \(G=1\) on \(\partial B_{r}\).
Let
\[\bar{G}(x)=kG(x)=k\bar{C}\big{(}|x|^{-\gamma}-1\big{)},\quad\text{where }k=\frac{C_{0}}{2C_{H}}.\]
Recalling that \(\tilde{u}(x)\geq\frac{\tilde{u}(0)}{2C_{H}}>\frac{C_{0}}{2C_{H}}\) in \(\bar{B}_{r}\), we get
\[\tilde{u}\geq 0=\bar{G},\quad\text{ on }\partial B_{1},\] \[\tilde{u}\geq k=\bar{G},\quad\text{ on }\partial B_{r}. \tag{4.13}\]
We claim that
\[\Delta_{\tilde{p}(x)}\bar{G}\geq\tilde{f}\quad\text{in }B_{1}\setminus\bar{B}_{r}, \tag{4.14}\]
if \(\tilde{\varepsilon}\) is suitably chosen.
In fact, by Lemma 4.1, we know that
\[\Delta_{\tilde{p}(x)}\bar{G}\geq\bar{c}\quad\text{in }B_{1}\setminus\bar{B}_{r},\]
with \(\bar{c}=\bar{c}(p_{\min},p_{\max},k,\bar{C})\), if \(\tilde{\varepsilon}\leq\bar{\varepsilon}_{0}(n,p_{\min},p_{\max},r,k,\bar{C})\), since \(||\nabla\tilde{p}||_{L^{\infty}}\leq\tilde{\varepsilon}\). So, if we let \(\tilde{\varepsilon}\leq\bar{c}\), then \(||\tilde{f}||_{L^{\infty}}\leq\bar{c}\). That is, (4.14) holds.
Then, from (4.5), (4.14) and (4.13), we conclude that \(\tilde{u}\geq\bar{G}\) in \(\overline{B_{1}\setminus B_{r}}\), with \(\bar{G}\in C^{2}\) and \(\nabla\bar{G}\neq 0\) in that set, and \(\bar{G}\) touches \(\tilde{u}\) from below at some \(z\in\partial B_{1}\cap F(\tilde{u})\). Then
\[2>1+\tilde{\varepsilon}\geq\tilde{g}(z)\geq|\nabla\bar{G}(z)|=\frac{C_{0}}{2C _{H}}|\nabla G(z)|=\frac{C_{0}\gamma\bar{C}}{2C_{H}},\]
so we obtain a contradiction if we choose \(C_{0}=\max\left\{1,\frac{8C_{H}}{\gamma\bar{C}}\right\}\). Hence (4.2) follows.
_Step II: Lipschitz estimate._ From (4.2) we deduce that \(u\) is Lipschitz continuous in \(B_{1/4}\), with a Lipschitz constant depending only on \(n\), \(p_{\min}\), \(p_{\max}\) and \(C_{0}\). In fact, this can be seen by arguments similar to those in Theorem 1.1, _Step III_. When estimating the Lipschitz constant, we use that, in the present case, \(\|f\|_{L^{\infty}(B_{2})}\leq\tilde{\varepsilon}<1\) and \(||\nabla p||_{L^{\infty}(B_{2})}\leq\tilde{\varepsilon}<1\).
_Step III: Lower bound._ Now we assume that \(F(u)\) is a Lipschitz continuous graph in \(B_{2}\). Without loss of generality we assume that \(F(u)\) is a Lipschitz graph in the direction \(e_{n}\) with Lipschitz constant \(1\). We want to prove that \(\tilde{u}\) given by (4.4) satisfies the lower bound in (4.6).
We assume moreover that our point \(x_{0}\in B_{1/2}^{+}(u)\) belongs to \(B_{\rho_{0}}\), with \(\rho_{0}<1/5\). Then, \(d=\text{dist}(x_{0},F(u))<\rho_{0}<1/5\) so \(\tilde{u}\) is well defined in the ball \(\overline{B_{5}}\).
Taking additionally \(\rho_{0}<1/24\), we also obtain from the previous step that \(\tilde{u}\) is Lipschitz in \(B_{5}\), with Lipschitz constant depending only on \(n\), \(p_{\min}\), \(p_{\max}\) and \(C_{0}\). Moreover, since there exists \(\bar{x}\in\partial B_{1}\cap F(\tilde{u})\), \(||\tilde{u}||_{L^{\infty}(B_{5})}\) depends only on the Lipschitz constant of \(\tilde{u}\) in \(B_{5}\).
Let us point out that also in this part of the proof we need more delicate arguments than those in [D]. We first point out what does not change. Since \(F(\tilde{u})\) is a Lipschitz continuous graph, \(\{\tilde{u}>0\}\) is an NTA domain, see [JK]. This fact implies that any two points of \(\{\tilde{u}>0\}\) that lie at distance at least \(\delta\) from \(F(\tilde{u})\) and are contained in a ball of radius \(\bar{M}\delta\) can be joined by a Harnack chain of balls contained in the domain, whose length is of order \(\bar{M}\). In other words, there exist \(k\) balls in \(\{\tilde{u}>0\}\) of radius comparable to \(\delta\) (\(k\) depending only on \(\bar{M}\)), such that consecutive balls intersect and the chain connects the two points.
As a consequence we will show that, in the present case, we can apply a suitable Harnack inequality (Theorem 3.2) at each ball, and this will allow us to estimate the value of \(\tilde{u}\) at the first point with the value of \(\tilde{u}\) at the last one, times a universal constant, provided (4.1) holds, for appropriate \(\tilde{\varepsilon}\).
We start by considering, for \(\eta>0\),
\[\widetilde{G}(x)=\eta(1-G(x)),\quad\text{ in }B_{1}\setminus\bar{B}_{r}\]
where \(G\), as well as the constants \(r\), \(\gamma\) and \(\bar{C}\), are defined as in (4.12).
We observe that, \(\nabla\widetilde{G}\neq 0\) in \(\overline{B_{1}\setminus B_{r}}\) and, on \(\partial B_{r}\),
\[|\nabla\widetilde{G}|=\eta\bar{C}|\nabla w|=\eta\bar{C}\gamma r^{-1-\gamma},\]
then we can choose \(\eta=\eta(r,\gamma)\), so that
\[|\nabla\widetilde{G}|<\frac{1}{2}<1-\tilde{\varepsilon}\quad\text{ on }\partial B_{r},\]
if \(\tilde{\varepsilon}<\frac{1}{2}\). Now, since
\[\Delta_{\tilde{p}(x)}\widetilde{G}=-\Delta_{\tilde{p}(x)}\big{(}\eta G\big{)},\qquad\eta G(x)=\eta\bar{C}\big{(}|x|^{-\gamma}-1\big{)}, \tag{4.15}\]
we can apply Lemma 4.1 once more and deduce that
\[\Delta_{\tilde{p}(x)}\big{(}\eta G\big{)}\geq\hat{c}\quad\text{in }B_{1}\setminus\bar{B}_{r}, \tag{4.16}\]
with \(\hat{c}=\hat{c}(p_{\min},p_{\max},\eta,\bar{C})\), if \(\tilde{\varepsilon}\leq\hat{\varepsilon}_{0}(n,p_{\min},p_{\max},r,\eta,\bar {C})\), since \(||\nabla\tilde{p}||_{L^{\infty}}\leq\tilde{\varepsilon}\). So, if we let \(\tilde{\varepsilon}<\hat{c}\), then \(||\tilde{f}||_{L^{\infty}(B_{5})}<\hat{c}\) and therefore, from (4.15) and (4.16), we get
\[\Delta_{\tilde{p}(x)}\widetilde{G}<-||\tilde{f}||_{L^{\infty}(B_{5})}\quad \text{in }B_{1}\setminus\bar{B}_{r}.\]
That is, \(\widetilde{G}\) is a strict supersolution to the rescaled free boundary problem in \(B_{1}\setminus\bar{B}_{r}\).
Next, recalling that, by our assumptions, \(F(\tilde{u})\) is a Lipschitz graph in the direction \(e_{n}\) with Lipschitz constant \(1\), we consider the function
\[\widetilde{G}(x+4e_{n})\]
in \(B_{1}(-4e_{n})\backslash\overline{B_{r}}(-4e_{n})\), which is a strict supersolution of our rescaled free boundary problem. There holds that \(\widetilde{G}(x+4e_{n})\geq 0\) as well as \(\widetilde{G}(x+4e_{n})\geq\tilde{u}(x)\) in \(B_{1}(-4e_{n})\setminus\overline{B_{r}}(-4e_{n})\), since \(\tilde{u}\equiv 0\) in \(B_{1}(-4e_{n})\).
Now we move back the graph, by a translation depending on \(t>0\), until the graph of the function
\[\widetilde{G}(x+(4-t)e_{n}):-(4-t)e_{n}+(B_{1}\setminus\overline{B_{r}}) \rightarrow\mathbb{R}\]
touches the graph of \(\tilde{u}\). Let us say that the contact happens when \(t=t^{*}\), at a point \(\tilde{z}\) such that \(\tilde{u}(\tilde{z})=\widetilde{G}(\tilde{z}+(4-t^{*})e_{n})\).
Since \(\widetilde{G}(x+(4-t^{*})e_{n})\) is a strict supersolution to the rescaled free boundary problem, recalling the comparison result (see Lemma 2.9), we conclude that \(\widetilde{G}(x+(4-t^{*})e_{n})\) cannot touch \(\tilde{u}\) from above, neither at points of the common free boundaries nor at interior points of the annulus.
Then the contact point \(\tilde{z}\) belongs to \(-(4-t^{*})e_{n}+\partial B_{1}.\) As a consequence \(\eta=\tilde{u}(\tilde{z})\) and \(\tilde{d}=\operatorname{dist}(\tilde{z},F(\tilde{u}))\leq 1.\) Since \(\tilde{u}\) is Lipschitz continuous with universal constant, then \(\eta=\tilde{u}(\tilde{z})\leq C\tilde{d}\) so that
\[C^{-1}\eta\leq\tilde{d}\leq 1. \tag{4.17}\]
Hence, from (4.17) and by applying the cited result on NTA domains, we know that we can construct a Harnack chain connecting \(0\) and \(\tilde{z}\), and the length of this chain, let us say \(m\), is bounded by a universal constant.
That is, we have balls \(B_{r_{i}}(x_{i})\) with radius \(r_{i}\) comparable to \(1\), \(B_{2r_{i}}(x_{i})\subset\{\tilde{u}>0\}\), \(0\leq i\leq m\), \(x_{0}=0\), \(x_{m}=\tilde{z}\) and \(y_{i}\in B_{r_{i-1}}(x_{i-1})\cap B_{r_{i}}(x_{i})\), for \(1\leq i\leq m\).
We can now apply Theorem 3.2 to \(\tilde{u}\) at every ball \(B_{2r_{i}}(x_{i})\). That is, given \(\sigma>0\) there exist \(\varepsilon_{1}=\varepsilon_{1}(\sigma)\) and \(C^{*}\) universal such that if \(||\tilde{p}-p_{0}||_{L^{\infty}}\leq\varepsilon\) and \(||\tilde{f}||_{L^{\infty}}\leq\varepsilon\), with \(\varepsilon\leq\varepsilon_{1}\), then
\[\sup_{B_{r_{i}}(x_{i})}\tilde{u}\leq C^{*}\inf_{B_{r_{i}}(x_{i})}\tilde{u}+\sigma. \tag{4.18}\]
For the application of Theorem 3.2 we need to recall that \({||\tilde{u}||}_{L^{\infty}(B_{5})}\leq M\), with \(M\) universal.
Now, (4.18) implies that for any \(x,y\in B_{r_{i}}(x_{i})\),
\[c\tilde{u}(x)-c\sigma\leq\tilde{u}(y),\]
where we have denoted \(c=\frac{1}{C^{*}}\). Then, we obtain
\[c\tilde{u}(y_{1})-c\sigma\leq\tilde{u}(x_{0}),\]
\[c\tilde{u}(y_{i+1})-c\sigma\leq\tilde{u}(y_{i}),\qquad 1\leq i\leq m-1,\]
and
\[c\tilde{u}(x_{m})-c\sigma\leq\tilde{u}(y_{m}).\]
Then, iterating we deduce
\[c^{m+1}\tilde{u}(x_{m})-\sigma\sum_{j=1}^{m+1}c^{j}\leq\tilde{u}(x_{0}).\]
Thus, since \(x_{0}=0\) and \(x_{m}=\tilde{z}\), we have
\[c^{m+1}\tilde{u}(\tilde{z})-\sigma c\,\frac{1-c^{m+1}}{1-c}=c^{m+1}\tilde{u}( \tilde{z})-\sigma\sum_{j=1}^{m+1}c^{j}\leq\tilde{u}(0).\]
Hence denoting \(c_{1}=c^{m+1}\) and \(c_{2}=c\,\frac{1-c^{m+1}}{1-c}\), we obtain
\[c_{1}\eta-\sigma c_{2}=c_{1}\tilde{u}(\tilde{z})-\sigma c_{2}\leq\tilde{u}(0),\]
where \(c_{1}\) and \(c_{2}\) are universal constants. Now we fix \(\sigma\) universal,
\[\sigma=\frac{c_{1}\eta}{2c_{2}}.\]
In this way we conclude that
\[\tilde{u}(0)\geq c_{1}\eta-\frac{c_{1}\eta}{2}=\frac{c_{1}\eta}{2},\]
if \(||\tilde{p}-p_{0}||_{L^{\infty}}\leq||p-p_{0}||_{L^{\infty}(B_{2})}\leq\tilde {\varepsilon}\) and \(||\tilde{f}||_{L^{\infty}}\leq||f||_{L^{\infty}(B_{2})}\leq\tilde{\varepsilon}\), with \(\tilde{\varepsilon}\leq\varepsilon_{1}(\sigma)\). Since \(\eta\) is universal as well, we have finished the proof.
From Proposition 4.2, we can now obtain the proof of Theorem 1.1.
We recall again the notation we use: \(p_{+}^{r}=\sup_{B_{r}}p\) and \(p_{-}^{r}=\inf_{B_{r}}p\), for \(r>0\).
**Proof of Theorem 1.1.** Let \(u\) be a viscosity solution to (1.1) in \(B_{1}\). We will divide the proof into several steps.
_Step I_. Let us fix \(z_{0}\in B_{5/8}\cap F(u)\). For \(0<\rho\leq\frac{1}{16}\), we consider the function
\[\bar{u}(x)=\frac{1}{\rho}u(z_{0}+\rho x),\quad x\in B_{2}.\]
Then \(\bar{u}\) is a viscosity solution to (1.1) in \(B_{2}\), with right hand side \(\bar{f}(x)=\rho f(z_{0}+\rho x)\), exponent \(\bar{p}(x)=p(z_{0}+\rho x)\) and free boundary condition \(\bar{g}(x)=g(z_{0}+\rho x)\). Moreover, \(0\in F(\bar{u})\).
Let us see that we can apply the first part of Proposition 4.2 to \(\bar{u}\), if \(\rho\) is suitably chosen.
For that purpose, let us first show that the constants appearing in that proposition can be taken independent of \(\rho\). More precisely, we want to find a bound independent of \(\rho\) for
\[||\bar{u}||_{L^{\infty}(B_{3/2})}^{\bar{p}_{+}^{3/2}-\bar{p}_{-}^{3/2}},\qquad \text{where}\quad\bar{p}_{+}^{3/2}=\sup_{B_{3/2}}\bar{p},\ \ \bar{p}_{-}^{3/2}=\inf_{B_{3/2}}\bar{p}.\]
In fact, we have
\[||\bar{u}||_{L^{\infty}(B_{3/2})}^{\bar{p}_{+}^{3/2}-\bar{p}_{-}^{3/2}}\leq||u ||_{L^{\infty}(B_{1/8}(z_{0}))}^{\bar{p}_{+}^{3/2}-\bar{p}_{-}^{3/2}}, \tag{4.19}\]
and
\[\bar{p}_{+}^{3/2}-\bar{p}_{-}^{3/2}\leq 3||\nabla\bar{p}||_{L^{\infty}(B_{3/2} )}\leq 3\rho||\nabla p||_{L^{\infty}(B_{1/8}(z_{0}))}. \tag{4.20}\]
Then, from (4.19) and (4.20), we conclude that
\[||\bar{u}||_{L^{\infty}(B_{3/2})}^{\bar{p}_{+}^{3/2}-\bar{p}_{-}^{3/2}}\leq C =C\big{(}||u||_{L^{\infty}(B_{1/8}(z_{0}))},||\nabla p||_{L^{\infty}(B_{1/8}(z _{0}))}\big{)}.\]
It follows that in order to apply the first part of Proposition 4.2 to \(\bar{u}\) we can take the constants \(\tilde{\varepsilon}\) and \(C_{0}\) in that proposition depending only on \(n\), \(p_{\min}\), \(p_{\max}\), \(||u||_{L^{\infty}(B_{1/8}(z_{0}))}\), \(||\nabla p||_{L^{\infty}(B_{1/8}(z_{0}))}\) and \(||g||_{L^{\infty}(B_{1/8}(z_{0}))}\).
Then, if \(\rho\) is small enough, there holds in \(B_{2}\)
\[|\bar{f}(x)| \leq||f||_{L^{\infty}(B_{1/8}(z_{0}))}\,\rho\leq\tilde{\varepsilon},\] \[|\bar{g}(x)-g(z_{0})| =|g(z_{0}+\rho x)-g(z_{0})|\leq 2[g]_{C^{0,\beta}(B_{1/8}(z_{0}))}\, \rho^{\beta}\leq\tilde{\varepsilon},\] \[|\nabla\bar{p}(x)| \leq||\nabla p||_{L^{\infty}(B_{1/8}(z_{0}))}\,\rho\leq\tilde{ \varepsilon},\] \[|\bar{p}(x)-p(z_{0})| =|p(z_{0}+\rho x)-p(z_{0})|\leq 2||\nabla p||_{L^{\infty}(B_{1/8}(z_{0}))} \,\rho\leq\tilde{\varepsilon}.\]
Hence, if \(\rho\leq\rho_{0}\), \(\rho_{0}\) depending only on \(\tilde{\varepsilon}\), \(||f||_{L^{\infty}(B_{1/8}(z_{0}))}\), \([g]_{C^{0,\beta}(B_{1/8}(z_{0}))}\), \(\beta\) and \(||\nabla p||_{L^{\infty}(B_{1/8}(z_{0}))}\), then \(\bar{u}\) satisfies
\[\bar{u}(x)\leq C_{0}\text{dist}(x,F(\bar{u})),\quad x\in B_{1/2}^{+}(\bar{u}).\]
_Step II_. We deduce from the previous step that for every \(z_{0}\in B_{5/8}\cap F(u)\) there holds
\[u(x)\leq C_{0}\text{dist}(x,F(u)),\quad x\in B_{\rho_{1}}(z_{0})\cap\{u>0\}, \tag{4.21}\]
for \(C_{0}>0\) and \(0<\rho_{1}<\frac{1}{32}\) constants depending only on \(n\), \(p_{\min}\), \(p_{\max}\), \(||u||_{L^{\infty}(B_{3/4})}\), \(||\nabla p||_{L^{\infty}(B_{3/4})}\), \(||f||_{L^{\infty}(B_{3/4})}\), \(||g||_{C^{0,\beta}(\overline{B_{3/4}})}\) and \(\beta\) (here we have used that \(B_{1/8}(z_{0})\subset B_{3/4}\) for every \(z_{0}\in B_{5/8}\cap F(u)\)).
_Step III_. Let \(x_{0}\in B_{1/2}^{+}(u)\) such that \(\text{dist}(x_{0},F(u))\leq\rho_{1}/2.\) We will show that
\[|\nabla u(x_{0})|\leq C_{1}, \tag{4.22}\]
for \(C_{1}>0\) universal.
In fact, we denote \(d_{0}=\text{dist}(x_{0},F(u))\) and we define \(\tilde{u}(x)=\frac{1}{d_{0}}u(x_{0}+d_{0}x).\) Then, since \(B_{d_{0}}(x_{0})\subset\{u>0\}\),
\[\Delta_{\tilde{p}(x)}\tilde{u}=\tilde{f}\text{ in }B_{1},\]
with \(\tilde{f}(x)=d_{0}f(x_{0}+d_{0}x)\) and \(\tilde{p}(x)=p(x_{0}+d_{0}x)\) and therefore,
\[||\nabla\tilde{p}||_{L^{\infty}(B_{1})}\leq||\nabla p||_{L^{\infty}(B_{d_{0}}(x _{0}))},\qquad||\tilde{f}||_{L^{\infty}(B_{1})}\leq||f||_{L^{\infty}(B_{d_{0}}(x _{0}))}. \tag{4.23}\]
Since \(d_{0}=\text{dist}(x_{0},F(u))\), there exists \(z_{0}\in F(u)\) such that \(|x_{0}-z_{0}|=d_{0}\) and recalling that \(d_{0}<1/8\) we see that \(z_{0}\in B_{5/8}\cap F(u)\).
Also \(B_{d_{0}}(x_{0})\subset B_{2d_{0}}(z_{0})\subset B_{\rho_{1}}(z_{0})\). Then (4.21) yields
\[u(x)\leq C_{0}\text{dist}(x,F(u))\quad\text{ in }B_{d_{0}}(x_{0}).\]
Moreover, if \(x\in B_{d_{0}}(x_{0})\),
\[\text{dist}(x,F(u))\leq|x-z_{0}|<2d_{0}\]
and then,
\[u(x)\leq C_{0}\text{dist}(x,F(u))\leq C_{0}2d_{0}\quad\text{ in }B_{d_{0}}(x_{0})\]
which implies
\[||\tilde{u}||_{L^{\infty}(B_{1})}=\frac{1}{d_{0}}||u||_{L^{\infty}(B_{d_{0}}( x_{0}))}\leq 2C_{0}. \tag{4.24}\]
Hence, from Theorem 1.1 in [Fa] we deduce that \(\tilde{u}\in C^{1,\alpha}(\overline{B_{1/2}})\) and \(||\nabla\tilde{u}||_{L^{\infty}(B_{1/2})}\leq C_{1}\). Taking into account (4.23) and (4.24), we obtain that the constant \(C_{1}>0\) can be taken depending only on \(n\), \(p_{\text{min}}\), \(p_{\text{max}}\), \(||\nabla p||_{L^{\infty}(B_{3/4})}\), \(||f||_{L^{\infty}(B_{3/4})}\) and \(C_{0}\). It follows that
\[|\nabla u(x_{0})|=|\nabla\tilde{u}(0)|\leq C_{1},\]
which proves (4.22).
_Step IV_. Let \(x_{0}\in B_{1/2}^{+}(u)\) such that \(\text{dist}(x_{0},F(u))>\rho_{1}/2.\) We will show that
\[|\nabla u(x_{0})|\leq C_{2}, \tag{4.25}\]
for \(C_{2}>0\) universal.
In fact, there holds that \(B_{\rho_{1}/2}(x_{0})\subset\{u>0\}\) and then,
\[\Delta_{p(x)}u=f\text{ in }B_{\rho_{1}/2}(x_{0}).\]
Now Theorem 1.1 in [Fa] implies that \(u\in C^{1,\alpha}(\overline{B_{\rho_{1}/4}(x_{0})})\) and \(||\nabla u||_{L^{\infty}(B_{\rho_{1}/4}(x_{0}))}\leq C_{2}\), where \(C_{2}>0\) is a constant that can be taken depending only on \(n\), \(p_{\text{min}}\), \(p_{\text{max}}\), \(||u||_{L^{\infty}(B_{3/4})}\), \(||\nabla p||_{L^{\infty}(B_{3/4})}\), \(||f||_{L^{\infty}(B_{3/4})}\) and \(\rho_{1}\). This proves (4.25) and completes the proof.
## 5. Asymptotic expansions
In this section we revisit some lemmas that are well known in the linear setting (see [CS] and the Appendix in [C2]), for the case of \(p_{0}\)-harmonic functions (i.e., \(\Delta_{p_{0}}u=0\), \(p_{0}\in(1,\infty)\)). Our results --that are used in Theorem 1.2 and Section 7-- concern the existence of first order expansions at one side regular boundary points of positive Lipschitz \(p_{0}\)-harmonic functions, vanishing at the boundary of a domain. The proof can be applied to a general class of fully nonlinear degenerate elliptic operators (see Remark 5.2).
For the notion of solution we refer to Definition 2.1 and Remark 2.3. Our result is the following
**Lemma 5.1**.: _Let \(1<p_{0}<\infty\) and let \(u\) be a positive Lipschitz \(p_{0}\)-harmonic function in a domain \(\Omega\subset\mathbb{R}^{n}.\) Let \(x_{0}\in\partial\Omega\) and assume that \(u\) vanishes continuously on \(\partial\Omega\cap B_{\rho}(x_{0}),\) for some \(\rho>0.\)_
1. _If there exists_ \(B_{r}(y)\subset\Omega\) _such that_ \(x_{0}\in\partial B_{r}(y)\)_, then_ \[u(x)=\alpha\langle x-x_{0},\nu\rangle^{+}+o(|x-x_{0}|),\] _in the ball_ \(B_{r}(y),\) _with_ \(\alpha>0\) _and_ \(\nu=\frac{y-x_{0}}{|y-x_{0}|}.\)__
2. _If there exists a ball_ \(B_{r}(y)\subset\Omega^{c}\) _such that_ \(x_{0}\in\partial B_{r}(y)\)_, then_ \[u(x)=\beta\langle x-x_{0},\nu\rangle^{+}+o(|x-x_{0}|),\] _with_ \(\beta\geq 0\) _and_ \(\nu=\frac{x_{0}-y}{|x_{0}-y|}\)_. In addition, if_ \(\beta>0,\) _then_ \(B_{r}(y)\) _is tangent to_ \(\partial\Omega\) _at_ \(x_{0}.\)__
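A model case to keep in mind (our example, not part of the proof): for \(\Omega=\{x_{n}>0\}\) and \(u(x)=\alpha x_{n}\) with \(\alpha>0\), one has \(\nabla u\equiv\alpha e_{n}\), so
\[\Delta_{p_{0}}u=\operatorname{div}\big(\alpha^{p_{0}-1}e_{n}\big)=0,\]
and both expansions hold at \(x_{0}=0\) with vanishing remainder, with \(\beta=\alpha\) and \(\nu=e_{n}\). The lemma asserts that, at boundary points admitting a tangent ball, a general positive Lipschitz \(p_{0}\)-harmonic function is approximated to first order by such a half-plane profile.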
Proof.: We will assume, without loss of generality, that \(x_{0}=0,\)\(\nu=e_{n}\) and \(\rho>1.\) We will let \(\lambda_{0}:=\min\{1,p_{0}-1\}\) and \(\Lambda_{0}:=\max\{1,p_{0}-1\}.\)
We define
\[\tilde{u}=\left\{\begin{array}{ll}u&x\in\bar{\Omega}\cap\overline{B_{1}}, \\ 0&x\in\bar{\Omega}^{c}\cap\overline{B_{1}}.\end{array}\right. \tag{5.1}\]
Hence \(\tilde{u}\) is Lipschitz in \(\overline{B_{1}}\). To simplify the notation we will denote \(\tilde{u}\) as \(u.\)
**Case (a).** Let \(h\) be the solution of
\[\left\{\begin{array}{ll}\mathcal{M}^{-}_{\lambda_{0},\Lambda_{0}}(D^{2}h)=0,\quad B_{r}(y)\setminus\overline{B_{r/2}}(y)\\ h=0,\quad\text{on}\ \partial B_{r}(y)\\ h=\min_{\overline{B_{r/2}}(y)}u,\quad\text{on}\ \partial B_{r/2}(y).\end{array}\right.\]
Let \(h\equiv\min_{\overline{B_{r/2}}(y)}u\) in \(B_{r/2}(y)\) and \(h\equiv 0\) in \(B_{r}^{c}(y)\). Then, \(h\geq 0,\)\(h\in C^{2}(\overline{B_{r}(y)\setminus B_{r/2}(y)}),\) see [CC], and
\[h(x)=cx_{n}^{+}+o(|x|),\quad c>0. \tag{5.2}\]
In addition, recalling (2.2), we have in \(B_{r}(y),\) in the viscosity sense,
\[0=\Delta_{p_{0}}u(x) =|\nabla u(x)|^{p_{0}-2}\left(\Delta u+(p_{0}-2)\langle D^{2}u(x) \frac{\nabla u(x)}{|\nabla u(x)|},\frac{\nabla u(x)}{|\nabla u(x)|}\rangle\right)\] \[\geq|\nabla u(x)|^{p_{0}-2}\mathcal{M}^{-}_{\lambda_{0},\Lambda_{ 0}}(D^{2}u(x)).\]
Hence, applying Lemma 6 in [IS], we conclude that \(\mathcal{M}^{-}_{\lambda_{0},\Lambda_{0}}(D^{2}u(x))\leq 0,\) in the viscosity sense, in \(B_{r}(y).\) Since \(u\geq 0\) in \(B_{r}(y),\) we deduce that \(u\geq h\) in \(B_{r}(y).\) We define now
\[\alpha_{0}=\sup\{m:\ u(x)\geq mh(x)\ \text{ in }B_{1}\cap B_{r}(y)\}\]
and for \(k\in\mathbb{N}\)
\[\alpha_{k}=\sup\{m:\ u(x)\geq mh(x)\ \text{ in }B_{2^{-k}}\cap B_{r}(y)\}.\]
In particular these sets are well defined and not empty, since \(m=1\) belongs to all of them. The sequence \(\{\alpha_{k}\}_{k\in\mathbb{N}}\) is increasing, and it is bounded because \(u\) is Lipschitz and, by (5.2), \(h\) has linear growth at the origin. Let
\[\tilde{\alpha}=\lim_{k\to\infty}\alpha_{k}.\]
From the definition of \(\tilde{\alpha}\) there holds that \(\tilde{\alpha}>0\) and
\[\liminf_{x\to 0,\ x\in B_{r}(y)}\frac{u(x)-\tilde{\alpha}h(x)}{|x|}\geq 0. \tag{5.3}\]
Let us show that
\[\limsup_{x\to 0,\ x\in B_{r}(y)}\frac{u(x)-\tilde{\alpha}h(x)}{|x|}\leq 0. \tag{5.4}\]
Then, (5.2), (5.3) and (5.4) will give the desired result.
We argue by contradiction assuming that
\[\limsup_{x\to 0,\ x\in B_{r}(y)}\frac{u(x)-\tilde{\alpha}h(x)}{|x|}=2\delta>0.\]
Hence, there exists a sequence \(x^{k}\in B_{r}(y)\), \(x^{k}\to 0\), such that for every \(k\)
\[\frac{u(x^{k})-\tilde{\alpha}h(x^{k})}{|x^{k}|}\geq\delta.\]
We define \(r_{k}=|x^{k}|,\)\(y^{k}=\frac{x^{k}}{r_{k}},\) so that \(r_{k}\to 0\) and \(|y^{k}|=1.\) Moreover, we denote
\[u_{k}(x):=\frac{u(r_{k}x)}{r_{k}},\quad h_{k}(x):=\frac{h(r_{k}x)}{r_{k}}.\]
Since \(u\) and \(h\) are Lipschitz in \(\overline{B}_{1}\) and \(u(0)=h(0)=0\) then, there exists \(v\) Lipschitz continuous in \(\mathbb{R}^{n}\) such that, for a subsequence,
\[u_{k}-\tilde{\alpha}h_{k}\to v\]
uniformly on compact sets and such that \(y^{k}\to y^{0},\)\(|y^{0}|=1,\)\(y_{n}^{0}\geq 0.\) Since
\[u_{k}(y^{k})-\tilde{\alpha}h_{k}(y^{k})\geq\delta,\]
as a consequence \(v(y^{0})\geq\delta.\) Then there exists \(z^{0}\) with \(|z^{0}|=1,\)\(z_{n}^{0}>0\) and \(\overline{B_{\varepsilon}(z^{0})}\subset\{x_{n}>0\}\) such that
\[v(x)\geq\frac{\delta}{2}\qquad\text{ in }B_{\varepsilon}(z^{0})\]
and
\[u_{k}(x)-\tilde{\alpha}h_{k}(x)\geq\frac{\delta}{2}\qquad\text{ in }B_{\varepsilon}(z^{0}). \tag{5.5}\]
We know that in \(B_{r}(y)\cap B_{2^{-k}}\)
\[u(x)\geq\alpha_{k}h(x),\]
and \(r_{k}\to 0.\) We take a sequence \(j_{k}\to+\infty\) such that \(r_{k}<2^{-j_{k}}\) and then
\[u(x)\geq\alpha_{j_{k}}h(x)\quad\text{ in }B_{r}(y)\cap B_{2^{-j_{k}}}.\]
Hence
\[u(x)\geq\alpha_{j_{k}}h(x)\quad\text{ if }|x|<r_{k},\ x\in B_{r}(y).\]
As a consequence,
\[u_{k}(x)\geq\alpha_{j_{k}}h_{k}(x)\quad\text{ if }|x|<1,\ |x-\frac{y}{r_{k}}|< \frac{r}{r_{k}},\]
and, recalling (5.5), we have \(u_{k}(x)-\alpha_{j_{k}}h_{k}(x)\geq\frac{\delta}{2}\) in \(B_{\varepsilon}(z^{0}).\) We also observe that
\[\begin{split}\mathcal{M}_{\lambda_{0},\Lambda_{0}}^{-}(D^{2}h_{k})&=0\qquad\text{ in }B_{\frac{r}{r_{k}}}(\tfrac{y}{r_{k}})\cap\overline{B}^{c}_{\frac{r}{2r_{k}}}(\tfrac{y}{r_{k}}),\\ \mathcal{M}_{\lambda_{0},\Lambda_{0}}^{-}(D^{2}u_{k})&\leq 0\qquad\text{ in }B_{1}\cap B_{\frac{r}{r_{k}}}(\tfrac{y}{r_{k}}).\end{split} \tag{5.6}\]
Hence, in the viscosity sense, if \(k\) is large,
\[\mathcal{M}_{\lambda_{0},\Lambda_{0}}^{-}(D^{2}(u_{k}-\alpha_{j_{k}}h_{k})) \leq 0\quad\text{ in }B_{1}\cap B_{\frac{x}{r_{k}}}(\frac{y}{r_{k}}).\]
In fact, since \(\alpha_{j_{k}}h_{k}\in C^{2}\), we get from (5.6), reasoning as in Proposition 2.13 in [CC],
\[\mathcal{M}^{-}_{\lambda_{0},\Lambda_{0}}(D^{2}(u_{k}-\alpha_{j_{k}}h_{k}))\leq- \mathcal{M}^{-}_{\lambda_{0},\Lambda_{0}}(D^{2}(\alpha_{j_{k}}h_{k}))=0\]
in \(B_{1}\cap B_{\frac{r}{r_{k}}}(\frac{y}{r_{k}})\). We now consider, for large \(k\), \(w_{k}\) satisfying
\[\left\{\begin{array}{l}\mathcal{M}^{-}_{\lambda_{0},\Lambda_{0}}(D^{2}w_{k})=0\quad\text{ in }D_{k}:=B_{1}\cap B_{\frac{r}{r_{k}}}(\frac{y}{r_{k}}),\\ w_{k}=\frac{\delta}{4}\varphi\quad\text{ on }B_{\varepsilon}(z^{0})\cap\partial D_{k},\\ w_{k}=0\quad\text{ on }\partial D_{k}\setminus B_{\varepsilon}(z^{0}),\end{array}\right.\]
\[\varphi\in C_{0}^{\infty}(B_{\varepsilon}(z^{0})),\quad 0\leq\varphi\leq 1, \quad\varphi\equiv 1\text{ in }B_{\varepsilon/2}(z^{0}).\]
Then \(w_{k}\in C(\overline{D_{k}})\cap C^{2}(\overline{D_{k}\cap B_{1/2}})\), \(w_{k}\geq 0\) in \(\overline{D_{k}}\) and
\[u_{k}-\alpha_{j_{k}}h_{k}\geq w_{k}\quad\text{ in }\overline{D_{k}}.\]
We claim that there exist \(\mu>0\) and \(\tilde{\rho}_{0}>0\) such that, for large \(k\),
\[\frac{w_{k}(x)}{h_{k}(x)}\geq\mu\quad\text{ in }B_{\tilde{\rho}_{0}}\cap B_{ \frac{r}{r_{k}}}(\frac{y}{r_{k}}). \tag{5.7}\]
In fact, we consider \(\varphi_{k}\) a \(C^{2}\) diffeomorphism which maps, for \(\rho_{0}\) small, \(B_{\rho_{0}}\cap B_{\frac{r}{r_{k}}}(\frac{y}{r_{k}})\) onto \(B_{1}^{+}:=B_{1}\cap\{x_{n}>0\}\), with \(\varphi_{k}(B_{\rho_{0}}\cap\partial B_{\frac{r}{r_{k}}}(\frac{y}{r_{k}}))=B_{1}\cap\{x_{n}=0\}\) and \(\varphi_{k}(0)=0\). We choose \(\varphi_{k}\) with uniformly bounded \(C^{2}\) norms. Then, we define
\[\tilde{w}_{k}(x)=w_{k}(\varphi_{k}^{-1}(x)),\quad\tilde{h}_{k}(x)=h_{k}( \varphi_{k}^{-1}(x))\quad\text{ for }x\in B_{1}^{+}.\]
We first observe that, for every \(M,N\in\mathcal{S}^{n\times n}\), the following inequalities hold (see Lemma 2.10 in [CC])
\[\mathcal{M}^{+}_{\lambda_{0},\Lambda_{0}}(M-N)\geq\mathcal{M}^{-}_{\lambda_{ 0},\Lambda_{0}}(M)-\mathcal{M}^{-}_{\lambda_{0},\Lambda_{0}}(N)\geq\mathcal{ M}^{-}_{\lambda_{0},\Lambda_{0}}(M-N).\]
Then, we can apply Proposition 2.1 in [SS] with \(F(M):=\mathcal{M}^{-}_{\lambda_{0},\Lambda_{0}}(M)\) and we obtain that
\[\tilde{F}_{k}(D^{2}\tilde{w}_{k}(x),D\tilde{w}_{k}(x),x)=0\quad\text{in }\ B_{1}^{+},\]
where, for \(M\in\mathcal{S}^{n\times n}\), \(q\in\mathbb{R}^{n}\) and \(x\in B_{1}^{+}\),
\[\tilde{F}_{k}(M,q,x):=\mathcal{M}^{-}_{\lambda_{0},\Lambda_{0}}(D\varphi_{k}^{ T}(\varphi_{k}^{-1}(x))MD\varphi_{k}(\varphi_{k}^{-1}(x))+qD^{2}\varphi_{k}( \varphi_{k}^{-1}(x))),\]
with \(\tilde{F}_{k}\) satisfying for every \(M,N\in\mathcal{S}^{n\times n}\), \(p,q\in\mathbb{R}^{n}\) and \(x\in B_{1}^{+}\),
\[\mathcal{M}^{+}_{\lambda_{0},\Lambda_{0}}(M-N)+K|p-q| \geq\tilde{F}_{k}(M,p,x)-\tilde{F}_{k}(N,q,x)\] \[\geq\mathcal{M}^{-}_{\lambda_{0},\Lambda_{0}}(M-N)-K|p-q|.\]
Here \(K\) is a fixed constant depending only on the uniform bound of the \(C^{2}\) norms of \(\varphi_{k}\).
As a consequence, \(\tilde{w}_{k}\) satisfy in the viscosity sense, the following set of inequalities
\[\left\{\begin{array}{l}\mathcal{M}^{+}_{\lambda_{0},\Lambda_{0}}(D^{2}\tilde {w}_{k})+K|\nabla\tilde{w}_{k}|\geq 0\quad\text{ in }B_{1}^{+},\\ \mathcal{M}^{-}_{\lambda_{0},\Lambda_{0}}(D^{2}\tilde{w}_{k})-K|\nabla\tilde{w} _{k}|\leq 0\quad\text{ in }B_{1}^{+}.\end{array}\right. \tag{5.8}\]
With similar arguments we obtain that \(\tilde{h}_{k}\) satisfy in the viscosity sense the inequalities in (5.8) in \(B_{1}^{+}\), as well.
We also notice that, since \(h_{k}(x)\to cx_{n}^{+}\) uniformly on compact sets of \(\mathbb{R}^{n}\), with \(c>0\), then, \(\tilde{h}_{k}(\frac{1}{2}e_{n})=h_{k}(\varphi_{k}^{-1}(\frac{1}{2}e_{n}))\to \tilde{c}>0\).
Hence, we can apply Proposition 2.4 of [SS] and we get
\[\tilde{h}_{k}(x)\leq C\tilde{h}_{k}(\frac{1}{2}e_{n})x_{n}\leq C_{0}x_{n}\quad \text{ in }B_{1/2}^{+}, \tag{5.9}\]
for a positive constant \(C_{0}\) and large \(k\).
On the other hand, for \(k_{1}\) large and fixed, there holds \(w_{k}\geq w_{k_{1}}\) in \(D_{k_{1}}\), for \(k\geq k_{1}\). We remark that \(w_{k_{1}}>0\) in \(D_{k_{1}}\). Thus, for any \(0<r_{0}<1\),
\[\tilde{w}_{k}(\frac{r_{0}}{2}e_{n})=w_{k}(\varphi_{k}^{-1}(\frac{r_{0}}{2}e_{n }))\geq w_{k_{1}}(\varphi_{k}^{-1}(\frac{r_{0}}{2}e_{n}))\to\tilde{c}_{r_{0}}>0. \tag{5.10}\]
Now the application of Proposition 2.5 in [SS] to \(\tilde{w}_{k}^{r_{0}}(x):=\tilde{w}_{k}(r_{0}x)\), for \(r_{0}>0\) universal and small, gives
\[\tilde{w}_{k}(x)\geq c_{0}\tilde{w}_{k}(\frac{r_{0}}{2}e_{n})x_{n}\quad\text{ in }B_{\frac{r_{0}}{2}}^{+},\]
for a positive constant \(c_{0}\). Hence, using (5.10) with this choice of \(r_{0}\), we get
\[\tilde{w}_{k}(x)\geq c_{1}x_{n}\quad\text{ in }B_{\frac{r_{0}}{2}}^{+}, \tag{5.11}\]
for \(c_{1}\) a positive constant and large \(k\). Thus, from (5.9) and (5.11), we obtain
\[\frac{\tilde{w}_{k}(x)}{\tilde{h}_{k}(x)}\geq\frac{c_{1}}{C_{0}}:=\mu\quad \text{ in }B_{\rho_{1}}^{+},\]
for \(\rho_{1}>0\) small and large \(k\). Now, going back to the original variables, we conclude that
\[\frac{w_{k}(x)}{h_{k}(x)}\geq\mu\quad\text{ in }B_{\tilde{\rho}_{0}}\cap B_{ \frac{r}{r_{k}}}(\frac{y}{r_{k}}),\]
for some constants \(\tilde{\rho}_{0}>0\) and \(\mu>0\), and large \(k\). That is, (5.7) holds.
Finally, since
\[u_{k}-\alpha_{j_{k}}h_{k}\geq w_{k}\quad\text{ in }B_{1}\cap B_{\frac{r}{r_{k}}} (\frac{y}{r_{k}}),\]
we get
\[u_{k}-\alpha_{j_{k}}h_{k}\geq w_{k}=\frac{w_{k}}{h_{k}}h_{k}\geq\mu h_{k}\quad \text{ in }B_{\tilde{\rho}_{0}}\cap B_{\frac{r}{r_{k}}}(\frac{y}{r_{k}}).\]
As a consequence,
\[u_{k}-(\alpha_{j_{k}}+\mu)h_{k}\geq 0\quad\text{ in }B_{\tilde{\rho}_{0}}\cap B _{\frac{r}{r_{k}}}(\frac{y}{r_{k}}).\]
Then, in the original variables, we have
\[u(r_{k}x)-(\alpha_{j_{k}}+\mu)h(r_{k}x)\geq 0\quad\text{ when }|x|\leq\tilde{\rho}_{0},\ |x-\frac{y}{r_{k}}|<\frac{r}{r_{k}},\]
or, equivalently, when \(|r_{k}x|\leq r_{k}\tilde{\rho}_{0}\), \(|r_{k}x-y|<r\). Since \(\alpha_{j_{k}}+\mu\to\tilde{\alpha}+\mu\), there holds that \(\alpha_{j_{k}}+\mu\geq\tilde{\alpha}+\mu/2\), if \(k\) is large enough. Hence,
\[u-(\tilde{\alpha}+\mu/2)h\geq 0\quad\text{ in }B_{r_{k_{0}}\tilde{\rho}_{0}} \cap B_{r}(y),\]
for some suitable \(k_{0}\). As a consequence, if \(2^{-k}\leq r_{k_{0}}\tilde{\rho}_{0}\),
\[u-(\alpha_{k}+\mu/2)h\geq u-(\tilde{\alpha}+\mu/2)h\geq 0\quad\text{ in }B_{2^{-k}}\cap B_{r}(y),\]
but this contradicts the definition of \(\alpha_{k}\) and completes the proof.
**Case (b).** Recalling (5.1), we have that \(\tilde{u}\) is Lipschitz in \(\overline{B_{1}}\), satisfies \(\Delta_{p_{0}}\tilde{u}\geq 0\) in \(B_{1}\) in the sense of Definition 2.2 in [JLM] and, by Theorem 2.5 of that paper, in the viscosity sense. We again denote \(\tilde{u}\) as \(u\). Without loss of generality we may suppose that \(B_{2r}(y)\subset B_{1}\).
Let \(h\) be the solution of
\[\left\{\begin{array}{ll}{\mathcal{M}}^{+}_{\lambda_{0},\Lambda_{0}}(D^{2}h)=0, \quad B_{2r}(y)\setminus\overline{B_{r}}(y)\\ h=0,\quad\text{on}\ \partial B_{r}(y)\\ h=\max_{\partial B_{2r}(y)}u,\quad\text{on}\ \partial B_{2r}(y),\end{array}\right.\]
and define \(h\equiv 0\) in \(B_{r}(y)\). Then, \(h\geq 0\), \(h\in C^{2}(\overline{B_{2r}(y)\setminus B_{r}(y)})\), see [CC], and
\[h(x)=cx^{+}_{n}+o(|x|),\quad c>0. \tag{5.12}\]
In addition, recalling (2.2), we have in \(B_{1}\), in the viscosity sense,
\[0\leq\Delta_{p_{0}}u(x) =|\nabla u(x)|^{p_{0}-2}\left(\Delta u+(p_{0}-2)\langle D^{2}u(x) \frac{\nabla u(x)}{|\nabla u(x)|},\frac{\nabla u(x)}{|\nabla u(x)|}\rangle\right)\] \[\leq|\nabla u(x)|^{p_{0}-2}{\mathcal{M}}^{+}_{\lambda_{0},\Lambda_ {0}}(D^{2}u(x)).\]
Hence, applying Lemma 6 in [IS], we conclude that \({\mathcal{M}}^{+}_{\lambda_{0},\Lambda_{0}}(D^{2}u(x))\geq 0\), in the viscosity sense, in \(B_{1}.\) Since \(u=0\) on \(\partial B_{r}(y)\), then \(u\leq h\) on \(\partial(B_{2r}(y)\setminus\overline{B}_{r}(y))\), thus we deduce that \(u\leq h\) in \(B_{2r}(y)\setminus\overline{B}_{r}(y).\) We define now
\[\beta_{0}=\inf\{m:\ mh(x)\geq u(x)\ \text{ in }B_{1}\cap B_{r}^{c}(y)\}\]
and for \(k\in{\mathbb{N}}\)
\[\beta_{k}=\inf\{m:\ mh(x)\geq u(x)\ \text{ in }B_{2^{-k}}\cap B_{r}^{c}(y)\}.\]
In particular these sets are well defined and not empty, for \(k\geq k_{0}\), since \(m=1\) belongs to all of them. The sequence \(\{\beta_{k}\}_{k\in{\mathbb{N}}}\) is monotone decreasing, so that
\[\tilde{\beta}:=\inf_{k\in{\mathbb{N}}}\beta_{k}\geq 0,\]
because \(\beta_{k}\geq 0\) for \(k\in{\mathbb{N}}.\) There holds that
\[\limsup_{x\to 0,\ x\in B_{r}^{c}(y)}\frac{u(x)-\tilde{\beta}h(x)}{|x|}\leq 0. \tag{5.13}\]
We will show that
\[\liminf_{x\to 0,\ x\in B_{r}^{c}(y)}\frac{u(x)-\tilde{\beta}h(x)}{|x|}\geq 0. \tag{5.14}\]
Then, (5.12), (5.13) and (5.14) will give the desired result.
We will proceed by contradiction. In fact, assume that there exists \(\delta>0\) such that
\[\liminf_{x\to 0,\ x\in B_{r}^{c}(y)}\frac{u(x)-\tilde{\beta}h(x)}{|x|}=-2\delta.\]
Then, there exists a sequence \(\{x^{k}\}_{k\in{\mathbb{N}}}\subset B_{r}^{c}(y)\), \(x^{k}\to 0\), such that
\[\frac{u(x^{k})-\tilde{\beta}h(x^{k})}{|x^{k}|}\leq-\delta.\]
We define \(r_{k}=|x^{k}|\), \(y^{k}=\frac{x^{k}}{r_{k}}\), so that \(r_{k}\to 0\) and \(|y^{k}|=1.\) Moreover, we denote
\[u_{k}(x):=\frac{u(r_{k}x)}{r_{k}},\quad h_{k}(x):=\frac{h(r_{k}x)}{r_{k}}.\]
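Observe that this rescaling is compatible with the equations: since \(D^{2}h_{k}(x)=r_{k}D^{2}h(r_{k}x)\) and the extremal operators are positively homogeneous of degree one,
\[\mathcal{M}^{+}_{\lambda_{0},\Lambda_{0}}(D^{2}h_{k}(x))=r_{k}\,\mathcal{M}^{+}_{\lambda_{0},\Lambda_{0}}(D^{2}h(r_{k}x))=0\]
in the rescaled annulus, and similarly for \(u_{k}\); this is the content of (5.16) below.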
Since \(u\) is Lipschitz in \(B_{1}\), \(h\in C^{2}(\overline{B_{2r}(y)\setminus B_{r}(y)})\) and \(u(0)=h(0)=0\) then, there exists \(v\) Lipschitz continuous in \({\mathbb{R}}^{n}\) such that, for a subsequence,
\[u_{k}-\tilde{\beta}h_{k}\to v\]
uniformly on compact sets and such that \(y^{k}\to y^{0}\), \(|y^{0}|=1\), \(y^{0}_{n}\geq 0.\) Since
\[u_{k}(y^{k})-\tilde{\beta}h_{k}(y^{k})\leq-\delta,\]
as a consequence \(v(y^{0})\leq-\delta.\) Then, by continuity, there exists \(z^{0}\) with \(|z^{0}|=1\), \(z^{0}_{n}>0\) and \(\overline{B_{\varepsilon}(z^{0})}\subset\{x_{n}>0\}\) such that
\[v(x)\leq-\frac{3\delta}{4}\qquad\text{ in }B_{\varepsilon}(z^{0})\]
and hence, by the uniform convergence of \(u_{k}-\tilde{\beta}h_{k}\) to \(v\), for \(k\) large enough,
\[u_{k}(x)-\tilde{\beta}h_{k}(x)\leq-\frac{\delta}{2}\qquad\text{ in }B_{\varepsilon}(z^{0}). \tag{5.15}\]
We know that in \(B_{r}^{c}(y)\cap B_{2^{-k}}\)
\[u(x)\leq\beta_{k}h(x),\]
and \(r_{k}\to 0\). We take a sequence \(j_{k}\to+\infty\) such that \(r_{k}<2^{-j_{k}}\) and then
\[u(x)\leq\beta_{j_{k}}h(x)\quad\text{ in }B_{r}^{c}(y)\cap B_{2^{-j_{k}}}.\]
Hence
\[u(x)\leq\beta_{j_{k}}h(x)\quad\text{ if }|x|<r_{k},\ x\in B_{r}^{c}(y).\]
As a consequence,
\[u_{k}(x)\leq\beta_{j_{k}}h_{k}(x)\quad\text{ if }|x|<1,\ |x-\frac{y}{r_{k}}|> \frac{r}{r_{k}},\]
and, recalling (5.15), we have \(u_{k}(x)-\beta_{j_{k}}h_{k}(x)\leq-\frac{\delta}{2}\) in \(B_{\varepsilon}(z^{0}).\) We also observe that
\[\begin{array}{ll}\mathcal{M}^{+}_{\lambda_{0},\Lambda_{0}}(D^{2}h_{k})=0&\text{ in }B_{\frac{2r}{r_{k}}}(\frac{y}{r_{k}})\cap\overline{B}^{c}_{\frac{r}{r_{k}}}(\frac{y}{r_{k}}),\\ \mathcal{M}^{+}_{\lambda_{0},\Lambda_{0}}(D^{2}u_{k})\geq 0&\text{ in }B_{1}\cap\overline{B}^{c}_{\frac{r}{r_{k}}}(\frac{y}{r_{k}}).\end{array} \tag{5.16}\]
Hence, in the viscosity sense,
\[\mathcal{M}^{+}_{\lambda_{0},\Lambda_{0}}(D^{2}(u_{k}-\beta_{j_{k}}h_{k})) \geq 0\quad\text{in }\ B_{1}\cap\overline{B}^{c}_{\frac{r}{r_{k}}}(\frac{y}{r_{k}}).\]
In fact, since \(\beta_{j_{k}}h_{k}\in C^{2}\), we get from (5.16), reasoning as in Proposition 2.13 in [CC],
\[\mathcal{M}^{+}_{\lambda_{0},\Lambda_{0}}(D^{2}(u_{k}-\beta_{j_{k}}h_{k})) \geq-\mathcal{M}^{+}_{\lambda_{0},\Lambda_{0}}(D^{2}(\beta_{j_{k}}h_{k}))=0\]
in \(B_{1}\cap\overline{B}^{c}_{\frac{r}{r_{k}}}(\frac{y}{r_{k}})\). Thus, we deduce that
\[\mathcal{M}^{-}_{\lambda_{0},\Lambda_{0}}(D^{2}(\beta_{j_{k}}h_{k}-u_{k})) \leq 0\quad\text{ in }B_{1}\cap\overline{B}^{c}_{\frac{r}{r_{k}}}(\frac{y}{r_{k}}).\]
We now consider \(w_{k}\) satisfying
\[\left\{\begin{array}{ll}\mathcal{M}^{-}_{\lambda_{0},\Lambda_{0}}(D^{2}w_{k} )=0&\text{ in }D_{k}:=B_{1}\cap\overline{B}^{c}_{\frac{r}{r_{k}}}(\frac{y}{r_{k}}),\\ w_{k}=\frac{\delta}{4}\varphi&\text{ on }B_{\varepsilon}(z^{0})\cap\partial D _{k},\\ w_{k}=0&\text{ on }\partial D_{k}\setminus B_{\varepsilon}(z^{0}),\end{array}\right. \tag{5.17}\]
\[\varphi\in C^{\infty}_{0}(B_{\varepsilon}(z^{0})),\quad 0\leq\varphi\leq 1, \quad\varphi\equiv 1\text{ in }B_{\varepsilon/2}(z^{0}).\]
Then \(w_{k}\in C(\overline{D_{k}})\cap C^{2}(\overline{D_{k}\cap B_{1/2}})\), \(w_{k}\geq 0\) in \(\overline{D_{k}}\) and
\[\beta_{j_{k}}h_{k}-u_{k}\geq w_{k}\quad\text{ in }\overline{D_{k}}.\]
We claim that there exist \(\mu>0\) and \(\tilde{\rho}_{0}>0\) such that, for large \(k\),
\[\frac{w_{k}(x)}{h_{k}(x)}\geq\mu\quad\text{ in }B_{\tilde{\rho}_{0}}\cap\overline{B}^{c}_{\frac{r}{r_{k}}}(\frac{y}{r_{k}}). \tag{5.18}\]
In fact, we consider \(\varphi_{k}\) a \(C^{2}\) diffeomorphism which maps, for \(\rho_{0}\) small, \(B_{\rho_{0}}\cap\overline{B}^{c}_{\frac{r}{r_{k}}}(\frac{y}{r_{k}})\) onto \(B_{1}^{+}:=B_{1}\cap\{x_{n}>0\}\), with \(\varphi_{k}(B_{\rho_{0}}\cap\partial B_{\frac{r}{r_{k}}}(\frac{y}{r_{k}}))=B_{1}\cap\{x_{n}=0\}\) and \(\varphi_{k}(0)=0\). We choose \(\varphi_{k}\) with uniformly bounded \(C^{2}\) norms. Then, we define
\[\tilde{w}_{k}(x)=w_{k}(\varphi_{k}^{-1}(x)),\quad\tilde{h}_{k}(x)=h_{k}( \varphi_{k}^{-1}(x))\quad\text{ for }x\in B_{1}^{+}.\]
Reasoning as in Case a), we get that \(\tilde{w}_{k}\) satisfy in the viscosity sense, the following set of inequalities
\[\left\{\begin{array}{ll}\mathcal{M}^{+}_{\lambda_{0},\Lambda_{0}}(D^{2} \tilde{w}_{k})+K|\nabla\tilde{w}_{k}|\geq 0&\text{ in }B_{1}^{+},\\ \mathcal{M}^{-}_{\lambda_{0},\Lambda_{0}}(D^{2}\tilde{w}_{k})-K|\nabla\tilde{ w}_{k}|\leq 0&\text{ in }B_{1}^{+},\end{array}\right. \tag{5.19}\]
where \(K\) is a fixed constant depending only on the uniform bound of the \(C^{2}\) norms of \(\varphi_{k}\). With similar arguments we obtain that \(\tilde{h}_{k}\) satisfy in the viscosity sense the inequalities in (5.19) in \(B_{1}^{+}\), as well.
We also notice that, since \(h_{k}(x)\to cx_{n}^{+}\) uniformly on compact sets of \(\mathbb{R}^{n}\), with \(c>0\), then, \(\tilde{h}_{k}(\frac{1}{2}e_{n})=h_{k}(\varphi_{k}^{-1}(\frac{1}{2}e_{n}))\to \tilde{c}>0\).
Hence, we can apply Proposition 2.4 of [SS] and we get
\[\tilde{h}_{k}(x)\leq C\tilde{h}_{k}(\frac{1}{2}e_{n})x_{n}\leq C_{0}x_{n}\quad \text{ in }B_{1/2}^{+}, \tag{5.20}\]
for a positive constant \(C_{0}\) and large \(k\).
On the other hand, let \(w_{0}\) be the solution of
\[\left\{\begin{array}{ll}\mathcal{M}^{-}_{\lambda_{0},\Lambda_{0}}(D^{2}w_{0})=0\quad\text{ in }B_{1}^{+},\\ w_{0}=\frac{\delta}{4}\varphi\quad\text{ on }B_{\varepsilon}(z^{0})\cap\partial B_{1}^{+},\\ w_{0}=0\quad\text{ on }\partial B_{1}^{+}\setminus B_{\varepsilon}(z^{0}),\end{array}\right.\]
with \(\varphi\) as in (5.17). Then \(w_{k}\geq w_{0}\) in \(B_{1}^{+}\). We remark that \(w_{0}>0\) in \(B_{1}^{+}\). Thus, for any \(0<r_{0}<1\),
\[\tilde{w}_{k}(\frac{r_{0}}{2}e_{n})=w_{k}(\varphi_{k}^{-1}(\frac{r_{0}}{2}e_{ n}))\geq w_{0}(\varphi_{k}^{-1}(\frac{r_{0}}{2}e_{n}))\to\tilde{c}_{r_{0}}>0. \tag{5.21}\]
Now the application of Proposition 2.5 in [SS] to \(\tilde{w}_{k}^{r_{0}}(x):=\tilde{w}_{k}(r_{0}x)\), for \(r_{0}>0\) universal and small, gives
\[\tilde{w}_{k}(x)\geq c_{0}\tilde{w}_{k}(\frac{r_{0}}{2}e_{n})x_{n}\quad\text{ in }B_{\frac{r_{0}}{2}}^{+},\]
for a positive constant \(c_{0}\). Hence, using (5.21) with this choice of \(r_{0}\), we get
\[\tilde{w}_{k}(x)\geq c_{1}x_{n}\quad\text{ in }B_{\frac{r_{0}}{2}}^{+}, \tag{5.22}\]
for \(c_{1}\) a positive constant and large \(k\). Thus, from (5.20) and (5.22), we obtain
\[\frac{\tilde{w}_{k}(x)}{\tilde{h}_{k}(x)}\geq\frac{c_{1}}{C_{0}}:=\mu\quad \text{ in }B_{\rho_{1}}^{+},\]
for \(\rho_{1}>0\) small and large \(k\). Now, going back to the original variables, we conclude that
\[\frac{w_{k}(x)}{h_{k}(x)}\geq\mu\quad\text{ in }B_{\tilde{\rho}_{0}}\cap\overline{B}^{c}_{\frac{r}{r_{k}}}(\frac{y}{r_{k}}),\]
for some constants \(\tilde{\rho}_{0}>0\) and \(\mu>0\), and large \(k\). That is, (5.18) holds.
Finally, since
\[\beta_{j_{k}}h_{k}-u_{k}\geq w_{k}\quad\text{ in }B_{1}\cap\overline{B}^{c}_{ \frac{r}{r_{k}}}(\frac{y}{r_{k}}),\]
we get
\[\beta_{j_{k}}h_{k}-u_{k}\geq w_{k}=\frac{w_{k}}{h_{k}}h_{k}\geq\mu h_{k}\quad \text{ in }B_{\tilde{\rho}_{0}}\cap\overline{B}^{c}_{\frac{r}{r_{k}}}(\frac{y}{r_{k}}).\]
As a consequence,
\[(\beta_{j_{k}}-\mu)h_{k}-u_{k}\geq 0\quad\text{ in }B_{\tilde{\rho}_{0}}\cap \overline{B}^{c}_{\frac{r}{r_{k}}}(\frac{y}{r_{k}}).\]
Then, in the original variables, we have
\[(\beta_{j_{k}}-\mu)h(r_{k}x)-u(r_{k}x)\geq 0\quad\text{ when }|x|\leq\tilde{\rho}_{0},\ |x-\frac{y}{r_{k}}|>\frac{r}{r_{k}},\]
or, equivalently, when \(|r_{k}x|\leq r_{k}\tilde{\rho}_{0}\), \(|r_{k}x-y|>r\). Since \(\beta_{j_{k}}-\mu\to\tilde{\beta}-\mu\), there holds that \(\beta_{j_{k}}-\mu\leq\tilde{\beta}-\frac{\mu}{2}\), if \(k\) is large enough. Hence,
\[(\tilde{\beta}-\mu/2)h-u\geq 0\quad\text{ in }B_{r_{k_{0}}\tilde{\rho}_{0}} \cap B^{c}_{r}(y),\]
for some suitable \(k_{0}\). As a consequence, if \(2^{-k}\leq r_{k_{0}}\tilde{\rho}_{0}\),
\[(\beta_{k}-\mu/2)h-u\geq(\tilde{\beta}-\mu/2)h-u\geq 0\quad\text{ in }B_{2^{-k}}\cap B^{c}_{r}(y),\]
but this contradicts the definition of \(\beta_{k}\) and completes the proof.
**Remark 5.2**.: Lemma 5.1 also holds if we replace in the statement the \(p_{0}\)-Laplace operator by a general class of fully nonlinear degenerate elliptic operators. More precisely, we can consider \(u\) a Lipschitz viscosity solution of an equation of the form
\[F(D^{2}u(x),Du(x),x)=0\quad\text{ in }\Omega,\]
with \(F\) satisfying for every \(M\in\mathcal{S}^{n\times n}\), \(q\in\mathbb{R}^{n}\) and \(x\in\Omega\),
\[|q|^{\sigma}\mathcal{M}^{-}_{\lambda,\Lambda}(M)\leq F(M,q,x)\leq|q|^{\sigma} \mathcal{M}^{+}_{\lambda,\Lambda}(M),\]
for some \(0<\lambda\leq\Lambda\) and \(\sigma\in\mathbb{R}\), and the same proof applies.
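For instance, these bounds hold for \(F(M,q,x)=|q|^{\sigma}\operatorname{tr}(M)\) with \(\lambda=\Lambda=1\), since \(\mathcal{M}^{\pm}_{1,1}(M)=\operatorname{tr}(M)\), and for
\[F(M,q,x)=|q|^{\sigma}\Big(\operatorname{tr}(M)+(p_{0}-2)\big\langle M\tfrac{q}{|q|},\tfrac{q}{|q|}\big\rangle\Big),\]
with \(\lambda=\min\{1,p_{0}-1\}\) and \(\Lambda=\max\{1,p_{0}-1\}\); in particular, for \(\sigma=p_{0}-2\), this class contains the \(p_{0}\)-Laplacian, by (2.2).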
## 6. Regularity of the free boundary
In this section we prove our main result, namely, Theorem 1.2.
Since we will apply a result of [LN1], we first include the definition of viscosity solution employed in that paper in the case of nonnegative solutions. These are solutions of problem (1.1) with \(p(x)\equiv p_{0}\), \(f\equiv 0\) and \(g\equiv 1\).
**Definition 6.1** (Definition 1.4 in [LN1]).: Let \(D\subset\mathbb{R}^{n}\) be a domain, \(u\in C(D)\) be nonnegative and \(1<p_{0}<\infty\). \(u\) is a viscosity (or weak) solution of
\[\left\{\begin{array}{ll}\Delta_{p_{0}}u=0\quad\text{in }D^{+}(u):=\{x\in D:u(x)>0\}, \\ \\ |\nabla u|=1\quad\text{ on }F(u):=\partial D^{+}(u)\cap D,\end{array}\right. \tag{6.1}\]
if there holds that \(u\) is \(p_{0}\)-harmonic in \(D^{+}(u)\), in the sense that \(u\in W^{1,p_{0}}(D^{+}(u))\) and
\[\int_{D^{+}(u)}|\nabla u|^{p_{0}-2}\nabla u\cdot\nabla\varphi\,dx=0\quad\text {for every }\varphi\in W^{1,p_{0}}_{0}(D^{+}(u)),\]
and the free boundary condition in (6.1) is satisfied in the following sense. Assume that \(x_{0}\in F(u)\) and there exists a ball \(B_{r}(y)\subset D\), with \(x_{0}\in\partial B_{r}(y)\). If \(\nu=\frac{y-x_{0}}{|y-x_{0}|}\), then the following holds, as \(x\to x_{0}\) non-tangentially, for \(\alpha=1\),
(i) if \(B_{r}(y)\subset D^{+}(u),\) then \(u(x)=\alpha\langle x-x_{0},\nu\rangle^{+}+o(|x-x_{0}|),\)
(ii) if \(B_{r}(y)\subset D^{+}(u)^{c},\) then \(u(x)=\alpha\langle x_{0}-x,\nu\rangle^{+}+o(|x-x_{0}|).\)
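For instance, \(u(x)=x_{n}^{+}\) is a viscosity solution of (6.1) in this sense in \(D=B_{1}\), for every \(1<p_{0}<\infty\): it is \(p_{0}\)-harmonic in \(\{x_{n}>0\}\), and at any \(x_{0}\in\{x_{n}=0\}\) the developments in (i) and (ii) hold exactly, with \(\alpha=1\) and \(\nu=e_{n}\) or \(\nu=-e_{n}\), according to the side on which the touching ball lies.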
We next extend the result of Lemma 6.2 in [10] to the global homogeneous \(p_{0}\)-Laplacian free boundary problem (i.e., to problem (1.1) in \(\Omega=\mathbb{R}^{n}\) with \(p(x)\equiv p_{0},\)\(f\equiv 0\) and \(g\equiv 1\)). This result is valid for globally Lipschitz continuous functions. The notion of viscosity solution we employ in Lemma 6.2 is the one in [LN1] (see Definition 6.1 above).
**Lemma 6.2**.: _Let \(1<p_{0}<\infty\). Let \(v:\mathbb{R}^{n}\rightarrow\mathbb{R}\) be a nonnegative Lipschitz viscosity solution (in the sense of Definition 6.1) to_
\[\left\{\begin{array}{ll}\Delta_{p_{0}}v=0,&\mbox{in }\{v>0\},\\ |\nabla v|=1,&\mbox{on}\quad F(v):=\partial\{v>0\}.\end{array}\right. \tag{6.2}\]
_Assume that_
\[\{v>0\}=\{(x^{\prime},x_{n})\in\mathbb{R}^{n}:\quad x^{\prime}\in\mathbb{R}^{n -1},\quad x_{n}>h(x^{\prime})\},\]
_with \(h\) a Lipschitz continuous function, \(h(0)=0\) and \(\mbox{Lip}(h)\leq M.\) Then \(h\) is linear and, after a rotation,_
\[v(x)=x_{n}^{+}.\]
Proof.: We will denote \(B_{r}^{\prime}\) the ball of radius \(r\) centered at \(0\) in \(\mathbb{R}^{n-1}.\)
We follow the idea of the proof in Lemma 6.2 in [10], coupled with results about the regularity of the free boundary in the homogeneous two-phase problem associated with the \(p_{0}\)-Laplace operator. In fact, if \(v\) is a viscosity solution of (6.2) and its free boundary is a Lipschitz graph, from the regularity results in [LN1], we know that the free boundary \(F(v)\) is \(C^{1,\alpha}\) in \(B_{1},\) with a bound \(C\) depending only on \(n,p_{0}\) and on the Lipschitz constant \(M\) of \(h.\) Then,
\[|h(x^{\prime})-h(0)-\langle\nabla h(0),x^{\prime}\rangle|\leq C|x^{\prime}|^{ 1+\alpha} \tag{6.3}\]
in \(B_{1}^{\prime},\) where \(C=C(n,p_{0},M).\) Moreover, since \(v\) is a global solution to (6.2), considering the rescaled function \(v_{R}(x)=\frac{v(Rx)}{R},\) we still obtain a solution to problem (6.2) whose free boundary is the graph of the function \(h_{R}(x^{\prime})=\frac{h(Rx^{\prime})}{R}\) for \(x^{\prime}\in\mathbb{R}^{n-1}.\) This function preserves the same Lipschitz constant and then satisfies the inequality (6.3). That is,
\[|h_{R}(x^{\prime})-h_{R}(0)-\langle\nabla h_{R}(0),x^{\prime}\rangle|\leq C|x ^{\prime}|^{1+\alpha}\]
for \(x^{\prime}\in B_{1}^{\prime}.\) This fact can be read as
\[|h(Rx^{\prime})-h(0)-\langle\nabla h(0),Rx^{\prime}\rangle|\leq CR|x^{\prime}| ^{1+\alpha}\]
for \(x^{\prime}\in B_{1}^{\prime}.\) Then,
\[|h(y^{\prime})-h(0)-\langle\nabla h(0),y^{\prime}\rangle|\leq C\frac{|y^{ \prime}|^{1+\alpha}}{R^{\alpha}}\]
for \(y^{\prime}\in B_{R}^{\prime}.\) Hence, passing to the limit \(R\rightarrow\infty,\) we conclude that \(h\) is linear in \(\mathbb{R}^{n-1}\). Since \(v\) is Lipschitz, then Lemma B.1 in Appendix B applies and, up to a proper rotation, \(v(x)=x_{n}^{+}.\)
For the sake of completeness we recall the following theorem, which we proved in [Fl].
**Theorem 6.3** (Theorem 1.1 in [Fl]).: _Let \(u\) be a viscosity solution to (1.1) in \(B_{1}\). Assume that \(0\in F(u),\)\(g(0)=1\) and \(p(0)=p_{0}.\) There exists a universal constant \(\bar{\varepsilon}>0\) such that, if the graph of \(u\) is \(\bar{\varepsilon}-\)flat in \(B_{1},\) in the direction \(e_{n},\) that is_
\[(x_{n}-\bar{\varepsilon})^{+}\leq u(x)\leq(x_{n}+\bar{\varepsilon})^{+},\quad x \in B_{1},\]
_and_
\[\|\nabla p\|_{L^{\infty}(B_{1})}\leq\bar{\varepsilon},\quad\|f\|_{L^{\infty}(B _{1})}\leq\bar{\varepsilon},\quad[g]_{C^{0,\beta}(B_{1})}\leq\bar{\varepsilon}, \tag{6.4}\]
_then \(F(u)\) is \(C^{1,\alpha}\) in \(B_{1/2}\)._
_The constants \(\bar{\varepsilon}\) and \(\alpha\) depend only on \(p_{\min}\), \(p_{\max}\), \(\beta\) and \(n\)._
In the proof of Theorem 1.2 we will also use
**Proposition 6.4**.: _Let \(u_{k}\) be a sequence of viscosity solutions to (1.1) in \(B_{2}\), with right hand side \(f_{k}\), exponent \(p_{k}\) and free boundary condition \(g_{k}\), where \(f_{k}\), \(p_{k}\) and \(g_{k}\) are as in Subsection 1.1. Assume that \(u_{k}\) are uniformly Lipschitz and that, for some \(\alpha>0\) and \(\nu\in\mathbb{R}^{n}\) with \(|\nu|=1\), \(u_{k}\to u_{0}(x)=\alpha\langle x,\nu\rangle^{+}\), \(f_{k}\to 0\), \(p_{k}\to p_{0}\), \(\nabla p_{k}\to 0\) and \(g_{k}\to 1\) uniformly in \(B_{2}\). Assume moreover that \(F(u_{k})\) are uniform Lipschitz graphs and \(F(u_{k})\to F(u_{0})\) in Hausdorff distance in \(B_{2}\). Then \(\alpha\geq 1\)._
Proof.: Without loss of generality we assume that \(\nu=e_{n}\). Suppose by contradiction that \(0<\alpha<1\). We take \(\varphi\in C^{\infty}(\mathbb{R}^{n}),\) with \(0\leq\varphi\leq 1\), \(\varphi\equiv 0\) in \(B_{1/2}^{c}\) and \(\varphi\equiv 1\) in \(B_{1/4}.\) For \(0<\xi<1/4\) depending on \(\alpha\), to be fixed later, and \(0<\varepsilon<1\), we define
\[D_{\varepsilon}=D_{\varepsilon}^{\xi}=B_{1}\cap\{x_{n}>-\xi+\varepsilon \varphi(x)\}.\]
Let \(\lambda_{0},\Lambda_{0}\) be as in (2.2). For \(\rho>0\) fixed, depending on \(\alpha\) and to be specified later, we consider \(v_{\varepsilon}\) such that
\[\left\{\begin{array}{l}\mathcal{M}_{\lambda_{0},\Lambda_{0}}^{+}(D^{2}v_{ \varepsilon})=-\rho,\quad\mbox{in }\,D_{\varepsilon}\\ v_{\varepsilon}=\alpha(x_{n}+\xi),\quad\mbox{on }\,\partial B_{1}\cap\{x_{n} \geq-\xi\},\\ v_{\varepsilon}=0,\quad\mbox{on }\,B_{1}\cap\{x_{n}=-\xi+\varepsilon\varphi(x)\}, \end{array}\right.\]
\(v_{\varepsilon}\equiv 0\) on \(\overline{B_{1}}\setminus\overline{D_{\varepsilon}}.\)
_Step I._ We will show that, in \(B_{3/4}\), \(v_{\varepsilon}\) is a strict supersolution to problem (1.1) with right hand side \(f_{k}\), exponent \(p_{k}\) and free boundary condition \(g_{k}\), for \(\rho\), \(\varepsilon\) and \(\xi\) suitably chosen and large \(k\).
We first observe that \(v_{\varepsilon}>0\) in \(D_{\varepsilon}.\) In addition, \(v_{\varepsilon}\in C^{2,\tilde{\alpha}}(D_{\varepsilon})\) and \(v_{\varepsilon}\in C^{1,\tilde{\alpha}}(\overline{D_{\varepsilon}\cap B_{3/4}})\). We define
\[w_{\varepsilon}=v_{\varepsilon}-\alpha(x_{n}+\xi),\]
that satisfies
\[\left\{\begin{array}{l}\mathcal{M}_{\lambda_{0},\Lambda_{0}}^{+}(D^{2}w_{ \varepsilon})=-\rho,\quad\mbox{in }\,D_{\varepsilon}\\ w_{\varepsilon}=0,\quad\mbox{on }\,\partial B_{1}\cap\{x_{n}\geq-\xi\},\\ w_{\varepsilon}=-\alpha(x_{n}+\xi)=-\alpha\varepsilon\varphi(x),\quad\mbox{on }\,B_{1}\cap\{x_{n}=-\xi+\varepsilon\varphi(x)\}.\end{array}\right.\]
Hence, using the ABP estimate (Theorem 3.6 in [CC]), we obtain that \(||w_{\varepsilon}||_{L^{\infty}(D_{\varepsilon})}\leq C_{0}(\rho+\varepsilon),\) with \(C_{0}>0\) universal, independent of \(\varepsilon\) and \(\xi\). Then, from the inner estimates in Corollary 5.7 in [CC] and the boundary estimates in Theorem 1.4 in [SS], we deduce that there exist positive constants \(C_{1}\) and \(C_{2}\) such that
\[||w_{\varepsilon}||_{C^{1,\tilde{\alpha}}(\overline{D_{\varepsilon}\cap B_{3/4 }})}\leq C_{1}\left(||w_{\varepsilon}||_{L^{\infty}(D_{\varepsilon})}+\rho+ \alpha\varepsilon||\varphi||_{C^{1,\tilde{\alpha}}(B_{1})}\right)\leq C_{2}( \rho+\varepsilon), \tag{6.5}\]
where \(C_{1}\) and \(C_{2}\) depend on the maximal curvature of \(\{x_{n}=-\xi+\varepsilon\varphi(x)\}\), see [SS], and can be chosen universal independent of \(\varepsilon\) and \(\xi\). Then
\[||\nabla w_{\varepsilon}||_{L^{\infty}(\overline{D_{\varepsilon}\cap B_{3/4}})} \leq C_{2}(\rho+\varepsilon)\]
and
\[||\nabla v_{\varepsilon}|-\alpha|\leq|\nabla v_{\varepsilon}-\alpha e_{n}| \leq C_{2}(\rho+\varepsilon)\qquad\text{ in }\overline{D_{\varepsilon}\cap B_{3/4}}.\]
We now fix
\[\begin{array}{l}\rho=\frac{c(\alpha)}{C_{2}},\qquad 0<\varepsilon\leq \min\{\frac{c(\alpha)}{C_{2}},\frac{1}{4}\},\qquad 2\xi=\min\{\frac{c(\alpha)}{C_{2}}, \frac{1}{4}\},\\ \text{with }\quad c(\alpha)=\frac{1}{2}\min\{\frac{1-\alpha}{2},\frac{\alpha}{2}\} \quad\text{and }C_{2}\quad\text{as in }\quad(\ref{eq:c_1}),\end{array} \tag{6.6}\]
and then,
\[\frac{\alpha}{2}\leq|\nabla v_{\varepsilon}|\leq\frac{1+\alpha}{2}\qquad \text{ in }\overline{D_{\varepsilon}\cap B_{3/4}}. \tag{6.7}\]
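Indeed, by (6.6) we have \(C_{2}(\rho+\varepsilon)\leq 2c(\alpha)=\min\{\frac{1-\alpha}{2},\frac{\alpha}{2}\}\), so that in \(\overline{D_{\varepsilon}\cap B_{3/4}}\)
\[|\nabla v_{\varepsilon}|\geq\alpha-\frac{\alpha}{2}=\frac{\alpha}{2}\qquad\text{ and }\qquad|\nabla v_{\varepsilon}|\leq\alpha+\frac{1-\alpha}{2}=\frac{1+\alpha}{2}.\]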
Recalling (2.2), we obtain, in \(D_{\varepsilon}\cap B_{3/4}\),
\[\Delta_{p_{k}(x)}v_{\varepsilon}\leq I+II\]
where
\[I=|\nabla v_{\varepsilon}|^{p_{k}(x)-2}\mathcal{M}_{\lambda_{0},\Lambda_{0}} ^{+}(D^{2}v_{\varepsilon})\]
and
\[II=|\nabla v_{\varepsilon}|^{p_{k}(x)-2}\langle\nabla p_{k}(x),\nabla v_{ \varepsilon}\rangle\log|\nabla v_{\varepsilon}|.\]
Then
\[I=|\nabla v_{\varepsilon}|^{p_{k}(x)-2}(-\rho)\leq-c_{1}\rho,\]
since \(|\nabla v_{\varepsilon}|^{p_{k}(x)-2}\geq c_{1}=c_{1}(\alpha,p_{\max})\) because of (6.7). Moreover,
\[II\leq|\nabla v_{\varepsilon}|^{p_{k}(x)-1}|\nabla p_{k}(x)|\,|\log|\nabla v_{ \varepsilon}||\leq|\nabla p_{k}(x)|c_{2},\]
where we have used that \(|t|^{p_{k}(x)-1}\left|\log|t||\leq c_{2}=c_{2}(p_{\min})\right.\) for \(|t|\leq 1\) and (6.7). Then,
\[\Delta_{p_{k}(x)}v_{\varepsilon}\leq-c_{1}\rho+||\nabla p_{k}||_{L^{\infty}}c _{2}\leq-c_{1}\rho+c_{2}\frac{c_{1}}{c_{2}}\frac{\rho}{2}=-\frac{c_{1}}{2}\rho \tag{6.8}\]
if \(k\) is large so that \(||\nabla p_{k}||_{L^{\infty}}\leq\frac{c_{1}}{c_{2}}\frac{\rho}{2}\).
On the other hand, for \(k\) large, we have \(||f_{k}||_{L^{\infty}}<\frac{c_{1}}{2}\rho\) and \(g_{k}(x)>\frac{1+\alpha}{2}\) in \(B_{1}\) and then,
\[\left\{\begin{array}{l}\Delta_{p_{k}(x)}v_{\varepsilon}\leq-\frac{c_{1}}{2} \rho<-||f_{k}||_{L^{\infty}}\leq f_{k},\quad\text{in }D_{\varepsilon}\cap B_{3/4}\\ |\nabla v_{\varepsilon}|\geq\frac{\alpha}{2},\quad\text{in }\overline{D_{ \varepsilon}\cap B_{3/4}}\quad\text{(this implies }\nabla v_{\varepsilon}\neq 0)\\ |\nabla v_{\varepsilon}|\leq\frac{1+\alpha}{2}<g_{k},\quad\text{in } \overline{D_{\varepsilon}\cap B_{3/4}}.\end{array}\right. \tag{6.9}\]
Hence, for our choice of \(\rho\), \(\varepsilon\) and \(\xi\) done in (6.6), \(v_{\varepsilon}\) is a strict supersolution to problem (1.1) in \(B_{3/4}\) with right hand side \(f_{k}\), exponent \(p_{k}\) and free boundary condition \(g_{k}\), for large \(k\), as claimed.
_Step II._ We will now get some uniform bounds for the functions \(u_{k}\). In fact, let \(v\) be such that
\[\left\{\begin{array}{l}{\mathcal{M}}^{+}_{\lambda_{0},\Lambda_{0}}(D^{2}v)=- \rho,\quad\mbox{in}\ \{x_{n}>-\xi,\ 5/8<|x|<1\},\\ v=\alpha(x_{n}+\xi),\quad\mbox{on}\ \partial B_{1}\cap\{x_{n}\geq-\xi\},\\ v=0,\quad\mbox{on}\ \{x_{n}=-\xi,\ 5/8<|x|<1\},\\ v=0,\quad\mbox{on}\ \partial B_{5/8}\cap\{x_{n}\geq-\xi\}.\end{array}\right.\]
Then, \(v_{\varepsilon}>v>0\) in \(\{x_{n}>-\xi,\ 5/8<|x|<1\}\). Moreover, there exists \(c_{3}>0\) universal such that
\[v>c_{3}\quad\mbox{ in }\{x_{n}\geq-\frac{\xi}{2},\ 3/4\leq|x|\leq 1\}. \tag{6.10}\]
Let us now fix \(\delta\) universal such that
\[0<\delta<\frac{\xi}{2}\quad\mbox{ and }\quad 2L\delta<c_{3}, \tag{6.11}\]
where \(L\) is the uniform Lipschitz constant of the functions \(u_{k}\).
We will first show that, if \(k\) is large,
\[u_{k}=0\qquad\mbox{ in }\overline{B_{1}}\cap\{x_{n}\leq-\delta\}, \tag{6.12}\]
\[u_{k}\leq 2L\delta\qquad\mbox{ in }\overline{B_{1}}\cap\{|x_{n}|\leq\delta\}, \tag{6.13}\]
\[\frac{\alpha}{2}\leq|\nabla u_{k}|\leq L\qquad\mbox{ in }\overline{B_{1}}\cap\{x_{n}\geq\delta\}. \tag{6.14}\]
In fact, since \(\partial\{u_{k}>0\}\to\partial\{u_{0}>0\}=\{x_{n}=0\}\) in the Hausdorff distance in \(B_{2}\), then \(\partial\{u_{k}>0\}\subset\{|x_{n}|<\delta\}\) in \(B_{2}\), if \(k\) is large. Hence (6.12) follows and, since \(u_{k}\) vanishes on \(\overline{B_{1}}\cap\{x_{n}=-\delta\}\) and \(\mbox{Lip}(u_{k})\leq L\), (6.13) follows as well.
Since \(u_{k}\to u_{0}=\alpha x_{n}^{+}\) uniformly in \(B_{2}\), then for large \(k\),
\[\left\{\begin{array}{l}u_{k}\geq\frac{\alpha\delta}{4}\quad\mbox{in }B_{2}\cap\{x_{n}>\frac{\delta}{2}\},\\ \Delta_{p_{k}(x)}u_{k}=f_{k}\quad\mbox{in }B_{2}\cap\{x_{n}>\frac{\delta}{2}\},\end{array}\right. \tag{6.15}\]
and then, by the \(C^{1,\bar{\alpha}}\) estimates (Theorem 1.1 in [Fa]),
\[\nabla u_{k}\to\alpha e_{n}\quad\mbox{ uniformly in }\overline{B_{1}}\cap\{x_{n}\geq\delta\},\]
which gives (6.14) for \(k\) large.
We now observe that \(v-u_{0}\geq\frac{\alpha}{2}\xi\) on \(\partial B_{1}\cap\{x_{n}\geq-\frac{\xi}{2}\}\) and
\[v-u_{0}\geq\frac{\alpha\xi}{4}\quad\mbox{ in }\{x_{n}\geq-\frac{\xi}{2},\ 1- \sigma\leq|x|\leq 1\},\]
for some universal \(0<\sigma<1/4\). Here we have used that \(v\in C^{\bar{\alpha}}(\{x_{n}\geq-\xi,\ 5/8\leq|x|\leq 1\})\) (see, for instance, Theorem 2 in [Si]). Then,
\[v-u_{k}\geq\frac{\alpha\xi}{8}\quad\mbox{ in }\{x_{n}\geq-\frac{\xi}{2},\ 1- \sigma\leq|x|\leq 1\},\]
for large \(k\). Recalling (6.10), (6.11) and (6.13), we obtain
\[v_{\varepsilon}>u_{k}\qquad\mbox{ in }\{|x_{n}|\leq\delta,\ 3/4\leq|x|\leq 1\}, \tag{6.16}\]
\[v_{\varepsilon}-u_{k}\geq\frac{\alpha\xi}{8}\quad\mbox{ in }\{x_{n}\geq-\frac{\xi}{2},\ 1- \sigma\leq|x|\leq 1\}. \tag{6.17}\]
_Step III._ We will show that
\[v_{\varepsilon}\geq u_{k}\quad\mbox{in }\overline{B_{1}},\quad\mbox{ for every }0<\varepsilon<\frac{\xi}{2}, \tag{6.18}\]
if \(k\) is large.
If the result is not true, then
\[\max_{\overline{B_{1}}}(u_{k}-v_{\varepsilon})=(u_{k}-v_{\varepsilon})(\tilde{x} _{k})>0\quad\text{ for some }\tilde{x}_{k}\in\overline{B_{1}}.\]
If \(|\tilde{x}_{k}|\geq 3/4\), then (6.12), (6.16) and (6.17) imply that
\[\tilde{x}_{k}\in\{x_{n}>\delta,\ 3/4\leq|x|\leq 1-\sigma\}.\]
From (6.14) and (6.15) we get
\[\frac{\alpha}{2}\leq|\nabla v_{\varepsilon}(\tilde{x}_{k})|\leq L\]
and
\[\Delta_{p_{k}(\tilde{x}_{k})}v_{\varepsilon}(\tilde{x}_{k})\geq f_{k}(\tilde{ x}_{k}). \tag{6.19}\]
Now, the uniform \(C^{1,\bar{\alpha}}\) estimates for \(v_{\varepsilon}\) in \(\overline{B_{1}}\cap\{x_{n}\geq 0\}\) give
\[\frac{\alpha}{4}\leq|\nabla v_{\varepsilon}|\leq 2L\quad\text{ in }B_{\mu}( \tilde{x}_{k}),\]
for some \(\mu>0\) universal. Then, proceeding as in the computations leading to (6.8), we get
\[\Delta_{p_{k}(x)}v_{\varepsilon}\leq-\bar{c}\rho\quad\text{ in }B_{\mu}( \tilde{x}_{k}),\]
with \(\bar{c}>0\) universal, if \(k\) is large. Therefore,
\[\Delta_{p_{k}(x)}v_{\varepsilon}<f_{k}\quad\text{ in }B_{\mu}(\tilde{x}_{k}),\]
for large \(k\), which contradicts (6.19). Then \(\tilde{x}_{k}\in B_{3/4}\).
Since \(\varepsilon<\frac{\xi}{2}\) and \(\delta<\frac{\xi}{2}\), we have
\[\partial\{u_{k}>0\}\subset\{|x_{n}|<\delta\}\subseteq\{x_{n}>-\xi+\varepsilon \varphi(x)\},\]
and then \(\tilde{x}_{k}\in\{u_{k}>0\}\cap B_{3/4}\subset D_{\varepsilon}\cap B_{3/4}\).
So (6.9) implies that, for large \(k\),
\[\nabla v_{\varepsilon}(\tilde{x}_{k})\neq 0\]
and
\[f_{k}(\tilde{x}_{k})>\Delta_{p_{k}(\tilde{x}_{k})}v_{\varepsilon}(\tilde{x}_ {k})\geq f_{k}(\tilde{x}_{k}),\]
a contradiction. This shows (6.18).
_Step IV._ We will finally show that, for some \(\varepsilon_{k}>0\), we have \(v_{\varepsilon_{k}}\geq u_{k}\) in \(B_{3/4}\) and \(F(u_{k})\cap F(v_{\varepsilon_{k}})\cap B_{3/4}\neq\emptyset\), if \(k\) is large enough. This will contradict that \(v_{\varepsilon}\) is a strict supersolution to problem (1.1) in \(B_{3/4}\) and concludes the proof.
In fact, from (6.18) we know that
\[v_{\varepsilon}\geq u_{k}\quad\text{in }\overline{\{u_{k}>0\}}\quad\text{ for }0<\varepsilon<\frac{\xi}{2}.\]
Let
\[\varepsilon_{k}=\sup\left\{\varepsilon>0:\ v_{\varepsilon}\geq u_{k}\ \text{ in } \overline{\{u_{k}>0\}}\right\}.\]
Since \(\xi\leq 1/4\), if we consider \(\varepsilon=2\xi\), then \(B_{\xi}\subset\{x_{n}<-\xi+\varepsilon\varphi(x)\}\) because, in \(B_{1/4}\), \(-\xi+\varepsilon\varphi(x)=-\xi+2\xi=\xi\), while \(|x_{n}|<\xi\) in \(B_{\xi}\).
Moreover, \(0\in\partial\{u_{0}>0\}\) and \(\partial\{u_{k}>0\}\to\partial\{u_{0}>0\}\) in the Hausdorff distance, then for \(k\) large, there exist \(\tilde{x}_{k}\in B_{\xi}\cap\partial\{u_{k}>0\}\) and \(\bar{x}_{k}\in B_{\xi}\) such that \(u_{k}(\bar{x}_{k})>0=v_{\varepsilon}(\bar{x}_{k})\), with \(\bar{x}_{k}\in\overline{\{u_{k}>0\}}\). Then, \(0<\varepsilon_{k}<2\xi\).
Therefore, there holds \(v_{\varepsilon_{k}}\geq u_{k}\) in \(\overline{\{u_{k}>0\}}\) and then,
\[v_{\varepsilon_{k}}\geq u_{k}\quad\text{ in }\overline{B_{1}},\]
\[v_{\varepsilon_{k}}(x_{k})=u_{k}(x_{k})\quad\text{ for some }x_{k}\in\overline{\{u_{k}>0\}}.\]
Proceeding exactly as in _Step III_ we obtain that
\[x_{k}\in\overline{\{u_{k}>0\}}\cap B_{3/4}.\]
If \(x_{k}\in\{u_{k}>0\}\cap B_{3/4},\) then \(v_{\varepsilon_{k}}(x_{k})=u_{k}(x_{k})>0.\) Since \(v_{\varepsilon_{k}}\geq u_{k}>0\) in a neighborhood of \(x_{k},\) this produces a contradiction because \(v_{\varepsilon_{k}}\) is a strict supersolution to problem (1.1) in \(B_{3/4}.\)
As a consequence \(x_{k}\in\partial\{u_{k}>0\}\cap B_{3/4}\) and \(v_{\varepsilon_{k}}(x_{k})=u_{k}(x_{k})=0\) and there exist \(x_{k_{j}}\to x_{k}\) such that \(u_{k}(x_{k_{j}})>0\). Then \(v_{\varepsilon_{k}}(x_{k_{j}})\geq u_{k}(x_{k_{j}})>0\) and therefore \(x_{k}\in\partial\{v_{\varepsilon_{k}}>0\}.\) Hence \(x_{k}\in F(u_{k})\cap F(v_{\varepsilon_{k}})\cap B_{3/4},\) which gives a contradiction again. This shows that \(\alpha\geq 1\) and completes the proof.
We will also need
**Proposition 6.5**.: _Let \(u_{k}\) be a sequence of viscosity solutions to (1.1) in \(B_{2}\), with right hand side \(f_{k}\), exponent \(p_{k}\) and free boundary condition \(g_{k}\), where \(f_{k}\), \(p_{k}\) and \(g_{k}\) are as in Subsection 1.1. Assume that, for some \(\alpha\geq 0\) and \(\nu\in\mathbb{R}^{n}\) with \(|\nu|=1\), \(u_{k}\to u_{0}(x)=\alpha\langle x,\nu\rangle^{+}\), \(f_{k}\to 0\), \(p_{k}\to p_{0}\), \(\nabla p_{k}\to 0\) and \(g_{k}\to 1\) uniformly in \(B_{2}\). Then \(\alpha\leq 1\)._
Proof.: Without loss of generality we assume that \(\nu=e_{n}\). Suppose by contradiction that \(\alpha=1+\eta\), with \(\eta>0\), then
\[u_{0}(x)=(1+\eta)x_{n}^{+}.\]
For \(\delta>0\) and \(\varepsilon>0\) small, to be specified later, we define
\[Q(x):=(1+\frac{\eta}{2})x_{n}+\delta x_{n}^{2}-\varepsilon|x^{\prime}|^{2},\]
where we denote \(x=(x^{\prime},x_{n})\), \(x^{\prime}\in\mathbb{R}^{n-1}\).
Let us show that
\[\begin{cases}u_{0}>Q\quad\text{in }B_{\rho_{0}}\setminus\{0\},\\ u_{0}(0)=Q(0),\end{cases} \tag{6.20}\]
for some \(\rho_{0}=\rho_{0}(\delta,\eta)>0\). In fact, there holds that
\[u_{0}(x)=(1+\eta)x_{n}^{+}>(1+\frac{\eta}{2})x_{n}+\delta x_{n}^{2}\geq Q(x) \qquad\text{ for }\,0<|x_{n}|<\frac{\eta}{2\delta},\]
\[u_{0}(x^{\prime},0)=0>-\varepsilon|x^{\prime}|^{2}=Q(x^{\prime},0)\qquad\text { for }\,x^{\prime}\neq 0,\]
so (6.20) follows for \(\rho_{0}=\min\{1,\frac{\eta}{2\delta}\}\).
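Explicitly, for \(0<x_{n}<\frac{\eta}{2\delta}\),
\[(1+\eta)x_{n}-\Big((1+\tfrac{\eta}{2})x_{n}+\delta x_{n}^{2}\Big)=x_{n}\Big(\tfrac{\eta}{2}-\delta x_{n}\Big)>0,\]
while for \(-\frac{\eta}{2\delta}<x_{n}<0\) we have \((1+\frac{\eta}{2})x_{n}+\delta x_{n}^{2}=x_{n}(1+\frac{\eta}{2}+\delta x_{n})<0=u_{0}(x)\), since \(1+\frac{\eta}{2}+\delta x_{n}>1\).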
**Claim.** We claim that, in \(B_{1}\), \(Q\) is a strict subsolution to problem (1.1) with right hand side \(f_{k}\), exponent \(p_{k}\) and free boundary condition \(g_{k}\), for large \(k\).
Indeed, we have
\[\nabla Q=(1+\frac{\eta}{2})e_{n}+2Mx,\qquad D^{2}Q=2M,\]
where \(M\in\mathbb{R}^{n\times n}\) is given by
\[M_{ij}=0\,\,\,\text{for }\,i\neq j\qquad M_{ii}=-\varepsilon\,\,\,\text{for }\,i\neq n,\qquad M_{nn}=\delta. \tag{6.21}\]
Then,
\[1+\frac{\eta}{4}\leq|\nabla Q|\leq 1+\eta\quad\text{ in }B_{1}, \tag{6.22}\]
if \(\delta\leq\eta/8\) and \(\varepsilon\leq\eta/8\).
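Indeed, in \(B_{1}\),
\[\Big|\nabla Q-(1+\tfrac{\eta}{2})e_{n}\Big|=2|Mx|\leq 2\max\{\delta,\varepsilon\}\leq\frac{\eta}{4},\]
so that \(1+\frac{\eta}{4}\leq|\nabla Q|\leq 1+\frac{3\eta}{4}\leq 1+\eta\).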
Moreover, applying the lower bound in (2.2), we obtain
\[\Delta_{p_{k}(x)}Q(x)\geq|\nabla Q|^{p_{k}(x)-2}\mathcal{M}^{-}_{ \lambda_{0},\Lambda_{0}}(D^{2}Q)+|\nabla Q|^{p_{k}(x)-2}\langle\nabla Q,\nabla p _{k}(x)\rangle\log|\nabla Q|\] \[\geq|\nabla Q|^{p_{k}(x)-2}2\mathcal{M}^{-}_{\lambda_{0},\Lambda_ {0}}(M)-|\nabla p_{k}(x)||\nabla Q|^{p_{k}(x)-1}\log|\nabla Q|. \tag{6.23}\]
We also observe that (6.22) implies
\[|\nabla Q|^{p_{k}(x)-2}\geq c_{1}\qquad|\nabla Q|^{p_{k}(x)-1}\log|\nabla Q| \leq c_{2}, \tag{6.24}\]
in \(B_{1}\), where \(c_{1}=c_{1}(\eta,p_{\min})>0\) and \(c_{2}=c_{2}(\eta,p_{\max})>0\).
Now, from (6.21) and (2.3) it is not hard to see that
\[\mathcal{M}^{-}_{\lambda_{0},\Lambda_{0}}(M)=-\Lambda_{0}\varepsilon(n-1)+ \lambda_{0}\delta\geq\frac{\lambda_{0}\delta}{2}, \tag{6.25}\]
if \(\varepsilon\leq\frac{\lambda_{0}\delta}{2\Lambda_{0}(n-1)}\). We next take \(k\) large enough so that
\[|\nabla p_{k}|\leq\frac{\lambda_{0}\delta c_{1}}{2c_{2}},\qquad|f_{k}|\leq \frac{c_{1}\lambda_{0}\delta}{4}\quad\text{ for }x\in B_{1}. \tag{6.26}\]
Putting together (6.23), (6.24), (6.25) and (6.26), we obtain in \(B_{1}\)
\[\Delta_{p_{k}(x)}Q(x)\geq 2c_{1}\mathcal{M}^{-}_{\lambda_{0},\Lambda_{0}}(M)-|\nabla p_{k}(x)|c_{2}\geq c_{1}\lambda_{0}\delta-\frac{\lambda_{0}\delta c_{1}}{2c_{2}}c_{2}=\frac{c_{1}\lambda_{0}\delta}{2}>f_{k}.\]
If, additionally, \(k\) is large so that
\[g_{k}\leq 1+\frac{\eta}{8},\quad\text{ for }x\in B_{1},\]
we obtain from (6.22) that
\[|\nabla Q|>g_{k}\quad\text{ in }B_{1},\]
thus proving our claim.
We finally deduce from (6.20) that there exist a sequence \(\sigma_{k}\to 0\) and points \(x_{k}\in B_{\rho_{0}}\) such that, denoting \(Q_{k}=Q+\sigma_{k}\), we get
\[\begin{cases}u_{k}\geq Q_{k}\quad\text{in }B_{\rho_{0}},\\ u_{k}(x_{k})=Q_{k}(x_{k}),\end{cases}\]
if \(k\) is large. We notice that if \(u_{k}(x_{k})>0\), then \(Q_{k}(x_{k})>0\). Otherwise \(u_{k}(x_{k})=0=Q_{k}(x_{k})\), and since \(\nabla Q_{k}(x_{k})\neq 0\), then \(x_{k}\in F(Q_{k})\).
That is, for large \(k\), \(Q_{k}\) is a strict subsolution in \(B_{\rho_{0}}\) to problem (1.1), with right hand side \(f_{k}\), exponent \(p_{k}\) and free boundary condition \(g_{k}\), touching \(u_{k}\) from below at \(x_{k}\in B^{+}_{\rho_{0}}(Q_{k})\cup F(Q_{k})\), a contradiction. Then \(\alpha\leq 1\).
We are now in a position to prove Theorem 1.2.
**Proof of Theorem 1.2**.: Let \(u\) be a viscosity solution to (1.1) in \(B_{1}\) such that \(0\in F(u)\) and such that \(F(u)\) is a Lipschitz graph in \(B_{r_{0}}\), for some \(0<r_{0}\leq 1\). Without loss of generality we assume that \(g(0)=1\) and we denote \(p(0)=p_{0}\).
We will divide the proof into several steps.
_Step I. Lipschitz continuity and nondegeneracy._ Let us first show that \(u\) is Lipschitz and nondegenerate in a neighborhood of \(0\).
In fact, for \(0<r\leq\frac{r_{0}}{2}\leq\frac{1}{2}\), we consider the function
\[\bar{u}(x)=\frac{1}{r}u(rx),\quad x\in B_{2}.\]
Then \(\bar{u}\) is a viscosity solution to (1.1) in \(B_{2}\), with right hand side \(\bar{f}(x)=rf(rx)\), exponent \(\bar{p}(x)=p(rx)\) and free boundary condition \(\bar{g}(x)=g(rx)\). Moreover, \(0\in F(\bar{u})\).
From Theorem 1.1 we know that \(\bar{u}\) is Lipschitz continuous in \(B_{1/2}\) with a Lipschitz constant depending only on \(n\), \(p_{\min}\), \(p_{\max}\), \(\|\nabla p\|_{L^{\infty}(B_{3r_{0}/8})}\), \(\|f\|_{L^{\infty}(B_{3r_{0}/8})}\), \(\beta\), \(\|g\|_{C^{0,\beta}(\overline{B_{3r_{0}/8}})}\) and \(\|u\|_{L^{\infty}(B_{3r_{0}/8})}\).
In order to prove the nondegeneracy, let us see that we can apply the second part of Proposition 4.2 to \(\bar{u}\), if \(r\) is suitably chosen.
For that purpose, let us first show that the constants appearing in that proposition can be taken independent of \(r\). More precisely, we want to find a bound independent of \(r\) for
\[||\bar{u}||_{L^{\infty}(B_{3/2})}^{\bar{p}_{+}^{3/2}-\bar{p}_{-}^{3/2}},\qquad \text{where}\quad\bar{p}_{+}^{3/2}=\sup_{B_{3/2}}\bar{p},\ \ \bar{p}_{-}^{3/2}=\inf_{B_{3/2}}\bar{p}.\]
In fact, we have
\[||\bar{u}||_{L^{\infty}(B_{3/2})}^{\bar{p}_{+}^{3/2}-\bar{p}_{-}^{3/2}}\leq|| u||_{L^{\infty}(B_{3r_{0}/4})}^{\bar{p}_{+}^{3/2}-\bar{p}_{-}^{3/2}}\Big{(} \frac{1}{r}\Big{)}^{\bar{p}_{+}^{3/2}-\bar{p}_{-}^{3/2}}, \tag{6.27}\]
and
\[\bar{p}_{+}^{3/2}-\bar{p}_{-}^{3/2}\leq 3||\nabla\bar{p}||_{L^{\infty}(B_{3/2} )}\leq 3r||\nabla p||_{L^{\infty}(B_{3r_{0}/4})}. \tag{6.28}\]
Then, from (6.27) and (6.28), we conclude that
\[||\bar{u}||_{L^{\infty}(B_{3/2})}^{\bar{p}_{+}^{3/2}-\bar{p}_{-}^{3/2}}\leq C =C\big{(}||u||_{L^{\infty}(B_{3r_{0}/4})},||\nabla p||_{L^{\infty}(B_{3r_{0}/4 })}\big{)}.\]
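Here we have also used that \(r\log\frac{1}{r}\leq e^{-1}\) for \(0<r\leq 1\), which together with (6.28) gives
\[\Big(\frac{1}{r}\Big)^{\bar{p}_{+}^{3/2}-\bar{p}_{-}^{3/2}}\leq\exp\Big(3r\|\nabla p\|_{L^{\infty}(B_{3r_{0}/4})}\log\tfrac{1}{r}\Big)\leq\exp\Big(\tfrac{3}{e}\|\nabla p\|_{L^{\infty}(B_{3r_{0}/4})}\Big).\]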
It follows that in order to apply the second part of Proposition 4.2 to \(\bar{u}\) we can take the constants \(\tilde{\varepsilon}\) and \(c_{0}\) in that proposition depending only on \(n\), \(p_{\min}\), \(p_{\max}\), \(||u||_{L^{\infty}(B_{3r_{0}/4})}\), \(||\nabla p||_{L^{\infty}(B_{3r_{0}/4})}\), \(||g||_{L^{\infty}(B_{r_{0}})}\), \(\gamma_{0}\) and on the Lipschitz constant of \(F(u)\).
Then, if \(r\) is small enough, there holds in \(B_{2}\)
\[|\bar{f}(x)| \leq r||f||_{L^{\infty}(B_{r_{0}})}\leq\tilde{\varepsilon},\] \[|\bar{g}(x)-1| =|g(rx)-g(0)|\leq 2r^{\beta}[g]_{C^{0,\beta}(B_{r_{0}})}\leq \tilde{\varepsilon},\] \[|\nabla\bar{p}(x)| \leq r||\nabla p||_{L^{\infty}(B_{r_{0}})}\leq\tilde{\varepsilon},\] \[|\bar{p}(x)-p_{0}| =|p(rx)-p(0)|\leq 2r||\nabla p||_{L^{\infty}(B_{r_{0}})}\leq \tilde{\varepsilon}.\]
Hence, for \(r\) small enough, \(\bar{u}\) is nondegenerate in \(B_{\rho_{0}}\), for \(\rho_{0}>0\) depending only on the Lipschitz constant of \(F(u)\).
That is, \(u\) is Lipschitz continuous and nondegenerate in \(B_{\hat{\rho_{0}}}\), for a suitable universal \(\hat{\rho_{0}}>0\), with a universal Lipschitz constant \(L_{0}\).
_Step II. Blow up limit_. We now consider the blow up sequence
\[u_{k}(x)=u_{\delta_{k}}(x)=\frac{u(\delta_{k}x)}{\delta_{k}},\quad\text{ where }\delta_{k}\to 0, \tag{6.29}\]
\(\delta_{k}>0\). As before, each \(u_{k}\) is a viscosity solution to (1.1) with right hand side \(f_{k}(x)=\delta_{k}f(\delta_{k}x)\), exponent \(p_{k}(x)=p(\delta_{k}x)\) and free boundary condition \(g_{k}(x)=g(\delta_{k}x).\)
Our goal is to apply Theorem 6.3 to \(u_{k}\), for large \(k\). We will first observe that, taking \(k\) sufficiently large, the assumption (6.4) in that theorem is satisfied for the universal constant \(\bar{\varepsilon}\). In fact, in \(B_{1},\)
\[\begin{split}&|f_{k}(x)|=\delta_{k}|f(\delta_{k}x)|\leq\delta_{k }||f||_{L^{\infty}(B_{r_{0}})}\leq\bar{\varepsilon},\\ &|\nabla p_{k}(x)|\leq\delta_{k}||\nabla p||_{L^{\infty}(B_{r_{0 }})}\leq\bar{\varepsilon},\\ &|p_{k}(x)-p_{0}|=|p(\delta_{k}x)-p(0)|\leq\delta_{k}||\nabla p ||_{L^{\infty}(B_{r_{0}})}\leq\bar{\varepsilon},\\ &[g_{k}]_{C^{0,\beta}(B_{1})}\leq\delta_{k}^{\beta}[g]_{C^{0, \beta}(B_{r_{0}})}\leq\bar{\varepsilon},\\ &|g_{k}(x)-1|=|g(\delta_{k}x)-g(0)|\leq\delta_{k}^{\beta}[g]_{C^{ 0,\beta}(B_{r_{0}})}\leq\bar{\varepsilon}.\end{split} \tag{6.30}\]
On the other hand, since \(u\) is Lipschitz and nondegenerate in \(B_{\hat{\rho}_{0}}\), with Lipschitz constant \(L_{0}\), then, for every \(R>0\), \(u_{k}\) are Lipschitz and uniformly nondegenerate in \(B_{R}\), with Lipschitz constant \(L_{0}\), if \(k\geq k_{0}(R)\). Then, standard arguments (see for instance, [AC], 4.7) imply that (up to a subsequence), there holds that
\[\begin{split}& u_{k}\to u_{0}\text{ in }C^{0,\gamma}_{\rm loc}(\mathbb{R}^{n}),\text{ for all }0<\gamma<1,\\ &\partial\{u_{k}>0\}\to\partial\{u_{0}>0\}\text{ locally in Hausdorff distance},\end{split} \tag{6.31}\]
to a function \(u_{0}:\mathbb{R}^{n}\to\mathbb{R}\), which is globally Lipschitz with constant \(L_{0}\) and nondegenerate in \(\mathbb{R}^{n}\). Moreover, \(F(u_{0})\) is a global Lipschitz graph.
We also observe that the estimates in (6.30) also imply that
\[f_{k}\to 0,\quad\nabla p_{k}\to 0,\quad p_{k}\to p_{0},\quad g_{k}\to 1,\quad\text{ uniformly on compacts of }\mathbb{R}^{n}.\]
_Step III. Limit equation_. Since \(u\) satisfies in the viscosity sense \(\Delta_{p(x)}u=f\) in \(\{u>0\}\), then every \(u_{k}\) satisfies in the viscosity sense \(\Delta_{p_{k}(x)}u_{k}=f_{k}\) in \(\{u_{k}>0\}\). We claim that the blow up limit \(u_{0}\) is a viscosity solution to \(\Delta_{p_{0}}u_{0}=0\) in \(\{u_{0}>0\}\).
In fact, let us see that \(u_{0}\) is a viscosity subsolution to \(\Delta_{p_{0}}u_{0}=0\) in \(\{u_{0}>0\}\).
Let \(x_{0}\in\{u_{0}>0\}\) and let \(P\) be a quadratic polynomial such that \(P\geq u_{0}\) in \(B_{\sigma}(x_{0})\), \(P(x_{0})=u_{0}(x_{0})\) and \(\nabla P(x_{0})\neq 0\). We can assume that \(|\nabla P|\geq c>0\) in \(B_{\sigma}(x_{0})\) and \(B_{\sigma}(x_{0})\subset\{u_{k}>0\}\) for \(k\) large, so \(\Delta_{p_{k}(x)}u_{k}(x)=f_{k}(x)\) in \(B_{\sigma}(x_{0})\). We want to prove that \(\Delta_{p_{0}}P(x_{0})\geq 0\). We argue by contradiction assuming that there exists \(\rho>0\) such that \(\Delta_{p_{0}}P(x_{0})<-\rho<0.\) For \(\varepsilon>0\), we define \(\tilde{P}(x)=\tilde{P}_{\varepsilon}(x)=P(x)+\varepsilon|x-x_{0}|^{2}.\) Hence \(\nabla\tilde{P}=\nabla P+2\varepsilon(x-x_{0})\) and
\[|\nabla\tilde{P}|\geq\frac{c}{2}\quad\text{ in }B_{\sigma}(x_{0}), \tag{6.32}\]
if \(\varepsilon\) is sufficiently small. Letting \(\varepsilon\to 0\), we get
\[\Delta_{p_{0}}\tilde{P}(x_{0})\to\Delta_{p_{0}}P(x_{0})<-\rho\]
and then, if \(\varepsilon\) is small enough, we obtain
\[\Delta_{p_{0}}\tilde{P}(x_{0})<-\frac{\rho}{2}. \tag{6.33}\]
We now fix \(\varepsilon>0\) small such that (6.32) and (6.33) hold. We have
\[\tilde{P}(x)>u_{0}(x)\ \ \mbox{in }\overline{B}_{\sigma}(x_{0})\setminus\{x_{0}\}, \ \mbox{ and }\tilde{P}(x_{0})=u_{0}(x_{0}). \tag{6.34}\]
Moreover, since \(u_{k}\to u_{0}\) uniformly in \(B_{\sigma}(x_{0}),\) then \(|u_{k}-u_{0}|<\gamma_{k}\) in \(B_{\sigma}(x_{0})\) with \(\gamma_{k}\to 0.\) Hence, from
\[\tilde{P}(x)\geq u_{0}(x)>u_{k}(x)-\gamma_{k}\ \ \ \ \mbox{in }B_{\sigma}(x_{0}),\]
it follows
\[\tilde{P}(x)+\gamma_{k}>u_{k}(x)\ \ \ \ \mbox{in }B_{\sigma}(x_{0}).\]
Let
\[t_{k}=\sup\{t\geq 0:\ \ \ \tilde{P}(x)+\gamma_{k}\geq u_{k}(x)+t\ \ \ \mbox{in }B_{\sigma}(x_{0})\}.\]
Since \(\gamma_{k}\to 0\) and \(\tilde{P}(x)+\gamma_{k}\) is bounded in \(B_{\sigma}(x_{0}),\) then \(t_{k}\) is finite so that
\[\tilde{P}(x)+\gamma_{k}\geq u_{k}(x)+t_{k}\ \ \ \ \mbox{in }B_{\sigma}(x_{0})\]
and there exists \(x_{k}\in\overline{B}_{\sigma}(x_{0})\) such that
\[\tilde{P}(x_{k})+\gamma_{k}=u_{k}(x_{k})+t_{k}.\]
Then
\[u_{k}(x_{0})+2\gamma_{k}\geq u_{0}(x_{0})+\gamma_{k}=\tilde{P}(x_{0})+\gamma_{k}\geq u_{k}(x_{0})+t_{k}.\]
As a consequence
\[t_{k}\leq 2\gamma_{k}\to 0\]
and \(t_{k}\to 0.\) Let \(\tilde{P}_{k}(x)=\tilde{P}(x)+\gamma_{k}-t_{k}.\)
Then
\[\tilde{P}_{k}(x)\geq u_{k}(x)\ \ \ \ \mbox{in }B_{\sigma}(x_{0})\]
and \(\tilde{P}_{k}(x_{k})=u_{k}(x_{k})\) for \(x_{k}\in\overline{B}_{\sigma}(x_{0}).\)
Since \(\tilde{P}(x)>u_{0}(x)\) on \(\partial B_{\sigma}(x_{0})\) then
\[\tilde{P}(x)-u_{0}(x)\geq\bar{c}>0\]
on \(\partial B_{\sigma}(x_{0})\) and
\[\tilde{P}_{k}(x)-u_{k}(x)=\tilde{P}(x)+\gamma_{k}-t_{k}-u_{k}(x)\geq\tilde{P} (x)+\gamma_{k}-t_{k}-u_{0}(x)-\gamma_{k}\]
\[=\tilde{P}(x)-t_{k}-u_{0}(x)\geq\bar{c}-t_{k}\geq\frac{\bar{c}}{2}\]
on \(\partial B_{\sigma}(x_{0})\) if \(k\geq k_{0},\) since \(t_{k}\to 0.\) We recall here that \(u_{k}\leq u_{0}+\gamma_{k}.\) Hence \(x_{k}\not\in\partial B_{\sigma}(x_{0})\) if \(k\geq k_{0}.\) Then
\[\tilde{P}_{k}(x)\geq u_{k}(x)\ \ \ \ \mbox{in }B_{\sigma}(x_{0}),\]
\(\tilde{P}_{k}(x_{k})=u_{k}(x_{k})\) for \(x_{k}\in B_{\sigma}(x_{0}),\) and \(\nabla\tilde{P}_{k}\neq 0\) in \(B_{\sigma}(x_{0})\) and thus,
\[\Delta_{p_{k}(x_{k})}\tilde{P}(x_{k})=\Delta_{p_{k}(x_{k})}\tilde{P}_{k}(x_{k })\geq f_{k}(x_{k}). \tag{6.35}\]
Since \(x_{k}\in B_{\sigma}(x_{0})\) then, for a subsequence, \(x_{k}\to\bar{x}\in\overline{B}_{\sigma}(x_{0})\). Hence, using that \(\gamma_{k}\to 0,\)\(t_{k}\to 0\) and
\[\tilde{P}(x_{k})+\gamma_{k}-t_{k}=\tilde{P}_{k}(x_{k})=u_{k}(x_{k}),\]
we obtain that \(\tilde{P}(\bar{x})=u_{0}(\bar{x}).\) Then \(\bar{x}=x_{0},\) because (6.34) holds.
Now, letting \(k\to\infty\) in (6.35), we get
\[\Delta_{p_{0}}\tilde{P}(x_{0})\geq 0,\]
which gives a contradiction to (6.33). Hence \(\Delta_{p_{0}}P(x_{0})\geq 0.\)
Arguing in a similar way, we deduce that \(u_{0}\) is a viscosity supersolution to \(\Delta_{p_{0}}u_{0}=0\) in \(\{u_{0}>0\}\) as well.
_Step IV. Limit free boundary problem_. We want to show that \(u_{0}\) is a viscosity solution (in the sense of Definition 6.1) to problem
\[\left\{\begin{array}{ll}\Delta_{p_{0}}u_{0}=0,&\mbox{in $\{u_{0}>0\}$,}\\ &\\ |\nabla u_{0}|=1,&\mbox{on $F(u_{0})$.}\end{array}\right. \tag{6.36}\]
Hence we have to check that the free boundary condition is satisfied in the sense of (i) and (ii) of that definition. We divide our analysis into two cases.
**Case (a).** Let \(x_{0}\in F(u_{0})\) such that there exists a ball \(B_{r}(y)\subset\{u_{0}>0\}\), with \(x_{0}\in\partial B_{r}(y)\). We denote \(\nu=\frac{y-x_{0}}{|y-x_{0}|}\). Then, by Case (a) in Lemma 5.1,
\[u_{0}(x)=\alpha\langle x-x_{0},\nu\rangle^{+}+o(|x-x_{0}|)\quad\mbox{ in $B_{r}(y)$,} \tag{6.37}\]
with \(\alpha>0\).
We now consider a sequence \(\lambda_{j}\to 0\), \(\lambda_{j}>0\). Since \(u_{0}\) is Lipschitz in \(\mathbb{R}^{n}\) then there exists a function \(u_{00}\) such that, for a subsequence,
\[\frac{u_{0}(x_{0}+\lambda_{j}x)}{\lambda_{j}}\to u_{00}(x)\quad\mbox{uniformly on compact sets of $\mathbb{R}^{n}$.}\]
From (6.37) we know that \(u_{00}(x)=\alpha\langle x,\nu\rangle^{+}\) in \(\{\langle x,\nu\rangle\geq 0\}.\) Then \(\{\langle x,\nu\rangle=0\}\subset F(u_{00})\). Since \(F(u_{0})\) is a Lipschitz graph, also \(F(u_{00})\) is a Lipschitz graph, so we have \(\{\langle x,\nu\rangle=0\}=F(u_{00}).\) Hence,
\[u_{00}(x)=\alpha\langle x,\nu\rangle^{+}\quad\mbox{ in $\mathbb{R}^{n}$.}\]
This result holds for any sequence \(\lambda_{j}\to 0\), \(\lambda_{j}>0\), therefore
\[u_{0}(x)=\alpha\langle x-x_{0},\nu\rangle^{+}+o(|x-x_{0}|)\quad\mbox{ in $\mathbb{R}^{n}$,} \tag{6.38}\]
with \(\alpha>0\).
We want to show that \(\alpha=1\).
Since \(x_{0}\in F(u_{0})\) and recalling (6.29) and (6.31), we know that there exists, up to a subsequence,
\[x_{k}\in F(u_{k}),\quad|x_{k}-x_{0}|<1/k.\]
We fix \(R>0\) such that \(|x_{0}|<R\) and let \(\mu_{j}=1/\sqrt{j}\).
For each \(j\) there exists \(k_{j}\geq j\) such that
\[|u_{k_{j}}(x)-u_{0}(x)|\leq\frac{\mu_{j}}{j}\qquad\mbox{for $x\in B_{R+2}$.}\]
We now define
\[(u_{k_{j}})_{\mu_{j}}(x)=\frac{1}{\mu_{j}}u_{k_{j}}(x_{k_{j}}+\mu_{j}x),\quad( u_{0})_{\mu_{j}}(x)=\frac{1}{\mu_{j}}u_{0}(x_{k_{j}}+\mu_{j}x).\]
Then, if \(j\geq j_{0}\),
\[|(u_{k_{j}})_{\mu_{j}}(x)-(u_{0})_{\mu_{j}}(x)|=\frac{|u_{k_{j}}(x_{k_{j}}+\mu _{j}x)-u_{0}(x_{k_{j}}+\mu_{j}x)|}{\mu_{j}}\leq\frac{1}{j}\qquad\mbox{for $x\in B_{2}$.}\]
We now observe that
\[\frac{|x_{k_{j}}-x_{0}|}{\mu_{j}}<\frac{1/k_{j}}{\mu_{j}}\leq\frac{1}{\sqrt{j} }\quad\to 0,\]
and, recalling (6.38), we obtain
\[(u_{0})_{\mu_{j}}(x)\to u_{00}(x)=\alpha\langle x,\nu\rangle^{+}\quad\text{ uniformly in }B_{2}.\]
Then,
\[|(u_{k_{j}})_{\mu_{j}}(x)-u_{00}(x)|\leq |(u_{k_{j}})_{\mu_{j}}(x)-(u_{0})_{\mu_{j}}(x)|\] \[+|(u_{0})_{\mu_{j}}(x)-u_{00}(x)|\to 0\quad\text{ uniformly in }B_{2}.\]
Denoting \(\rho_{j}=\delta_{k_{j}}\mu_{j}\), \(\bar{x}_{j}=\delta_{k_{j}}x_{k_{j}}\) and \(u_{\rho_{j}}(x)=\frac{1}{\rho_{j}}u(\bar{x}_{j}+\rho_{j}x)=(u_{k_{j}})_{\mu_{j }}(x)\), we get
\[\rho_{j}\to 0,\quad\bar{x}_{j}\in F(u),\quad\bar{x}_{j}\to 0,\] \[u_{\rho_{j}}(x)=\frac{1}{\rho_{j}}u(\bar{x}_{j}+\rho_{j}x)\to \alpha\langle x,\nu\rangle^{+}\quad\text{ uniformly in }B_{2}.\]
Reasoning as in _Step II_, we see that each \(u_{\rho_{j}}\) is a viscosity solution to (1.1) in \(B_{2}\) with right hand side \(\bar{f}_{j}(x)=\rho_{j}f(\bar{x}_{j}+\rho_{j}x)\), exponent \(\bar{p}_{j}(x)=p(\bar{x}_{j}+\rho_{j}x)\) and free boundary condition \(\bar{g}_{j}(x)=g(\bar{x}_{j}+\rho_{j}x)\),
\[\bar{f}_{j}\to 0,\quad\nabla\bar{p}_{j}\to 0,\quad\bar{p}_{j}\to p_{0}, \quad\bar{g}_{j}\to 1,\quad\text{ uniformly in }B_{2}.\]
Moreover, \(u_{\rho_{j}}\) are uniformly Lipschitz and nondegenerate in \(B_{2}\) for \(j\geq j_{0}\), \(\partial\{u_{\rho_{j}}>0\}\) are uniform Lipschitz graphs and \(\partial\{u_{\rho_{j}}>0\}\to\{\langle x,\nu\rangle=0\}\) in Hausdorff distance in \(B_{2}\).
Now, applying Propositions 6.4 and 6.5 to the sequence \(u_{\rho_{j}}\), we deduce that \(\alpha=1\). Then, (i) in Definition 6.1 is satisfied in this case.
**Case (b).** Let \(x_{0}\in F(u_{0})\) such that there exists a ball \(B_{r}(y)\subset\{u_{0}\equiv 0\}\), with \(x_{0}\in\partial B_{r}(y)\). We denote \(\nu=\frac{x_{0}-y}{|x_{0}-y|}\). Then, from the proof of Case (b) in Lemma 5.1, we get
\[u_{0}(x)=\alpha\langle x-x_{0},\nu\rangle^{+}+o(|x-x_{0}|)\quad\text{ in }B_{r}^{c}(y), \tag{6.39}\]
with \(\alpha\geq 0\).
We now consider a sequence \(\lambda_{j}\to 0\), \(\lambda_{j}>0\). Then, for a subsequence and a function \(u_{00}\),
\[\frac{u_{0}(x_{0}+\lambda_{j}x)}{\lambda_{j}}\to u_{00}(x)\quad\text{uniformly on compact sets of }\mathbb{R}^{n}.\]
From (6.39) we know that \(u_{00}(x)=\alpha\langle x,\nu\rangle^{+}\) in \(\{\langle x,\nu\rangle\geq 0\}.\) Since \(B_{r}(y)\subset\{u_{0}\equiv 0\}\), we have \(u_{00}(x)=0\) in \(\{\langle x,\nu\rangle\leq 0\}.\) Hence,
\[u_{00}(x)=\alpha\langle x,\nu\rangle^{+}\quad\text{ in }\mathbb{R}^{n}.\]
Now, if \(\alpha=0\), then \(u_{00}\equiv 0\) in \(\mathbb{R}^{n}\). This contradicts that \(F(u_{00})\) is a Lipschitz graph and shows that \(\alpha>0\).
Since this result holds for any sequence \(\lambda_{j}\to 0\), \(\lambda_{j}>0\), we conclude that
\[u_{0}(x)=\alpha\langle x-x_{0},\nu\rangle^{+}+o(|x-x_{0}|)\quad\text{ in } \mathbb{R}^{n},\]
with \(\alpha>0.\) Now proceeding as in Case (a), we obtain that \(\alpha=1\). Then, (ii) in Definition 6.1 is satisfied in the present case.
This shows that \(u_{0}\) is a viscosity solution to problem (6.36) in the sense of Definition 6.1.
_Step V. Conclusion._ We have proved that \(u_{0}\) is a viscosity solution (in the sense of Definition 6.1) to (6.36) that is Lipschitz continuous and \(F(u_{0})\) is a Lipschitz graph.
Thus, from Lemma 6.2 it follows that, up to a rotation, \(u_{0}(x)=x_{n}^{+}.\) Then for sufficiently large \(k\) we have that, in \(B_{1},\)
\[(x_{n}-\bar{\varepsilon})^{+}\leq u_{k}(x)\leq(x_{n}+\bar{\varepsilon})^{+}, \tag{6.40}\]
for \(\bar{\varepsilon}\) the universal constant in Theorem 6.3. Recalling (6.30), we deduce that Theorem 6.3 applies and, as a consequence, we conclude that the free boundaries of \(u_{k}\) as well as that of \(u\) are \(C^{1,\alpha},\) in a neighborhood of \(0.\)
As a by-product of Theorems 6.3 and 1.2, we obtain further regularity results for \(F(u)\) under additional regularity assumptions on the data.
**Corollary 6.6**.: _Let \(u\) be as in Theorem 6.3 or as in Theorem 1.2. Assume moreover that \(p\in C^{2}(B_{1})\), \(f\in C^{1}(B_{1})\) and \(g\in C^{2}(B_{1})\), then there exists \(\delta>0\) such that \(B_{\delta}\cap F(u)\in C^{2,\sigma}\) for every \(0<\sigma<1\). If \(p\in C^{m+1,\sigma}(B_{1})\), \(f\in C^{m,\sigma}(B_{1})\) and \(g\in C^{m+1,\sigma}(B_{1})\) for some \(0<\sigma<1\) and \(m\geq 1\), then \(B_{\delta}\cap F(u)\in C^{m+2,\sigma}\)._
_Finally, if \(p\), \(f\) and \(g\) are analytic in \(B_{1}\), then \(B_{\delta}\cap F(u)\) is analytic._
Proof.: The result follows from the application of Theorem 2 in [KN].
## 7. Some consequences
In this section we discuss some consequences of our results.
As already mentioned, in [LW2] problem (1.1) was considered for weak solutions, which is a different notion of solution from the one we are considering here (see Definition 7.1 below). One of the consequences of our Theorem 1.2 is an analogous result for weak solutions (Corollary 7.3).
The notation and the assumptions on \(\Omega,p,f\) and \(g\) will be the same as in the rest of the paper (see Subsection 1.1 and Section 2). In particular we will use the notation \(\Omega^{+}(u)\) and \(F(u)\) in (2.1).
We first have
**Definition 7.1** (Definition 2.2 in [LW2]).: We call \(u\) a weak solution of (1.1) in \(\Omega\) if
(i) \(u\) is continuous and nonnegative in \(\Omega\), \(u\in W^{1,p(\cdot)}(\Omega)\) and \(\Delta_{p(x)}u=f\) in \(\Omega^{+}(u)\) (in the sense of Definition 2.1).
(ii) For \(D\subset\subset\Omega\) there are constants \(c_{\min}=c_{\min}(D)\), \(C_{\max}=C_{\max}(D)\), \(r_{0}=r_{0}(D)\), \(0<c_{\min}\leq C_{\max}\), \(r_{0}>0\), such that for balls \(B_{r}(x)\subset D\) with \(x\in F(u)\) and \(0<r\leq r_{0}\) \[c_{\min}\leq\frac{1}{r}\sup_{B_{r}(x)}u\leq C_{\max}.\]
(iii) For \(\mathcal{H}^{n-1}\) a.e. \(x_{0}\in\partial_{\operatorname{red}}\{u>0\}\) (that is, for \(\mathcal{H}^{n-1}\)-almost every point \(x_{0}\in F(u)\) such that \(F(u)\) has an exterior unit normal \(\nu(x_{0})\) in the measure theoretic sense) \(u\) has the asymptotic development \[u(x)=g(x_{0})\langle x-x_{0},\nu(x_{0})\rangle^{-}+o(|x-x_{0}|).\]
(iv) For every \(x_{0}\in F(u)\), \[\limsup_{\begin{subarray}{c}x\to x_{0}\\ u(x)>0\end{subarray}}|\nabla u(x)|\leq g(x_{0}).\]
If there is a ball \(B\subset\{u=0\}\) touching \(F(u)\) at \(x_{0}\) then,
\[\limsup_{\begin{subarray}{c}x\to x_{0}\\ u(x)>0\end{subarray}}\frac{u(x)}{\operatorname{dist}(x,B)}\geq g(x_{0}).\]
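For instance, with the convention \(s^{-}=\max\{-s,0\}\), the function \(u(x)=x_{n}^{+}\) is a weak solution in \(B_{1}\) when \(p(x)\equiv p_{0}\), \(f\equiv 0\) and \(g\equiv 1\): condition (ii) holds with \(c_{\min}=C_{\max}=1\), since \(\sup_{B_{r}(x)}u=r\) for \(x\in F(u)=B_{1}\cap\{x_{n}=0\}\), the development in (iii) holds exactly with \(\nu(x_{0})=-e_{n}\), and both conditions in (iv) hold with equality.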
Then we prove
**Proposition 7.2**.: _Let \(u\) be a weak solution to (1.1) in \(\Omega\) in the sense of Definition 7.1. Then \(u\) is a viscosity solution to (1.1) in \(\Omega\) in the sense of Definition 2.5._
Proof.: Let \(u\) be as in the statement. Then \(u\) is continuous and nonnegative in \(\Omega\) and satisfies condition (i) in Definition 2.5. In order to show that it verifies condition (ii) in that definition, we divide the analysis into two cases.
**Case (a).** Let \(\varphi\in C(\Omega)\), \(\varphi\in C^{2}(\overline{\Omega^{+}(\varphi)})\) be such that \(\varphi^{+}\) touches \(u\) from below at \(x_{0}\in F(u)\) and \(\nabla\varphi(x_{0})\neq 0\). We want to show that
\[|\nabla\varphi(x_{0})|\leq g(x_{0}). \tag{7.1}\]
We first observe that, under the present assumptions, Proposition 2.1 in [LW2] applies, so \(u\) is locally Lipschitz in \(\Omega\).
Also there holds that \(\varphi^{+}\) has a \(C^{2}\) extension \(\tilde{\varphi}\) in a neighborhood \(\mathcal{O}\) of \(x_{0}\) (\(\tilde{\varphi}=\varphi^{+}\) in \(\overline{\Omega^{+}(\varphi)}\cap\mathcal{O}\), \(\tilde{\varphi}<0\) otherwise in \(\mathcal{O}\)), which, to simplify the notation, we still denote by \(\varphi\).
Moreover, \(\varphi\) touches \(u\) from below at \(x_{0}\in F(u)\) as well.
By the implicit function theorem, \(F(\varphi)\) is a \(C^{2}\) hypersurface in a neighborhood of \(x_{0}\). Then, \(F(\varphi)\) has a tangent ball \(B\) at \(x_{0}\), with \(B\subset\Omega^{+}(\varphi)\) and also with \(B\subset\Omega^{+}(u)\) and \(x_{0}\in F(u)\cap\partial B\).
We now consider a sequence \(\lambda_{j}\to 0\), \(\lambda_{j}>0\). Since \(u\) and \(\varphi\) are Lipschitz in a neighborhood of \(x_{0}\), then there exist Lipschitz functions \(u_{0}\) and \(\varphi_{0}\) such that, for a subsequence,
\[u_{\lambda_{j}}(x)=\frac{u(x_{0}+\lambda_{j}x)}{\lambda_{j}}\to u_{0}(x), \qquad\frac{\varphi(x_{0}+\lambda_{j}x)}{\lambda_{j}}\to\varphi_{0}(x),\]
uniformly on compact sets of \(\mathbb{R}^{n}\). For simplicity we assume that the interior normal to \(\partial B\) at \(x_{0}\) is \(e_{n}\). Then
\[u_{0}(x)\geq\varphi_{0}(x)=|\nabla\varphi(x_{0})|x_{n}^{+}\ \ \text{in}\ \{x_{n}\geq 0\},\]
\[\Delta_{p_{0}}u_{0}=0\ \ \text{in}\ \{u_{0}>0\}\supset\{x_{n}>0\},\ \ \text{with}\ p_{0}=p(x_{0}).\]
Then, the application of Lemma 5.1, Case (a), at the origin, gives
\[u_{0}(x)=\gamma x_{n}^{+}+o(|x|)\ \ \text{in}\ B_{1}(e_{n}),\ \text{with}\ \gamma>0.\]
We now consider a sequence \(\mu_{j}\to 0\), \(\mu_{j}>0\). Then, there exist Lipschitz functions \(u_{00}\) and \(\varphi_{00}\) such that, for a subsequence,
\[(u_{0})_{\mu_{j}}(x)=\frac{u_{0}(\mu_{j}x)}{\mu_{j}}\to u_{00}(x),\qquad\frac{ \varphi_{0}(\mu_{j}x)}{\mu_{j}}\to\varphi_{00}(x),\]
uniformly on compact sets of \(\mathbb{R}^{n}\). There holds that
\[u_{00}(x)=\gamma x_{n}^{+}\geq\varphi_{00}(x)=|\nabla\varphi(x_{0})|x_{n}^{+} \ \ \text{in}\ \{x_{n}\geq 0\},\]
and
\[|\nabla u_{00}(x)|=\gamma\geq|\nabla\varphi(x_{0})|\ \ \text{in}\ \{x_{n}>0\}. \tag{7.2}\]
Now let
\[\alpha:=\limsup_{\begin{subarray}{c}x\to x_{0}\\ u(x)>0\end{subarray}}|\nabla u(x)|.\]
Then, by (iv) in Definition 7.1, we have
\[g(x_{0})\geq\alpha. \tag{7.3}\]
Let us see that
\[|\nabla u_{00}|\leq\alpha\ \ \text{in}\ \mathbb{R}^{n}. \tag{7.4}\]
In fact, let \(R>0\) and \(\epsilon>0\). Then, there exists \(\lambda_{0}>0\) such that \(|\nabla u(x)|\leq\alpha+\epsilon\) in \(B_{\lambda_{0}R}(x_{0})\). We thus have \(|\nabla u_{\lambda_{j}}(x)|\leq\alpha+\epsilon\) in \(B_{R}\) for \(j\) large. Passing to the limit, we obtain \(|\nabla u_{0}|\leq\alpha+\epsilon\) in \(B_{R}\) and then \(|\nabla u_{0}|\leq\alpha\) in \(\mathbb{R}^{n}\). Now also \(|\nabla(u_{0})_{\mu_{j}}|\leq\alpha\) in \(\mathbb{R}^{n}\). Passing to the limit again, we obtain (7.4).
Then, (7.3), (7.4) and (7.2) give \(g(x_{0})\geq\alpha\geq\gamma\geq|\nabla\varphi(x_{0})|\). That is, (7.1) holds.
**Case (b).** Now let \(\varphi\in C(\Omega)\), \(\varphi\in C^{2}(\overline{\Omega^{+}(\varphi)})\) such that \(\varphi^{+}\) touches \(u\) from above at \(x_{0}\in F(u)\) and \(\nabla\varphi(x_{0})\neq 0\). We want to show that
\[|\nabla\varphi(x_{0})|\geq g(x_{0}). \tag{7.5}\]
Also in this case there holds that \(\varphi^{+}\) has a \(C^{2}\) extension \(\tilde{\varphi}\) in a neighborhood of \(x_{0}\), that to simplify the notation we still denote \(\varphi\).
By the implicit function theorem, \(F(\varphi)\) is a \(C^{2}\) hypersurface in a neighborhood of \(x_{0}\). Then, \(F(\varphi)\) has a tangent ball \(B\) at \(x_{0}\), with \(B\subset\Omega\setminus\overline{\Omega^{+}(\varphi)}\) and also with \(B\subset\{u=0\}\) and \(x_{0}\in F(u)\cap\partial B\).
Now let
\[\alpha:=\limsup_{\begin{subarray}{c}x\to x_{0}\\ u(x)>0\end{subarray}}\frac{u(x)}{\operatorname{dist}(x,B)}.\]
Then, by (iv) in Definition 7.1, we have
\[g(x_{0})\leq\alpha. \tag{7.6}\]
Let \(x_{k}\to x_{0}\) with \(u(x_{k})>0\) be such that
\[\frac{u(x_{k})}{\operatorname{dist}(x_{k},B)}\to\alpha. \tag{7.7}\]
Since \(\varphi^{+}\geq u\) in a neighborhood of \(x_{0}\), then \(\varphi(x_{k})>0\). Now let \(y_{k}\in\partial B\) such that \(\operatorname{dist}(x_{k},B)=|x_{k}-y_{k}|\). Then \(\varphi(y_{k})\leq 0\) and
\[\frac{\varphi(x_{k})-\varphi(y_{k})}{|x_{k}-y_{k}|}\geq\frac{\varphi(x_{k})}{ \operatorname{dist}(x_{k},B)}\geq\frac{u(x_{k})}{\operatorname{dist}(x_{k},B)}. \tag{7.8}\]
But, for a subsequence,
\[\frac{\varphi(x_{k})-\varphi(y_{k})}{|x_{k}-y_{k}|}=\nabla\varphi(\xi_{k}) \cdot\frac{(x_{k}-y_{k})}{|x_{k}-y_{k}|}\to\nabla\varphi(x_{0})\cdot\frac{ \nabla\varphi(x_{0})}{|\nabla\varphi(x_{0})|}, \tag{7.9}\]
where for every \(k\), \(\xi_{k}\) is a point in the segment joining \(x_{k}\) and \(y_{k}\). Putting (7.7), (7.8) and (7.9) together we get \(|\nabla\varphi(x_{0})|\geq\alpha\). Now recalling (7.6), we get (7.5) which completes the proof.
Then, we obtain
**Corollary 7.3**.: _Let \(u\) be a weak solution to (1.1) in \(B_{1}\) in the sense of Definition 7.1, with \(0\in F(u)\). If \(F(u)\) is a Lipschitz graph in a neighborhood of \(0\), then \(F(u)\) is \(C^{1,\alpha}\) in a (smaller) neighborhood of \(0\)._
Proof.: The result is an immediate application of Theorem 1.2 and Proposition 7.2.
## 8. Some applications
In this section we discuss some applications of both the results obtained in the present paper and in [FL], and we draw some conclusions on them (see Remark 8.4).
The applications of our results discussed here correspond to three different minimization problems that were already studied in [LW1], [LW3] and [LW4]. Our results below rely on the thorough understanding of the properties of nonnegative local minimizers achieved in those papers. We also refer to them for the motivation and related literature.
The notation and the assumptions on \(\Omega,p\) and \(f\) will be the same as in the rest of the paper (see Subsection 1.1 and Section 2). In particular we will use the notation \(\Omega^{+}(u)\) and \(F(u)\) in (2.1).
Our first application is
**Proposition 8.1**.: _Let \(\Omega\), \(p\) and \(f\) be as above. Let \(0<\lambda_{\min}\leq\lambda(x)\leq\lambda_{\max}<\infty\) with \(\lambda\in C^{0,\beta}(\Omega)\). Let \(u\in W^{1,p(\cdot)}(\Omega)\cap L^{\infty}(\Omega)\) be a nonnegative local minimizer of the energy functional \(J(v)=\int_{\Omega}\Big{(}\frac{|\nabla v|^{p(x)}}{p(x)}+\lambda(x)\chi_{\{v>0 \}}+fv\Big{)}\,dx\) in \(\Omega\)._
_Then, \(u\) is a viscosity solution to (1.1) in \(\Omega\) with \(g(x)=(\frac{p(x)}{p(x)-1}\,\lambda(x))^{1/p(x)}\)._
_Let \(x_{0}\in F(u)\) be such that \(F(u)\) is a Lipschitz graph in a neighborhood of \(x_{0}\), then \(F(u)\) is \(C^{1,\alpha}\) in a (smaller) neighborhood of \(x_{0}\)._
_Let \(x_{0}\in F(u)\) be such that \(F(u)\) has a normal in the measure theoretic sense, then \(F(u)\) is \(C^{1,\alpha}\) in a neighborhood of \(x_{0}\)._
_Moreover, there is a subset \(\mathcal{R}\) of \(F(u)\) which is locally a \(C^{1,\alpha}\) surface. The set \(\mathcal{R}\) is open and dense in \(F(u)\) and the remainder of the free boundary has \((n-1)-\)dimensional Hausdorff measure zero._
Proof.: By Theorem 5.1 in [LW3], \(u\) is a weak solution to (1.1) in \(\Omega\) with \(g(x)=(\frac{p(x)}{p(x)-1}\,\lambda(x))^{1/p(x)}\) in the sense of Definition 7.1. Then by Proposition 7.2, \(u\) is a viscosity solution to (1.1) in \(\Omega\) in the sense of Definition 2.5, with the same \(g\).
Let \(x_{0}\in F(u)\) be such that \(F(u)\) is a Lipschitz graph in a neighborhood of \(x_{0}\). Then, from the application of Theorem 1.2, \(F(u)\) is \(C^{1,\alpha}\) in a smaller neighborhood of \(x_{0}\).
Let \(x_{0}\in F(u)\) be such that \(F(u)\) has a normal in the measure theoretic sense. Without loss of generality we assume that \(x_{0}=0\), \(g(0)=1\) and that the inward unit normal to \(F(u)\) at \(0\) in the measure theoretic sense is \(e_{n}\). Also we denote \(p(0)=p_{0}\).
Then, by Theorem 3.9 in [LW3] there holds that
\[u(x)=x_{n}^{+}+o(|x|)\ \ \text{in }\mathbb{R}^{n}. \tag{8.1}\]
By Corollary 3.2 and Theorem 3.5 in [LW3] we know that \(u\) is Lipschitz and nondegenerate in some ball \(B_{r_{0}}\), with \(0<r_{0}<1\).
Then, as in _Step II_ in the proof of Theorem 1.2, we take \(\delta_{k}>0\), \(\delta_{k}\to 0\), and consider a blow up sequence \(u_{k}\) as in (6.29). As in that theorem, our goal is to apply Theorem 6.3 to \(u_{k}\), for large \(k\). We first observe that, taking \(k\) sufficiently large, the assumption (6.4) in that theorem is satisfied for the universal constant \(\bar{\varepsilon}\). In fact, in \(B_{1}\), (6.30) holds.
Arguing again as in Theorem 1.2, we see that (6.31) holds with \(u_{0}(x)=x_{n}^{+}\) in \(\mathbb{R}^{n}\), because of (8.1). Then, reasoning as in this same theorem and using Theorem 3.6 in [11], we obtain for \(k\) sufficiently large that (6.40) holds in \(B_{1}\), for \(\bar{\varepsilon}\) the universal constant in Theorem 6.3. Therefore, Theorem 6.3 applies to \(u_{k}\), and as a consequence, \(F(u)\) is \(C^{1,\alpha}\) in a neighborhood of \(0\).
Finally, denoting \(\mathcal{R}\) the set of points in \(F(u)\) such that \(F(u)\) has a normal in the measure theoretic sense, we argue as in Theorem 5.2 in [11] and obtain that \(\mathcal{R}\) is dense in \(F(u)\) and \(\mathcal{H}^{n-1}(F(u)\setminus\mathcal{R})=0\).
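To fix ideas, we note a simple specialization (a consistency check only, not needed in what follows): when the exponent is constant, \(p\equiv 2\), \(\lambda\) is a positive constant and \(f\equiv 0\), the functional in Proposition 8.1 is the classical Alt-Caffarelli energy and the free boundary condition becomes
\[|\nabla u|=g=\Big{(}\tfrac{2}{2-1}\,\lambda\Big{)}^{1/2}=\sqrt{2\lambda}\quad\text{on }F(u),\]
recovering, up to the normalization of \(\lambda\), the constant gradient condition of [AC].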
Our next application is
**Proposition 8.2**.: _For \(\varepsilon>0\), let \(B_{\varepsilon}(s)=\int_{0}^{s}\beta_{\varepsilon}(\tau)\,d\tau\) where \(\beta_{\varepsilon}(s)=\frac{1}{\varepsilon}\beta(\frac{s}{\varepsilon})\), with \(\beta\) a Lipschitz function satisfying \(\beta>0\) in \((0,1)\), \(\beta\equiv 0\) outside \((0,1)\). Let \(\Omega\), \(p\) and \(f\) be as above, \(1<p_{\min}\leq p_{\varepsilon_{j}}(x)\leq p_{\max}<\infty\) and \(\|\nabla p_{\varepsilon_{j}}\|_{L^{\infty}}\leq L\). Let \(u^{\varepsilon_{j}}\in W^{1,p_{\varepsilon_{j}}(\cdot)}(\Omega)\) be a family of nonnegative local minimizers of the energy functional_
\[J_{\varepsilon_{j}}(v)=\int_{\Omega}\left(\frac{|\nabla v|^{p_{\varepsilon_{j}}(x)}}{p_{\varepsilon_{j}}(x)}+B_{\varepsilon_{j}}(v)+f^{\varepsilon_{j}}v\right)dx\ \text{ in }\Omega,\]
_such that \(u^{\varepsilon_{j}}\to u\) uniformly on compact subsets of \(\Omega\), \(f^{\varepsilon_{j}}\rightharpoonup f\) \(*\)-weakly in \(L^{\infty}(\Omega)\), \(p_{\varepsilon_{j}}\to p\) uniformly on compact subsets of \(\Omega\) and \(\varepsilon_{j}\to 0\)._
_Then, \(u\) is a viscosity solution to (1.1) in \(\Omega\) with \(g(x)=(\frac{p(x)}{p(x)-1}\,M)^{1/p(x)}\) and \(M=\int\beta(s)\,ds\)._
_Let \(x_{0}\in F(u)\) be such that \(F(u)\) is a Lipschitz graph in a neighborhood of \(x_{0}\), then \(F(u)\) is \(C^{1,\alpha}\) in a (smaller) neighborhood of \(x_{0}\)._
_Let \(x_{0}\in F(u)\) be such that \(F(u)\) has a normal in the measure theoretic sense, then \(F(u)\) is \(C^{1,\alpha}\) in a neighborhood of \(x_{0}\)._
_Moreover, there is a subset \(\mathcal{R}\) of \(F(u)\) which is locally a \(C^{1,\alpha}\) surface. The set \(\mathcal{R}\) is open and dense in \(F(u)\) and the remainder of the free boundary has \((n-1)-\)dimensional Hausdorff measure zero._
Proof.: We argue exactly as in the proof of Proposition 8.1. We apply again our results in Proposition 7.2 and Theorems 1.2 and 6.3, and in this case we make use of Theorems 5.3, 4.3, 4.4 and Remark 4.2 in [11], and Theorem 5.3 in [11].
We also obtain
**Remark 8.3**.: In [11] an optimization problem with volume constraint for an energy associated to the inhomogeneous \(p(x)\)-Laplacian was considered. By means of a penalization technique, it was shown that nonnegative minimizers \(u\) are weak solutions to (1.1) in a bounded domain \(\Omega\) in the sense of Definition 7.1 with \(g(x)=(\frac{p(x)}{p(x)-1}\,\lambda_{u})^{1/p(x)}\), where \(\lambda_{u}>0\) is a constant.
Under the assumptions we made on \(p\) and \(f\) at the beginning of present section, by combining our results with those in [11], we can argue as in Propositions 8.1 and 8.2 and obtain the same conclusions for \(u\) and \(F(u)\).
**Remark 8.4**.: In Propositions 8.1 and 8.2 and Remark 8.3, our \(C^{1,\alpha}\) regularity results on \(F(u)\) under the Lipschitz assumption on \(F(u)\) follow from the application of Theorem 1.2 in the present paper and are new.
We want to point out that the rest of our \(C^{1,\alpha}\) regularity results on \(F(u)\) in Propositions 8.1 and 8.2 and Remark 8.3, which follow from Theorem 6.3 (i.e., Theorem 1.1 in [FL]), were already obtained in [LW3] and [LW4] from the application of the results in [LW2], but under different assumptions on \(f\) and \(p\).
In fact, our results in [FL] --inspired by De Silva's approach (see [D])-- require that \(f\in C(\Omega)\cap L^{\infty}(\Omega)\) and that \(p\in C^{1}(\Omega)\) be Lipschitz, whereas the results in [LW2] --inspired by the Alt-Caffarelli approach (see [AC])-- require that \(f\in L^{\infty}(\Omega)\cap W^{1,q}(\Omega)\) and \(p\in W^{1,\infty}(\Omega)\cap W^{2,q}(\Omega)\), for \(q>\max\{1,n/2\}\).
The reason for this difference in the assumptions lies in the fact that in De Silva's approach for viscosity solutions the estimates are obtained by comparison with suitable barriers. In the Alt-Caffarelli approach for weak (variational) solutions, certain estimates on \(|\nabla u|\) close to the free boundary are obtained by deriving an equation for \(v=|\nabla u|\), which requires more delicate computations.
## Appendix A Lebesgue and Sobolev spaces with variable exponent
Let \(p:\Omega\to[1,\infty)\) be a measurable bounded function, called a variable exponent on \(\Omega\), and denote \(p_{\max}=\operatorname{esssup}p(x)\) and \(p_{\min}=\operatorname{essinf}p(x)\). The variable exponent Lebesgue space \(L^{p(\cdot)}(\Omega)\) is defined as the set of all measurable functions \(u:\Omega\to\mathbb{R}\) for which the modular \(\varrho_{p(\cdot)}(u)=\int_{\Omega}|u(x)|^{p(x)}\,dx\) is finite. The Luxemburg norm on this space is defined by
\[\|u\|_{L^{p(\cdot)}(\Omega)}=\|u\|_{p(\cdot)}=\inf\{\lambda>0:\varrho_{p( \cdot)}(u/\lambda)\leq 1\}.\]
This norm makes \(L^{p(\cdot)}(\Omega)\) a Banach space.
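As a quick illustration, when the exponent is constant, \(p(x)\equiv p\), the Luxemburg norm reduces to the usual \(L^{p}\) norm: in that case
\[\varrho_{p(\cdot)}(u/\lambda)=\lambda^{-p}\int_{\Omega}|u|^{p}\,dx\leq 1\quad\Longleftrightarrow\quad\lambda\geq\Big{(}\int_{\Omega}|u|^{p}\,dx\Big{)}^{1/p},\]
so the infimum defining \(\|u\|_{p(\cdot)}\) is exactly \(\|u\|_{L^{p}(\Omega)}\).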
There holds the following relation between \(\varrho_{p(\cdot)}(u)\) and \(\|u\|_{L^{p(\cdot)}}\):
\[\min\Big{\{}\varrho_{p(\cdot)}(u)^{1/p_{\min}},\,\varrho_{p(\cdot)}(u)^{1/p_{\max}}\Big{\}}\leq\|u\|_{L^{p(\cdot)}(\Omega)}\leq\max\Big{\{}\varrho_{p(\cdot)}(u)^{1/p_{\min}},\,\varrho_{p(\cdot)}(u)^{1/p_{\max}}\Big{\}}.\]
Moreover, the dual of \(L^{p(\cdot)}(\Omega)\) is \(L^{p^{\prime}(\cdot)}(\Omega)\) with \(\frac{1}{p(x)}+\frac{1}{p^{\prime}(x)}=1\).
\(W^{1,p(\cdot)}(\Omega)\) denotes the space of measurable functions \(u\) such that \(u\) and the distributional derivative \(\nabla u\) are in \(L^{p(\cdot)}(\Omega)\). The norm
\[\|u\|_{1,p(\cdot)}:=\|u\|_{p(\cdot)}+\||\nabla u|\|_{p(\cdot)}\]
makes \(W^{1,p(\cdot)}(\Omega)\) a Banach space.
The space \(W^{1,p(\cdot)}_{0}(\Omega)\) is defined as the closure of the \(C^{\infty}_{0}(\Omega)\) in \(W^{1,p(\cdot)}(\Omega)\).
For further details on these spaces, see [DHHR], [KR], [RR] and their references.
## Appendix B A Liouville type result
In this Appendix we prove, for the sake of completeness, a Liouville type result for the \(p_{0}\)-Laplace operator, because we did not find it in the literature in this form. This result plays a key role in Section 6.
**Lemma B.1**.: _Let \(1<p_{0}<\infty\) be constant. Let \(u\) be Lipschitz in \(\mathbb{R}^{n}\cap\{x_{n}\geq 0\}\) and solution to_
(B.1) \[\left\{\begin{array}{ll}\Delta_{p_{0}}u=0,&\mbox{in }\{x_{n}>0\},\\ \\ u=0,&\mbox{on }\quad\{x_{n}=0\}.\end{array}\right.\]
_Then, there exists \(C\in\mathbb{R}\) such that \(u(x)=Cx_{n}\) in \(\{x_{n}\geq 0\}\)._
Proof.: We consider, for \(x=(x^{\prime},x_{n}),\,x^{\prime}\in\mathbb{R}^{n-1},\,x_{n}\in\mathbb{R},\) the extended function
\[\tilde{u}(x^{\prime},x_{n})=\left\{\begin{array}{ll}u(x^{\prime},x_{n}), \quad x_{n}\geq 0,\\ -u(x^{\prime},-x_{n}),\quad x_{n}\leq 0.\end{array}\right.\]
From the Lipschitz continuity of \(u\) in the set \(\{x_{n}\geq 0\}\) it follows that \(\tilde{u}\) is Lipschitz in \(\mathbb{R}^{n}\) and \(\tilde{u}\in W^{1,\infty}_{\rm loc}(\mathbb{R}^{n}).\) Now let \(\varphi\in C_{0}^{\infty}(\mathbb{R}^{n})\). There holds
(B.2) \[\begin{split}&\int_{\mathbb{R}^{n}}|\nabla\tilde{u}|^{p_{0}-2} \langle\nabla\tilde{u},\nabla\varphi\rangle dx=\int_{\mathbb{R}^{n}\cap\{x_{n }>0\}}|\nabla\tilde{u}|^{p_{0}-2}\langle\nabla\tilde{u},\nabla\varphi\rangle dx \\ &+\int_{\mathbb{R}^{n}\cap\{x_{n}<0\}}|\nabla\tilde{u}|^{p_{0}-2} \langle\nabla\tilde{u},\nabla\varphi\rangle dx\\ &=\int_{\mathbb{R}^{n}\cap\{x_{n}>0\}}|\nabla u|^{p_{0}-2} \langle\nabla u,\nabla\varphi\rangle dx-\int_{\mathbb{R}^{n}\cap\{x_{n}>0\}}| \nabla u|^{p_{0}-2}\langle\nabla u,\nabla\tilde{\varphi}\rangle dx\\ &=\int_{\mathbb{R}^{n}\cap\{x_{n}>0\}}|\nabla u|^{p_{0}-2}\langle \nabla u,\nabla\eta\rangle dx,\end{split}\]
where \(\tilde{\varphi}(x^{\prime},x_{n}):=\varphi(x^{\prime},-x_{n})\) and \(\eta(x):=\varphi(x^{\prime},x_{n})-\varphi(x^{\prime},-x_{n})\in C_{0}^{\infty }(\mathbb{R}^{n}).\) In particular, \(\eta(x^{\prime},0)=0\) and thus, there exists \(\{\eta_{j}\}_{j\in\mathbb{N}}\subset C_{0}^{\infty}(\mathbb{R}^{n}\cap\{x_{n}> 0\})\) such that \(\eta_{j}\to\eta\) in \(W^{1,p_{0}}(\mathbb{R}^{n}\cap\{x_{n}>0\})\) with \({\rm spt}\eta_{j},{\rm spt}\eta\subset B_{R},\) for some \(R>0\). Then,
\[\int_{\mathbb{R}^{n}\cap\{x_{n}>0\}}|\nabla u|^{p_{0}-2}\langle\nabla u,\nabla \eta_{j}\rangle dx=0,\]
since \(u\) is solution to (B.1).
We claim that
(B.3) \[\int_{\mathbb{R}^{n}\cap\{x_{n}>0\}}|\nabla u|^{p_{0}-2}\langle\nabla u,\nabla \eta\rangle dx=0\]
and therefore, by (B.2),
\[\int_{\mathbb{R}^{n}}|\nabla\tilde{u}|^{p_{0}-2}\langle\nabla\tilde{u}, \nabla\varphi\rangle dx=0.\]
That is, \(\tilde{u}\) is a weak solution to \(\Delta_{p_{0}}\tilde{u}=0\) in \(\mathbb{R}^{n}\).
In fact,
\[\begin{split}&\Big{|}\int_{\mathbb{R}^{n}\cap\{x_{n}>0\}}|\nabla u |^{p_{0}-2}\langle\nabla u,\nabla\eta_{j}-\nabla\eta\rangle dx\Big{|}\leq\int_ {\mathbb{R}^{n}\cap\{x_{n}>0\}}|\nabla u|^{p_{0}-1}|\nabla\eta_{j}-\nabla\eta |dx\\ &\leq\left(\int_{B_{R}\cap\{x_{n}>0\}}|\nabla u|^{p_{0}}dx\right) ^{\frac{p_{0}-1}{p_{0}}}\left(\int_{B_{R}\cap\{x_{n}>0\}}|\nabla\eta_{j}- \nabla\eta|^{p_{0}}dx\right)^{1/p_{0}}\to 0,\end{split}\]
thus (B.3) holds.
Hence, \(\Delta_{p_{0}}\tilde{u}=0\) and \(|\tilde{u}(x)|\leq L|x|\) in \(\mathbb{R}^{n}\), with \(L\) the Lipschitz constant of \(\tilde{u}\), and the same result holds for \(\tilde{u}_{R}(x)=\frac{\tilde{u}(Rx)}{R}\), for any \(R>0\). Moreover, by
the \(C^{1,\alpha}\) estimates for the \(p_{0}\)-Laplace operator, there exists \(\alpha\in(0,1)\) such that \(\tilde{u}_{R}\in C^{1,\alpha}(\overline{B_{1}})\) and for every \(x,y\in B_{1},\)
\[M\geq\frac{|\nabla\tilde{u}_{R}(x)-\nabla\tilde{u}_{R}(y)|}{|x-y|^{\alpha}}= \frac{|\nabla\tilde{u}(Rx)-\nabla\tilde{u}(Ry)|}{|x-y|^{\alpha}},\]
where \(M\) and \(\alpha\) depend only on \(n\), \(p_{0}\) and \(\sup_{B_{2}}|\tilde{u}_{R}|\), which is bounded by \(2L\). Thus, it follows that for \(z\) and \(\kappa\) in \(B_{R}\),
\[|\nabla\tilde{u}(z)-\nabla\tilde{u}(\kappa)|\leq M\frac{|z-\kappa|^{\alpha}}{ R^{\alpha}}.\]
In particular, fixing \(z,\kappa\in B_{1}\) and letting \(R\to\infty,\) we deduce that
\[|\nabla\tilde{u}(z)-\nabla\tilde{u}(\kappa)|=0\]
for every \(z,\kappa\in B_{1}\). That is, \(\nabla\tilde{u}\) is constant and \(\tilde{u}\) is linear in \(B_{1}\).
On the other hand, for every \(\lambda>0,\) the function \(\tilde{u}_{\lambda}(x)=\frac{\tilde{u}(\lambda x)}{\lambda}\) is still a Lipschitz solution of problem (B.1). Hence, by the argument above, \(\tilde{u}_{\lambda}\) is linear in \(B_{1}\) and \(\tilde{u}_{\lambda}(x)=\langle v_{\lambda},x\rangle\) in \(B_{1}\), for some \(v_{\lambda}\in\mathbb{R}^{n}\). Thus \(\tilde{u}(\lambda x)=\langle v_{\lambda},\lambda x\rangle\) in \(B_{1}\) and therefore, \(\tilde{u}(y)=\langle v_{\lambda},y\rangle=\langle\nabla\tilde{u}(0),y\rangle\) in \(B_{\lambda}.\)
Since \(\lambda>0\) is arbitrary, \(\tilde{u}(y)=\langle\nabla\tilde{u}(0),y\rangle\) in \(\mathbb{R}^{n}\). Now, denoting \(C=\frac{\partial\tilde{u}(0)}{\partial y_{n}},\) we conclude that \(u(x)=Cx_{n}\) in \(\{x_{n}\geq 0\}.\)
## Acknowledgment
The authors wish to thank Sandro Salsa for very interesting discussions about the subject of this paper.
## Data availability
This manuscript has no associated data.
# More Differential Operators on Almost Hermitian Manifolds

Samuel Hosmer
###### Abstract.
In 1980 Michelsohn defined a differential operator on sections of the complex Clifford bundle over a compact Kahler manifold \(M\). This operator is a differential and its Laplacian agrees with the Laplacian of the Dolbeault operator on forms through a natural identification of differential forms with sections of the Clifford bundle. Relaxing the condition that \(M\) be Kahler, we introduce two differential operators on sections of the complex Clifford bundle over a compact almost Hermitian manifold which naturally generalize the one introduced by Michelsohn. We show surprising Kahler-like symmetries of the kernel of the Laplacians of these operators in the almost Hermitian and almost Kahler settings, along with a correspondence of these operators to operators on forms which are of present interest in almost complex geometry.
###### Contents
* 1 Introduction
* 2 The Clifford Algebra
* 3 Almost Hermitian Dirac Identities
* 4 The Canonical Hermitian Connections
* 5 The Canonical Hermitian Dirac Operators
* 6 Almost Hermitian Identities via the Bismut Dirac Operator
* 7 Almost Kahler identities via the Riemannian Dirac Operator
* 8 Clifford Harmonics
* Appendix: Hermitian Connections and Dirac Operators
## 1. Introduction
Given a smooth manifold \(M\) de Rham gave us a way of viewing the differential forms on \(M\) as a cochain complex computing the real cohomology of \(M\) with differential \(d\), the exterior derivative on forms. The space of forms \(\Omega(M)\) is integrally graded as the space of sections of the exterior algebra bundle on the dual of \(TM\). Additionally \(d\) is a first order differential operator and the de Rham complex is elliptic. This graded differential algebra is a fundamental object of study within geometry and topology, with Sullivan showing that \(\big{(}\Omega(M),d\big{)}\) holds the real homotopy type of \(M\).
In the presence of a metric on \(M\) the operator \(d\) and sections of the exterior bundle \(\Omega(M)=\Gamma(\Lambda(M))\) can be replaced with the Riemannian Dirac operator \(\widetilde{D}\) and sections of the Clifford bundle \(\Gamma(Cl(M))\), respectively. The bundle \(Cl(M)\) is isomorphic to \(\Lambda(M)\) as a vector bundle, but the multiplication is not preserved by the isomorphism. The operator \(\widetilde{D}\) is an elliptic self adjoint operator on sections of \(Cl(M)\) which factorizes the Laplacian of \(d\). That is, \(\widetilde{D}=d+d^{*}\).
There are some notable consequences of this replacement: \(Cl(M)\) has no natural integral grading and the Dirac operator is generically not a differential. In effect, by this switch, we lose the cochain complex. What we gain is Hodge theory.
Recent advances in almost complex geometry show that a valuable topological insight into an almost complex manifold \(M\) is given by the harmonic forms of certain differential operators on \(\Omega(M)\). On an almost complex manifold \(M\) of dimension \(2n\) the exterior derivative \(d\) over the complexification \(\Omega_{\mathbb{C}}(M)\) decomposes into \(4\) components in bidegree. Namely,
\[d=\partial+\overline{\mu}+\overline{\partial}+\mu\]
where \(\partial,\overline{\mu},\overline{\partial},\mu\) are of bidegree \((1,0),(-1,2),(0,1),(2,-1)\) respectively.
Given a compatible metric there is a real linear isomorphism induced from complex conjugation \(c\) and a complex linear isomorphism induced by the Hodge star operator \(*\) so that
\[c:\Omega_{\mathbb{C}}(M)^{p,q} \xrightarrow{\sim}\Omega_{\mathbb{C}}(M)^{q,p}\] \[*:\Omega_{\mathbb{C}}(M)^{p,q} \xrightarrow{\sim}\Omega_{\mathbb{C}}(M)^{n-q,n-p}.\]
Letting \(\omega=\langle J\cdot,\cdot\rangle\), there is a classical \(sl(2)\) representation on the bundle \(\Omega(M)\) generated by the operators \(\big{(}L,\Lambda,H\big{)}\) where \(L=\omega\wedge\cdot\), \(\Lambda=L^{*}\), and \(H=[\Lambda,L]\). In the case the form \(\omega\) is \(d\) closed, there are local commutation relations between the components of the exterior derivative and the operators \(L,\Lambda,H\), called the almost Kahler identities, which over compact manifolds translate via Hodge theory into an \(sl(2)\) representation by these classical operators on the space of certain harmonic forms. For \(T\) an elliptic differential operator on forms or sections of the Clifford bundle we denote by \(\boldsymbol{\mathcal{H}}_{T}=\ker(\Delta_{T})\) the space of \(T\) harmonic forms.
Cirici and Wilson show in [15] that over a compact almost Kahler manifold \(M\) the operators \((L,\Lambda,H)\) define a finite dimensional \(sl(2)\) representation on
\[\bigoplus_{p,q\geq 0}\boldsymbol{\mathcal{H}}_{\partial}^{p,q}\cap\boldsymbol{ \mathcal{H}}_{\overline{\mu}}^{p,q}=\bigoplus_{p,q\geq 0}\boldsymbol{ \mathcal{H}}_{d}^{p,q}\]
where \(\boldsymbol{\mathcal{H}}_{T}^{p,q}\) are the \(T\) harmonic forms in a particular bidegree \((p,q)\).
Tardini and Tomassini in [16] consider the operator \(\delta=\partial+\overline{\mu}\) and showed the corresponding \(\mathbb{Z}\) graded result that \((L,\Lambda,H)\) define an \(sl(2)\) representation on
\[\boldsymbol{\mathcal{H}}_{\delta}=\bigoplus_{k}\boldsymbol{\mathcal{H}}_{ \delta}^{k}\]
where \(\boldsymbol{\mathcal{H}}_{\delta}^{k}\subset\boldsymbol{\mathcal{H}}_{d}^{k}\) are the harmonic forms in degree \(k\). Furthermore, in a particular bidegree, they show
\[\boldsymbol{\mathcal{H}}_{\partial}^{p,q}\cap\boldsymbol{\mathcal{H}}_{ \overline{\mu}}^{p,q}=\boldsymbol{\mathcal{H}}_{d}^{p,q}=\boldsymbol{ \mathcal{H}}_{\delta}^{p,q}.\]
Following Michelsohn [M80] we introduce a bidegree decomposition of the complex Clifford bundle \(\mathbb{C}l(M)\stackrel{{\text{def}}}{{=}}Cl(M)\otimes\mathbb{C}\)
\[\mathbb{C}l(M)=\bigoplus_{r,s}\mathbb{C}l^{r,s}(M)\]
and an \(sl(2)\) representation on \(\mathbb{C}l(M)\) generated by operators \(\mathcal{L},\overline{\mathcal{L}},\mathcal{H}\). Furthermore there is an operator \(\ddagger\) called the _transpose map_ inducing complex linear isomorphisms
\[\ddagger:\mathbb{C}l^{r,s}(M)\xrightarrow{\sim}\mathbb{C}l^{r,-s}(M).\]
Generalizing Michelsohn's results in [M80], we introduce two elliptic differential operators \(\mathfrak{D},\mathfrak{B}\) on sections of \(\mathbb{C}l(M)\), and set \(\mathfrak{B}^{\ddagger}=\ddagger\mathfrak{B}\ddagger\). Additionally we define an operator \(\mathfrak{e}=\partial-i\rho_{\partial}\) on \(\Omega_{\mathbb{C}}(M)\) where \(\rho_{\partial}\) is a zero order operator vanishing if and only if \(\partial\omega=\overline{\partial}\omega=0\). We prove local commutation relations between these operators and the generators of the respective \(sl(2)\) representations. We show
**Proposition** (Proposition 8.4).: _Let \(M\) be a compact almost Hermitian manifold. Complex conjugation \(c\) on \(\mathbb{C}l(M)\) induces complex anti-linear isomorphisms_
\[c:\boldsymbol{\mathcal{H}}_{\mathfrak{B}}^{r,s}\cap\boldsymbol{\mathcal{H}}_{\mathfrak{B}^{\ddagger}}^{r,s}\rightarrow\boldsymbol{\mathcal{H}}_{\mathfrak{B}}^{-r,-s}\cap\boldsymbol{\mathcal{H}}_{\mathfrak{B}^{\ddagger}}^{-r,-s},\] \[c:\boldsymbol{\mathcal{H}}_{\mathfrak{D}}^{r,s}\longrightarrow\boldsymbol{\mathcal{H}}_{\mathfrak{D}}^{-r,-s}.\]
**Proposition** (Proposition 8.5).: _Let \(M\) be a compact almost Hermitian manifold. The transpose map \(\ddagger\) induces complex linear isomorphisms_
\[\begin{split}&\ddagger:\boldsymbol{\mathcal{H}}_{\mathfrak{B}}^{r,s}\cap\boldsymbol{\mathcal{H}}_{\mathfrak{B}^{\ddagger}}^{r,s}\to\boldsymbol{\mathcal{H}}_{\mathfrak{B}}^{r,-s}\cap\boldsymbol{\mathcal{H}}_{\mathfrak{B}^{\ddagger}}^{r,-s},\\ &\ddagger:\boldsymbol{\mathcal{H}}_{\mathfrak{D}}^{r,s}\longrightarrow\boldsymbol{\mathcal{H}}_{\mathfrak{D}}^{r,-s}.\end{split}\]
**Theorem** (Theorem 8.6).: _Let \(M\) be a compact almost Hermitian manifold. The Lie algebra generated by \(\big{(}\mathcal{H},\mathcal{L},\overline{\mathcal{L}}\big{)}\) on \(\Gamma\mathbb{C}l(M)\) defines a finite dimensional \(sl(2)\) representation on the space \(\boldsymbol{\mathcal{H}}_{\mathfrak{B}}\cap\boldsymbol{\mathcal{H}}_{\mathfrak{B}^{\ddagger}}\)._
**Theorem** (Theorem 8.7).: _Let \(M\) be a compact almost Hermitian manifold. Through the isomorphism \(\Gamma\mathbb{C}l(M)\cong\Omega_{\mathbb{C}}(M)\) we have_
\[\boldsymbol{\mathcal{H}}_{\mathfrak{B}}\cap\boldsymbol{\mathcal{H}}_{\mathfrak{B}^{\ddagger}}\cong\boldsymbol{\mathcal{H}}_{\mathfrak{e}}\cap\boldsymbol{\mathcal{H}}_{\overline{\mathfrak{e}}}.\]
**Theorem** (Theorem 8.8).: _Let \(M\) be a compact almost Kahler manifold. The Lie algebra generated by \(\big{(}\mathcal{H},\mathcal{L},\overline{\mathcal{L}}\big{)}\) on \(\Gamma\mathbb{C}l(M)\) defines a finite dimensional \(sl(2)\) representation on the space \(\boldsymbol{\mathcal{H}}_{\mathfrak{D}}\)._
**Theorem** (Theorem 8.9).: _Let \(M\) be a compact almost Kahler manifold. Through the isomorphism \(\Gamma\mathbb{C}l(M)\cong\Omega_{\mathbb{C}}(M)\) we have_
\[\boldsymbol{\mathcal{H}}_{\mathfrak{D}}\cong\boldsymbol{\mathcal{H}}_{ \delta}.\]
**Corollary** (Corollary 8.10).: _On any compact almost Hermitian manifold \(M\) there is a map \(g\), called the Hodge automorphism, inducing an isomorphism_
\[\boldsymbol{\mathcal{H}}_{\mathfrak{B}}^{q-p,n-p-q}\cap\boldsymbol{\mathcal{H}}_{\mathfrak{B}^{\ddagger}}^{q-p,n-p-q}\stackrel{{ g}}{{\cong}}\boldsymbol{\mathcal{H}}_{\mathfrak{e}}^{p,q}\cap\boldsymbol{\mathcal{H}}_{\overline{\mathfrak{e}}}^{p,q}.\]
_Moreover, if \(M\) is almost Kahler we have that_
\[\boldsymbol{\mathcal{H}}_{\mathfrak{D}}^{q-p,n-p-q}\stackrel{{ g}}{{\cong}}\boldsymbol{\mathcal{H}}_{\delta}^{p,q}.\]
**Layout.** In Section 2 we gather the necessary algebraic material on Clifford algebras, following and expanding on [14]. Section 3 elaborates on the multiplicative interplay between Dirac operators defined using Hermitian connections and Michelsohn's \(sl(2,\mathbb{C})\) operators. In Sections 4 and 5 we discuss canonical Hermitian connections a la Gauduchon and establish some results involving Dirac operators defined by these connections. Sections 7 and 8 prove the theorems mentioned above and transfer the results to operators on forms. The appendix provides an exposition of some fundamental results in almost Hermitian geometry, following mostly [1].
### Acknowledgements
This work constituted the author's PhD thesis at the City University of New York under the supervision of Luis Fernandez, whom he thanks for his patient guidance.
## 2. The Clifford Algebra
Let \(V\) be a finite dimensional inner product space. We define the _Clifford Algebra of \(V\)_, denoted by \(Cl(V)\), to be the vector space \(\Lambda(V)\), with multiplication given by
\[v\cdot\varphi=v\wedge\varphi-v\lrcorner\ \varphi=E_{v}(\varphi)-I_{v}(\varphi)\]
for any \(v\in V\) and \(\varphi\in Cl(V)\), where \(E_{v}^{*}=I_{v}\) is the adjoint of left exterior multiplication by \(v\).
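From this definition the fundamental Clifford relation follows at once: for \(v,w\in V\), using \(v\wedge w=-w\wedge v\) and \(I_{v}(w)=\langle v,w\rangle\) on \(1\)-vectors, we get
\[v\cdot w+w\cdot v=-2\langle v,w\rangle,\qquad\text{in particular}\qquad v\cdot v=-|v|^{2}.\]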
Let \(V^{\vee}\) be the dual space to \(V\). There is an _algebra isomorphism_
\[\Lambda(V)\cong\Lambda(V^{\vee})\]
while, in general, the most one can hope for is a _vector space isomorphism_
\[Cl(V)\cong\Lambda(V^{\vee}).\]
### Two Useful Involutions
The _antipodal map_\(\alpha:V\to V\) given by \(\alpha(v)=-v\) extends to an algebra automorphism of \(Cl(V)\) by \(\alpha(v_{1}\cdots v_{p})=\alpha(v_{1})\cdots\alpha(v_{p})\). This is made precise by noting that \(\alpha\) is an isometry of \(V\), and isometries of \(V\) are in one-to-one correspondence with algebra automorphisms of \(Cl(V)\) preserving \(V\).
The _transpose map_ is the algebra anti-automorphism \(\ddagger\) of \(Cl(V)\) reversing the order of multiplication, so that \(\ddagger(v_{1}\cdots v_{p})=v_{p}\cdots v_{1}\) and \(\ddagger(v)=v\) for \(v\in V\). It is worth noting that the Hodge star operator \(*\) on forms (sections of \(\Lambda(V)\)) takes the shape \(*(\varphi)=\ddagger\alpha(\varphi)\cdot\mathrm{vol}\) where \(\mathrm{vol}=v_{1}\cdots v_{\mathrm{dim}(V)}\).
### Almost Complex Structures
An _almost complex structure_\(J\) on a real vector space \(V\) is an operator on \(V\) so that \(J^{2}=-\mathrm{Id}\). Such an operator bestows \(V\) with a scalar multiplication making it a complex vector space, and conversely every complex vector space has a \(J\) given by scalar multiplication by \(i=\sqrt{-1}\). There are two useful ways to extend an orthogonal \(J\) as an endomorphism of \(Cl(V)\). One way is as an _algebra automorphism_
\[J_{\mathrm{alg}}(v_{1}\cdots v_{p})=J(v_{1})\cdots J(v_{p})\]
the other is as a _derivation of the algebra_
\[J_{\mathrm{der}}(v_{1}\cdots v_{p})=\sum_{j}v_{1}\cdots Jv_{j}\cdots v_{p}.\]
These two extensions are related as follows: The family of orthogonal transformations of \(V\) given by \(J_{t}=\cos(t)\mathrm{Id}+\sin(t)J\) induce a family of algebra automorphisms \(J_{\mathrm{alg}}(t)\) of \(Cl(V)\). Differentiating \(J_{\mathrm{alg}}(t)\) at the identity one obtains \(J_{\mathrm{der}}\). Let \(e_{1},\ldots,e_{n},Je_{1},\ldots,Je_{n}\) be an orthonormal basis. The element \(\omega\in\Lambda^{2}(V^{\vee})\) defined by
\[\omega(v,w)=\langle Jv,w\rangle\]
can be viewed as an element of \(Cl(V)\) by writing
\[\omega=\sum_{j}e_{j}\cdot Je_{j}\]
sometimes referred to as the _fundamental 2-form_.
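We note in passing that since \(e_{j}\perp Je_{j}\), the interior multiplication term drops out of the product, \(e_{j}\cdot Je_{j}=e_{j}\wedge Je_{j}-I_{e_{j}}(Je_{j})=e_{j}\wedge Je_{j}\), so under the vector space isomorphism \(Cl(V)\cong\Lambda(V)\) this element corresponds to the familiar expression \(\sum_{j}e_{j}\wedge Je_{j}\) of the fundamental \(2\)-form.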
### The Complex Clifford Algebra
We recall the usual orthogonal direct sum decomposition of the complexification of \(V\)
\[V\otimes\mathbb{C}=V^{1,0}\oplus V^{0,1}\]
where \(V^{1,0}\) is the \(i\)-eigenspace of \(J\) complex linearly extended to \(V\otimes\mathbb{C}\) and \(V^{0,1}=\overline{V^{1,0}}\). We define
\[\mathbb{C}l(V)=Cl(V)\otimes\mathbb{C}.\]
When \(V\) has an orthogonal almost complex structure \(J\), a \(J\)-adapted orthonormal basis \(e_{1},\ldots,e_{n},Je_{1},\ldots,Je_{n}\) of \(V\) gives rise to an orthogonal basis of \(\mathbb{C}l(V)\)
\[\epsilon_{1},\ldots,\epsilon_{n},\overline{\epsilon}_{1},\ldots,\overline{ \epsilon}_{n}\]
where \(\epsilon_{j}=\frac{1}{2}(e_{j}-iJe_{j})\) and \(\overline{\epsilon}_{j}=\frac{1}{2}(e_{j}+iJe_{j})\) for \(j=1,\ldots,n\).
All of these complex vectors anti-commute in \(\mathbb{C}l(V)\) except for pairs of the form \(\epsilon_{k},\overline{\epsilon}_{k}\) where one has the remaining relation
\[\epsilon_{k}\overline{\epsilon}_{k}+\overline{\epsilon}_{k}\epsilon_{k}=-1\]
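These relations can be checked directly from the polarized Clifford relation \(v\cdot w+w\cdot v=-2\langle v,w\rangle\), complex bilinearly extended: one computes \(\langle\epsilon_{j},\epsilon_{k}\rangle=\langle\overline{\epsilon}_{j},\overline{\epsilon}_{k}\rangle=0\) and \(\langle\epsilon_{j},\overline{\epsilon}_{k}\rangle=\tfrac{1}{2}\delta_{jk}\), so that
\[\epsilon_{k}\overline{\epsilon}_{k}+\overline{\epsilon}_{k}\epsilon_{k}=-2\langle\epsilon_{k},\overline{\epsilon}_{k}\rangle=-1,\]
while all other pairs of basis vectors anticommute.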
### Michelsohn's Algebraic Results
Michelsohn defined the operators \(\mathcal{L}\) and \(\overline{\mathcal{L}}\) for any \(\varphi\in\mathbb{C}l(V)\) by
\[\mathcal{L}(\varphi)=-\sum_{k}\epsilon_{k}\cdot\varphi\cdot\overline{\epsilon }_{k}\qquad\quad\text{and}\qquad\quad\overline{\mathcal{L}}(\varphi)=-\sum_{k }\overline{\epsilon}_{k}\cdot\varphi\cdot\epsilon_{k}.\]
and defined their commutator
\[\mathcal{H}=[\mathcal{L},\overline{\mathcal{L}}].\]
**Theorem 2.5**.: _[_M80_]_ _The operators \(\mathcal{L},\overline{\mathcal{L}}\), and \(\mathcal{H}\) satisfy the relations_
\[[\mathcal{L},\overline{\mathcal{L}}]=\mathcal{H},\ \ [\mathcal{H},\mathcal{L}]=2 \mathcal{L},\ \ \text{and}\ [\mathcal{H},\overline{\mathcal{L}}]=-2\overline{\mathcal{L}}\]
_In particular the subalgebra of \(\mathrm{End}(\mathbb{C}l(V))\) generated by the operators \(\mathcal{L},\overline{\mathcal{L}},\mathcal{H}\) is an \(sl(2,\mathbb{C})\) representation on \(\mathbb{C}l(V)\)._
_Letting \(\omega_{0}=\frac{1}{2i}\omega\) and \(\mathcal{J}=-iJ_{\mathrm{der}}\), Michelsohn showed that for any \(\varphi\in\mathbb{C}l(V)\)_
\[\mathcal{H}(\varphi) =\omega_{0}\cdot\varphi+\varphi\cdot\omega_{0}.\] \[\mathcal{J}(\varphi) =\omega_{0}\cdot\varphi-\varphi\cdot\omega_{0}\]
_in particular \(\mathcal{J}\) and \(\mathcal{H}\) commute. Furthermore since_
\[\mathcal{J}\mathcal{L}(\varphi)=-\sum_{j}\mathcal{J}(\epsilon_{j})\cdot \varphi\cdot\overline{\epsilon}_{j}+\mathcal{L}\mathcal{J}(\varphi)-\sum_{j} \epsilon_{j}\cdot\varphi\cdot\mathcal{J}(\overline{\epsilon}_{j})=\mathcal{L }\mathcal{J}(\varphi)\]
_we have that \(\mathcal{J}\) commutes with all of the generators of the \(sl(2)\) representation on \(\mathbb{C}l(V)\)._
_We define_
\[\mathbb{C}l^{r,s}(V)=\{\varphi\in\mathbb{C}l(V)\ |\ \mathcal{J}(\varphi)=r \varphi\text{ and }\mathcal{H}(\varphi)=s\varphi\}\]
_and there is a bigrading_
\[\mathbb{C}l(V)=\bigoplus_{r,s=-n}^{n}\mathbb{C}l^{r,s}(V).\]
_We state some facts regarding \(\mathbb{C}l^{r,s}(V)\), the last of which we state as a Theorem (all drawn from Michelsohn's article [M80])_
* \(\mathbb{C}l^{r,s}(V)=\{0\}\) _if_ \(r,s>n\) _or if_ \(r,s<-n\)__
* \(\mathbb{C}l^{r,s}(V)=\{0\}\) _for_ \(r+s\) _not congruent to_ \(n=\frac{1}{2}\dim(V)\) _mod_ \(2\)_._
* \(J\) _on_ \(V\) _induces_ \(-J^{\vee}\) _on_ \(V^{\vee}\)_. Through_ \(Cl(V)\cong\Lambda(V^{\vee})\) _we have_ \(J_{\mathrm{der}}=-(J^{\vee})_{\mathrm{der}}\)__
**Theorem 2.6**.: _[_M80_]_ _Through the isomorphism \(\mathbb{C}l(V)\cong\Lambda_{\mathbb{C}}(V^{\vee})\) induced by complexification we have_
\[\bigoplus_{s=-n}^{n}\mathbb{C}l^{r,s}(V)\cong\bigoplus_{r=q-p}\Lambda_{ \mathbb{C}}^{p,q}(V^{\vee}).\]
_Recall the operators \(L,\Lambda,H\), defined on \(\Lambda_{\mathbb{C}}(V^{\vee})\) by_
\[L(\varphi)=\omega\wedge\varphi,\ \ \ \ \Lambda=L^{*},\ \ \ \ \text{and}\ \ \ \ H(\varphi)=\sum_{p=0}^{2n}(n-p)\Pi_{p}(\varphi).\]
_Here \(H=[\Lambda,L]\) and \(\Pi_{p}:\Lambda_{\mathbb{C}}(V^{\vee})\to\Lambda_{\mathbb{C}}^{p}(V^{\vee})\) is the usual projection._
**Theorem 2.7**.: _[_M80_]_ _Through the isomorphism \(\mathbb{C}l(V)\cong\Lambda_{\mathbb{C}}(V^{\vee})\) one has_
\[\mathcal{H}=i(\Lambda-L)\ \ \ \ \ \mathcal{L}+\overline{\mathcal{L}}=\alpha H\ \ \ \ \mathcal{L}-\overline{\mathcal{L}}=-i\alpha\big{(}\Lambda+L\big{)}\]
_where \(\alpha\) is the antipodal involution._
**Theorem 2.8**.: _[_M80_]_ _Let \(g=\exp(\frac{-\pi i}{4}H)\mathrm{exp}(\frac{\pi}{4}(\Lambda-L))\in SL(2, \mathbb{C})\). Then we have the following identities on \(\mathbb{C}l(V)\cong\Lambda_{\mathbb{C}}(V^{\vee})\)_
\[\mathcal{H}=gHg^{-1}\ \ \ \ \ \ \ \ \ \alpha\mathcal{L}=g\Lambda g^{-1}\ \ \ \ \ \ \text{and}\ \alpha\overline{\mathcal{L}}=gLg^{-1}\]
_The operator \(g\) she called the Hodge automorphism. By the previous Theorem we also write_
\[g=\exp\big{(}-\tfrac{\pi i}{4}\alpha(\mathcal{L}+\overline{\mathcal{L}})\big{)} \mathrm{exp}\big{(}-\tfrac{\pi i}{4}\mathcal{H}\big{)}.\]
**Theorem 2.9**.: _The Hodge automorphism induces an isomorphism_
\[g^{-1}:\mathbb{C}l^{q-p,n-p-q}(V)\xrightarrow{\sim}\Lambda^{p,q}(V^{\vee}).\]
Proof.: Suppose \(\varphi\in\mathbb{C}l^{r,s}(V)\subset\mathbb{C}l(V)\cong\Lambda_{\mathbb{C}}(V^{\vee})\) with \(n\equiv r+s\) mod \(2\). Then \(n-(r+s)=2p\) and \(n-(s-r)=2q\) for some integers \(p,q\), i.e. \(r=q-p\) and \(s=n-p-q\). We have
\[sg^{-1}\varphi =g^{-1}\mathcal{H}\varphi=g^{-1}gHg^{-1}\varphi=Hg^{-1}\varphi\] \[rg^{-1}\varphi =g^{-1}\mathcal{J}\varphi=\mathcal{J}g^{-1}\varphi=-\mathcal{J} ^{\vee}g^{-1}\varphi\]
The result follows by observing
\[\Lambda^{p,q}(V^{\vee})=\left\{\psi\in\Lambda_{\mathbb{C}}(V^{\vee})\ |\ \mathcal{J}^{\vee}(\psi)=-r\psi\text{ and }H(\psi)=s\psi\right\}.\qed\]
**Definition 2.10**.: Let \(M\) be a Riemannian manifold. The vector bundle \(Cl(M)\) is the associated bundle to the tangent bundle \(TM\) with fibers \(Cl(T_{x}(M))\) for every \(x\in M\). We define the _complex Clifford bundle_\(\mathbb{C}l(M)\) by
\[\mathbb{C}l(M)=Cl(M)\otimes\mathbb{C}.\]
If \(\nabla\) is an affine metric connection, \(\nabla\) extends as a derivation over \(\Gamma(Cl(M))\) and \(\Gamma\big{(}\Lambda(M))\).
### Some Vector Valued 2-forms
Take \(M\) to be an _almost Hermitian manifold_, i.e. \(M\) is Riemannian with an orthogonal \(J\). We will be interested in affine metric connections \(\nabla\) so that \(\nabla(J)=0\). Such connections are somewhat confusingly said to be _Hermitian_.
The _torsion_ of an affine connection \(T=T_{\nabla}\) is given by
\[T(X,Y)=\nabla_{X}(Y)-\nabla_{Y}(X)-[X,Y]\]
and the _Nijenhuis tensor_, \(N\), is defined as
\[N(X,Y)=\tfrac{1}{4}([JX,JY]-J[JX,Y]-J[X,JY]-[X,Y]).\]
### The Riemannian Dirac Operator
**Definition 2.13**.: Let \(v_{1},\dots,v_{2n}\) be a local orthonormal frame of \(T(M)\) and let \(\widetilde{\nabla}\) be the Levi-Civita connection on \(\Gamma(TM)\). We define the _Riemannian Dirac operator_ on any \(\varphi\in\Gamma(Cl(M))\) by
\[\widetilde{D}(\varphi)=\sum_{j}v_{j}\cdot\widetilde{\nabla}_{v_{j}}\varphi.\]
Let \(d\) be the exterior derivative on \(\Omega(M)\).
**Theorem 2.14**.: Through the isomorphism \(\Gamma(Cl(M))\cong\Omega(M)\) we have
\[\widetilde{D}=d+d^{*}.\]
Let \(\pi_{p,q}:\Omega_{\mathbb{C}}(M)\to\Omega_{\mathbb{C}}^{p,q}(M)\) be the natural projection onto bidegree. Complex linearly extending the exterior derivative \(d\) to \(\Omega_{\mathbb{C}}(M)\) we have a decomposition
\[d=\partial+\overline{\mu}+\overline{\partial}+\mu\]
where for \(\varphi\in\Omega^{p,q}(M)\)
\[\partial\varphi=\pi_{p+1,q}\circ d(\varphi),\ \ \ \ \ \overline{\partial}\varphi=\pi_{p,q+1}\circ d( \varphi),\ \ \ \ \mu\varphi=\pi_{p+2,q-1}\circ d(\varphi),\ \ \ \ \ \overline{\mu}\varphi=\pi_{p-1,q+2}\circ d(\varphi).\]
Let \(\delta=\partial+\overline{\mu}\) and \(\overline{\delta}=\overline{\partial}+\mu\) so that on the complexification \(\Gamma(\mathbb{C}l(M))\cong\Omega_{\mathbb{C}}(M)\) we have
\[\widetilde{D}=\delta+\overline{\delta}+\delta^{*}+\overline{\delta}^{*}.\]
**Proposition 2.15**.: _[_11_]_ _For an almost complex manifold \(M\), the following relations among the operators \(\partial,\overline{\partial},\mu,\overline{\mu}\) along with their adjoint identities obtain:_
\[\mu^{2} =0\] \[\mu\partial+\partial\mu =0\] \[\mu\overline{\partial}+\overline{\partial}\mu+\partial^{2} =0\] \[\mu\overline{\mu}+\overline{\mu}\mu+\partial\overline{\partial}+ \overline{\partial}\partial =0\] \[\overline{\mu}\partial+\partial\overline{\mu}+\overline{\partial }^{2} =0\] \[\overline{\mu}\overline{\partial}+\overline{\partial}\overline{\mu} =0\] \[\overline{\mu}^{2} =0.\]
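These identities are precisely the bidegree components of \(d^{2}=0\). For example, on \(\Omega^{p,q}_{\mathbb{C}}(M)\) the component of \(d^{2}=(\partial+\overline{\mu}+\overline{\partial}+\mu)^{2}\) landing in bidegree \((p+2,q)\) is
\[\pi_{p+2,q}\circ d^{2}=\mu\overline{\partial}+\overline{\partial}\mu+\partial^{2}=0,\]
which is the third relation above; the remaining relations are obtained in the same way from the other bidegrees.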
## 3. Almost Hermitian Dirac Identities
**Definition 3.1**.: Let \(\nabla\) be a Hermitian connection. We define a _Hermitian Dirac operator_ on \(\Gamma(Cl(M))\) by
\[D(\varphi)=\sum_{j}v_{j}\cdot\nabla_{v_{j}}\varphi.\]
For an operator \(T\) on \(\Gamma(Cl(M))\cong\Omega(M)\), we denote
\[T_{c}\stackrel{{\mathrm{def}}}{{=}}J_{\mathrm{alg}}^{-1}TJ_{\mathrm{alg}}\quad\text{and}\quad T^{\alpha}\stackrel{{\mathrm{def}}}{{=}}\ddagger T\ddagger.\]
On \(\mathbb{C}l(M)\)
\[\mathcal{L}_{c}=\overline{\mathcal{L}},\quad\mathcal{H}_{c}=\mathcal{H},\quad \mathcal{J}_{c}=\mathcal{J},\quad\mathcal{J}^{\alpha}=\mathcal{J},\quad \mathcal{H}^{\alpha}=-\mathcal{H},\quad\mathcal{L}^{\alpha}=\overline{ \mathcal{L}}.\]
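The last three identities can be verified directly. Since \(\ddagger\) reverses products and \(\ddagger(e_{j}\cdot Je_{j})=Je_{j}\cdot e_{j}=-e_{j}\cdot Je_{j}\), we have \(\ddagger\omega_{0}=-\omega_{0}\), and hence for any \(\varphi\)
\[\ddagger\mathcal{H}\ddagger(\varphi)=\ddagger\big{(}\omega_{0}\cdot\ddagger\varphi+\ddagger\varphi\cdot\omega_{0}\big{)}=\varphi\cdot\ddagger\omega_{0}+\ddagger\omega_{0}\cdot\varphi=-\mathcal{H}(\varphi),\]
that is \(\mathcal{H}^{\alpha}=-\mathcal{H}\). The identities \(\mathcal{J}^{\alpha}=\mathcal{J}\) and \(\mathcal{L}^{\alpha}=\overline{\mathcal{L}}\) follow in the same way from \(\mathcal{J}(\varphi)=\omega_{0}\cdot\varphi-\varphi\cdot\omega_{0}\) and the definitions of \(\mathcal{L},\overline{\mathcal{L}}\).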
**Theorem 3.2**.: Let \(D\) be a Hermitian Dirac operator. Then on any almost Hermitian manifold \(M\) we have the following **Almost Hermitian Dirac identities** on \(\Gamma(\mathbb{C}l(M))\)
\[[D,\mathcal{H}]=[D,\mathcal{J}]=-iD_{c}\qquad\qquad[D_{c},\mathcal{H}]=[D_{c},\mathcal{J}]=iD\] \[[D^{\alpha},\mathcal{H}]=-[D^{\alpha},\mathcal{J}]=iD_{c}^{\alpha}\qquad\qquad[D_{c}^{\alpha},\mathcal{H}]=-[D_{c}^{\alpha},\mathcal{J}]=-iD^{\alpha}.\]
Proof.: For any \(\varphi\in\Gamma(\mathbb{C}l(M))\), we have
\[[D,\mathcal{H}]\varphi=D(\omega_{0}\cdot\varphi)+D(\varphi\cdot\omega_{0})- \omega_{0}\cdot D(\varphi)-D(\varphi)\cdot\omega_{0}.\]
As \(\nabla\) is metric and \(J\) parallel, we have \(\nabla(\omega_{0})=0\). Using that \(\nabla\) is a derivation, and that
\[\omega_{0}\cdot v_{j}-v_{j}\cdot\omega_{0}=-iJv_{j}\]
we obtain
\[D(\omega_{0}\cdot\varphi)=\sum_{j}v_{j}\cdot\omega_{0}\cdot\nabla_{v_{j}}(\varphi)\qquad\text{and}\qquad D(\varphi\cdot\omega_{0})=\sum_{j}v_{j}\cdot\nabla_{v_{j}}(\varphi)\cdot\omega_{0}=D(\varphi)\cdot\omega_{0},\]
so that
\[[D,\mathcal{H}]\varphi=\sum_{j}\big{(}v_{j}\cdot\omega_{0}-\omega_{0}\cdot v_{j}\big{)}\cdot\nabla_{v_{j}}(\varphi)=i\sum_{j}Jv_{j}\cdot\nabla_{v_{j}}(\varphi)=-iD_{c}(\varphi),\]
which gives the first identity, since \(D_{c}=J_{\mathrm{alg}}^{-1}DJ_{\mathrm{alg}}=-\sum_{j}Jv_{j}\cdot\nabla_{v_{j}}\). The remaining identities are obtained by conjugating with the operators \(\ddagger\) and \(J_{\mathrm{alg}}\).
**Remark 3.3**.: Recall \(J_{\mathrm{der}}\stackrel{{\mathrm{def}}}{{=}}i\mathcal{J}\) is the usual extension of \(J\) to a derivation on the bundles \(\mathbb{C}l(M)\cong\Lambda_{\mathbb{C}}(M)\). We observe that \([d,J_{\mathrm{der}}]=d_{c}\) if and only if the manifold \(M\) is complex.
One can check this by noting that
\[[\mu,J_{\mathrm{der}}]=-3i\mu\quad\text{while}\quad J_{\mathrm{alg}}^{-1}\mu J _{\mathrm{alg}}=\quad i\mu.\]
Similarly
\[[\overline{\mu},J_{\mathrm{der}}]= 3i\overline{\mu}\quad\text{while}\quad J_{\mathrm{alg}}^{-1} \overline{\mu}J_{\mathrm{alg}}=-i\overline{\mu}.\]
The expressions
\[[\partial,J_{\rm der}]=-i\partial=J_{\rm alg}^{-1}\partial J_{\rm alg}\quad\text{ and}\quad[\overline{\partial},J_{\rm der}]=i\overline{\partial}=J_{\rm alg}^{-1} \overline{\partial}J_{\rm alg}\]
agree in general.
By contrast, we have observed above that \([D,J_{\rm der}]=D_{c}\) for any almost Hermitian manifold.
## 4. The Canonical Hermitian Connections
Let \(M\) be an almost Hermitian manifold of dimension \(2n\). We denote by \(\Omega^{2}(TM)\) the space of vector valued \(2\) forms on \(M\). That is
\[\Omega^{2}(TM)\stackrel{{\rm def}}{{=}}\Gamma\big{(}{\rm Hom}( \Lambda^{2}(M),TM)\big{)}.\]
This space has many important elements, including the _torsion_ of a connection and the Nijenhuis tensor, \(N\). It is a well known theorem of Newlander and Nirenberg [18] that an almost complex manifold is complex if and only if \(N=0\). We observe the following less well known fact, cf. [10],
**Lemma 4.1**.: The Nijenhuis tensor, \(N\), complex bilinearly extended, is the dual of \(\mu+\overline{\mu}\) on \(1\)-forms. That is \(\mu+\overline{\mu}=N^{\vee}\).
The space \(\Omega^{2}(TM)\) is often identified through the metric isomorphism \(TM\cong TM^{\vee}\) with the space of \(3\) tensors on \(M\) which are skew-symmetric in the last two entries. More explicitly if \(\phi\in\Omega^{2}(TM)\) then for any \(Y,Z\in TM\) we have \(\phi(Y,Z)\in TM\). Through the metric isomorphism \(TM\cong TM^{\vee}\) we have \(\phi(Y,Z)\cong\langle\cdot,\phi(Y,Z)\rangle\). We identify
\[\phi(X,Y,Z)=\langle X,\phi(Y,Z)\rangle.\]
In particular, this identification through the metric allows us to view other important objects as elements of \(\Omega^{2}(TM)\). One special element of interest in almost Hermitian geometry is the \(3\)-form \(d\omega(X,Y,Z)\) which vanishes if and only if \(M\) is _almost Kahler_. Another is \((\widetilde{\nabla}\omega)(X,Y,Z)=\widetilde{\nabla}_{X}\omega(Y,Z)\) which can be rewritten in the following form
\[(\widetilde{\nabla}_{X}\omega)(Y,Z)=\left\langle(\widetilde{\nabla}_{X}J)Y,Z \right\rangle.\]
There are two distinguished projections of \(\Omega^{2}(TM)\) onto differential forms of degree \(1\) and \(3\). There is the map \(\mathcal{P}:\Omega^{2}(TM)\to\Omega^{3}(M)\) defined by
\[\mathcal{P}(\phi)(X,Y,Z)=\tfrac{1}{3}\big{(}\phi(X,Y,Z)+\phi(Y,Z,X)+\phi(Z,X,Y )\big{)}\]
and the map \(r:\Omega^{2}(TM)\to\Omega^{1}(M)\) defined by
\[r(\phi)(X)=\sum_{j}\phi(v_{j},v_{j},X)\]
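For instance, a genuine \(3\)-form \(\varphi\in\Omega^{3}(M)\), viewed in \(\Omega^{2}(TM)\) through the metric, satisfies
\[\mathcal{P}(\varphi)=\varphi\qquad\text{and}\qquad r(\varphi)=0,\]
the first by total skew-symmetry and the second since \(\varphi(v_{j},v_{j},X)=0\) for each \(j\).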
In fact, [12] showed that the space \(\Omega^{2}(TM)\) canonically decomposes as
\[\Omega^{2}(TM)\cong\Omega^{1}(M)\oplus\Omega^{3}(M)\oplus\big{(}\ker(r)\cap \ker(\mathcal{P})\big{)}.\]
(See Proposition 8.21 for a proof.) We write the component of \(\phi\in\Omega^{2}(TM)\) lying in both the kernel of \(r\) and the kernel of \(\mathcal{P}\) as \(\phi_{0}\).
The space of \(3\)-forms \(\Omega^{3}(M)\) is decomposed into
\[E^{+}={\rm Re}\big{(}\Omega^{2,1}(M)\oplus\Omega^{1,2}(M)\big{)},\]
\[E^{-}={\rm Re}\big{(}\Omega^{3,0}(M)\oplus\Omega^{0,3}(M)\big{)}\]
so that \(\Omega^{3}(M)=E^{-}\oplus E^{+}.\) For a \(3\)-form \(\varphi\in\Omega^{3}(M)\) we set \(\varphi^{+}\) and \(\varphi^{-}\) to be its components in \(E^{+}\) and \(E^{-}\) respectively.
**Lemma 4.2**.: Let \(N\) be the Nijenhuis tensor. Then \(r(N)=0\) and so \(N=\mathcal{P}(N)+N_{0}\).
Proof.: Let \(v_{1},\dots,v_{2n}=e_{1},\dots,e_{n},Je_{1},\dots,Je_{n}\) be a \(J\) adapted local orthonormal frame. Then for any \(X\in TM\) we have
\[r(N)(X)=\sum_{j=1}^{2n}N(v_{j},v_{j},X)=\sum_{j=1}^{2n}\langle v_{j},N(v_{j},X)\rangle\]
But
\[\sum_{j=1}^{2n}\langle v_{j},N(v_{j},X)\rangle=\sum_{j=1}^{n}\langle e_{j},N(e_{j},X)\rangle+\sum_{j=1}^{n}\langle Je_{j},N(Je_{j},X)\rangle\]
Using that \(N(Jv,w)=-JN(v,w)\) and that \(J\) is orthogonal we obtain
\[\sum_{j=1}^{n}\langle e_{j},N(e_{j},X)\rangle+\sum_{j=1}^{n}\langle Je_{j},N( Je_{j},X)\rangle=\sum_{j=1}^{n}\langle e_{j},N(e_{j},X)\rangle-\sum_{j=1}^{n} \langle e_{j},N(e_{j},X)\rangle=0.\qed\]
Perhaps the most important element of \(\Omega^{2}(TM)\), for our purposes, is the potential \(A^{\nabla}\) of an affine metric connection \(\nabla\), defined by
\[A^{\nabla}(X,Y,Z)=\big{\langle}\nabla_{X}(Y)-\widetilde{\nabla}_{X}(Y),Z\big{\rangle}.\]
We shall drop the superscript \(\nabla\) of the potential and torsion when the referent connection is clear.
\(A^{\nabla}\) is related to the torsion \(T^{\nabla}\) by the following identity (See Proposition 8.22)
\[A^{\nabla}+T^{\nabla}=3\mathcal{P}(A^{\nabla})=\tfrac{3}{2}\mathcal{P}(T^{ \nabla})\]
Gauduchon [10] defined an affine line of 'canonical' Hermitian connections on an almost Hermitian manifold. This is the set of Hermitian connections, \(\nabla^{t}\), uniquely defined by their torsion \(T^{t}=T^{\nabla^{t}}\) satisfying
\[T^{t}=N+\tfrac{3t-1}{4}d_{c}\omega^{+}-\tfrac{t+1}{4}\mathcal{M}(d_{c}\omega^{ +}). \tag{1}\]
Where \(\mathcal{M}\) is the operator on \(\Omega^{2}(TM)\) defined by
\[\mathcal{M}(\phi)(X,Y,Z)=\phi(X,JY,JZ).\]
For example, a natural choice of Hermitian connection is the _Chern connection_\(\nabla^{\mathrm{Ch}}\) characterized on a _Hermitian manifold_ by the projection onto the \((0,1)\) component agreeing with the Dolbeault Operator. It is obtained by setting \(t=1\).
In the case that \(M\) is a Kahler manifold, we have the well known identity \(\nabla^{t}=\widetilde{\nabla}\) for all \(t\), and in particular \(\nabla^{\mathrm{Ch}}=\widetilde{\nabla}\). More generally, when \(d\omega=0\) equation (1) reduces to \(T^{t}=N\), so all of the canonical Hermitian connections agree with one another, despite being in general distinct from the Levi-Civita connection.
Another natural choice is the _Bismut connection_\(\nabla^{\mathrm{Bm}}=\nabla^{-1}\) characterized on a Hermitian manifold by its torsion - viewed as a \(3\)-tensor - being totally skew symmetric. For all \(t\in\mathbb{R}\) one has
\[\nabla^{t}=\tfrac{1+t}{2}\nabla^{1}+\tfrac{1-t}{2}\nabla^{-1}.\]
We denote the potential of a canonical Hermitian connection by \(A^{t}\). By the above relation between the torsion and the potential we observe
\[A^{t}=-T^{t}+\tfrac{3}{2}\mathcal{P}T^{t}.\]
We then have
**Proposition 4.3**.: _[_10_]_ _Let \(M\) be an almost Hermitian manifold and let \(\nabla^{t}\) be a canonical Hermitian connection in the sense of Gauduchon. Then_
\[A^{t}=-N+\tfrac{3}{2}\mathcal{P}N+\tfrac{t-1}{4}d_{c}\omega^{+}+\tfrac{t+1}{4} \mathcal{M}(d_{c}\omega^{+}).\]
## 5. The Canonical Hermitian Dirac Operators
**Definition 5.1**.: Given a local orthonormal frame \(v_{1},\dots,v_{2n}\) of \(TM\) and a \(t\in\mathbb{R}\) we define the _canonical Hermitian Dirac operator_\(D_{t}\) by
\[D_{t}=\sum_{j}v_{j}\cdot\nabla_{v_{j}}^{t}.\]
For any \(v\in TM\) and any \(\varphi\in\Gamma(Cl(M))\) we define
\[a_{v}^{t}(\varphi)\stackrel{{\mathrm{def}}}{{=}}\nabla_{v}^{t}( \varphi)-\widetilde{\nabla}_{v}(\varphi).\]
Evidently \(a_{v}^{t}\) is a derivation, and for vector fields \(X,Y,Z\) we have that, by definition
\[A^{t}(X,Y,Z)=\left\langle a_{X}^{t}(Y),Z\right\rangle.\]
For the time being, we suppress the parameter \(t\), taking \(A=A^{t}\) and \(a=a^{t}\).
We define the operator \(D_{A}\) on \(\Gamma(Cl(M))\) by
\[D_{A}=\sum_{j}v_{j}\cdot a_{v_{j}}\]
so that \(D_{t}=\widetilde{D}+D_{A}\). The 1-form \(r(A)\) is given, for any \(v\in TM\), by
\[r(A)(v)=\sum_{j}A(v_{j},v_{j},v)=\sum_{j}\langle a_{v_{j}}(v_{j}),v\rangle\]
so that \(r(A)^{\sharp}=\sum_{j}a_{v_{j}}(v_{j})\). Lastly, for any vector \(v\in TM\) we define \(L_{v}\) on \(Cl(M)\) by
\[L_{v}(\varphi)=v\cdot\varphi\]
and we will write \(L_{r(A)}\) to denote \(L_{r(A)^{\sharp}}\).
### A Result Concerning Self-Adjointness of the Canonical Hermitian Dirac Operators
Since both \(\nabla^{t}\) and \(\widetilde{\nabla}\) are metric connections we have, for any \(v\in TM\), that the adjoint \(a_{v}^{*}=-a_{v}\). Furthermore for a unital vector \(v\) we have \(L_{v}\) is an isometry.
**Proposition 5.3**.: On any almost Hermitian manifold we have \(D_{A}^{*}=D_{A}+L_{r(A)}\).
Proof.: For any \(\varphi,\psi\in\Gamma(Cl(M))\) we have
\[\left\langle D_{A}(\varphi),\psi\right\rangle=\left\langle\sum_{j }v_{j}\cdot a_{v_{j}}(\varphi),\psi\right\rangle =-\sum_{j}\left\langle a_{v_{j}}\varphi,v_{j}\cdot\psi\right\rangle\] \[= \sum_{j}\left\langle\varphi,a_{v_{j}}(v_{j}\cdot\psi)\right\rangle\] \[= \sum_{j}\left\langle\varphi,a_{v_{j}}(v_{j})\cdot\psi\right\rangle +\left\langle\varphi,v_{j}\cdot a_{v_{j}}\psi\right\rangle\] \[= \left\langle\varphi,(D_{A}+L_{r(A)})(\psi)\right\rangle.\qed\]
We observe that \(r(A)=r\big{(}-N+\frac{3}{2}\mathcal{P}N+\frac{t-1}{4}d_{c}\omega^{+}+\frac{t+1}{4}\mathcal{M}(d_{c}\omega^{+})\big{)}\). Since \(r(N)=0\) by Lemma 4.2, and \(r\) vanishes on any 3-form, we obtain
\[r(A)=\tfrac{t+1}{4}r\big{(}\mathcal{M}(d_{c}\omega^{+})\big{)}. \tag{2}\]
**Definition 5.4**.: We define the _Lee form_ to be the 1-form \(\theta=\Lambda(d\omega)\).
Observe that, for bidegree reasons, \(\Lambda(\mu\omega)=\Lambda(\overline{\mu}\omega)=0\) on \(\Omega_{\mathbb{C}}(M)\), so that \(\theta=\Lambda(d\omega^{+})\). Furthermore, it is well known [10] that \(\theta=Jd^{*}\omega=\Lambda(d\omega)\).
**Lemma 5.5**.: \(r\big{(}\mathcal{M}(d_{c}\omega^{+})\big{)}=2\theta\)_._
Proof.: Let \(v_{1},\ldots,v_{2n}=e_{1},\ldots,e_{n},Je_{1},\ldots Je_{n}\) be a \(J\)-adapted local orthonormal frame. For any \(X\in TM\) we have
\[r\big{(}\mathcal{M}(d_{c}\omega^{+})\big{)}(X)=\sum_{j=1}^{2n}\mathcal{M}(d_{c}\omega^{+})(v_{j},v_{j},X)=\sum_{j=1}^{2n}(d_{c}\omega)^{+}(v_{j},Jv_{j},JX)\] \[=\sum_{j=1}^{n}(d_{c}\omega)^{+}(e_{j},Je_{j},JX)-\sum_{j=1}^{n}(d_{c}\omega)^{+}(Je_{j},e_{j},JX)\] \[=2\sum_{j=1}^{n}(d_{c}\omega)^{+}(e_{j},Je_{j},JX)=-2\sum_{j=1}^{n}(d\omega)^{+}(Je_{j},e_{j},X)\] \[=-2\big{(}\omega\lrcorner\ d\omega^{+}\big{)}(X)=2\Lambda(d\omega^{+})(X)=2\theta(X).\qed\]
Hence, by the identity (2) we have that
\[r(A)=\tfrac{t+1}{2}\theta.\]
Together with Proposition 5.3 we obtain
\[D_{t}^{*}-\widetilde{D}^{*}=D_{A}^{*}=D_{A}+L_{r(A)}=D_{t}-\widetilde{D}+ \tfrac{t+1}{2}L_{\theta}.\]
As the Riemannian Dirac operator \(\widetilde{D}\) is self-adjoint we conclude the
**Proposition 5.6**.: For any \(t\in\mathbb{R}\) the adjoint of the canonical Hermitian Dirac operator \(D_{t}\) is given by
\[D_{t}^{*}=D_{t}+\tfrac{t+1}{2}L_{\theta}.\]
Following [10] we recall a condition on the metric determined by requiring \(d^{*}\omega=0\). Such metrics are said to be 'balanced'. We note that the balanced condition obtains if and only if \(Jd^{*}\omega=\theta=0\). Thus Proposition 5.6 implies the following
**Corollary 5.7**.: For all \(t\neq-1\) we have \(D_{t}\) is self-adjoint if and only if the metric is balanced.
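We remark that every almost Kahler metric is balanced: if \(d\omega=0\) then \(\theta=\Lambda(d\omega)=0\), so in the almost Kahler case every \(D_{t}\) is self-adjoint.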
For \(t=-1\) the Dirac operator \(D_{-1}\) is self-adjoint for any almost Hermitian manifold, without any requirement on the metric. This directly corresponds to an observation made by Bismut [1] regarding Hermitian Dirac operators on the spinor bundle induced by the connection \(\nabla^{-1}\) being self-adjoint. Presently, the connection \(\nabla^{-1}\) is widely referred to as the _Bismut connection_. We define
\[B\stackrel{{\mathrm{def}}}{{=}}D_{-1}.\]
We also define the operator \(\mathfrak{d}_{t}\) on \(\Gamma\big{(}\mathbb{C}l(M)\big{)}\) by
\[\mathfrak{d}_{t}\stackrel{{\mathrm{def}}}{{=}}\tfrac{1}{2}\big{(} D_{t}+i(D_{t})_{c}\big{)}=2\sum_{j}\epsilon_{j}\cdot\nabla_{\overline{ \epsilon}_{j}}^{t} \tag{3}\]
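The second equality in (3) is a short computation with a \(J\)-adapted frame, which we sketch here. Since \(\nabla^{t}\) is Hermitian it commutes with \(J_{\mathrm{alg}}\), so \((D_{t})_{c}=\sum_{j}J^{-1}v_{j}\cdot\nabla^{t}_{v_{j}}=-\sum_{j}Jv_{j}\cdot\nabla^{t}_{v_{j}}\) (as in the proof of Theorem 3.2), and hence
\[\tfrac{1}{2}\big{(}D_{t}+i(D_{t})_{c}\big{)}=\tfrac{1}{2}\sum_{j}(v_{j}-iJv_{j})\cdot\nabla^{t}_{v_{j}}=\sum_{j}\epsilon_{j}\cdot\big{(}\nabla^{t}_{e_{j}}+i\nabla^{t}_{Je_{j}}\big{)}=2\sum_{j}\epsilon_{j}\cdot\nabla^{t}_{\overline{\epsilon}_{j}},\]
using \(e_{j}-iJe_{j}=2\epsilon_{j}\), \(Je_{j}-iJ(Je_{j})=2i\epsilon_{j}\) and \(\nabla^{t}_{\overline{\epsilon}_{j}}=\tfrac{1}{2}\big{(}\nabla^{t}_{e_{j}}+i\nabla^{t}_{Je_{j}}\big{)}\).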
We observe (cf. [11]) that the operator \(\mathfrak{d}_{t}\) is of Clifford bidegree \((1,1)\) as left Clifford multiplication by an element of \(T^{1,0}(M)\) is of Clifford bidegree \((1,1)\) and any Hermitian connection is both \(\mathcal{H}\) and \(\mathcal{J}\) parallel.
In her seminal paper [M80] Michelsohn showed that for a _Hermitian_ manifold \(M\) the operator \(\mathfrak{d}_{1}\) has adjoint \(\overline{\mathfrak{d}_{1}}\) if and only if the metric is balanced (Proposition 2.3 in that paper). We extend this result to the almost Hermitian setting and to all \(\mathfrak{d}_{t}\). By the above
\[\mathfrak{d}_{t}^{*}=\tfrac{1}{2}\left(D_{t}^{*}-i(D_{t})_{c}^{*}\right)= \overline{\mathfrak{d}_{t}}+\tfrac{t+1}{4}L_{\tfrac{\theta+iJ\theta}{2}}.\]
Since \(\tfrac{1}{2}(\theta+iJ\theta)\) is the projection of \(\theta\in T_{\mathbb{C}}(M)\) to \(T^{0,1}(M)\) we conclude
**Proposition 5.8**.: Let \(M\) be a compact almost Hermitian manifold. The operator \(\mathfrak{B}\stackrel{{\mathrm{def}}}{{=}}\mathfrak{d}_{-1}\) is conjugate self-adjoint on \(\Gamma(\mathbb{C}l(M))\). Moreover, for all \(t\neq-1\), \(\mathfrak{d}_{t}\) is conjugate self-adjoint if and only if the metric is balanced.
### Bochner Identities
Using only that \(\nabla^{t}\) is Hermitian, and the local expression (3) of \(\mathfrak{d}_{t}\) via a local frame of \(T_{\mathbb{C}}(M)\) we have
\[\tfrac{1}{4}\mathfrak{d}_{t}^{2}=\sum_{j,k}\epsilon_{j}\cdot\epsilon_{k}\cdot \big{(}\nabla^{t}_{\overline{\epsilon}_{j}}\nabla^{t}_{\overline{\epsilon}_{k }}-\nabla^{t}_{\nabla^{t}_{\overline{\epsilon}_{j}}\overline{\epsilon}_{k}} \big{)}\]
which we rewrite as
\[\tfrac{1}{4}\mathfrak{d}_{t}^{2}=\sum_{j<k}\epsilon_{j}\cdot\epsilon_{k}\cdot \big{(}\nabla^{t}_{\overline{\epsilon}_{j},\overline{\epsilon}_{k}}-\nabla^{t }_{\overline{\epsilon}_{k},\overline{\epsilon}_{j}}\big{)}\]
where
\[\nabla_{X,Y}\stackrel{{\text{def}}}{{=}}\nabla_{X}\nabla_{Y}- \nabla_{\nabla_{X}Y}\]
is the second covariant derivative with respect to an affine connection \(\nabla\). The curvature tensor \(R^{\nabla}\) defined by
\[R^{\nabla}(X,Y)=[\nabla_{X},\nabla_{Y}]-\nabla_{[X,Y]}\]
is related to the second covariant derivative and the torsion of \(\nabla\) by
\[R^{\nabla}(X,Y)=\nabla_{X,Y}-\nabla_{Y,X}+\nabla_{T(X,Y)}\]
(as one checks by expanding both sides using the definitions of \(T\) and of the second covariant derivative)
so that
\[\tfrac{1}{4}\mathfrak{d}_{t}^{2}=\sum_{j<k}\epsilon_{j}\cdot\epsilon_{k}\cdot\big{(}R^{t}(\overline{\epsilon}_{j},\overline{\epsilon}_{k})-\nabla^{t}_{T^{t}(\overline{\epsilon}_{j},\overline{\epsilon}_{k})}\big{)}.\]
Complex bilinearly extending the metric, we observe
\[T^{t}(\overline{\epsilon}_{j},\overline{\epsilon}_{k})=\sum_{i}T^{t}(\epsilon _{i},\overline{\epsilon}_{j},\overline{\epsilon}_{k})\overline{\epsilon}_{i}+ T^{t}(\overline{\epsilon}_{i},\overline{\epsilon}_{j},\overline{\epsilon}_{k}) \epsilon_{i}\]
so that by the definition of a canonical Hermitian connection (1)
\[T^{t}(\overline{\epsilon}_{j},\overline{\epsilon}_{k})=\sum_{i}N(\overline{ \epsilon}_{i},\overline{\epsilon}_{j},\overline{\epsilon}_{k})\epsilon_{i}+ \tfrac{3t-1}{4}d_{c}\omega^{+}(\epsilon_{i},\overline{\epsilon}_{j}, \overline{\epsilon}_{k})\overline{\epsilon}_{i}-\tfrac{t+1}{4}\mathcal{M}(d_{ c}\omega^{+})(\epsilon_{i},\overline{\epsilon}_{j},\overline{\epsilon}_{k}) \overline{\epsilon}_{i}\]
where we use that \(N\) sends elements of \(\Lambda^{0,2}TM\) to \(T^{1,0}M\) and that \(d_{c}\omega^{+}\) vanishes on sections of \(\Lambda^{3,0}TM\oplus\Lambda^{0,3}TM\) (by definition of the \(+\) component of a \(3\)-form). Furthermore, by definition of \(\mathcal{M}\) we observe
\[\mathcal{M}(d_{c}\omega^{+})(\epsilon_{i},\overline{\epsilon}_{j},\overline{ \epsilon}_{k})=d_{c}\omega^{+}(\epsilon_{i},J\overline{\epsilon}_{j},J \overline{\epsilon}_{k})=-d_{c}\omega^{+}(\epsilon_{i},\overline{\epsilon}_{j },\overline{\epsilon}_{k})\]
and thus
\[T^{t}(\overline{\epsilon}_{j},\overline{\epsilon}_{k})=\sum_{i}N(\overline{ \epsilon}_{i},\overline{\epsilon}_{j},\overline{\epsilon}_{k})\epsilon_{i}+ td_{c}\omega^{+}(\epsilon_{i},\overline{\epsilon}_{j},\overline{\epsilon}_{k}) \overline{\epsilon}_{i}.\]
We conclude
**Proposition 5.10**.: Let \(M\) be an almost Hermitian manifold. On \(\Gamma(\mathbb{C}l(M))\) we have
\[\tfrac{1}{4}\mathfrak{d}_{t}^{2}=\sum_{j<k}\epsilon_{j}\cdot\epsilon_{k}\cdot R^{t}(\overline{\epsilon}_{j},\overline{\epsilon}_{k})-\epsilon_{j}\cdot\epsilon_{k}\cdot\big{(}\sum_{i}N(\overline{\epsilon}_{i},\overline{\epsilon}_{j},\overline{\epsilon}_{k})\nabla^{t}_{\epsilon_{i}}+td_{c}\omega^{+}(\epsilon_{i},\overline{\epsilon}_{j},\overline{\epsilon}_{k})\nabla^{t}_{\overline{\epsilon}_{i}}\big{)}.\]
For \(t=1\) and \(N=0\), the curvature tensor \(R=R^{1}\) of the Chern connection is of type (1,1) and hence \(R(\overline{\epsilon}_{j},\overline{\epsilon}_{k})=0\) for all \(j,k\). In this case, the operator \(\mathfrak{d}_{1}\) is a differential if and only if \(d_{c}\omega=d_{c}\omega^{+}=0\), i.e. if and only if \(M\) is Kahler. This was first shown by Michelsohn ([M80] Proposition 2.1). Evidently if \(M\) is Kahler then \(\mathfrak{d}_{t}=\mathfrak{d}_{1}\) and so \(\mathfrak{d}_{t}\) is a differential.
A more involved, but completely analogous computation to the above reveals
\[\mathfrak{d}_{t}\overline{\mathfrak{d}_{t}}+\overline{\mathfrak{d}_{t}}\mathfrak{d}_{t}=-\sum_{j}\nabla^{t}_{\overline{\epsilon}_{j},\epsilon_{j}}+\sum_{j,k}\overline{\epsilon}_{k}\cdot\epsilon_{j}\big{(}R^{t}(\epsilon_{k},\overline{\epsilon}_{j})-\nabla^{t}_{T^{t}(\epsilon_{k},\overline{\epsilon}_{j})}\big{)}.\]
Following Michelsohn [M80] we define the operators
\[\boldsymbol{\nabla}^{*}\boldsymbol{\nabla}_{t}=-\sum_{j}\nabla^{t}_{\overline{ \epsilon}_{j},\epsilon_{j}}\text{ and }\mathcal{R}_{t}=\sum_{j,k}\overline{\epsilon}_{k}\cdot\epsilon_{j}\cdot R^{t} (\epsilon_{k},\overline{\epsilon}_{j})\]
so that
\[\mathfrak{d}_{t}\overline{\mathfrak{d}_{t}}+\overline{\mathfrak{d}_{t}}\mathfrak{d}_ {t}=\boldsymbol{\nabla}^{*}\boldsymbol{\nabla}_{t}+\mathcal{R}_{t}-\tfrac{t-1} {2}\sum_{i,j,k}\overline{\epsilon}_{k}\cdot\epsilon_{j}\cdot\big{(}d_{c} \omega^{+}(\epsilon_{i},\epsilon_{k},\overline{\epsilon}_{j})\nabla^{t}_{ \overline{\epsilon}_{i}}+d_{c}\omega^{+}(\overline{\epsilon}_{i},\epsilon_{k}, \overline{\epsilon}_{j})\nabla^{t}_{\epsilon_{i}}\big{)}.\]
For \(t=-1\) this produces an elegant description of the Laplacian of \(\mathfrak{B}\) over a compact almost Hermitian manifold, while for \(t=1\) the terms involving \(d_{c}\omega^{+}\) vanish, yielding Michelsohn's Bochner identity in [10] in the almost Hermitian setting.
### The Canonical Operators on Forms
Proceeding with our study of canonical Hermitian Dirac operators, we begin with the following
**Definition 5.12**.: Let \(v_{1},\ldots,v_{2n}\) be a local orthonormal frame. We define the operator \(d_{A}\) on \(\Omega(M)\) by
\[d_{A}=\sum_{j}v_{j}\wedge a_{v_{j}}.\]
Since \(a_{v}\) is a derivation on \(\Omega(M)\) for any \(v\in TM\), it follows that \(d_{A}\) is a graded derivation on \(\Omega(M)\).
Furthermore, we observe for any \(Y\in\Gamma(TM)\cong\Omega^{1}(M)\)
\[d_{A}(Y)=\sum_{j}v_{j}\wedge a_{v_{j}}(Y)=\sum_{j,k}\langle a_{v_{j}}(Y),v_{k }\rangle v_{j}\wedge v_{k}=\sum_{j,k}A(v_{j},Y,v_{k})v_{j}\wedge v_{k}\]
and so by Proposition 4.3 we see that
\[d_{A}(Y)=\sum_{j,k}\Big{(}-(N-\tfrac{3}{2}\mathcal{P}N)(v_{j},Y,v_{k})+\tfrac{t-1}{4}d_{c}\omega^{+}(v_{j},Y,v_{k})+\tfrac{t+1}{4}\mathcal{M}(d_{c}\omega^{+})(v_{j},Y,v_{k})\Big{)}v_{j}\wedge v_{k}. \tag{4}\]
With regard to the adjoint of \(d_{A}\) we have the following
**Lemma 5.13**.: Let \(M\) be an almost Hermitian manifold. For any \(\varphi\in\Omega(M)\) the adjoint of \(d_{A}\) is given by
\[d_{A}^{*}(\varphi)=-r(A)\lrcorner\ \varphi-\sum_{j}v_{j}\lrcorner\ a_{v_{j}}( \varphi).\]
Proof.: Let \(\varphi\in\Omega(M)\). As interior multiplication by a vector is the adjoint of exterior multiplication, and \(a_{v}\) is anti-self-adjoint we have that
\[d_{A}^{*}(\varphi)=-\sum_{j}a_{v_{j}}(v_{j}\lrcorner\ \varphi).\]
Using that, for any \(v\in TM\), \(a_{v}\) is a \(C^{\infty}\)-linear derivation, it is also a derivation over interior multiplication by vectors. That is
\[a_{v}(v\lrcorner\ \varphi)=v\lrcorner\ a_{v}(\varphi)+a_{v}(v)\lrcorner\ \varphi.\]
Hence
\[\sum_{j}a_{v_{j}}(v_{j}\lrcorner\ \varphi)=\sum_{j}v_{j}\lrcorner\ a_{v_{j}}( \varphi)+\sum_{j}a_{v_{j}}(v_{j})\lrcorner\ \varphi\]
and the result follows.
**Proposition 5.14**.: For any \(\varphi\in\Gamma(Cl(M))\cong\Omega(M)\) we have
\[D_{t}(\varphi)-\widetilde{D}(\varphi)=D_{A}(\varphi)\cong d_{A}(\varphi)+d_{ A}^{*}(\varphi)+r(A)\lrcorner\ \varphi.\]
Proof.: For any \(\varphi\in\Gamma(Cl(M))\cong\Omega(M)\) we have
\[D_{A}(\varphi)=\sum_{j}v_{j}\cdot a_{v_{j}}(\varphi)\cong\sum_{j}v_{j} \wedge a_{v_{j}}(\varphi)-\sum_{j}v_{j}\lrcorner\ a_{v_{j}}(\varphi)=d_{A}( \varphi)+d_{A}^{*}(\varphi)+r(A)\lrcorner\ \varphi.\ \ \Box\]
We now define a host of operators on forms. Most of these operators will likely be familiar to the reader, with one exception. We begin by defining the operator \(\lambda^{+}:\Omega(M)\to\Omega(M)\) for any \(\varphi\in\Omega(M)\) by
\[\lambda^{+}(\varphi)=d\omega^{+}\wedge\varphi\]
and the operator \(\tau^{+}\) on \(\Omega(M)\) by
\[\tau^{+}=[\Lambda,\lambda^{+}].\]
Lastly we define the novel operator \(\rho^{+}:\Omega(M)\to\Omega(M)\), for any \(\varphi=\varphi_{1}\wedge\cdots\wedge\varphi_{k}\in\Omega^{k}(M)\), by
\[\rho^{+}(\varphi)=\sum_{j=1}^{k}(-1)^{j}\varphi_{1}\wedge\cdots\wedge(\varphi_{j}\lrcorner\,d\omega^{+})\wedge\cdots\wedge\varphi_{k}.\]
Note that, as usual, by \(\varphi_{j}\lrcorner\,d\omega\) we mean \(\varphi_{j}^{\sharp}\lrcorner\,d\omega\) where \(\sharp:T^{\vee}(M)\to T(M)\) is the metric isomorphism. Letting \(v_{1},\ldots,v_{2n}\) be a \(J\)-adapted local orthonormal frame of \(TM\), we can also express
\[\rho^{+}(\varphi)=-\sum_{j}(v_{j}\lrcorner\,d\omega^{+})\wedge(v_{j}\lrcorner\,\varphi)\qquad\text{for any $\varphi\in\Omega(M)$}.\]
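For instance, for a \(1\)-form \(Y\) both descriptions reduce to interior multiplication:

\[\rho^{+}(Y)=-\sum_{j}\langle v_{j},Y\rangle\,v_{j}\lrcorner\,d\omega^{+}=-Y\lrcorner\,d\omega^{+}=\tfrac{1}{2}\sum_{j,k}d\omega^{+}(v_{j},Y,v_{k})\,v_{j}\wedge v_{k},\]

which is the form in which \(\rho^{+}\) will appear in Lemma 5.19 below.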
Complex linearly extending \(\rho^{+}\) we restrict to components of \(d\omega^{+}=\partial\omega+\overline{\partial}\omega\) and define for \(\varphi=\varphi_{1}\wedge\cdots\wedge\varphi_{k}\in\Omega^{p,q}_{\mathbb{C}}(M)\subset \Omega^{k}_{\mathbb{C}}(M)\)
\[\rho_{\gamma}(\varphi)=\sum_{j=1}^{k}(-1)^{j}\varphi_{1}\wedge\cdots\wedge(\varphi_{j}\lrcorner\,\gamma\omega)\wedge\cdots\wedge\varphi_{k}\text{ for $\gamma=\partial,\overline{\partial}$}.\]
In the complexification, by \(\varphi_{j}\lrcorner\,d\omega\) we mean \(\overline{\varphi_{j}^{\sharp}}\lrcorner\,d\omega\). We also define the complex linearly extended operators \(\lambda_{\gamma}\) and \(\tau_{\gamma}\) for \(\gamma=\partial,\overline{\partial}\) by
\[\lambda_{\gamma}(\varphi)\stackrel{{\mathrm{def}}}{{=}}\gamma( \omega)\wedge\varphi\qquad\qquad\text{and}\qquad\qquad\tau_{\gamma}(\varphi) \stackrel{{\mathrm{def}}}{{=}}[\Lambda,\lambda_{\gamma}]\varphi.\]
It is quick to check that \(\rho_{\partial},\ \rho_{\overline{\partial}}\) have bidegree \((1,0)\) and \((0,1)\) respectively, and that \(\rho_{\overline{\partial}}=\overline{\rho_{\partial}}\). Moreover, for \(\varphi\in\Omega^{p,q}(M)\) we have
\[J_{\mathrm{alg}}^{-1}\rho_{\partial}J_{\mathrm{alg}}(\varphi)=\frac{i^{p-q}}{i^{p+1-q}}\rho_{\partial}(\varphi)=-i\rho_{\partial}(\varphi)\qquad\text{and}\qquad J_{\mathrm{alg}}^{-1}\overline{\rho_{\partial}}J_{\mathrm{alg}}(\varphi)=i\overline{\rho_{\partial}}(\varphi).\]
Note that as operators on \(\Omega_{\mathbb{C}}(M)\) we have
\[\rho^{+}=\rho_{\partial}+\overline{\rho_{\partial}}\qquad\qquad\text{and} \qquad\qquad\tau^{+}=\tau_{\partial}+\overline{\tau_{\partial}}.\]
By the last two remarks we also have
\[\rho_{c}^{+}=J_{\mathrm{alg}}^{-1}\rho^{+}J_{\mathrm{alg}}=i(\overline{\rho_ {\partial}}-\rho_{\partial}).\]
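Spelled out, this follows by applying the two conjugation rules termwise:

\[\rho_{c}^{+}=J_{\mathrm{alg}}^{-1}\rho_{\partial}J_{\mathrm{alg}}+J_{\mathrm{alg}}^{-1}\overline{\rho_{\partial}}J_{\mathrm{alg}}=-i\rho_{\partial}+i\overline{\rho_{\partial}}=i(\overline{\rho_{\partial}}-\rho_{\partial}).\]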
The following result allows us to express our canonical Hermitian Dirac operators in terms of the above operators on forms
**Theorem 5.15**.: For any almost Hermitian manifold \(M\) we have through the canonical vector bundle isomorphism \(\Omega_{\mathbb{C}}(M)\cong\Gamma(\mathbb{C}l(M))\) that
\[D_{t}=\partial+\overline{\partial}+\partial^{*}+\overline{\partial}^{*}+ \tfrac{t+1}{4}(\tau_{\partial}+\overline{\tau_{\partial}}+\tau_{\partial}^{*} +\overline{\tau_{\partial}}^{*}-E_{\theta}+I_{\theta})+\tfrac{3t-1}{4}i(\rho_ {\partial}-\overline{\rho_{\partial}}-\rho_{\partial}^{*}+\overline{\rho_{ \partial}}^{*}).\]
In order to prove the theorem, we will require a few lemmas, starting with
**Lemma 5.16**.: For \(Y\in\Gamma(T_{\mathbb{C}}M)\cong\Omega^{1}_{\mathbb{C}}(M)\) we have
\[\sum_{j,k}(N-\tfrac{3}{2}\mathcal{P}N)(v_{j},Y,v_{k})v_{j}\wedge v_{k}=(\mu+ \overline{\mu})(Y).\]
Proof.: By Section 4 we observe \(N=\mathcal{P}N+N_{0}\) and so
\[N-\tfrac{3}{2}\mathcal{P}N=\mathcal{P}N+N_{0}-\tfrac{3}{2}\mathcal{P}N=N_{0}- \tfrac{1}{2}\mathcal{P}N.\]
Thus for \(Y\in\Gamma(TM)\cong\Omega^{1}(M)\) we have
\[\sum_{j,k}(N-\tfrac{3}{2}\mathcal{P}N)(v_{j},Y,v_{k})v_{j}\wedge v_{k}=\sum_{j,k}(N_{0}-\tfrac{1}{2}\mathcal{P}N)(v_{j},Y,v_{k})v_{j}\wedge v_{k}.\]
By definition \(\mathcal{P}N_{0}=0\), so that
\[\sum_{j,k}N_{0}(v_{j},Y,v_{k})v_{j}\wedge v_{k}=\sum_{j,k}N_{0}(Y,v_{j},v_{k}) v_{j}\wedge v_{k}-\sum_{j,k}N_{0}(v_{j},Y,v_{k})v_{j}\wedge v_{k}\]
and thus
\[\sum_{j,k}N_{0}(v_{j},Y,v_{k})v_{j}\wedge v_{k}=\tfrac{1}{2}\sum_{j,k}N_{0}(Y, v_{j},v_{k})v_{j}\wedge v_{k}.\]
Finally using that \(\mathcal{P}N\) is a 3-form we conclude
\[\sum_{j,k}(N-\tfrac{3}{2}\mathcal{P}N)(v_{j},Y,v_{k})v_{j}\wedge v_{k}=\tfrac{ 1}{2}\sum_{j,k}(N_{0}+\mathcal{P}N)(Y,v_{j},v_{k})v_{j}\wedge v_{k}=\tfrac{1}{ 2}\sum_{j,k}N(Y,v_{j},v_{k})v_{j}\wedge v_{k}.\]
Observing that \(N^{\vee}=\mu+\overline{\mu}\) on \(\Omega_{\mathbb{C}}(M)\) gives the result.
**Lemma 5.17**.: For \(Y\in\Gamma(TM)\cong\Omega^{1}(M)\) we have
\[\sum_{j,k}\mathcal{M}(d_{c}\omega^{+})(v_{j},Y,v_{k})v_{j}\wedge v_{k}=\tfrac {1}{2}\sum_{j,k}\big{(}d_{c}\omega^{+}(v_{j},Y,v_{k})v_{j}\wedge v_{k}+d\omega ^{+}(v_{j},JY,v_{k})v_{j}\wedge v_{k}\big{)}.\]
Proof.: We have for any \(\varphi^{+}\in E^{+}\)
\[\varphi^{+}(X,Y,Z)=\varphi^{+}(JX,JY,Z)+\varphi^{+}(X,JY,JZ)+\varphi^{+}(JX,Y,JZ)\]
(see Lemma 8.34 for a proof). Thus
\[\sum_{j,k}\mathcal{M}(d_{c}\omega^{+})(v_{j},Y,v_{k})v_{j}\wedge v _{k} =-\sum_{j,k}d\omega^{+}(Jv_{j},Y,v_{k})v_{j}\wedge v_{k}\] \[=-\sum_{j,k}\big{(}d\omega^{+}(Jv_{j},JY,Jv_{k})-d\omega^{+}(v_{ j},Y,Jv_{k})-d\omega^{+}(v_{j},JY,v_{k})\big{)}v_{j}\wedge v_{k}.\]
Subtracting the expression \(\sum_{j,k}d\omega^{+}(v_{j},Y,Jv_{k})v_{j}\wedge v_{k}\) from both sides of the last equality we obtain
\[\sum_{j,k}\mathcal{M}(d_{c}\omega^{+})(v_{j},Y,v_{k})v_{j}\wedge v _{k} =\tfrac{1}{2}\sum_{j,k}\big{(}-d\omega^{+}(Jv_{j},JY,Jv_{k})+d \omega^{+}(v_{j},JY,v_{k})\big{)}v_{j}\wedge v_{k}\] \[=\tfrac{1}{2}\sum_{j,k}\big{(}d_{c}\omega^{+}(v_{j},Y,v_{k})v_{j} \wedge v_{k}+d\omega^{+}(v_{j},JY,v_{k})v_{j}\wedge v_{k}\big{)}.\qed\]
**Lemma 5.18**.: For \(Y\in\Gamma(TM)\cong\Omega^{1}(M)\) we have
\[\tfrac{1}{2}\sum_{j,k}d\omega^{+}(v_{j},JY,v_{k})v_{j}\wedge v_{k}=\tau^{+}(Y) -\theta\wedge Y.\]
Proof.: Observe that for \(Y\in\Gamma(TM)\cong\Omega^{1}(M)\)
\[\tau^{+}(Y)=[\Lambda,\lambda^{+}](Y)=-\Lambda(Y\wedge d\omega^{+})=\sum_{j=1}^{n}e_{j}\lrcorner\,Je_{j}\lrcorner\,(Y\wedge d\omega^{+}).\]
Using that interior multiplication is a graded derivation we obtain
\[\sum_{j=1}^{n}e_{j}\lrcorner\,Je_{j}\lrcorner\,(Y\wedge d\omega^{+}) =\sum_{j}e_{j}\lrcorner\,\big{(}\langle Y,Je_{j}\rangle d\omega^{+}-Y\wedge(Je_{j}\lrcorner\,d\omega^{+})\big{)}\] \[=\sum_{j}\big{(}\langle Y,Je_{j}\rangle e_{j}\lrcorner\,d\omega^{+}-\langle Y,e_{j}\rangle Je_{j}\lrcorner\,d\omega^{+}+Y\wedge(e_{j}\lrcorner\,Je_{j}\lrcorner\,d\omega^{+})\big{)}\] \[=\sum_{j}\big{(}-\langle JY,e_{j}\rangle e_{j}\lrcorner\,d\omega^{+}-\langle JY,Je_{j}\rangle Je_{j}\lrcorner\,d\omega^{+}\big{)}+Y\wedge\sum_{j}e_{j}\lrcorner\,Je_{j}\lrcorner\,d\omega^{+}\] \[=-JY\lrcorner\,d\omega^{+}+\Lambda(d\omega^{+})\wedge Y.\]
As \(\Lambda(d\omega^{+})=\Lambda(d\omega)=\theta\), the result follows.
**Lemma 5.19**.: For \(Y\in\Gamma(TM)\cong\Omega^{1}(M)\) we have
\[\tfrac{1}{2}\sum_{j,k}d_{c}\omega^{+}(v_{j},Y,v_{k})v_{j}\wedge v_{k}=-\rho_{ c}^{+}(Y).\]
Proof.: For \(Y\in\Gamma(TM)\cong\Omega^{1}(M)\) we have
\[\rho^{+}(Y)=\tfrac{1}{2}\sum_{j,k}d\omega^{+}(v_{j},Y,v_{k})v_{j}\wedge v_{k}\]
and hence
\[\rho_{c}^{+}(Y) =\tfrac{1}{2}J_{\mathrm{alg}}^{-1}\big{(}\sum_{j,k}d\omega^{+}(v _{j},JY,v_{k})v_{j}\wedge v_{k}\big{)}\] \[=\quad\tfrac{1}{2}\sum_{j,k}d\omega^{+}(v_{j},JY,v_{k})Jv_{j} \wedge Jv_{k}\] \[=\quad\tfrac{1}{2}\sum_{j,k}d\omega^{+}(Jv_{j},JY,Jv_{k})v_{j} \wedge v_{k}\] \[=-\tfrac{1}{2}\sum_{j,k}d_{c}\omega^{+}(v_{j},Y,v_{k})v_{j}\wedge v _{k}.\qed\]
Combining Lemmas 5.17, 5.18 and 5.19 we obtain
**Proposition 5.20**.: For \(Y\in\Gamma(TM)\cong\Omega^{1}(M)\) we have
\[\sum_{j,k}\mathcal{M}(d_{c}\omega^{+})(v_{j},Y,v_{k})v_{j}\wedge v_{k}=\tau^{ +}(Y)-\theta\wedge Y-\rho_{c}^{+}(Y).\]
We now prove Theorem 5.15:
Proof.: (of Theorem 5.15) Complex linearly extending \(d_{A}\) to \(\Omega_{\mathbb{C}}(M)\), we recall that for any \(Y\in\Omega^{1}_{\mathbb{C}}(M)\)
\[d_{A}(Y)=\sum_{j,k}\Big{(}\!-\!(N-\tfrac{3}{2}\mathcal{P}N)(v_{j},Y,v_{k})+ \tfrac{t-1}{4}d_{c}\omega^{+}(v_{j},Y,v_{k})+\tfrac{t+1}{4}\mathcal{M}(d_{c} \omega^{+})(v_{j},Y,v_{k})\Big{)}v_{j}\wedge v_{k}.\]
By Proposition 5.20, Lemma 5.16, Lemma 5.19 we have
\[d_{A}(Y)=-\big{(}\mu+\overline{\mu}\big{)}(Y)+\tfrac{t+1}{4}\big{(}\tau_{ \partial}(Y)+\overline{\tau_{\partial}}(Y)-\theta\wedge Y\big{)}+\tfrac{3t-1} {4}i\big{(}\rho_{\partial}(Y)-\overline{\rho_{\partial}}(Y)\big{)}.\]
As \(d_{A}\) is a graded derivation, vanishing on smooth functions on \(M\), we have that for any \(\varphi\in\Omega_{\mathbb{C}}(M)\)
\[d_{A}(\varphi)=-\big{(}\mu+\overline{\mu}\big{)}(\varphi)+\tfrac{t+1}{4}\big{(} \tau_{\partial}(\varphi)+\overline{\tau_{\partial}}(\varphi)-\theta\wedge \varphi\big{)}+\tfrac{3t-1}{4}i\big{(}\rho_{\partial}(\varphi)-\overline{ \rho_{\partial}}(\varphi)\big{)}.\]
The result then follows by Proposition 5.14 and the identity \(r(A)=\tfrac{t+1}{2}\theta\).
## 6. Almost Hermitian Identities via the Bismut Dirac Operator
We have shown above that the canonical Hermitian and Riemannian Dirac operators on \(\Gamma\mathbb{C}l(M)\) agree up to a tensorial expression. In particular we have by Theorem 5.15
\[D_{t}-\widetilde{D}\cong-(\mu+\overline{\mu}+\mu^{*}+\overline{\mu}^{*})+\tfrac{t+1}{4}(\tau_{\partial}+\overline{\tau_{\partial}}+\tau_{\partial}^{*}+\overline{\tau_{\partial}}^{*}-E_{\theta}+I_{\theta})+\tfrac{3t-1}{4}i(\rho_{\partial}-\overline{\rho_{\partial}}-\rho_{\partial}^{*}+\overline{\rho_{\partial}}^{*}).\]
Since \(\nabla^{t}=\widetilde{\nabla}\) if and only if \(d\omega=0=N\) we have \(\widetilde{D}=D_{t}\) if and only if the almost Hermitian manifold is Kahler.
We recall that for any Dirac operator \(D\) defined in terms of a Hermitian connection we have by Theorem 3.2
\[[D,\mathcal{H}]=-iD_{c}.\]
Moreover since \(D_{t}\) is a Hermitian Dirac operator defined in terms of a Gauduchon connection we obtain the following
**Proposition 6.1**.: Let \(M\) be an almost Hermitian manifold. We have the following identities
\[[D_{t},\mathcal{H}]=-i(D_{t})_{c}\qquad\qquad[(D_{t})_{c},\mathcal{H}]=iD_{t}\] \[[D_{t}^{\circ},\mathcal{H}]=i(D_{t})_{c}^{\circ}\qquad\qquad[(D_{t})_{c}^{\circ},\mathcal{H}]=-iD_{t}^{\circ}.\]
Using the expression previously obtained for the adjoint of \(D_{t}\) in Proposition 5.6 we observe that
\[\left(D_{t}\right)_{c}=\left(D_{t}\right)_{c}^{*}+\tfrac{t+1}{4}L_{J\theta}\]
and thus
\[[D_{t},\mathcal{H}]=-i\left(D_{t}\right)_{c}^{*}-\tfrac{i(t+1)}{4}L_{J\theta}.\]
In particular, for any almost Hermitian manifold \(M\) the Dirac operator \(B\stackrel{{\mathrm{def}}}{{=}}D_{-1}\) is self-adjoint, and so
\[[B,\mathcal{H}]=-iB_{c}=-iB_{c}^{*}.\]
**Definition 6.2**.: We define the operators \(\varepsilon\) and \(\overline{\varepsilon}\) on \(\Omega_{\mathbb{C}}(M)\) by
\[\varepsilon=\partial-i\rho_{\partial}\quad\text{and}\quad\overline{ \varepsilon}=\overline{\partial}+i\overline{\rho_{\partial}}.\]
Following [17] we also define the differential operators \(\delta\) and \(\overline{\delta}\) on \(\Omega_{\mathbb{C}}(M)\) by
\[\delta=\partial+\overline{\mu}\quad\text{and}\quad\overline{\delta}= \overline{\partial}+\mu\]
where \(d=\delta+\overline{\delta}\) is the exterior derivative on \(\Omega_{\mathbb{C}}(M)\).
We note that through the canonical isomorphism \(\Gamma\mathbb{C}l(M)\cong\Omega_{\mathbb{C}}(M)\) we have
\[\widetilde{D}=\delta+\overline{\delta}+\delta^{*}+\overline{\delta}^{*}\qquad \text{and}\qquad B=\varepsilon+\overline{\varepsilon}+\varepsilon^{*}+ \overline{\varepsilon}^{*}. \tag{5}\]
The zero-order term \(\mu\) vanishes if and only if the almost complex structure is integrable, and symmetrically we show below in Corollary 6.5 that \(\rho_{\partial}\) vanishes if and only if \(\partial\omega=\overline{\partial}\omega=0\).
Note that \(J\mapsto-J^{\vee}\) by the metric isomorphism \(T(M)\cong T^{\vee}(M)\). This has the effect of flipping the sign through the isomorphism \(\Gamma(\mathbb{C}l(M))\cong\Omega_{\mathbb{C}}(M)\) when conjugating by \(J_{\mathrm{alg}}\). From this we obtain the following
**Lemma 6.3**.: On any almost Hermitian manifold we have
\[\widetilde{D}_{c}\cong-d_{c}-d_{c}^{*}=i(\delta-\overline{\delta}+\overline{ \delta}^{*}-\delta^{*})\]
and
\[B_{c}\cong i(\varepsilon-\overline{\varepsilon}+\overline{\varepsilon}^{*}- \varepsilon^{*}).\]
Proof.: The first identity is clear from the above remark. The second identity follows from the observation \(J_{\mathrm{alg}}^{-1}\rho_{\partial}J_{\mathrm{alg}}=-i\rho_{\partial}\) so that \(\varepsilon_{c}=\partial_{c}-i\rho_{\partial c}=-i(\partial-i\rho_{\partial})= -i\varepsilon\) and similarly \(\overline{\varepsilon}_{c}=i\overline{\varepsilon}\).
The relation \([B,\mathcal{H}]=-iB_{c}\) implies the following identities of operators on \(\Omega(M)\).
**Proposition 6.4**.: Let \(M\) be an almost Hermitian manifold. We obtain the following identities of operators on \(\Omega_{\mathbb{C}}(M)\):
\[[\Lambda,\overline{\varepsilon}]=-i\varepsilon^{*}\qquad\qquad[L,\varepsilon^{*}]=i\overline{\varepsilon}\] \[[\Lambda,\varepsilon]=i\overline{\varepsilon}^{*}\qquad\qquad[L,\overline{\varepsilon}^{*}]=-i\varepsilon\] \[[\Lambda,\varepsilon^{*}]=[\Lambda,\overline{\varepsilon}^{*}]=0\qquad\qquad[L,\varepsilon]=[L,\overline{\varepsilon}]=0.\]
Proof.: The identity \([B,\mathcal{H}]=-iB_{c}\) implies
\[[\varepsilon+\overline{\varepsilon}+\varepsilon^{*}+\overline{\varepsilon}^{ *},i(\Lambda-L)]=\varepsilon-\overline{\varepsilon}+\overline{\varepsilon}^{* }-\varepsilon^{*}.\]
That is
\[i[\varepsilon,\Lambda]+i[\overline{\varepsilon},\Lambda]+i[\varepsilon^{*}, \Lambda]+i[\overline{\varepsilon}^{*},\Lambda]-i[\varepsilon,L]-i[\overline{ \varepsilon},L]-i[\varepsilon^{*},L]-i[\overline{\varepsilon}^{*},L]= \varepsilon-\overline{\varepsilon}+\overline{\varepsilon}^{*}-\varepsilon^ {*}.\]
The result then follows by comparing the bidegrees of the operators in the above equality. For example, \([\overline{\varepsilon}^{*},L]\) has bidegree \((1,0)\), as does \(\varepsilon\), and so the equality implies \(i\varepsilon=[\overline{\varepsilon}^{*},L]\).
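Similarly, \(\Lambda\) has bidegree \((-1,-1)\) while \(\overline{\varepsilon}\) has bidegree \((0,1)\), so the bidegree \((-1,0)\) part of the equality reads

\[i[\overline{\varepsilon},\Lambda]=-\varepsilon^{*},\qquad\text{that is,}\qquad[\Lambda,\overline{\varepsilon}]=-i\varepsilon^{*},\]

recovering the first identity of the proposition; the remaining identities are matched in the same way.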
**Corollary 6.5**.: Let \(M\) be an almost Hermitian manifold. Then \(\rho_{\partial}=0\) if and only if \(\partial\omega=\overline{\partial}\omega=0\).
Proof.: Evidently if \(\partial\omega=0\) then \(\rho_{\partial}=0\), and since \(\omega\) is real we have \(\overline{\partial\omega}=\overline{\partial}\,\overline{\omega}=\overline{\partial}\omega\), so \(\partial\omega=0\) if and only if \(\overline{\partial}\omega=0\). Conversely, by the identity
\[[L,\varepsilon]=0\]
we have
\[[L,\partial]=i[L,\rho_{\partial}].\]
Since both \(\partial\) and \(\rho_{\partial}\) are graded derivations, by evaluating at \(1\), we see \(i\rho_{\partial}(\omega)=\partial\omega\). Hence if \(\rho_{\partial}=0\) we have \(\partial\omega=0\).
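Explicitly, evaluating at \(1\) uses \(L(1)=\omega\) and the vanishing of derivations on constants:

\[[L,\partial](1)=-\partial\omega\qquad\text{and}\qquad i[L,\rho_{\partial}](1)=-i\rho_{\partial}(\omega),\]

whence \(\partial\omega=i\rho_{\partial}(\omega)\).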
Conjugating our Dirac operators by the transpose map \(\rtimes\) we obtain the following correspondences to operators on forms. For the antipodal involution \(\alpha\) on \(Cl(M)\) we have \(\alpha\nabla=\nabla\alpha\) for any metric connection \(\nabla\). In particular \(B\alpha=-\alpha B\) and \(\widetilde{D}\alpha=-\alpha\widetilde{D}\). Evidently the operators \(\varepsilon\) and \(\delta\) also anticommute with \(\alpha\).
**Lemma 6.6**.: On any almost Hermitian manifold we have
\[\widetilde{D}^{\circ}\cong d\alpha-d^{*}\alpha=(\delta+\overline{\delta}- \delta^{*}-\overline{\delta}^{*})\alpha\]
and
\[B^{\circ}\cong(\varepsilon+\overline{\varepsilon}-\varepsilon^{*}-\overline{\varepsilon}^{*})\alpha.\]
Proof.: Let \(\varphi\in\Gamma(Cl(M))\cong\Omega(M)\). Recall that for any \(v\in TM\) we have
\[v\cdot\varphi\cong v\wedge\varphi-v\lrcorner\ \varphi\ \text{and}\ \varphi\cdot v\cong\big{(}v\wedge \alpha\varphi+v\lrcorner\ \alpha\varphi\big{)}.\]
Since \(d=\sum_{k}v_{k}\wedge\widetilde{\nabla}_{v_{k}}\) and \(d^{*}=-\sum_{k}v_{k}\lrcorner\,\widetilde{\nabla}_{v_{k}}\) on \(\Omega(M)\) we see that
\[\widetilde{D}^{\circ}(\varphi)=\sum_{k}\widetilde{\nabla}_{v_{k}}(\varphi)\cdot v_{k}\cong\sum_{k}v_{k}\wedge\widetilde{\nabla}_{v_{k}}(\alpha\varphi)+\sum_{k}v_{k}\lrcorner\,\widetilde{\nabla}_{v_{k}}(\alpha\varphi)=d(\alpha\varphi)-d^{*}(\alpha\varphi).\]
Moreover
\[D^{\circ}_{A}(\varphi)=\sum_{j}a_{v_{j}}(\varphi)\cdot v_{j}\cong\sum_{j}v_{j}\wedge a_{v_{j}}(\alpha\varphi)+\sum_{j}v_{j}\lrcorner\,a_{v_{j}}(\alpha\varphi)\]
and thus
\[B^{\circ}(\varphi)-\widetilde{D}^{\circ}(\varphi)\cong d_{A}(\alpha\varphi)-d^{*}_{A}(\alpha\varphi).\]
Since for \(t=-1\), by the proof of Theorem 5.15 we have
\[d_{A}=-\mu-\overline{\mu}-i\rho_{\partial}+i\overline{\rho_{\partial}}\ \ \ \ \text{and hence}\ \ \ \ d^{*}_{A}=-\mu^{*}-\overline{\mu}^{*}+i\rho^{*}_{\partial}-i\overline{\rho_{ \partial}}^{*},\]
we conclude \(B^{\circ}(\varphi)\cong\varepsilon(\alpha\varphi)+\overline{\varepsilon}(\alpha\varphi)-\varepsilon^{*}(\alpha\varphi)-\overline{\varepsilon}^{*}(\alpha\varphi)\) as desired.
For operators \(A,B\) on \(\Gamma(\mathbb{C}l(M))\cong\Omega_{\mathbb{C}}(M)\) we define the anti-commutator
\[\left\{A,B\right\}=AB+BA.\]
**Remark 6.7**.: By the bracket \([\cdot,\cdot]\) of operators on sections of the Clifford or exterior algebra bundles we will always mean the vanilla (ungraded) commutator. Much of the discussion can be rewritten via a \(\mathbb{Z}_{2}\) graded commutator, though there are some inconveniences one is forced to address when adopting this approach.
**Proposition 6.8**.: On any almost Hermitian manifold \(M\) we have the following identities
\[\left\{\mathcal{L}+\overline{\mathcal{L}},B\right\}=B^{\circ}\qquad\qquad\left\{\mathcal{L}-\overline{\mathcal{L}},B\right\}=-iB_{c}^{\circ}\] \[\left\{\mathcal{L}+\overline{\mathcal{L}},B^{\circ}\right\}=B\qquad\qquad\left\{\mathcal{L}-\overline{\mathcal{L}},B^{\circ}\right\}=iB_{c}\] \[\left\{\mathcal{L}+\overline{\mathcal{L}},B_{c}\right\}=B_{c}^{\circ}\qquad\qquad\left\{\mathcal{L}-\overline{\mathcal{L}},B_{c}\right\}=iB^{\circ}\]
Proof.: By Section 2.4 we have through the identification \(\Gamma(\mathbb{C}l(M))\cong\Omega_{\mathbb{C}}(M)\) that
\[\left\{\mathcal{L}+\overline{\mathcal{L}},B\right\} =\left\{\alpha H,\varepsilon+\overline{\varepsilon}+\varepsilon^{\ast}+\overline{\varepsilon}^{\ast}\right\}\] \[=\left[\varepsilon+\overline{\varepsilon}+\varepsilon^{\ast}+\overline{\varepsilon}^{\ast},H\right]\alpha\] \[=\left[\varepsilon+\overline{\varepsilon},H\right]\alpha+\left[\varepsilon^{\ast}+\overline{\varepsilon}^{\ast},H\right]\alpha\] \[=\left(\varepsilon+\overline{\varepsilon}-\varepsilon^{\ast}-\overline{\varepsilon}^{\ast}\right)\alpha\] \[=B^{\circ}\]
while by Proposition 6.4 and Section 2.4 we observe
\[\left\{\mathcal{L}-\overline{\mathcal{L}},B\right\} =i\left[\Lambda+L,\varepsilon+\overline{\varepsilon}+\varepsilon^{\ast}+\overline{\varepsilon}^{\ast}\right]\alpha\] \[=i\left[\Lambda+L,\varepsilon+\overline{\varepsilon}\right]\alpha+i\left[\Lambda+L,\varepsilon^{\ast}+\overline{\varepsilon}^{\ast}\right]\alpha\] \[=\left(\varepsilon-\overline{\varepsilon}+\varepsilon^{\ast}-\overline{\varepsilon}^{\ast}\right)\alpha\] \[=-iB_{c}^{\circ}.\]
The remaining identities are obtained by conjugating with the operators \(J_{\mathrm{alg}}\) and \(\rtimes\). By the identities \(\mathcal{L}_{c}=\mathcal{L}\) and \(\overline{\mathcal{L}}_{c}=\overline{\mathcal{L}}\) we see
\[\left\{\mathcal{L}+\overline{\mathcal{L}},B_{c}\right\}=J_{\mathrm{alg}}^{-1}\left\{\mathcal{L}+\overline{\mathcal{L}},B\right\}J_{\mathrm{alg}}=J_{\mathrm{alg}}^{-1}B^{\circ}J_{\mathrm{alg}}=B_{c}^{\circ}.\qed\]
Summing the above identities gives rise to new objects of interest. For example
\[\left\{\mathcal{L},B^{\circ}\right\} =\tfrac{1}{2}\big{(}\left\{\mathcal{L}+\overline{\mathcal{L}},B^{\circ}\right\}+\left\{\mathcal{L}-\overline{\mathcal{L}},B^{\circ}\right\}\big{)}=\tfrac{1}{2}(B+iB_{c}).\]
We define
\[\mathfrak{B} =\tfrac{1}{2}(B+iB_{c}) \overline{\mathfrak{B}} =\tfrac{1}{2}(B-iB_{c})\]
and
\[\mathfrak{D} =\tfrac{1}{2}(\widetilde{D}+i\widetilde{D}_{c}) \overline{\mathfrak{D}} =\tfrac{1}{2}(\widetilde{D}-i\widetilde{D}_{c}).\]
More generally we recall the operator \(\mathfrak{d}_{t}\) on \(\Gamma\big{(}\mathbb{C}l(M)\big{)}\) defined by
\[\mathfrak{d}_{t}=\tfrac{1}{2}\big{(}D_{t}+i(D_{t})_{c}\big{)}.\]
Notice that for any \(\varphi\in\Gamma(\mathbb{C}l(M))\) and for any \(J\)-adapted local orthonormal frame
\(v_{1},\ldots,v_{2n}=e_{1},\ldots,e_{n},Je_{1},\ldots,Je_{n}\) of \(TM\) we have
\[\mathfrak{d}_{t}(\varphi)=\tfrac{1}{2}\big{(}\sum_{j=1}^{2n}v_{j}\cdot\nabla_{ v_{j}}^{t}(\varphi)-iJv_{j}\cdot\nabla_{v_{j}}^{t}(\varphi)\big{)}=2\sum_{j=1}^{n} \epsilon_{j}\cdot\nabla_{\overline{\epsilon}_{j}}^{t}(\varphi).\]
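Written out over the adapted frame, the last equality uses \(\epsilon_{j}=\tfrac{1}{2}(e_{j}-iJe_{j})\) and \(\nabla^{t}_{\overline{\epsilon}_{j}}=\tfrac{1}{2}(\nabla^{t}_{e_{j}}+i\nabla^{t}_{Je_{j}})\): the \(j\)-th pair of frame vectors contributes

\[(e_{j}-iJe_{j})\cdot\nabla^{t}_{e_{j}}+(Je_{j}+ie_{j})\cdot\nabla^{t}_{Je_{j}}=2\epsilon_{j}\cdot\big{(}\nabla^{t}_{e_{j}}+i\nabla^{t}_{Je_{j}}\big{)}=4\epsilon_{j}\cdot\nabla^{t}_{\overline{\epsilon}_{j}},\]

and the overall factor \(\tfrac{1}{2}\) then yields \(2\sum_{j}\epsilon_{j}\cdot\nabla^{t}_{\overline{\epsilon}_{j}}\).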
We observe (cf. [13]) that the operator \(\mathfrak{d}_{t}\) is of Clifford bidegree \((1,1)\), as left Clifford multiplication by an element of \(T^{1,0}(M)\) is of Clifford bidegree \((1,1)\) and any Hermitian connection is both \(\mathcal{H}\) and \(\mathcal{J}\) parallel. By contrast, the operator \(\mathfrak{D}\) on sections of \(\mathbb{C}l(M)\) is not, in general, an operator of pure Clifford bidegree.
Focusing attention once again on our operators \(\mathfrak{D}\) and \(\mathfrak{B}\), we state the following immediate consequence of the identity (5) together with Lemmas 6.3 and 6.6.
**Proposition 6.9**.: On any almost Hermitian manifold \(M\) we have the following correspondences of operators through the isomorphism \(\Gamma\mathbb{C}l(M)\cong\Omega_{\mathbb{C}}(M)\)
\[\mathfrak{D}=\overline{\delta}+\delta^{*}\qquad\qquad\overline{\mathfrak{D}}=\delta+\overline{\delta}^{*}\] \[\mathfrak{D}^{\circ}=\overline{\delta}\alpha-\delta^{*}\alpha\qquad\qquad\overline{\mathfrak{D}}^{\circ}=\delta\alpha-\overline{\delta}^{*}\alpha\] \[\mathfrak{B}=\overline{\varepsilon}+\varepsilon^{*}\qquad\qquad\overline{\mathfrak{B}}=\varepsilon+\overline{\varepsilon}^{*}\] \[\mathfrak{B}^{\circ}=\overline{\varepsilon}\alpha-\varepsilon^{*}\alpha\qquad\qquad\overline{\mathfrak{B}}^{\circ}=\varepsilon\alpha-\overline{\varepsilon}^{*}\alpha\]
Proof.: We observe that by the identity (5) we have on \(\Gamma\mathbb{C}l(M)\cong\Omega_{\mathbb{C}}(M)\) that
\[\widetilde{D}=\delta+\overline{\delta}+\delta^{*}+\overline{\delta}^{*}\]
and by Lemma 6.3 we have \(\widetilde{D}_{c}=i(\delta-\overline{\delta}+\overline{\delta}^{*}-\delta^{*})\) so that
\[\mathfrak{D}=\tfrac{1}{2}(\widetilde{D}+i\widetilde{D}_{c})=\overline{\delta}+\delta^{*}.\]
The other identities are similarly obtained by use of (5) together with Lemmas 6.3 and 6.6.
**Lemma 6.10**.: Let \(M\) be an almost Hermitian manifold. Then we have
\[[\widetilde{D},\widetilde{D}^{\circ}]=[\widetilde{D}_{c},\widetilde{D}_{c}^{\circ}]=0.\]
Furthermore if \(M\) is Kahler then
\[[B,B^{\circ}]=[B_{c},B_{c}^{\circ}]=0.\]
Proof.: We observe
\[\widetilde{D}\widetilde{D}^{\circ}\cong(d+d^{*})(d\alpha-d^{*}\alpha)=d^{*}d\alpha-dd^{*}\alpha=(d\alpha-d^{*}\alpha)(d+d^{*})\cong\widetilde{D}^{\circ}\widetilde{D}.\]
It follows that \([\widetilde{D},\widetilde{D}^{\circ}]=0\). Conjugating this identity by \(J_{\mathrm{alg}}\) gives \([\widetilde{D}_{c},\widetilde{D}_{c}^{\circ}]=0\).
If \(M\) is Kahler then \(B=\widetilde{D}\) and hence
\[[B,B^{\circ}]=[B_{c},B_{c}^{\circ}]=0.\qed\]
**Definition 6.11**.: For an operator \(T\), we define the _Laplacian of \(T\)_ by
\[\Delta_{T}=TT^{*}+T^{*}T.\]
**Proposition 6.12**.: On any almost Hermitian manifold \(M\) we have \(\Delta_{\mathfrak{D}}=\Delta_{\mathfrak{D}^{\circ}}\). Furthermore \(\Delta_{\mathfrak{D}}=\Delta_{\delta}+\Delta_{\overline{\delta}}\) through the isomorphism \(\Gamma(\mathbb{C}l(M))\cong\Omega_{\mathbb{C}}(M)\).
Proof.: The identity \(\Delta_{\mathfrak{D}}=\Delta_{\mathfrak{D}^{\circ}}\) can be checked by observing
\[\Delta_{\mathfrak{D}} =(\overline{\delta}+\delta^{*})(\delta+\overline{\delta}^{*})+( \delta+\overline{\delta}^{*})(\overline{\delta}+\delta^{*})\] \[=\Delta_{\delta}+\Delta_{\overline{\delta}}+\left\{\delta, \overline{\delta}\right\}+\left\{\delta^{*},\overline{\delta}^{*}\right\}\]
while
\[\Delta_{\mathfrak{D}^{\circ}} =(\overline{\delta}\alpha-\delta^{*}\alpha)(\delta\alpha-\overline{\delta}^{*}\alpha)+(\delta\alpha-\overline{\delta}^{*}\alpha)(\overline{\delta}\alpha-\delta^{*}\alpha)\] \[=\Delta_{\delta}+\Delta_{\overline{\delta}}-\left\{\delta,\overline{\delta}\right\}-\left\{\delta^{*},\overline{\delta}^{*}\right\}.\]
By Proposition 2.15 we see that
\[\left\{\delta,\overline{\delta}\right\}=\left\{\partial,\overline{\partial} \right\}+\left\{\mu,\overline{\mu}\right\}+\left\{\partial,\mu\right\}+\left\{ \overline{\partial},\overline{\mu}\right\}=\left\{\partial,\overline{\partial }\right\}-\left\{\partial,\overline{\partial}\right\}+0+0=0\]
and so \(\Delta_{\mathfrak{D}}=\Delta_{\mathfrak{D}^{\circ}}\).
Repeating the above argument for \(\Delta_{\mathfrak{B}}\) and \(\Delta_{\mathfrak{B}^{\circ}}\) we subsequently obtain the
**Proposition 6.13**.: On any almost Hermitian manifold \(M\) we have \(\Delta_{\mathfrak{B}}+\Delta_{\mathfrak{B}^{\circ}}\cong 2\left(\Delta_{ \varepsilon}+\Delta_{\overline{\varepsilon}}\right)\) through the isomorphism \(\Gamma(\mathbb{C}l(M))\cong\Omega_{\mathbb{C}}(M)\). Furthermore \(\Delta_{\mathfrak{B}}=\Delta_{\mathfrak{B}^{\circ}}\) if \(M\) is Kahler.
Proof.: By the identities in Proposition 6.9 we compute
\[\Delta_{\mathfrak{B}} =\Delta_{\varepsilon}+\Delta_{\overline{\varepsilon}}+\left\{ \varepsilon,\overline{\varepsilon}\right\}+\left\{\varepsilon^{*},\overline{ \varepsilon}^{*}\right\}\,\text{and}\] \[\Delta_{\mathfrak{B}^{\circ}} =\Delta_{\varepsilon}+\Delta_{\overline{\varepsilon}}-\left\{ \varepsilon,\overline{\varepsilon}\right\}-\left\{\varepsilon^{*},\overline{ \varepsilon}^{*}\right\}.\]
Hence \(\Delta_{\mathfrak{B}}-\Delta_{\mathfrak{B}^{\circ}}=0\) when \(\varepsilon=\partial\) as then \(\left\{\varepsilon,\overline{\varepsilon}\right\}=\left\{\partial-i\rho_{ \partial},\overline{\partial}+i\overline{\rho_{\partial}}\right\}=\left\{ \partial,\overline{\partial}\right\}=0\).
We recall the operator of complex conjugation \(c:\mathbb{C}l^{r,s}(M)\to\mathbb{C}l^{-r,-s}(M)\) (see [10]). As \(\widetilde{D}\) and \(B\) are real operators, they commute with \(c\). Thus we have the following
**Lemma 6.14**.: On an almost Hermitian manifold \(M\) we have
\[c\ \mathfrak{D}c=\overline{\mathfrak{D}},\qquad c\ \mathfrak{B}c=\overline{ \mathfrak{B}},\qquad c\ \mathfrak{B}^{\circ}c=\overline{\mathfrak{B}^{\circ}}.\]
Proof.: As \(\rtimes\) and \(J_{\text{alg}}\) are also real operators, both commute with \(c\). Hence
\[c\ \mathfrak{D}c=\tfrac{1}{2}\big{(}c\widetilde{D}c-icJ_{\text{alg}}^{-1} \widetilde{D}J_{\text{alg}}c\big{)}=\tfrac{1}{2}\big{(}\widetilde{D}-i \widetilde{D}_{c}\big{)}=\overline{\mathfrak{D}}.\]
The remaining identities are obtained similarly.
We now turn to investigating the relationship between the intrinsic \(sl(2)\) operators on \(\mathbb{C}l(M)\) with our operators \(\mathfrak{B}\) and \(\mathfrak{D}\).
**Proposition 6.15**.: On any almost Hermitian manifold \(M\) the following identities obtain
\[\left\{\mathfrak{B},\mathcal{L}\right\}=0 \left\{\overline{\mathfrak{B}},\overline{\mathcal{L}}\right\}=0\] \[\left\{\mathfrak{B},\overline{\mathcal{L}}\right\}=\mathfrak{B}^ {\circ} \left\{\overline{\mathfrak{B}},\mathcal{L}\right\}=\overline{ \mathfrak{B}}^{\circ}\] \[\left[\mathcal{H},\!\mathfrak{B}\right]=\mathfrak{B}.\]
Proof.: We observe that by Proposition 6.1,
\[\left[\mathcal{H},\mathfrak{B}\right] =\left[\mathcal{H},\tfrac{1}{2}(B+iB_{c})\right]=\tfrac{1}{2}( \left[\mathcal{H},B\right]+i[\mathcal{H},B_{c}])\] \[=\tfrac{1}{2}(B+iB_{c})=\mathfrak{B}\]
giving the last identity. For the first identity we observe by Proposition 6.8
\[\left\{\mathfrak{B},\mathcal{L}\right\} =\tfrac{1}{2}\big{(}(B+iB_{c})\mathcal{L}+\mathcal{L}(B+iB_{c}) \big{)}=\tfrac{1}{2}\big{(}\left\{B,\mathcal{L}\right\}+i\left\{B_{c}, \mathcal{L}\right\}\big{)}\] \[=\tfrac{1}{4}\big{(}(B^{\circ}-iB_{c}^{\circ})+i(B_{c}^{\circ}+iB ^{\circ})\big{)}=0.\]
A similar argument, again using Proposition 6.8 yields \(\left\{\mathfrak{B},\overline{\mathcal{L}}\right\}=\mathfrak{B}^{\circ}\). The remaining identities are obtained by conjugating by complex conjugation. For example by Lemma 6.14 we have
\[\left\{\overline{\mathfrak{B}},\mathcal{L}\right\}=\left\{c\ \mathfrak{B}c,c\ \overline{\mathcal{L}}c\right\}=c\left\{\mathfrak{B},\overline{\mathcal{L}} \right\}c=c\ \mathfrak{B}^{\circ}c=\overline{\mathfrak{B}}^{\circ}.\qed\]
**Proposition 6.16**.: On any almost Hermitian manifold \(M\) we have
\[\left[\mathcal{H},\Delta_{\mathfrak{B}}\right] =0 \left[\mathcal{H},\Delta_{\mathfrak{B}^{\circ}}\right] =0\] \[\left[\mathcal{L},\Delta_{\mathfrak{B}}\right] =\left[\overline{\mathfrak{B}}^{\circ},\mathfrak{B}\right] \left[\mathcal{L},\Delta_{\mathfrak{B}^{\circ}}\right] =\left[\mathfrak{B},\overline{\mathfrak{B}}^{\circ}\right]\] \[\left[\overline{\mathcal{L}},\Delta_{\mathfrak{B}}\right] =\left[\mathfrak{B}^{\circ},\overline{\mathfrak{B}}\right] \left[\overline{\mathcal{L}},\Delta_{\mathfrak{B}^{\circ}}\right] =\left[\overline{\mathfrak{B}},\mathfrak{B}^{\circ}\right].\]
Proof.: We compute by Proposition 6.15
\[\left[\mathcal{H},\Delta_{\mathfrak{B}}\right] =\mathcal{H}\mathfrak{B}\overline{\mathfrak{B}}+\mathcal{H}\overline{\mathfrak{B}}\mathfrak{B}-\mathfrak{B}\overline{\mathfrak{B}}\mathcal{H}-\overline{\mathfrak{B}}\mathfrak{B}\mathcal{H}+\mathfrak{B}\mathcal{H}\overline{\mathfrak{B}}-\mathfrak{B}\mathcal{H}\overline{\mathfrak{B}}\] \[=\left(\mathcal{H}\mathfrak{B}-\mathfrak{B}\mathcal{H}\right)\overline{\mathfrak{B}}+\mathfrak{B}(\mathcal{H}\overline{\mathfrak{B}}-\overline{\mathfrak{B}}\mathcal{H})+\mathcal{H}\overline{\mathfrak{B}}\mathfrak{B}-\overline{\mathfrak{B}}\mathfrak{B}\mathcal{H}\] \[=\mathcal{H}\overline{\mathfrak{B}}\mathfrak{B}-\overline{\mathfrak{B}}\mathfrak{B}\mathcal{H}\] \[=\mathcal{H}\overline{\mathfrak{B}}\mathfrak{B}-\overline{\mathfrak{B}}\mathfrak{B}\mathcal{H}+\overline{\mathfrak{B}}\mathcal{H}\mathfrak{B}-\overline{\mathfrak{B}}\mathcal{H}\mathfrak{B}\] \[=\left(\mathcal{H}\overline{\mathfrak{B}}-\overline{\mathfrak{B}}\mathcal{H}\right)\mathfrak{B}+\overline{\mathfrak{B}}(\mathcal{H}\mathfrak{B}-\mathfrak{B}\mathcal{H})=0.\]
Similarly
\[[\mathcal{L},\Delta_{\mathfrak{B}}] =\mathcal{L}\mathfrak{B}\overline{\mathfrak{B}}+\mathcal{L}\overline{\mathfrak{B}}\mathfrak{B}-\mathfrak{B}\overline{\mathfrak{B}}\mathcal{L}-\overline{\mathfrak{B}}\mathfrak{B}\mathcal{L}\] \[=\mathcal{L}\mathfrak{B}\overline{\mathfrak{B}}+\mathcal{L}\overline{\mathfrak{B}}\mathfrak{B}-\mathfrak{B}\overline{\mathfrak{B}}\mathcal{L}-\overline{\mathfrak{B}}\mathfrak{B}\mathcal{L}+\mathfrak{B}\mathcal{L}\overline{\mathfrak{B}}-\mathfrak{B}\mathcal{L}\overline{\mathfrak{B}}\] \[=\left((\mathcal{L}\mathfrak{B}+\mathfrak{B}\mathcal{L})\overline{\mathfrak{B}}-\mathfrak{B}(\overline{\mathfrak{B}}\mathcal{L}+\mathcal{L}\overline{\mathfrak{B}})+\mathcal{L}\overline{\mathfrak{B}}\mathfrak{B}-\overline{\mathfrak{B}}\mathfrak{B}\mathcal{L}\right)\] \[=\left(-\mathfrak{B}\overline{\mathfrak{B}}^{\circ}+(\mathcal{L}\overline{\mathfrak{B}}+\overline{\mathfrak{B}}\mathcal{L})\mathfrak{B}-\overline{\mathfrak{B}}(\mathfrak{B}\mathcal{L}+\mathcal{L}\mathfrak{B})\right)\] \[=[\overline{\mathfrak{B}}^{\circ},\mathfrak{B}].\]
The remaining identities are obtained by conjugating with \(\rtimes\) and complex conjugation.
## 7. Almost Kahler Identities via the Riemannian Dirac Operator
**Lemma 7.1**.: Let \(M\) be an almost Kahler manifold. For any \(v\in TM\) and any \(\varphi\in\Gamma\mathbb{C}l(M)\) we have
\[\left[\widetilde{\nabla}_{v},\mathcal{J}\right]\varphi=i\left(J_{\mathrm{alg} }^{-1}\widetilde{\nabla}_{Jv}(J_{\mathrm{alg}}\varphi)-\widetilde{\nabla}_{Jv }\varphi\right).\]
Proof.: We recall that \(\mathcal{J}=-iJ_{\mathrm{der}}\) on \(\mathbb{C}l(M)\). As the commutator of two derivations is a derivation, and conjugating a derivation by an algebra automorphism is a derivation we observe \([\widetilde{\nabla}_{v},J_{\mathrm{der}}]+J_{\mathrm{alg}}^{-1}\widetilde{ \nabla}_{Jv}\circ J_{\mathrm{alg}}\) is a derivation. Thus, it suffices to check the identity
\[[\widetilde{\nabla}_{v},J_{\mathrm{der}}]+J_{\mathrm{alg}}^{-1}\widetilde{ \nabla}_{Jv}\circ J_{\mathrm{alg}}=\widetilde{\nabla}_{Jv}\]
on functions and vector fields. Noticing that \(J_{\mathrm{der}}\) vanishes on functions, and that \(J_{\mathrm{alg}}\) is the identity map on functions, we see that for a smooth function \(f\) on \(M\) we have
\[[\widetilde{\nabla}_{v},J_{\mathrm{der}}](f)+J_{\mathrm{alg}}^{-1}\widetilde{ \nabla}_{Jv}(J_{\mathrm{alg}}f)=J_{\mathrm{alg}}^{-1}\widetilde{\nabla}_{Jv}( J_{\mathrm{alg}}f)=\widetilde{\nabla}_{Jv}(f).\]
If \(X\in\Gamma(TM)\) then
\[[\widetilde{\nabla}_{v},J_{\mathrm{der}}](X)+J_{\mathrm{alg}}^{-1}\widetilde{ \nabla}_{Jv}(J_{\mathrm{alg}}X)=\widetilde{\nabla}_{v}JX-J\widetilde{\nabla}_ {v}X+J^{-1}\widetilde{\nabla}_{Jv}JX=\widetilde{\nabla}_{v}JX-J\widetilde{ \nabla}_{v}X-J\widetilde{\nabla}_{Jv}JX.\]
When \(d\omega=0\) we have (see, for instance, [12]) the identity \((\widetilde{\nabla}_{v}J)=J(\widetilde{\nabla}_{Jv}J)\). That is
\[\widetilde{\nabla}_{v}JX-J\widetilde{\nabla}_{v}X=J\big{(}\widetilde{\nabla}_ {Jv}JX-J\widetilde{\nabla}_{Jv}X\big{)}\]
and hence
\[[\widetilde{\nabla}_{v},J_{\mathrm{der}}](X)+J_{\mathrm{alg}}^{-1}\widetilde{ \nabla}_{Jv}(J_{\mathrm{alg}}X)=\widetilde{\nabla}_{v}JX-J\widetilde{\nabla}_ {v}X-J\widetilde{\nabla}_{Jv}JX=-J^{2}\widetilde{\nabla}_{Jv}X=\widetilde{ \nabla}_{Jv}X.\]
The result follows by substituting the identity \(\mathcal{J}=-iJ_{\mathrm{der}}\) on the complex Clifford bundle \(\mathbb{C}l(M)\).
**Proposition 7.2**.: On any almost Kahler manifold \(M\) we have the following identities
\[[\widetilde{D},\mathcal{H}]=-i\widetilde{D}_{c}\qquad\qquad[\widetilde{D}_{c},\mathcal{H}]=i\widetilde{D}\] \[[\widetilde{D}^{\circ},\mathcal{H}]=i\widetilde{D}_{c}^{\circ}\qquad\qquad[\widetilde{D}_{c}^{\circ},\mathcal{H}]=-i\widetilde{D}^{\circ}.\]
Proof.: Let \(v_{1},\dots,v_{2n}\) be a \(J\)-adapted orthonormal frame of \(TM\). We first compute \([\widetilde{D},\mathcal{J}]\). For any \(\varphi\in\Gamma(\mathbb{C}l(M))\) we have
\[[\widetilde{D},\mathcal{J}](\varphi) =\sum_{j}\big{(}v_{j}\cdot\widetilde{\nabla}_{v_{j}}\mathcal{J} \varphi-\mathcal{J}(v_{j}\cdot\widetilde{\nabla}_{v_{j}}\varphi)\big{)}\] \[=\sum_{j}\big{(}v_{j}\cdot\widetilde{\nabla}_{v_{j}}\mathcal{J} \varphi+iJv_{j}\cdot\widetilde{\nabla}_{v_{j}}\varphi-v_{j}\cdot\mathcal{J}( \widetilde{\nabla}_{v_{j}}\varphi)\big{)}\] \[=\sum_{j}\big{(}v_{j}\cdot\left[\widetilde{\nabla}_{v_{j}}, \mathcal{J}\right](\varphi)+iJv_{j}\cdot\widetilde{\nabla}_{v_{j}}\varphi \big{)}.\]
By the identity \(\left[\widetilde{\nabla}_{v},\mathcal{J}\right]\varphi=i\left(J_{\mathrm{alg} }^{-1}\widetilde{\nabla}_{Jv}J_{\mathrm{alg}}\varphi-\widetilde{\nabla}_{Jv} \varphi\right)\) above, we have
\[[\widetilde{D},\mathcal{J}](\varphi) =i\sum_{j}\left(v_{j}\cdot J_{\mathrm{alg}}^{-1}\widetilde{\nabla}_ {Jv_{j}}J_{\mathrm{alg}}\varphi-v_{j}\cdot\widetilde{\nabla}_{Jv_{j}}\varphi+Jv _{j}\cdot\widetilde{\nabla}_{v_{j}}\varphi\right)\] \[=i\widetilde{D}_{c}(\varphi)-2i\sum_{j}v_{j}\cdot\widetilde{ \nabla}_{Jv_{j}}\varphi.\]
Let \(\widetilde{\mathrm{\Omega}}(\varphi)=\sum_{j}v_{j}\cdot\widetilde{\nabla}_{Jv _{j}}\varphi\) and let \(L_{\omega_{0}}\) be left-multiplication by \(\omega_{0}\). Clearly \(\mathcal{H}+\mathcal{J}=2L_{\omega_{0}}\) and \([\widetilde{D},\mathcal{J}]=i\widetilde{D}_{c}-2i\widetilde{\mathrm{\Omega}}\). Furthermore, for any \(\varphi\in\Gamma(\mathbb{C}l(M))\)
\[\widetilde{D}(\omega_{0}\cdot\varphi) =\sum_{j}\left(v_{j}\cdot(\widetilde{\nabla}_{v_{j}}\omega_{0}) \cdot\varphi+v_{j}\cdot\omega_{0}\cdot\widetilde{\nabla}_{v_{j}}\varphi\right)\] \[=\widetilde{D}(\omega_{0})\cdot\varphi+\sum_{j}\left((\omega_{0} \cdot v_{j}+iJv_{j})\cdot\widetilde{\nabla}_{v_{j}}(\varphi)\right)\] \[=\omega_{0}\cdot\widetilde{D}(\varphi)-i\widetilde{\mathrm{\Omega }}(\varphi).\]
Thus we have that \([\widetilde{D},L_{\omega_{0}}]=-i\widetilde{\mathrm{\Omega}}\). But then
\[[\widetilde{D},\mathcal{H}]+[\widetilde{D},\mathcal{J}]=2[\widetilde{D},L_{ \omega_{0}}]=-2i\widetilde{\mathrm{\Omega}}\]
and so \([\widetilde{D},\mathcal{H}]=-i\widetilde{D}_{c}\). The remaining identities are obtained by conjugating with \(J_{\mathrm{alg}}\) and \(\rtimes\) as in the proof of Theorem 3.2.
**Lemma 7.3**.: Let \(M\) be an almost Kahler manifold. Then \([\widetilde{D}_{c},\widetilde{D}^{\circ}]=[\widetilde{D},\widetilde{D}_{c}^{\circ}]\).
Proof.: By the identities in Proposition 7.2 we have
\[[\widetilde{D}_{c},\widetilde{D}^{\circ}]=i[[\widetilde{D},\mathcal{H}],\widetilde{D}^{\circ}]\]
and
\[[\widetilde{D},\widetilde{D}_{c}^{\circ}]=-i[\widetilde{D},[\widetilde{D}^{\circ},\mathcal{H}]].\]
By use of Lemma 6.10 it is quickly checked that the difference
\[[\widetilde{D}_{c},\widetilde{D}^{\circ}]-[\widetilde{D},\widetilde{D}_{c}^{\circ}]=i[[\widetilde{D},\widetilde{D}^{\circ}],\mathcal{H}]=0.\qed\]
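In detail, the quick check is an instance of the Jacobi identity: by Proposition 7.2,

\[[[\widetilde{D},\mathcal{H}],\widetilde{D}^{\circ}]+[[\mathcal{H},\widetilde{D}^{\circ}],\widetilde{D}]+[[\widetilde{D}^{\circ},\widetilde{D}],\mathcal{H}]=0\]

is exactly the statement that \(-i\big{(}[\widetilde{D}_{c},\widetilde{D}^{\circ}]-[\widetilde{D},\widetilde{D}_{c}^{\circ}]\big{)}+[[\widetilde{D}^{\circ},\widetilde{D}],\mathcal{H}]=0\), and the last bracket vanishes by Lemma 6.10.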
As a consequence of Proposition 7.2 we recover the _almost Kahler identities_. (See [10], [11]).
**Proposition 7.4**.: Let \(M\) be an almost Kahler manifold. We have the following generalizations of the Kahler identities:
\[[\Lambda,\overline{\delta}]=-i\delta^{*}\qquad\qquad[L,\delta^{*}]=i\overline{\delta}\] \[[\Lambda,\delta]=i\overline{\delta}^{*}\qquad\qquad[L,\overline{\delta}^{*}]=-i\delta\] \[[\Lambda,\delta^{*}]=[\Lambda,\overline{\delta}^{*}]=0\qquad\qquad[L,\delta]=[L,\overline{\delta}]=0.\]
Proof.: By the identity \([\widetilde{D},\mathcal{H}]=-i\widetilde{D}_{c}\) we have that \([d+d^{*},i(\Lambda-L)]=i(d_{c}+d_{c}^{*})\) or equivalently
\[i[d,\Lambda]+i[d^{*},\Lambda]-i[d,L]-i[d^{*},L]=id_{c}+id_{c}^{*}.\]
By considering degree of each operator in the above equality we obtain
\[[d,\Lambda]=d_{c}^{*}\ \ \ \ [d^{*},L]=-d_{c}\ \ \ \ \ \text{and}\ \ \ \ [d,L]=[d^{*},\Lambda]=0.\]
Furthermore since
\[[\Lambda,d]=[\Lambda,\delta+\overline{\delta}]=[\Lambda,\partial+\overline{ \mu}+\overline{\partial}+\mu]=[\Lambda,\partial]+[\Lambda,\overline{\mu}]+[ \Lambda,\overline{\partial}]+[\Lambda,\mu]\]
the above implies \([\Lambda,d]=-d_{c}^{*}=i(\overline{\delta}^{*}-\delta^{*})=i(\overline{\partial}^{*}+\mu^{*}-\partial^{*}-\overline{\mu}^{*})\) as well. The argument is again completed by considering the effect of each operator in the equality on bidegree.
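For example, \(\overline{\delta}=\overline{\partial}+\mu\) has components of bidegree \((0,1)\) and \((-1,2)\), so matching the corresponding components of the last equality gives

\[[\Lambda,\overline{\partial}]=-i\partial^{*}\qquad\text{and}\qquad[\Lambda,\mu]=-i\overline{\mu}^{*},\]

which sum to the stated identity \([\Lambda,\overline{\delta}]=-i\delta^{*}\).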
It is quick to check, using the identities of Proposition 7.4 that \(\Delta_{\delta}=\Delta_{\overline{\delta}}\) in the almost Kahler setting. (See, for instance, [14, Proposition 6.2]). Hence by Proposition 6.12 we have
**Corollary 7.5**.: For an almost Kahler manifold \(M\) we have through the identification \(\Gamma(\mathbb{C}l(M))\cong\Omega_{\mathbb{C}}(M)\) that
\[\Delta_{\mathfrak{D}}=2\Delta_{\delta}.\]
**Proposition 7.6**.: On any almost Kahler manifold \(M\) we have the following identities
\[\left\{\mathcal{L}+\overline{\mathcal{L}},\widetilde{D}\right\}=\widetilde{D}^{\circ}\qquad\qquad\left\{\mathcal{L}-\overline{\mathcal{L}},\widetilde{D}\right\}=-i\widetilde{D}_{c}^{\circ}\] \[\left\{\mathcal{L}+\overline{\mathcal{L}},\widetilde{D}^{\circ}\right\}=\widetilde{D}\qquad\qquad\left\{\mathcal{L}-\overline{\mathcal{L}},\widetilde{D}^{\circ}\right\}=i\widetilde{D}_{c}\] \[\left\{\mathcal{L}+\overline{\mathcal{L}},\widetilde{D}_{c}\right\}=\widetilde{D}_{c}^{\circ}\qquad\qquad\left\{\mathcal{L}-\overline{\mathcal{L}},\widetilde{D}_{c}\right\}=i\widetilde{D}^{\circ}.\]
Proof.: We have through the identification \(\Gamma(\mathbb{C}l(M))\cong\Omega_{\mathbb{C}}(M)\) that
\[\left\{\mathcal{L}+\overline{\mathcal{L}},\widetilde{D}\right\} =\alpha H(d+d^{*})+(d+d^{*})\alpha H\] \[=\alpha[H,d+d^{*}]=\alpha([H,d]+[H,d^{*}])\] \[=d\alpha-d^{*}\alpha\] \[=\widetilde{D}^{\circ}\]
and
\[\left\{\mathcal{L}-\overline{\mathcal{L}},\widetilde{D}\right\} =-i\left(\alpha(\Lambda+L)(d+d^{*})+(d+d^{*})\alpha(\Lambda+L) \right)=-i\alpha[(\Lambda+L),d+d^{*}]\] \[=-i\alpha[\Lambda,d]-i\alpha[L,d^{*}]=i\alpha d_{c}^{*}-i\alpha d_ {c}\] \[=-i(d_{c}^{*}\alpha-d_{c}\alpha).\]
Hence \(\left\{\mathcal{L}-\overline{\mathcal{L}},\widetilde{D}\right\}=-i\widetilde{D}_{c}^{\circ}\). The remaining identities are again obtained by conjugating with the operators \(J_{\mathrm{alg}}\) and \(\rtimes\).
**Proposition 7.7**.: On any almost Kahler manifold \(M\) the following identities obtain
\[\left\{\mathfrak{D},\mathcal{L}\right\}=0\qquad\qquad\left\{\overline{\mathfrak{D}},\overline{\mathcal{L}}\right\}=0\] \[\left\{\mathfrak{D},\overline{\mathcal{L}}\right\}=\mathfrak{D}^{\circ}\qquad\qquad\left\{\overline{\mathfrak{D}},\mathcal{L}\right\}=\overline{\mathfrak{D}}^{\circ}\] \[[\mathcal{H},\mathfrak{D}]=\mathfrak{D}.\]
Proof.: We observe that by Proposition 7.6
\[\left\{\mathfrak{D},\mathcal{L}\right\} =\tfrac{1}{2}\big{(}(\widetilde{D}+i\widetilde{D}_{c})\mathcal{L}+\mathcal{L}(\widetilde{D}+i\widetilde{D}_{c})\big{)}=\tfrac{1}{2}\big{(}\left\{\widetilde{D},\mathcal{L}\right\}+i\left\{\widetilde{D}_{c},\mathcal{L}\right\}\big{)}\] \[=\tfrac{1}{4}\big{(}(\widetilde{D}^{\circ}-i\widetilde{D}_{c}^{\circ})+i(\widetilde{D}_{c}^{\circ}+i\widetilde{D}^{\circ})\big{)}=0.\]
The identity \(\left\{\mathfrak{D},\overline{\mathcal{L}}\right\}=\mathfrak{D}^{\circ}\) follows similarly. By Proposition 7.2
\[\left[\mathcal{H},\mathfrak{D}\right] =\left[\mathcal{H},\tfrac{1}{2}(\widetilde{D}+i\widetilde{D}_{c} )\right]=\tfrac{1}{2}([\mathcal{H},\widetilde{D}]+i[\mathcal{H},\widetilde{D} _{c}])\] \[=\tfrac{1}{2}(\widetilde{D}+i\widetilde{D}_{c})=\mathfrak{D}.\]
The remaining identities follow by conjugating by complex conjugation and Lemma 6.14.
**Proposition 7.8**.: On any almost Kahler manifold \(M\) we have
\[\left[\mathcal{H},\Delta_{\mathfrak{D}}\right]=0,\qquad\left[\mathcal{L},\Delta _{\mathfrak{D}}\right]=0\quad\text{and}\quad\left[\overline{\mathcal{L}}, \Delta_{\mathfrak{D}}\right]=0.\]
Proof.: Using Proposition 7.7 and an identical argument to the proof of Proposition 6.16 we obtain
\[[\mathcal{H},\Delta_{\mathfrak{D}}]=0\text{ and }[\mathcal{L},\Delta_{ \mathfrak{D}}]=[\overline{\mathfrak{D}}^{\circ},\mathfrak{D}].\]
By definition of \(\mathfrak{D}\) and \(\overline{\mathfrak{D}}^{\circ}\) we see that
\[[\overline{\mathfrak{D}}^{\circ},\mathfrak{D}]=\tfrac{1}{4}\left([\widetilde {D}^{\circ},\widetilde{D}]+[\widetilde{D}_{c}^{\circ},\widetilde{D}_{c}]-i[ \widetilde{D}_{c}^{\circ},\widetilde{D}]+i[\widetilde{D}^{\circ},\widetilde{D }_{c}]\right)\]
and so the result follows by Lemmas 6.10 and 7.3. That is
\[[\widetilde{D},\widetilde{D}^{\circ}]=0\quad\text{ and }\quad[\widetilde{D}_{c}, \widetilde{D}^{\circ}]-[\widetilde{D}_{c}^{\circ},\widetilde{D}]=0.\]
## 8. Clifford Harmonics
Let \(M\) be a compact almost Hermitian manifold. For any finite collection of operators \(T_{j}\), for \(j=1,\ldots,n\), on \(\Gamma\mathbb{C}l(M)\cong\Omega_{\mathbb{C}}(M)\) we have
\[\varphi\in\ker\left(\sum_{j=1}^{n}\Delta_{T_{j}}\right)\text{ \ if and only if \ \ }0=\sum_{j=1}^{n}||T_{j}\varphi||^{2}+||T_{j}^{*}\varphi||^{2}.\]
That is, \(\varphi\in\ker\left(\sum_{j=1}^{n}\Delta_{T_{j}}\right)\) if and only if \(\varphi\in\bigcap_{j=1}^{n}\ker(T_{j})\cap\ker(T_{j}^{*})\).
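This is the usual integration by parts: on a compact manifold,

\[\langle\Delta_{T}\varphi,\varphi\rangle=\langle T^{*}\varphi,T^{*}\varphi\rangle+\langle T\varphi,T\varphi\rangle=||T^{*}\varphi||^{2}+||T\varphi||^{2},\]

so \(\Delta_{T}\varphi=0\) forces \(T\varphi=T^{*}\varphi=0\), and the converse is immediate.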
**Definition 8.1**.: For an elliptic differential operator \(T\) on \(\mathcal{A}=\Gamma\mathbb{C}l(M)\) or \(\Omega_{\mathbb{C}}(M)\) we define
\[\boldsymbol{\mathcal{H}}_{T}=\ker(\Delta_{T}).\]
Furthermore we define
\[\boldsymbol{\mathcal{H}}_{T}^{r,s}=\boldsymbol{\mathcal{H}}_{T}\cap\mathcal{A }^{r,s}.\]
Recall from Theorem 2.8 the Hodge automorphism \(g=\exp(\frac{-\pi i}{4}H)\exp(\frac{\pi}{4}(\Lambda-L))\). By Section 2.4 we rewrite
\[g=\exp\big{(}-\tfrac{\pi i}{4}\alpha(\mathcal{L}+\overline{\mathcal{L}}) \big{)}\text{exp}\big{(}-\tfrac{\pi i}{4}\mathcal{H}\big{)}.\]
**Proposition 8.2**.: Let \(M\) be a compact almost Hermitian manifold. The action of the Hodge automorphism \(g\) on \(\Gamma(\mathbb{C}l(M))\) preserves \(\boldsymbol{\mathcal{H}}_{\mathfrak{B}}\cap\boldsymbol{\mathcal{H}}_{ \mathfrak{B}^{\circ}}\).
Proof.: By Proposition 6.16 we have
\[[\alpha(\mathcal{L}+\overline{\mathcal{L}}),\Delta_{\mathfrak{B}}]=\alpha[\mathfrak{B}^{\circ},\overline{\mathfrak{B}}]+\alpha[\overline{\mathfrak{B}}^{\circ},\mathfrak{B}],\qquad\quad[\alpha(\mathcal{L}+\overline{\mathcal{L}}),\Delta_{\mathfrak{B}^{\circ}}]=-\alpha[\mathfrak{B}^{\circ},\overline{\mathfrak{B}}]-\alpha[\overline{\mathfrak{B}}^{\circ},\mathfrak{B}]\]
\[\text{ and }\quad[\mathcal{H},\Delta_{\mathfrak{B}}]=0.\]
The first two equalities imply \([\alpha(\mathcal{L}+\overline{\mathcal{L}}),\Delta_{\mathfrak{B}}+\Delta_{ \mathfrak{B}^{\circ}}]=0\). Since \(\Delta_{\mathfrak{B}}+\Delta_{\mathfrak{B}^{\circ}}\) commutes with \(\alpha(\mathcal{L}+\overline{\mathcal{L}})\) and \(\mathcal{H}\), and as \(g\) is the product of the exponentials of these operators, we have
\[g\big{(}\Delta_{\mathfrak{B}}+\Delta_{\mathfrak{B}^{\circ}}\big{)}=\big{(} \Delta_{\mathfrak{B}}+\Delta_{\mathfrak{B}^{\circ}}\big{)}g.\]
This gives the result.
**Proposition 8.3**.: Let \(M\) be a compact almost Kahler manifold. The action of the Hodge automorphism \(g\) on \(\Gamma(\mathbb{C}l(M))\) preserves \(\boldsymbol{\mathcal{H}}_{\mathfrak{D}}\).
Proof.: By Proposition 7.8 we have
\[[\alpha(\mathcal{L}+\overline{\mathcal{L}}),\Delta_{\mathfrak{D}}]=0\quad\quad \text{ and }\quad[\mathcal{H},\Delta_{\mathfrak{D}}]=0.\]
Since \(\Delta_{\mathfrak{D}}\) commutes with \(\alpha(\mathcal{L}+\overline{\mathcal{L}})\) and \(\mathcal{H}\), and as \(g\) is the product of the exponentials of these operators, we have
\[g(\Delta_{\mathfrak{D}})=(\Delta_{\mathfrak{D}})g.\]
This again gives the result.
**Proposition 8.4**.: Let \(M\) be a compact almost Hermitian manifold. Complex conjugation \(c\) on \(\mathbb{C}l(M)\) induces isomorphisms
\[c:\boldsymbol{\mathcal{H}}_{\mathfrak{B}}^{r,s}\cap\boldsymbol{\mathcal{H}}_{ \mathfrak{B}^{\circ}}^{r,s}\to\boldsymbol{\mathcal{H}}_{\mathfrak{B}}^{-r,-s} \cap\boldsymbol{\mathcal{H}}_{\mathfrak{B}^{\circ}}^{-r,-s},\]
\[c:\boldsymbol{\mathcal{H}}_{\mathfrak{D}}^{r,s}\longrightarrow\boldsymbol{ \mathcal{H}}_{\mathfrak{D}}^{-r,-s}.\]
Proof.: We recall that by Lemma 6.14
\[c\ \mathfrak{D}c=\overline{\mathfrak{D}},\qquad c\ \mathfrak{B}c=\overline{ \mathfrak{B}},\qquad c\ \mathfrak{B}^{\circ}c=\overline{\mathfrak{B}^{\circ}}.\]
Hence \(c\ \big{(}\Delta_{\mathfrak{D}}\big{)}c=\Delta_{\mathfrak{D}}\) and \(c\ \big{(}\Delta_{\mathfrak{B}}+\Delta_{\mathfrak{B}^{\circ}}\big{)}c=\Delta_{ \mathfrak{B}}+\Delta_{\mathfrak{B}^{\circ}}\). That is
\[c:\Gamma(\mathbb{C}l^{r,s}(M))\to\Gamma(\mathbb{C}l^{-r,-s}(M))\]
descends to complex antilinear isomorphisms
\[c:\boldsymbol{\mathcal{H}}_{\mathfrak{D}}^{r,s}\to\boldsymbol{\mathcal{H}}_{ \mathfrak{D}}^{-r,-s}\ \text{and}\quad c:\boldsymbol{\mathcal{H}}_{\mathfrak{B}}^{r,s}\cap \boldsymbol{\mathcal{H}}_{\mathfrak{B}^{\circ}}^{r,s}\to\boldsymbol{ \mathcal{H}}_{\mathfrak{B}}^{-r,-s}\cap\boldsymbol{\mathcal{H}}_{\mathfrak{B}^ {\circ}}^{-r,-s}.\qed\]
**Proposition 8.5**.: Let \(M\) be a compact almost Hermitian manifold. The transpose map \(\rtimes\) induces isomorphisms
\[\begin{split}\rtimes&:\boldsymbol{\mathcal{H}}_{ \mathfrak{B}}^{r,s}\cap\boldsymbol{\mathcal{H}}_{\mathfrak{B}^{\circ}}^{r,s} \to\boldsymbol{\mathcal{H}}_{\mathfrak{B}}^{r,-s}\cap\boldsymbol{\mathcal{H} }_{\mathfrak{B}^{\circ}}^{r,-s},\\ \rtimes&:\boldsymbol{\mathcal{H}}_{\mathfrak{D}}^{r,s }\longrightarrow\boldsymbol{\mathcal{H}}_{\mathfrak{D}}^{r,-s}.\end{split}\]
Proof.: We observe \(\rtimes\big{(}\Delta_{\mathfrak{D}}\big{)}\rtimes=\Delta_{\mathfrak{D}^{ \circ}}\) and that \(\rtimes\big{(}\Delta_{\mathfrak{B}}+\Delta_{\mathfrak{B}^{\circ}}\big{)} \rtimes=\Delta_{\mathfrak{B}}+\Delta_{\mathfrak{B}^{\circ}}\). By Proposition 6.12 we have \(\Delta_{\mathfrak{D}}=\Delta_{\mathfrak{D}^{\circ}}\) and so the complex algebra anti-automorphism
\[\rtimes:\Gamma(\mathbb{C}l^{r,s})\to\Gamma(\mathbb{C}l^{r,-s})\]
descends to complex linear isomorphisms
\[\rtimes:\boldsymbol{\mathcal{H}}_{\mathfrak{D}}^{r,s}\to\boldsymbol{\mathcal{ H}}_{\mathfrak{D}}^{r,-s}\ \text{and}\quad\rtimes:\boldsymbol{\mathcal{H}}_{\mathfrak{B}}^{r,s}\cap \boldsymbol{\mathcal{H}}_{\mathfrak{B}^{\circ}}^{r,s}\to\boldsymbol{\mathcal{ H}}_{\mathfrak{B}}^{r,-s}\cap\boldsymbol{\mathcal{H}}_{\mathfrak{B}^{\circ}}^{r,-s}.\qed\]
**Theorem 8.6**.: Let \(M\) be a compact almost Hermitian manifold. The Lie algebra generated by \(\big{(}\mathcal{H},\mathcal{L},\overline{\mathcal{L}}\big{)}\) on \(\Gamma\mathbb{C}l(M)\) defines a finite dimensional \(sl(2)\) representation on the space \(\boldsymbol{\mathcal{H}}_{\mathfrak{B}}\cap\boldsymbol{\mathcal{H}}_{ \mathfrak{B}^{\circ}}\).
Proof.: By Proposition 6.16 we have
\[[\mathcal{L},\Delta_{\mathfrak{B}}+\Delta_{\mathfrak{B}^{\circ}}]=0,\ \ \ [ \overline{\mathcal{L}},\Delta_{\mathfrak{B}}+\Delta_{\mathfrak{B}^{\circ}}]=0, \ \ \ \text{and}\ [\mathcal{H},\Delta_{\mathfrak{B}}+\Delta_{\mathfrak{B}^{\circ}}]=0.\]
Hence each of the operators \(\mathcal{L},\overline{\mathcal{L}},\mathcal{H}\) preserve \(\boldsymbol{\mathcal{H}}_{\mathfrak{B}}\cap\boldsymbol{\mathcal{H}}_{ \mathfrak{B}^{\circ}}\).
**Theorem 8.7**.: Let \(M\) be a compact almost Hermitian manifold. Through the isomorphism
\[\Gamma\mathbb{C}l(M)\cong\Omega_{\mathbb{C}}(M)\]
we have
\[\boldsymbol{\mathcal{H}}_{\mathfrak{B}}\cap\boldsymbol{\mathcal{H}}_{ \mathfrak{B}^{\circ}}\cong\boldsymbol{\mathcal{H}}_{\varepsilon}\cap \boldsymbol{\mathcal{H}}_{\overline{\varepsilon}}.\]
Proof.: By Proposition 6.13 we have \(2\big{(}\Delta_{\varepsilon}+\Delta_{\overline{\varepsilon}}\big{)}=\Delta_{ \mathfrak{B}}+\Delta_{\mathfrak{B}^{\circ}}\).
**Theorem 8.8**.: Let \(M\) be a compact almost Kahler manifold. The Lie algebra generated by \(\big{(}\mathcal{H},\mathcal{L},\overline{\mathcal{L}}\big{)}\) on \(\Gamma\mathbb{C}l(M)\) defines a finite dimensional \(sl(2)\) representation on the space \(\boldsymbol{\mathcal{H}}_{\mathfrak{D}}\).
Proof.: By Proposition 7.8 we have
\[[\mathcal{L},\Delta_{\mathfrak{D}}]=0,\ \ [\overline{\mathcal{L}},\Delta_{ \mathfrak{D}}]=0,\ \ \ \text{and}\ [\mathcal{H},\Delta_{\mathfrak{D}}]=0.\]
Hence \(\mathcal{L},\overline{\mathcal{L}},\mathcal{H}\) preserve \(\boldsymbol{\mathcal{H}}_{\mathfrak{D}}\).
**Theorem 8.9**.: Let \(M\) be a compact almost Kahler manifold. Through the isomorphism
\[\Gamma\mathbb{C}l(M)\cong\Omega_{\mathbb{C}}(M)\]
we have
\[\boldsymbol{\mathcal{H}}_{\mathfrak{D}}\cong\boldsymbol{\mathcal{H}}_{\delta}.\]
Proof.: By Corollary 7.5 we have \(2\Delta_{\delta}=\Delta_{\mathfrak{D}}\).
**Corollary 8.10**.: On any compact almost Hermitian manifold \(M\) we have that the Hodge automorphism induces an isomorphism
\[\boldsymbol{\mathcal{H}}_{\mathfrak{B}}^{q-p,n-p-q}\cap\boldsymbol{\mathcal{H}}_{\mathfrak{B}^{\circ}}^{q-p,n-p-q}\cong\boldsymbol{\mathcal{H}}_{\varepsilon}^{p,q}\cap\boldsymbol{\mathcal{H}}_{\overline{\varepsilon}}^{p,q}.\]
Moreover, on any compact almost Kahler manifold \(M\) we have that
\[\boldsymbol{\mathcal{H}}_{\mathfrak{D}}^{q-p,n-p-q}\cong\boldsymbol{\mathcal{ H}}_{\delta}^{p,q}.\]
Proof.: By Theorem 2.8 we have
\[\Gamma(\mathbb{C}l^{q-p,n-p-q}(M))\stackrel{{ g}}{{\cong}} \Omega_{\mathbb{C}}^{p,q}(M).\]
The result then follows from Propositions 8.2 and 8.3 together with the previous two theorems.
**Example 8.11**.: Let \(M\) be the Kodaira-Thurston manifold. Let \(x,y,z,w\) be an orthonormal left-invariant coframe on \(M\) with \(dz=x\wedge y\) and \(dx=dy=dw=0\). We define an almost complex structure \(J\) by \(Jy=x\) and \(Jz=w\). We compute \(d\omega=x\wedge y\wedge w\neq 0\) and \(N_{J}=0\), so that this almost complex structure is integrable but does not give an almost Kahler structure on \(M\).
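Concretely, in this coframe the fundamental form is (up to the sign conventions fixed above) \(\omega=x\wedge y+z\wedge w\), so that

\[d\omega=d(x\wedge y)+dz\wedge w-z\wedge dw=x\wedge y\wedge w,\]

since \(dz=x\wedge y\) and \(dx=dy=dw=0\).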
Recalling the operators \(\varepsilon=\partial-i\rho_{\partial}\), \(\overline{\varepsilon}=\overline{\partial}+i\overline{\rho_{\partial}}\) and the spaces
\[\boldsymbol{\mathcal{H}}_{\varepsilon}^{p,q}(M)\cap\boldsymbol{\mathcal{H}}_ {\overline{\varepsilon}}^{p,q}(M)=\ker(\varepsilon)\cap\ker(\overline{ \varepsilon})\cap\ker(\varepsilon^{*})\cap\ker(\overline{\varepsilon}^{*}) \cap\Omega_{\mathbb{C}}^{p,q}(M),\]
we compute the "Hodge diamond" of their dimensions to be:
\[\begin{array}{ccccc}&&\mathbf{1}&&\\ &\mathbf{1}&&\mathbf{1}&\\ \mathbf{0}&&\mathbf{2}&&\mathbf{0}\\ &\mathbf{1}&&\mathbf{1}&\\ &&\mathbf{1}&&\end{array}\]
## Appendix: Hermitian Connections and Dirac Operators
For the convenience of the reader we give here an exposition, in our notation, of the parts of Gauduchon's paper [10] relevant for us.
**Lemma 8.12**.: For any vector fields \(X,Y,Z\) on an almost Hermitian manifold \(M\) we have
\[(\widetilde{\nabla}_{X}\omega)(Y,Z)=\left\langle(\widetilde{\nabla}_{X}J)Y,Z \right\rangle.\]
Proof.: For vector fields \(X,Y,Z\in\Gamma(TM)\) we observe
\[(\widetilde{\nabla}_{X}\omega)(Y\wedge Z) =X(\omega(Y\wedge Z))-\omega(\widetilde{\nabla}_{X}(Y\wedge Z))\] \[=X\big{(}\langle JY,Z\rangle\big{)}-\omega\big{(}(\widetilde{ \nabla}_{X}Y)\wedge Z+Y\wedge(\widetilde{\nabla}_{X}Z)\big{)}\] \[=\langle\widetilde{\nabla}_{X}JY,Z\rangle+\langle JY,\widetilde{ \nabla}_{X}Z\rangle-\langle J\widetilde{\nabla}_{X}Y,Z\rangle-\langle JY, \widetilde{\nabla}_{X}Z\rangle\] \[=\langle\widetilde{\nabla}_{X}JY,Z\rangle-\langle J\widetilde{ \nabla}_{X}Y,Z\rangle=\left\langle(\widetilde{\nabla}_{X}J)Y,Z\right\rangle.\qed\]
In fact, the elements \(N,d\omega\) and \(\widetilde{\nabla}\omega\) of \(\Omega^{2}(TM)\) are related by the following well known fundamental identity of almost Hermitian geometry (see for instance [11], where disparities in the conventional definitions of \(N\) and the exterior derivative \(d\) affect the coefficients in the statement).
**Lemma 8.13**.: [11] On any almost Hermitian manifold \(M\) we have
\[\tfrac{1}{2}\widetilde{\nabla}\omega(X,Y,Z)=\tfrac{1}{4}d\omega(X,Y,Z)-\tfrac {1}{4}d\omega(X,JY,JZ)+N(JX,Y,Z).\]
**Definition 8.14**.: Let \(\nabla\) be an affine metric connection. We define the _potential_\(A^{\nabla}\in\Omega^{2}(TM)\) of \(\nabla\) by
\[A^{\nabla}(X,Y,Z)=\big{\langle}\nabla_{X}(Y)-\widetilde{\nabla}_{X}(Y),Z\big{\rangle}.\]
Note that an affine connection \(\nabla\) is metric if and only if \(A^{\nabla}\in\Omega^{2}(TM)\). We write \(A=A^{\nabla}\) when the connection in question is clear from context.
**Proposition 8.15**.: [10] An affine metric connection \(\nabla\) on \(M\) is Hermitian if and only if
\[A(X,JY,Z)+A(X,Y,JZ)=-(\widetilde{\nabla}\omega)(X,Y,Z).\]
Proof.: By Lemma 8.12 we have for all vector fields \(X,Y,Z\) that
\[(\widetilde{\nabla}\omega)(X,Y,Z)=\big{\langle}(\widetilde{\nabla}_{X}J)Y,Z \big{\rangle}.\]
Using that \(J\) is orthogonal and the definition of \(A\) we observe
\[A(X,JY,Z)+A(X,Y,JZ)=\big{\langle}(\nabla_{X}J)Y,Z\big{\rangle}-\big{\langle}( \widetilde{\nabla}_{X}J)Y,Z\big{\rangle}.\]
Consequently
\[A(X,JY,Z)+A(X,Y,JZ)+(\widetilde{\nabla}\omega)(X,Y,Z)=0\text{ if and only if }\big{\langle}(\nabla_{X}J)Y,Z\big{\rangle}=0\]
for all vector fields \(X,Y,Z\). That is, if and only if \((\nabla J)=0\).
**Definition 8.16**.: [10] The subspaces \(\Omega^{2,0}(TM),\ \Omega^{0,2}(TM),\ \Omega^{1,1}(TM)\) of \(\Omega^{2}(TM)\) are defined by
\[\phi\in\Omega^{2,0}(TM)\text{ if and only if }\phi(X,JY,Z)=-\phi(JX,Y,Z).\] \[\phi\in\Omega^{0,2}(TM)\text{ if and only if }\phi(X,JY,Z)=\ \ \phi(JX,Y,Z).\] \[\phi\in\Omega^{1,1}(TM)\text{ if and only if }\phi(X,JY,JZ)=\phi(X,Y,Z).\]
There are projections
\[p_{j,k}:\Omega^{2}(TM)\to\Omega^{j,k}(TM)\text{ with }j,k\in\{0,1,2\}\text{ and }j+k=2\]
defined for any \(\phi\in\Omega(TM)\) by
\[p_{2,0}(\phi)(X,Y,Z) =\tfrac{1}{4}\big{(}\phi(X,Y,Z)-\phi(X,JY,JZ)+\phi(JX,Y,JZ)+\phi( JX,JY,Z)\big{)},\] \[p_{0,2}(\phi)(X,Y,Z) =\tfrac{1}{4}\big{(}\phi(X,Y,Z)-\phi(X,JY,JZ)-\phi(JX,Y,JZ)-\phi( JX,JY,Z)\big{)},\] \[p_{1,1}(\phi)(X,Y,Z) =\tfrac{1}{2}\big{(}\phi(X,Y,Z)+\phi(X,JY,JZ)\big{)},\]
so that the space \(\Omega^{2}(TM)\) is endowed with a natural orthogonal splitting
\[\Omega^{2}(TM)=\Omega^{2,0}(TM)\oplus\Omega^{1,1}(TM)\oplus\Omega^{0,2}(TM).\]
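As a quick consistency check of these formulas, the three projections sum to the identity: the terms involving \(JX\) cancel between \(p_{2,0}\) and \(p_{0,2}\), leaving
\[(p_{2,0}+p_{0,2}+p_{1,1})(\phi)(X,Y,Z)=\tfrac{1}{2}\big{(}\phi(X,Y,Z)-\phi(X,JY,JZ)\big{)}+\tfrac{1}{2}\big{(}\phi(X,Y,Z)+\phi(X,JY,JZ)\big{)}=\phi(X,Y,Z).\]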
As an example, the Nijenhuis tensor \(N\in\Omega^{0,2}(TM)\) as \(N(X,JY,Z)=N(JX,Y,Z)\).
**Definition 8.17**.: For \(\phi\in\Omega^{2}(TM)\) we set \(p_{j,k}(\phi)=\phi^{j,k}\).
**Lemma 8.18**.: [10] For any \(\phi\in\Omega^{1,1}(TM)\) we have \(\phi(X,JY,Z)+\phi(X,Y,JZ)=0\). Consequently
\[(\widetilde{\nabla}\omega)^{1,1}=0.\]
Proof.: We have \(\phi(X,JY,Z)=\phi(X,J^{2}Y,JZ)=-\phi(X,Y,JZ)\) giving the first result. Writing \(A=A^{2,0}+A^{1,1}+A^{0,2}\), Proposition 8.15 implies
\[0=A^{1,1}(X,JY,Z)+A^{1,1}(X,Y,JZ)=-(\widetilde{\nabla}\omega)^{1,1}(X,Y,Z).\qed\]
The space \(\Omega^{2}(TM)\) also has two other distinguished projections which, a priori, do not involve the almost complex structure \(J\).
**Definition 8.19**.: _[_G97_]_ _We define the operator \(\mathcal{P}\) on \(\Omega^{2}(TM)\) by_
\[\mathcal{P}(\phi)(X,Y,Z)=\tfrac{1}{3}\big{(}\phi(X,Y,Z)+\phi(Y,Z,X)+\phi(Z,X,Y) \big{)}\]
_and the operator \(\mathcal{Q}\), defined by_
\[\mathcal{Q}(\phi)(X,Y,Z)=\tfrac{1}{2n-1}\big{(}\sum_{j}\phi(v_{j},v_{j},Z) \langle X,Y\rangle-\sum_{j}\phi(v_{j},v_{j},Y)\langle Z,X\rangle\big{)}\]
_where \(v_{1},\ldots,v_{2n}\) is a local orthonormal frame._
Evidently \(\mathcal{P}\) restricts to the identity on \(\Omega^{3}(M)\subset\Omega^{2}(TM)\), and so \(\Omega^{3}(M)\) can be identified with the image of \(\mathcal{P}\) on \(\Omega^{2}(TM)\). We also note that the map \(\mathcal{Q}\) is actually the composition of the map onto \(1\)-forms, \(r:\Omega^{2}(TM)\to\Omega^{1}(M)\), defined by
\[r(\phi)(X)=\sum_{j}\phi(v_{j},v_{j},X)\]
with the map \(i:\Omega^{1}(M)\to\Omega^{2}(TM)\) defined by
\[i(\varphi)(X,Y,Z)=\tfrac{1}{2n-1}\big{(}\varphi(Z)\langle X,Y\rangle-\varphi( Y)\langle Z,X\rangle\big{)}\]
so that \(i\circ r=\mathcal{Q}\). It is also easy to see that \(r\circ i\) is the identity map on \(\Omega^{1}(M)\), so that \(i\) is an inclusion of \(1\)-forms into \(\Omega(TM)\) and \(r\) is a retraction of \(\Omega^{2}(TM)\) onto \(1\)-forms. We conclude \(\mathcal{P}^{2}=\mathcal{P}\) and \(\mathcal{Q}^{2}=(ir)(ir)=i(ri)r=ir=\mathcal{Q}\) so that \(\mathcal{P}\) and \(\mathcal{Q}\) are projections.
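Explicitly, for any \(\varphi\in\Omega^{1}(M)\) and any vector field \(X\) we have
\[(r\circ i)(\varphi)(X)=\tfrac{1}{2n-1}\sum_{j}\big{(}\varphi(X)\langle v_{j},v_{j}\rangle-\varphi(v_{j})\langle X,v_{j}\rangle\big{)}=\tfrac{1}{2n-1}\big{(}2n\varphi(X)-\varphi(X)\big{)}=\varphi(X),\]
where we use that \(\sum_{j}\langle X,v_{j}\rangle v_{j}=X\).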
**Lemma 8.20**.: \(\mathcal{P}\mathcal{Q}=\mathcal{Q}\mathcal{P}=0\)_._
Proof.: Evidently \(\mathcal{Q}\mathcal{P}=0\) as \(\mathcal{Q}\) vanishes on elements of \(\Omega^{3}(M)\). That \(\mathcal{P}\mathcal{Q}=0\) follows from
\[3\,(\mathcal{P}\circ\mathcal{Q})\,(\phi)(X,Y,Z)=\mathcal{Q}(\phi)(X,Y,Z)+\mathcal{Q}(\phi)(Y,Z,X)+\mathcal{Q}(\phi)(Z,X,Y)\]
and applying the definition of \(\mathcal{Q}(\phi)\).
Given two projections \(\mathcal{P},\mathcal{Q}\) on a vector space \(V\), satisfying \(\mathcal{P}\mathcal{Q}=\mathcal{Q}\mathcal{P}=0\) one can conclude \(V=\operatorname{Im}(\mathcal{P})\oplus\operatorname{Im}(\mathcal{Q})\oplus \big{(}\ker(\mathcal{Q})\cap\ker(\mathcal{P})\big{)}\). Thus any \(\phi\in\Omega(TM)\) can be uniquely written as
\[\phi=\mathcal{P}\phi+\mathcal{Q}\phi+\phi_{0}\]
where \(\phi_{0}\) is the component of \(\phi\) in \(\ker(\mathcal{P})\cap\ker(\mathcal{Q})\). Moreover we have the following identification
**Proposition 8.21**.: _[_G97_]__\(\Omega^{2}(TM)\cong\Omega^{1}(M)\oplus\Omega^{3}(M)\oplus\big{(}\ker( \mathcal{Q})\cap\ker(\mathcal{P})\big{)}\)._
Proof.: This follows from the above remarks, along with the identifications \(\Omega^{1}(M)\cong\operatorname{Im}(\mathcal{Q})\) and \(\Omega^{3}(M)\cong\operatorname{Im}(\mathcal{P})\).
**Proposition 8.22**.: _[_G97_]_ _Let \(T\) be the torsion of an affine metric connection and \(A\) the potential. Then \(A+T=3\mathcal{P}(A)=\frac{3}{2}\mathcal{P}(T)\)._
Proof.: We observe
\[T(X,Y,Z)=\langle X,T(Y,Z)\rangle=\langle X,\nabla_{Y}Z-\nabla_{Z}Y-\widetilde {\nabla}_{Y}Z+\widetilde{\nabla}_{Z}Y\rangle=A(Y,Z,X)+A(Z,X,Y)\]
so that
\[A(X,Y,Z)+T(X,Y,Z)=A(X,Y,Z)+A(Y,Z,X)+A(Z,X,Y)=3\mathcal{P}(A)(X,Y,Z).\]
Using again that \(T(X,Y,Z)=A(Y,Z,X)+A(Z,X,Y)\) and the definition of \(\mathcal{P}\) one obtains \(3\mathcal{P}(A)=\frac{3}{2}\mathcal{P}(T)\).
Let \(J_{\mathrm{der}}\) denote the extension of \(J\) to the complexified forms, acting on \(1\)-forms through the complexified cotangent bundle \(T_{\mathbb{C}}M^{\vee}\) and extended as a derivation, and let \(E_{\lambda}(J_{\mathrm{der}})\) denote the \(\lambda\) eigenspace of \(J_{\mathrm{der}}\). We recall that for \(k=-n,\ldots,n\) we have
\[E_{ik}(J_{\mathrm{der}})=\bigoplus_{p-q=k}\Omega^{p,q}(M).\]
We define the _real_ subspaces \(E^{+}\) and \(E^{-}\) of \(\Omega^{3}(M)\) by
\[E^{+}=\operatorname{Re}\bigl{(}E_{i}(J_{\mathrm{der}})\oplus E_{-i}(J_{\mathrm{der }})\bigr{)}\ =\ \operatorname{Re}\bigl{(}\Omega^{2,1}(M)\oplus\Omega^{1,2}(M)\bigr{)},\]
\[E^{-}=\operatorname{Re}\bigl{(}E_{3i}(J_{\mathrm{der}})\oplus E_{-3i}(J_{ \mathrm{der}})\bigr{)}=\operatorname{Re}\bigl{(}\Omega^{3,0}(M)\oplus\Omega^{0,3}(M)\bigr{)}\]
so that the space of real \(3\)-forms \(\Omega^{3}(M)=E^{-}\oplus E^{+}\).
**Definition 8.23**.: [G97] For a \(3\)-form \(\varphi\in\Omega^{3}(M)\) we set \(\varphi^{+}\) and \(\varphi^{-}\) to be its components in \(E^{+}\) and \(E^{-}\) respectively.
We note [G97] that on the space \(\Omega^{3}(M)\) the above decompositions are related as follows. For any \(3\)-form \(\varphi\in\Omega^{3}(M)\) one can check that
\[\varphi^{-}=\varphi^{0,2},\]
\[\varphi^{+}=\varphi^{2,0}+\varphi^{1,1}.\]
To illustrate this, if \(\varphi\in E^{-}\) then \(J_{\mathrm{der}}^{2}(\varphi)=-9\varphi\). Expanding \(J_{\mathrm{der}}^{2}(\varphi)\) one gets
\[J_{\mathrm{der}}^{2}\varphi(X,Y,Z)=-3\varphi(X,Y,Z)+2\varphi(JX,JY,Z)+2\varphi( X,JY,JZ)+2\varphi(JX,Y,JZ)\]
and so
\[\varphi(X,Y,Z)=\tfrac{1}{4}\bigl{(}\varphi(X,Y,Z)-\varphi(X,JY,JZ)-\varphi(JX, Y,JZ)-\varphi(JX,JY,Z)\bigr{)}=\varphi^{0,2}(X,Y,Z).\]
We now state and prove two useful results involving the Nijenhuis tensor which we will require.
**Lemma 8.24**.: \(\mathcal{P}N=\tfrac{1}{3}(d_{c}\omega)^{-}\)_._
Proof.: For \(X\in T_{\mathbb{C}}(M)\) consider \(g_{X}=(X,\cdot)\) where \((\cdot,\cdot)\) is the metric complex bilinearly extended to \(T_{\mathbb{C}}(M)\). We observe that \(N(X,Y,Z)\) (and hence \(\mathcal{P}N(X,Y,Z)\)) is non-zero only if the vectors \(X,Y,Z\) are all of pure type \((1,0)\) (or of type \((0,1)\)) in the direct sum decomposition \(T_{\mathbb{C}}(M)=T^{1,0}(M)\oplus T^{0,1}(M)\). We have
\[3\mathcal{P}N(X,Y,Z) =g_{X}(N(Y,Z))+g_{Y}(N(Z,X))+g_{Z}(N(X,Y))\] \[=N^{\vee}(g_{X})(Y,Z)+N^{\vee}(g_{Y})(Z,X)+N^{\vee}(g_{Z})(X,Y)\] \[=\bigl{(}\mu+\overline{\mu}\bigr{)}(g_{X})(Y,Z)+\bigl{(}\mu+ \overline{\mu}\bigr{)}(g_{Y})(X,Z)+\bigl{(}\mu+\overline{\mu}\bigr{)}(g_{Z})( X,Y)\] \[=-g_{X}\bigl{(}[Y,Z]\bigr{)}-g_{Y}\bigl{(}[Z,X]\bigr{)}-g_{Z} \bigl{(}[X,Y]\bigr{)}\]
where in the last equality we use that \(d\varphi(X,Y)=X(\varphi(Y))-Y(\varphi(X))-\varphi[X,Y]\) for any \(\varphi\in\Omega^{1}_{\mathbb{C}}(M)\), and that \((X,Y)=0\) for \(X,Y\) both of type \((1,0)\) or type \((0,1)\). Furthermore on vectors \(X,Y,Z\) all of type \((1,0)\) or \((0,1)\) we have
\[d\omega(X,Y,Z) =X(\omega(Y,Z))-Y(\omega(X,Z))+Z(\omega(X,Y))\] \[-\omega([X,Y],Z)+\omega([X,Z],Y)-\omega([Y,Z],X)\] \[=g_{JX}([Y,Z])+g_{JY}([Z,X])+g_{JZ}([X,Y]).\]
Noticing that
\[d_{c}\omega(X,Y,Z)=-d\omega(JX,JY,JZ)\]
and again using that \(X,Y,Z\) are all of pure type we obtain
\[(d_{c}\omega)^{-}(X,Y,Z)=-g_{X}\bigl{(}[Y,Z]\bigr{)}-g_{Y}\bigl{(}[Z,X]\bigr{)} -g_{Z}\bigl{(}[X,Y]\bigr{)}=3\mathcal{P}N(X,Y,Z).\qed\]
**Proposition 8.25**.: [G97] The subspace \(\Omega^{0,2}(TM)\) is invariant under the map \(\mathcal{P}\). Furthermore the restriction of \(\mathcal{P}\) to \(\Omega^{0,2}(TM)\) has image \(E^{-}\). That is, the restriction is a surjective map
\[\mathcal{P}:\Omega^{0,2}(TM)\to E^{-}.\]
Proof.: We observe \(\phi\in\Omega^{0,2}(TM)\) if and only if
\[\phi(X,Y,Z)=\phi^{0,2}(X,Y,Z)=\tfrac{1}{4}\big{(}\phi(X,Y,Z)-\phi(X,JY,JZ)-\phi( JX,Y,JZ)-\phi(JX,JY,Z)\big{)}\]
or equivalently
\[\phi(X,Y,Z)=-\tfrac{1}{3}\big{(}\phi(X,JY,JZ)+\phi(JX,Y,JZ)+\phi(JX,JY,Z)\big{)}.\]
We compute
\[3\mathcal{P}\phi(X,Y,Z) =\phi(X,Y,Z)+\phi(Y,Z,X)+\phi(Z,X,Y)\] \[=-\tfrac{1}{3}\Big{(}\phi(X,JY,JZ)+\phi(JX,Y,JZ)+\phi(JX,JY,Z)\] \[\qquad+\phi(Y,JZ,JX)+\phi(JY,Z,JX)+\phi(JY,JZ,X)\] \[\qquad+\phi(Z,JX,JY)+\phi(JZ,X,JY)+\phi(JZ,JX,Y)\Big{)}\] \[=-\mathcal{P}\phi(X,JY,JZ)-\mathcal{P}\phi(JX,Y,JZ)-\mathcal{P} \phi(JX,JY,Z)\]
and thus \(\mathcal{P}\phi\in\Omega^{0,2}(TM)\). As \(\mathcal{P}\phi\in\Omega^{3}(M)\) as well we have \(\mathcal{P}\phi\in E^{-}\). Finally if \(\varphi\in E^{-}\) then \(\varphi=\varphi^{0,2}\) and \(\mathcal{P}(\varphi^{0,2})=\mathcal{P}(\varphi)=\varphi\) so that the restriction is surjective.
**Definition 8.26**.: _[_G97_]_ _We define the subspace \(\Omega^{1,1}_{s}(TM)\subset\Omega^{1,1}(TM)\) by_
\[\Omega^{1,1}_{s}(TM)=\ker(\mathcal{P})\cap\Omega^{1,1}(TM).\]
_Furthermore we define \(\Omega^{1,1}_{a}(TM)\) to be its orthogonal complement in \(\Omega^{1,1}(TM)\)._
**Definition 8.27**.: _[_G97_]_ _We define the operator \(\mathcal{M}\) on \(\Omega^{2}(TM)\) by_
\[\mathcal{M}(\phi)(X,Y,Z)=\phi(X,JY,JZ).\]
With this operator in hand, the following two propositions can be checked directly [G97].
**Proposition 8.28**.: _[_G97_]_ _The map \(\mathcal{P}\) restricted to \(\Omega^{2,0}(TM)\) is an isomorphism_
\[\mathcal{P}:\Omega^{2,0}(TM)\xrightarrow{\sim}E^{+}.\]
_Furthermore for any \(\phi\in\Omega^{2,0}(TM)\) we have \(\phi=\tfrac{3}{2}(\mathcal{P}\phi-\mathcal{MP}\phi)\)._
**Proposition 8.29**.: _[_G97_]_ _The map \(\mathcal{P}\) restricted to \(\Omega^{1,1}_{a}(TM)\) is an isomorphism_
\[\mathcal{P}:\Omega^{1,1}_{a}(TM)\xrightarrow{\sim}E^{+}.\]
_Furthermore for any \(\phi\in\Omega^{1,1}_{a}(TM)\) we have \(\phi=\tfrac{3}{4}(\mathcal{P}\phi+\mathcal{MP}\phi)\)._
**Lemma 8.30**.: _For any \(\phi\in\Omega^{2}(TM)\) we have \(\mathcal{P}(\phi^{1,1}_{a})+\mathcal{P}(\phi^{2,0})=(\mathcal{P}\phi)^{+}\)._
Proof.: Evidently \(\phi=\phi^{1,1}+\phi^{2,0}+\phi^{0,2}\) and so
\[\mathcal{P}\phi=\mathcal{P}(\phi^{1,1})+\mathcal{P}(\phi^{2,0})+\mathcal{P}( \phi^{0,2}).\]
Thus, by Propositions 8.28 and 8.29 above we have
\[(\mathcal{P}\phi)^{+}=\mathcal{P}(\phi^{1,1}_{a})+\mathcal{P}(\phi^{2,0}).\qed\]
**Lemma 8.31**.: \(\mathcal{P}(\widetilde{\nabla}\omega)=\tfrac{1}{3}d\omega\)_. In particular_
\[\mathcal{P}(\widetilde{\nabla}\omega^{2,0})=\tfrac{1}{3}d\omega^{+}\quad\text{ and}\quad\mathcal{P}(\widetilde{\nabla}\omega^{0,2})=\tfrac{1}{3}d\omega^{-}.\]
Proof.: The identity \(\mathcal{P}(\widetilde{\nabla}\omega)=\tfrac{1}{3}d\omega\) follows at once from the observation that
\[d\varphi(v_{1},\dots,v_{k+1})=\sum_{j}(-1)^{j+1}\widetilde{\nabla}_{v_{j}} \varphi(v_{1},\dots,\widehat{v_{j}},\dots,v_{k+1})\text{ for any $k$ form $\varphi$}.\]
The remaining identities follow from the maps in Propositions 8.28 and 8.25 and the observation, proven in Lemma 8.18, that \((\widetilde{\nabla}\omega)^{1,1}=0\).
**Lemma 8.32**.: \((d\omega)^{+}=3\mathcal{P}\mathcal{M}((d\omega)^{+})\)_._
Proof.: We have by Proposition 8.28 and Lemma 8.31 that
\[(\widetilde{\nabla}\omega)^{2,0}=\tfrac{3}{2}\big{(}\mathcal{P}(\widetilde{\nabla}\omega^{2,0})-\mathcal{M}(\mathcal{P}(\widetilde{\nabla}\omega^{2,0}))\big{)}=\tfrac{1}{2}\big{(}(d\omega)^{+}-\mathcal{M}(d\omega)^{+}\big{)}.\]
Applying \(\mathcal{P}\) on both sides of the equality, and using Lemma 8.31 once more, we conclude
\[\tfrac{1}{3}(d\omega)^{+}=\mathcal{P}\mathcal{M}(d\omega^{+}).\qed\]
**Lemma 8.33**.: \((d\omega)^{-}(X,Y,Z)=-3(\mathcal{P}T)^{-}(JX,Y,Z).\)__
Proof.: Observe that by Proposition 8.15 we have
\[-(\widetilde{\nabla}\omega)^{0,2}(X,Y,Z)=A^{0,2}(X,JY,Z)+A^{0,2}(X,Y,JZ)\]
so that
\[-3\mathcal{P}\big{(}(\widetilde{\nabla}\omega)^{0,2}\big{)}(X,Y,Z ) =A^{0,2}(X,JY,Z)+A^{0,2}(X,Y,JZ)\] \[+A^{0,2}(Y,JZ,X)+A^{0,2}(Y,Z,JX)\] \[+A^{0,2}(Z,JX,Y)+A^{0,2}(Z,X,JY).\]
Using the identity \(T(X,Y,Z)=A(Y,Z,X)+A(Z,X,Y)\) obtained in the proof of Proposition 8.22, we have
\[-3\mathcal{P}\big{(}(\widetilde{\nabla}\omega)^{0,2}\big{)}(X,Y,Z ) =T^{0,2}(JY,Z,X)+T^{0,2}(JZ,X,Y)+T^{0,2}(JX,Y,Z)\] \[=T^{0,2}(Y,Z,JX)+T^{0,2}(Z,JX,Y)+T^{0,2}(JX,Y,Z)\]
where in the last equality we use that \(\phi(JX,Y,Z)=\phi(X,JY,Z)\) for any \(\phi\in\Omega^{0,2}(TM)\). We conclude
\[-3\mathcal{P}\big{(}(\widetilde{\nabla}\omega)^{0,2}\big{)}(X,Y,Z)=3\mathcal{ P}\big{(}T^{0,2}\big{)}(JX,Y,Z)=3(\mathcal{P}T)^{-}(JX,Y,Z)\]
and the result follows by Lemma 8.31.
**Lemma 8.34**.: If \(\varphi\in E^{+}\) then \(\varphi(X,Y,Z)=\varphi(JX,JY,Z)+\varphi(X,JY,JZ)+\varphi(JX,Y,JZ)\).
Proof.: For any \(\varphi\in E^{+}\) we have \(J^{2}_{\mathrm{der}}\varphi=-\varphi\). But
\[J^{2}_{\mathrm{der}}\varphi(X,Y,Z)=-3\varphi(X,Y,Z)+2\varphi(JX,JY,Z)+2\varphi( X,JY,JZ)+2\varphi(JX,Y,JZ)\]
which gives the result.
**Proposition 8.35**.: [10] An affine metric connection \(\nabla\) on \(M\) is Hermitian if and only if
\[T^{2,0}-\tfrac{3}{2}\big{(}\mathcal{P}(T^{1,1})-\mathcal{M} \mathcal{P}(T^{1,1})\big{)} =\tfrac{1}{2}((d_{c}\omega)^{+}-\mathcal{M}(d_{c}\omega)^{+})\quad \text{and}\] \[T^{0,2} =N.\]
Proof.: By Proposition 8.15 we have that \(\nabla\) is hermitian if and only if
\[A(X,JY,Z)+A(X,Y,JZ)=-(\widetilde{\nabla}\omega)(X,Y,Z).\]
Using Proposition 8.22 we rewrite the identity in Proposition 8.15 as
\[(\alpha)\quad\quad T(X,JY,Z)+T(X,Y,JZ)-\tfrac{3}{2}\big{(}\mathcal{P}T(X,JY,Z) +\mathcal{P}T(X,Y,JZ)\big{)}=(\widetilde{\nabla}\omega)(X,Y,Z).\]
We decompose \(T=T^{1,1}+T^{2,0}+T^{0,2}\) and collecting the \((0,2)\) parts of the expression \((\alpha)\), by Proposition 8.25 we obtain
\[2T^{0,2}(JX,Y,Z)-3(\mathcal{P}T)^{-}(JX,Y,Z)=(\widetilde{\nabla}\omega)^{0,2} (X,Y,Z). \tag{6}\]
We observe that by Lemma 8.33 we have
\[2T^{0,2}(JX,Y,Z)+(d\omega)^{-}(X,Y,Z)=(\widetilde{\nabla}\omega)^{0,2}(X,Y,Z).\]
By Lemma 8.13, we see that
\[2T^{0,2}(JX,Y,Z)=(\widetilde{\nabla}\omega)^{0,2}(X,Y,Z)-d\omega^{-}(X,Y,Z)=2N ^{0,2}(JX,Y,Z)=2N(JX,Y,Z)\]
and we conclude equation (6) holds if and only if \(T^{0,2}=N\).
We recall that by Lemma 8.18 we have
\[A^{1,1}(X,JY,Z)+A^{1,1}(X,Y,JZ)=-(\widetilde{\nabla}\omega)^{1,1}(X,Y,Z)=0.\]
Collecting the \((1,1)\) parts of the expression \((\alpha)\), we observe, as in the proof of Proposition 8.15, that \(T^{1,1}(X,JY,Z)+T^{1,1}(X,Y,JZ)=0\) and so
\[\mathcal{P}(T^{1,1})(X,JY,Z)+\mathcal{P}(T^{1,1})(X,Y,JZ)=0.\]
Thus, by Lemma 8.30 we have
\[(\mathcal{P}T)^{+}(X,JY,Z)+(\mathcal{P}T)^{+}(X,Y,JZ)=\mathcal{P}(T^{2,0})(X, JY,Z)+\mathcal{P}(T^{2,0})(X,Y,JZ).\]
Finally, collecting the \((2,0)\) parts of the expression we have
\[(\gamma)\quad-2T^{2,0}(JX,Y,Z)-\tfrac{3}{2}\big{(}(\mathcal{P}T)^{+}(X,JY,Z)+( \mathcal{P}T)^{+}(X,Y,JZ)\big{)}=(\widetilde{\nabla}\omega)^{2,0}(X,Y,Z).\]
By Lemma 8.34 we rewrite \((\gamma)\) as
\[-2T^{2,0}(JX,Y,Z)+\tfrac{3}{2}\big{(}(\mathcal{P}T)^{+}(JX,Y,Z)-(\mathcal{P}T )^{+}(JX,JY,JZ)\big{)}=(\widetilde{\nabla}\omega)^{2,0}(X,Y,Z).\]
Notice that by Lemma 8.30 and Proposition 8.28 we have \((\mathcal{P}T)^{+}=\mathcal{P}(T^{2,0})+\mathcal{P}(T^{1,1})\) and \(T^{2,0}=\tfrac{3}{2}\big{(}\mathcal{P}(T^{2,0})-\mathcal{MP}(T^{2,0})\big{)}\). We rewrite \((\gamma)\) again as
\[-T^{2,0}(JX,Y,Z)+\tfrac{3}{2}\big{(}\mathcal{P}T^{1,1}(JX,Y,Z)-\mathcal{MP}T^{ 1,1}(JX,Y,Z)\big{)}=(\widetilde{\nabla}\omega)^{2,0}(X,Y,Z).\]
By Lemma 8.31 and Proposition 8.28 we see
\[(\widetilde{\nabla}\omega)^{2,0}=\tfrac{3}{2}\big{(}\mathcal{P}((\widetilde{ \nabla}\omega)^{2,0})-\mathcal{M}(\mathcal{P}((\widetilde{\nabla}\omega)^{2,0 }))\big{)}=\tfrac{1}{2}\big{(}(d\omega)^{+}-\mathcal{M}(d\omega)^{+}\big{)}\]
so that \((\gamma)\) holds if and only if
\[T^{2,0}-\tfrac{3}{2}\big{(}\mathcal{P}(T^{1,1})-\mathcal{MP}(T^{1,1})\big{)}= \tfrac{1}{2}((d_{c}\omega)^{+}-\mathcal{M}(d_{c}\omega)^{+}).\]
Evidently \(\nabla\) is Hermitian if and only if \((\alpha)\) holds, and \((\alpha)\) holds if and only if both (6) and \((\gamma)\) do.
**Lemma 8.36**.: _[_6_]_ _Let \(\nabla\) be a Hermitian connection on \(M\). Then_
\[\mathcal{P}(T^{2,0}-T_{a}^{1,1})=\tfrac{1}{3}d_{c}\omega^{+}.\]
Proof.: By the proof of Proposition 8.35 we have
\[T^{2,0}-\tfrac{3}{2}\big{(}\mathcal{P}T^{1,1}-\mathcal{M}(\mathcal{P}T^{1,1})\big{)}=(\widetilde{\nabla}\omega)^{2,0}(J\cdot,\cdot,\cdot).\]
Applying \(\mathcal{P}\) to both sides of the equation we obtain by Lemma 8.31
\[\mathcal{P}(T^{2,0})-\tfrac{3}{2}\big{(}\mathcal{P}(T^{1,1})-\mathcal{P} \mathcal{MP}(T^{1,1})\big{)}=\tfrac{1}{3}d_{c}\omega^{+}.\]
Using that \(T^{1,1}(X,JY,Z)+T^{1,1}(X,Y,JZ)=0\), the identity \(\mathcal{P}(T^{1,1})=3\mathcal{P}\mathcal{M}\mathcal{P}(T^{1,1})\) is readily verified. This gives the result.
Combining Lemmas 8.30 and 8.36 we see that whenever \(T\) is the torsion of a Hermitian connection on an almost Hermitian manifold \(M\) we have the equations
\[\mathcal{P}(T_{a}^{1,1}) =\tfrac{1}{2}(\mathcal{P}T^{+}-\tfrac{1}{3}(d_{c}\omega)^{+}),\] \[\mathcal{P}(T^{2,0}) =\tfrac{1}{2}(\mathcal{P}T^{+}+\tfrac{1}{3}(d_{c}\omega)^{+}).\]
This leads us to the following
**Theorem 8.37**.: _[_6_]_ _On any almost Hermitian manifold \(M\) the torsion of any Hermitian connection \(\nabla\) on \(M\) is given by_
\[T=N+\tfrac{9}{8}\mathcal{P}T^{+}+\tfrac{1}{8}d_{c}\omega^{+}-\tfrac{3}{8} \mathcal{M}(\mathcal{P}T^{+})-\tfrac{3}{8}\mathcal{M}(d_{c}\omega^{+})+T_{s} ^{1,1}.\]
Proof.: Let \(T\) be the torsion of a Hermitian connection. We have
\[T =T^{0,2}+T^{1,1}+T^{2,0}\] \[=N+T_{a}^{1,1}+T^{2,0}+T_{s}^{1,1}\] \[=N+\tfrac{3}{4}\big{(}\mathcal{P}(T^{1,1})+\mathcal{M}\mathcal{P}(T^{1,1})\big{)}+\tfrac{3}{2}\big{(}\mathcal{P}(T^{1,1})-\mathcal{MP}(T^{1,1})\big{)}+\tfrac{1}{2}(d_{c}\omega^{+}-\mathcal{M}(d_{c}\omega^{+}))+T_{s}^{1,1}\] \[=N+\tfrac{3}{8}\big{(}3\mathcal{P}T^{+}-d_{c}\omega^{+}-\mathcal{M}(\mathcal{P}T^{+})+\tfrac{1}{3}\mathcal{M}(d_{c}\omega^{+})\big{)}+\tfrac{1}{2}\big{(}d_{c}\omega^{+}-\mathcal{M}(d_{c}\omega^{+})\big{)}+T_{s}^{1,1}\] \[=N+\tfrac{9}{8}\mathcal{P}T^{+}+\tfrac{1}{8}d_{c}\omega^{+}-\tfrac{3}{8}\mathcal{M}(\mathcal{P}T^{+})-\tfrac{3}{8}\mathcal{M}(d_{c}\omega^{+})+T_{s}^{1,1}.\]
Therefore we obtain the following.
**Corollary 8.38**.: _[_G97_]_ _A Hermitian connection \(\nabla\) is uniquely determined by choice of \(T^{1,1}_{s}\) and \(\mathcal{P}T^{+}\)._
From the above Theorem and Corollary, Gauduchon [G97] defines an affine line of 'canonical' Hermitian connections on an almost Hermitian manifold. This is the set of Hermitian connections \(\nabla^{t}\) whose torsion \(T^{t}\) satisfies
\[T^{1,1}_{s}=0\quad\text{and}\quad\mathcal{P}T^{+}=\tfrac{2t-1}{3}(d_{c}\omega) ^{+}\text{ for any }t\in\mathbb{R}.\]
Theorem 8.37 then yields the following elegant characterization of any canonical Hermitian connection. Namely the torsion of \(\nabla^{t}\) is given by
(CT) \[T^{t}=N+\tfrac{3t-1}{4}d_{c}\omega^{+}-\tfrac{t+1}{4}\mathcal{M}(d_{c}\omega^{ +}).\]
For example, a natural choice of Hermitian connection is the _Chern connection_\(\nabla^{\text{ch}}\) defined by requiring \(T^{1,1}=0\) or equivalently \(t=1\). It is readily seen that, in the case of a Kahler manifold, we have the well known identity \(\nabla^{\text{ch}}=\widetilde{\nabla}\) as all torsion components of \(\nabla^{\text{ch}}\) vanish.
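To illustrate (CT) at other values of the parameter, note that \(t=0\) gives \(T^{0}=N-\tfrac{1}{4}d_{c}\omega^{+}-\tfrac{1}{4}\mathcal{M}(d_{c}\omega^{+})\), while \(t=-1\) gives
\[T^{-1}=N-d_{c}\omega^{+}.\]
In the integrable case \(N=0\) (so that \((d_{c}\omega)^{-}=0\) by Lemma 8.24) the torsion \(T^{-1}=-d_{c}\omega\) is a genuine \(3\)-form, and \(\nabla^{-1}\) is the connection with totally skew-symmetric torsion commonly identified with the Bismut connection.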
What is more, evidently all of the canonical Hermitian connections agree in the case that \(d\omega=0\). We denote the potential of a canonical hermitian connection by \(A^{t}\). By Proposition 8.22 we observe
\[A^{t}=-T^{t}+\tfrac{3}{2}\mathcal{P}T^{t}.\]
We then have
**Proposition 8.39**.: _[_G97_]_ _Let \(M\) be an almost Hermitian manifold and let \(\nabla^{t}\) be a canonical Hermitian connection in the sense of Gauduchon. Then_
\[A^{t}=-N+\tfrac{3}{2}\mathcal{P}N+\tfrac{t-1}{4}d_{c}\omega^{+}+\tfrac{t+1}{4 }\mathcal{M}(d_{c}\omega^{+}).\]
Proof.: The result follows by substituting the identity (CT) in \(A^{t}=-T^{t}+\tfrac{3}{2}\mathcal{P}T^{t}\). We observe that
\[\mathcal{P}T^{t}=\mathcal{P}N+\tfrac{3t-1}{4}d_{c}\omega^{+}-\tfrac{t+1}{4} \mathcal{P}\mathcal{M}(d_{c}\omega^{+})=\mathcal{P}N+\tfrac{2t-1}{3}d_{c} \omega^{+},\]
where we use that \(\mathcal{P}\mathcal{M}(d_{c}\omega^{+})=\tfrac{1}{3}d_{c}\omega^{+}\), which follows by Lemma 8.32.
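For instance, at \(t=1\) the potential of the Chern connection reads
\[A^{1}=-N+\tfrac{3}{2}\mathcal{P}N+\tfrac{1}{2}\mathcal{M}(d_{c}\omega^{+}),\]
while on a Kahler manifold both \(N\) and \(d\omega\) vanish, so every \(A^{t}=0\) and each \(\nabla^{t}\) coincides with \(\widetilde{\nabla}\), in accordance with the remarks above.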
|
2304.07731 | Saturation numbers of bipartite graphs in random graphs | For a given graph $F$, the $F$-saturation number of a graph $G$, denoted by $
{sat}(G, F)$, is the minimum number of edges in an edge-maximal $F$-free
subgraph of $G$. In 2017, Kor\'andi and Sudakov determined $ {sat}({G}(n, p),
K_r)$ asymptotically, where ${G}(n, p) $ denotes the Erd\H{o}s-R\'enyi random
graph and $ K_r$ is the complete graph on $r$ vertices. In this paper, among
other results, we present an asymptotic upper bound on ${sat}({G}(n, p), F)$
for any bipartite graph $F$ and also an asymptotic lower bound on ${sat}({G}(n,
p), F)$ for any complete bipartite graph $F$. | Meysam Miralaei, Ali Mohammadian, Behruz Tayfeh-Rezaie, Maksim Zhukovskii | 2023-04-16T09:06:37Z | http://arxiv.org/abs/2304.07731v1 | # Saturation numbers of bipartite graphs in random graphs
###### Abstract
For a given graph \(F\), the \(F\)-saturation number of a graph \(G\), denoted by \(\operatorname{sat}(G,F)\), is the minimum number of edges in an edge-maximal \(F\)-free subgraph of \(G\). In 2017, Korandi and Sudakov determined \(\operatorname{sat}(G(n,p),K_{r})\) asymptotically, where \(\operatorname{\mathcal{G}}(n,p)\) denotes the Erdos-Renyi random graph and \(K_{r}\) is the complete graph on \(r\) vertices. In this paper, among other results, we present an asymptotic upper bound on \(\operatorname{sat}(\operatorname{\mathcal{G}}(n,p),F)\) for any bipartite graph \(F\) and also an asymptotic lower bound on \(\operatorname{sat}(\operatorname{\mathcal{G}}(n,p),F)\) for any complete bipartite graph \(F\).
**Keywords:** Bipartite graph, Random graph, Saturation number.
**2020 Mathematics Subject Classification:** 05C35, 05C80.
\({}^{*}\)Partially supported by a grant from IPM.
\({}^{b}\)Partially supported by Iran National Science Foundation under project number 99003814.
\({}^{c}\)Partially supported by the Natural Science Foundation of Anhui Province with grant identifier 2008085MA03 and by the National Natural Science Foundation of China with grant number 12171002.
This leads to the classical saturation number \(\operatorname{sat}(n,F)\), which asks for the minimum number of edges in an edge-maximal \(F\)-free graph on \(n\) vertices. In this paper, we deal with a random version of this concept. Below, we define the concept in a more general form.
Let \(G\) be a graph. The edge set of \(G\) is denoted by \(E(G)\). A spanning subgraph \(H\) of \(G\) is said to be an _\(F\)-saturated subgraph of \(G\)_ if \(H\) is \(F\)-free and the addition of any edge from \(E(G)\setminus E(H)\) to \(H\) creates a copy of \(F\). The minimum number of edges in an \(F\)-saturated subgraph of \(G\) is denoted by \(\operatorname{sat}(G,F)\). Let \(K_{r}\) be the complete graph on \(r\) vertices and \(K_{s,t}\) be the complete bipartite graph with parts of sizes \(s\) and \(t\). Usually, \(\operatorname{sat}(K_{n},F)\) is written as \(\operatorname{sat}(n,F)\). Erdos, Hajnal, and Moon [8] proved that
\[\operatorname{sat}(n,K_{r})=(r-2)n-\binom{r-1}{2},\]
where \(n\geqslant r\geqslant 2\). Also, with the assumption \(t\geqslant s\), Bohman, Fonoberova, and Pikhurko [3] proved that
\[\operatorname{sat}(n,K_{s,t})=\frac{2s+t-3}{2}n+O\left(n^{\frac{3}{4}}\right).\]
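For example, for \(s=t=2\) the leading coefficient \(\frac{2s+t-3}{2}\) equals \(\frac{3}{2}\), consistent with the exact value \(\operatorname{sat}(n,C_{4})=\lfloor\frac{3n-5}{2}\rfloor\) known for \(C_{4}=K_{2,2}\) and \(n\geqslant 5\).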
We refer the reader to the survey [9] for more known results on saturation in graphs.
Recall that the Erdos-Renyi random graph model \(\,\mathbb{G}(n,p)\) is the probability space of all graphs on a fixed vertex set of size \(n\) where every two distinct vertices are adjacent independently with probability \(p\). Throughout this paper, \(p\) is assumed to be a fixed real number in \((0,1)\). Recall that the notion 'with high probability', which is written as 'whp' for brevity, is used whenever an event occurs in \(\,\mathbb{G}(n,p)\) with a probability approaching \(1\) as \(n\to\infty\). The study of saturation numbers in random graphs was initiated in 2017 by Korandi and Sudakov [15]. They proved that whp
\[\operatorname{sat}\bigl{(}\mathbb{G}(n,p),K_{r}\bigr{)}=\bigl{(}1+o(1) \bigr{)}n\log_{\frac{1}{1-p}}n\]
for any fixed \(r\geqslant 3\). Mohammadian and Tayfeh-Rezaie [16] studied the saturation numbers for stars and found that whp
\[\operatorname{sat}\bigl{(}\mathbb{G}(n,p),K_{1,t}\bigr{)}=\frac{t-1}{2}n- \bigl{(}t-1+o(1)\bigr{)}\log_{\frac{1}{1-p}}n\]
for any fixed \(t\geqslant 2\). Their result was refined by Demyanov and Zhukovskii in [6], where it was proved that whp \(\operatorname{sat}\bigl{(}\mathbb{G}(n,p),K_{1,t}\bigr{)}\) is concentrated in a set of two points. The related classical result had been proved by Kaszonyi and Tuza [14] as
\[\operatorname{sat}(n,K_{1,t})=\left\{\begin{array}{ll}\binom{t}{2}+\binom{ n-t}{2}&\text{ if }t+1\leqslant n\leqslant\frac{3t}{2},\\ \\ \left\lceil\frac{t-1}{2}n-\frac{t^{2}}{8}\right\rceil&\text{ if }n\geqslant \frac{3t}{2}.\end{array}\right.\]
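Since a graph is \(K_{1,t}\)-free precisely when its maximum degree is at most \(t-1\), this formula is easy to verify by exhaustive search for small \(n\). The following minimal brute-force sketch (our illustration, not taken from [14]; plain Python with exponential running time) checks \(K_{1,t}\)-freeness and maximality, where maximality amounts to every non-edge having an endpoint of degree exactly \(t-1\):

```python
from itertools import combinations

def sat_star(n, t):
    """Minimum number of edges in a K_{1,t}-saturated subgraph of K_n."""
    pairs = list(combinations(range(n), 2))
    for m in range(len(pairs) + 1):          # edge counts in increasing order
        for edges in combinations(pairs, m):
            deg = [0] * n
            for u, v in edges:
                deg[u] += 1
                deg[v] += 1
            if max(deg) > t - 1:             # the subgraph contains a K_{1,t}
                continue
            edge_set = set(edges)
            if all(e in edge_set or deg[e[0]] == t - 1 or deg[e[1]] == t - 1
                   for e in pairs):          # adding any non-edge creates a K_{1,t}
                return m

# sat_star(6, 3) returns 5 = ceil((3 - 1) * 6 / 2 - 3 ** 2 / 8)
```

For instance, the value \(\operatorname{sat}(6,K_{1,3})=5\) is attained by the disjoint union of \(K_{2}\) and \(C_{4}\), in agreement with the second case of the formula.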
Demidovich, Skorkin, and Zhukovskii [5] proved that whp
\[\operatorname{sat}\bigl{(}\mathbb{G}(n,p),C_{k}\bigr{)}=n+\Theta\left(\frac{n }{\log n}\right)\]
for any \(k\geqslant 5\), where \(C_{k}\) is a cycle graph on \(k\) vertices, while
\[\left(\frac{3}{2}+o(1)\right)n\leqslant\operatorname{sat}\bigl{(}\mathbb{G}( n,p),C_{4}\bigr{)}\leqslant\bigl{(}c_{p}+o(1)\bigr{)}n\]
for some explicit constant \(c_{p}\). In particular, \(c_{1/2}=27/14\).
The exact values of both \(\operatorname{sat}(n,K_{s,t})\) and \(\operatorname{sat}(\mathbb{G}(n,p),K_{s,t})\) are still unknown. Note that, for any connected graph \(F\) with no cut edges, both \(\operatorname{sat}(n,F)\) and \(\operatorname{sat}(\mathbb{G}(n,p),F)\) are at least \(n-1\), since each \(F\)-saturated subgraph should be connected. Therefore, in particular, whp \(\operatorname{sat}(\mathbb{G}(n,p),K_{s,t})\geqslant n-1\) if \(t\geqslant s\geqslant 2\). Diskin, Hoshen, and Zhukovskii [7] showed that, for any bipartite graph \(F\), there exists a constant \(c_{F}\) such that \(\operatorname{sat}(\mathbb{G}(n,p),F)\leqslant c_{F}n\) whp. However, an explicit value of \(c_{F}\) was not known. In this paper, we prove the following theorem for any arbitrary bipartite graph.
**Theorem 1.1**.: _Let \(p\in(0,1)\) be constant and let \(F\) be a bipartite graph with no isolated vertices. Let \(\{A_{1},B_{1}\},\ldots,\{A_{k},B_{k}\}\) be the vertex bipartitions of all the connected components of \(F\) with \(|B_{i}|\geqslant|A_{i}|\) for every \(i\). Let \(a=\max\{|A_{1}|,\ldots,|A_{k}|\}\) and \(\delta\) be the minimum degree over all vertices from \(A_{i}\) with \(|A_{i}|=a\). Then, \(\operatorname{whp}\)_
\[\operatorname{sat}\bigl{(}\mathbb{G}(n,p),F\bigr{)}\leqslant\left(\frac{\delta-1}{p^{a-1}}-\frac{\delta-2a+1}{2}+o(1)\right)n.\]
Our proof of Theorem 1.1, which is presented in Section 4, is based on the construction suggested in [7]. Actually, we have tuned the parameters of the construction in order to achieve the optimal bound. For \(F=K_{s,t}\) with \(t\geqslant s\), Theorem 1.1 shows that \(\operatorname{whp}\)
\[\operatorname{sat}(\mathbb{G}(n,p),K_{s,t})\leqslant\left(\frac{t-1}{p^{s-1}}-\frac{t-2s+1}{2}+o(1)\right)n.\]
For a lower bound on \(\operatorname{sat}(\mathbb{G}(n,p),K_{s,t})\), we prove the following theorem.
**Theorem 1.2**.: _Let \(t\geqslant s\geqslant 2\) be fixed integers and let \(p\in(0,1)\) be constant. Then, \(\operatorname{whp}\)_
\[\operatorname{sat}(\mathbb{G}(n,p),K_{s,t})\geqslant\left(\max\left\{\frac{2s+t-3}{2},\frac{t-s}{4p^{s-1}}+\frac{s-1}{2}\right\}+o(1)\right)n.\]
The proof of the lower bound in Theorem 1.2 is the most involved part of the paper. It is presented in Section 5. For every fixed \(t>s\), our bounds in Theorems 1.1 and 1.2 imply that the \(K_{s,t}\)-saturation number in \(\mathbb{G}(n,p)\) is \(\Theta(p^{1-s}n)\). Let us also note that, in the case \(s=t=2\), Theorem 1.2 provides the lower bound obtained in [5], while our upper bound is slightly worse.
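For a concrete comparison, take \(s=2\), \(t=3\), and \(p=\frac{1}{2}\). Then Theorem 1.2 gives whp
\[\operatorname{sat}\bigl{(}\mathbb{G}(n,\tfrac{1}{2}),K_{2,3}\bigr{)}\geqslant\left(\max\left\{2,\tfrac{1}{2}+\tfrac{1}{2}\right\}+o(1)\right)n=(2+o(1))n,\]
while the upper bound above becomes \(\bigl{(}\frac{3-1}{1/2}-\frac{3-4+1}{2}+o(1)\bigr{)}n=(4+o(1))n\).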
As we saw above, whp \(\operatorname{sat}(\mathbb{G}(n,p),K_{r})\gg\operatorname{sat}(n,K_{r})\) for any \(r\geqslant 3\). For complete bipartite graphs, the saturation number is more stable, that is, \(\operatorname{sat}(\mathbb{G}(n,p),K_{s,t})\) is linear in \(n\) whp as well as \(\operatorname{sat}(n,K_{s,t})\). For \(t>s\geqslant 2\) and sufficiently small \(p\in(0,1)\), there is no asymptotic stability, that is, there exists a constant \(c>1\) such that \(\operatorname{sat}(\mathbb{G}(n,p),K_{s,t})\geqslant c\operatorname{sat}(n,K_{s,t})\) whp. However, for \(s=t\) or \(t>s\geqslant 2\) and sufficiently large \(p\in(0,1)\), we do not know whether there is an asymptotic stability. Finally, the \(K_{1,t}\)-saturation number is asymptotically stable, while \(\operatorname{sat}(\mathbb{G}(n,p),K_{1,t})<\operatorname{sat}(n,K_{1,t})\) whp. Note that, for cycles, whp \(\operatorname{sat}(\mathbb{G}(n,p),C_{k})\leqslant((k+2)/(k+3)+o(1))\operatorname{sat}(n,C_{k})\) for any \(k\geqslant 5\) by a result of Furedi and Kim [11].
For the sake of completeness, we give in Section 3 two simple general lower bounds on \(\operatorname{sat}(\mathbb{G}(n,p),F)\) for any arbitrary graph \(F\) which are asymptotically tight for certain graph families.
## 2 Notation and preliminaries
In this section, we introduce notation and formulate several properties of random graphs that will be used in the rest of the paper. First, let us fix some more notation and terminology of graph theory.
Let \(G\) be a graph. The vertex set of \(G\) is denoted by \(V(G)\) and the _order_ of \(G\) is defined as \(|V(G)|\). For a subset \(X\) of \(V(G)\), we denote the induced subgraph of \(G\) on \(X\) by \(G[X]\). For a subset \(Y\) of \(E(G)\), we denote by \(G-Y\) the graph obtained from \(G\) by removing the edges in \(Y\). For a subset \(Z\) of \(V(G)\), set \(N_{G}(Z)=\{v\in V(G)\,|\,v\text{ is adjacent to all vertices in }Z\}\). For the sake of convenience, we write \(N_{G}(z_{1},\ldots,z_{k})\) instead of \(N_{G}(\{z_{1},\ldots,z_{k}\})\). For a vertex \(v\) of \(G\), we define the _degree_ of \(v\) as \(|N_{G}(v)|\) and denote by \(d_{G}(v)\). The maximum and the minimum degree of vertices of \(G\) are denoted by \(\Delta(G)\) and \(\delta(G)\), respectively. For two subsets \(S\) and \(T\) of \(V(G)\), we denote by \(E_{G}(S,T)\) the set of all edges with endpoints in both \(S\) and \(T\). We write \(E_{G}(S)\) for \(E_{G}(S,S)\). We drop subscripts if there is no danger of confusion.
In what follows, we recall the probabilistic results that we will make use of in the next sections. The next lemma is well known and can be deduced from the Chernoff bound [12, Theorem 2.1].
**Lemma 2.1**.: _Let \(X\thicksim\operatorname{Bin}(n,p)\) be a binomial random variable with parameters \(n\) and \(p\). If \(\mathbb{E}[X]\to\infty\) as \(n\to\infty\), then \(X=\mathbb{E}[X](1+o(1))\) whp._
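For concreteness, one standard quantitative form behind this statement (see [12]) is that \(\mathbb{P}\bigl{(}|X-\mathbb{E}[X]|\geqslant\varepsilon\mathbb{E}[X]\bigr{)}\leqslant 2\exp\bigl{(}-\varepsilon^{2}\mathbb{E}[X]/3\bigr{)}\) for every \(0<\varepsilon\leqslant 1\); choosing \(\varepsilon=\varepsilon(n)\to 0\) slowly enough that \(\varepsilon^{2}\mathbb{E}[X]\to\infty\) yields Lemma 2.1.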
The following lemma is a consequence of Proposition 19 in [1].
**Lemma 2.2**.: _For any constant \(p\in(0,1)\), there is a constant \(c\), depending on \(p\), such that \(\mathbb{G}(n,p)\) has the following property whp. For every subset \(X\) of vertices with \(|X|\geqslant c\log n\), the number of vertices with no neighbors in \(X\) is at most \(c\log n\)._
The following lemma is an immediate consequence of the Chernoff bound [12, Theorem 2.1] and the union bound.
**Lemma 2.3**.: _Let \(\lambda>1\) and \(p\in(0,1)\) be constants. Then, \(\mathbb{G}(n,p)\) has the following property whp. For every two disjoint subsets \(X,Y\) of vertices of size at least \(\log^{\lambda}n\), we have \(|E(X)|=p\binom{|X|}{2}(1+o(1))\) and \(|E(X,Y)|=p|X||Y|(1+o(1))\)._
The following corollary follows from Lemma 2.3 immediately.
**Corollary 2.4**.: _Let \(\lambda>1\) and \(p\in(0,1)\) be constants. Then, \(\mathbb{G}(n,p)\) has the following property whp. For every two subsets \(X,Y\) of vertices, \(|E(X,Y)|\leqslant 3n\log^{\lambda}n+p|X||Y|(1+o(1))\)._
Note that, for a positive fixed integer \(M\), the probability that \(\mathbb{G}(n,p)\) does not contain a clique of size \(M\) is \(\exp(-\Theta(n^{2}))\) due to the Janson bound [12, Theorem 2.14]. Therefore, by the union bound, we get the following.
**Lemma 2.5**.: _Let \(\lambda>1\), \(p\in(0,1)\) be constants and let \(M\) be a positive fixed integer. Then, \(\mathbb{G}(n,p)\) has the following property whp. Every subset \(X\) of vertices of size at least \(\log^{\lambda}n\) contains a clique of size \(M\)._
## 3 General lower bounds
In this section, we prove a lower bound on \(\operatorname{sat}(G,F)\) for every two graphs \(G\) and \(F\) which provides a lower bound on \(\operatorname{sat}(\mathbb{G}(n,p),F)\). It is trivial that
\[\operatorname{sat}(G,F)\geqslant\frac{\min\{\delta(G),\delta(F)-1\}}{2}n\]
for any two graphs \(G\) and \(F\). In order to proceed, we need the following definition.
Let \(G\) be a graph and \(k\) be a nonnegative integer. A subset \(S\) of \(V(G)\) is called _\(k\)-independent_ if the maximum degree of \(G[S]\) is at most \(k\). The _\(k\)-independence number_ of \(G\), denoted by \(\alpha_{k}(G)\), is defined as the maximum cardinality of a \(k\)-independent set in \(G\). In particular, \(\alpha_{0}(G)=\alpha(G)\) is the usual independence number of \(G\). Furthermore, define \(r(G)=\min_{xy\in E(G)}\max\{d(x),d(y)\}\).
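For example, directly from the definition, \(r(K_{r})=r-1\), \(r(C_{k})=2\), and \(r(K_{s,t})=\max\{s,t\}\); in particular, \(r(K_{1,t})=t\), so the bound of Corollary 3.3 below matches the star result quoted in the introduction.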
**Theorem 3.1**.: _Let \(F\) be a graph and let \(r=r(F)\). If \(r\geqslant 2\), then, for every graph \(G\) on \(n\) vertices,_
\[\operatorname{sat}(G,F)\geqslant\frac{(r-1)\big{(}n-\alpha_{r-2}(G)\big{)}}{2}.\]
Proof.: Let \(H\) be an \(F\)-saturated subgraph of \(G\). Let \(r\geqslant 2\) and let \(A\) be the set of vertices of \(H\) with degree at most \(r-2\) in \(H\). Suppose that there are two vertices \(x,y\in A\) with \(xy\in E(G)\setminus E(H)\). By definition of \(r\), adding \(xy\) to \(H\) does not create a copy of \(F\); indeed, any copy of \(F\) in \(H+xy\) would have to contain the edge \(xy\), whose endpoints both have degree at most \(r-1\) in \(H+xy\), whereas every edge of \(F\) has an endpoint of degree at least \(r\). This is a contradiction, since \(H\) is an \(F\)-saturated subgraph of \(G\). This implies that \(G[A]=H[A]\) and so \(|A|\leqslant\alpha_{r-2}(G)\). We hence obtain that
\[|E(H)|\geqslant\frac{\sum\limits_{v\in V(H)\setminus A}d_{H}(v)}{2}\geqslant \frac{(r-1)\big{(}n-\alpha_{r-2}(G)\big{)}}{2}.\qed\]
**Theorem 3.2** ([10, 16]).: _For all constants \(p\in(0,1)\) and \(k\geqslant 0\), whp_
\[\alpha_{k}\big{(}\mathbb{G}(n,p)\big{)}=\big{(}2+o(1)\big{)}\log_{\frac{1}{1-p}}n.\]
Actually, we know from [13] that \(\alpha_{k}(\mathbb{G}(n,p))\) is concentrated in a set of two consecutive points whp. Using Theorems 3.1 and 3.2, we conclude the following.
**Corollary 3.3**.: _Let \(F\) be a graph and let \(r=r(F)\). Then, for each fixed real number \(p\in(0,1)\), whp_
\[\operatorname{sat}\bigl{(}\mathbb{G}(n,p),F\bigr{)}\geqslant\frac{r-1}{2}n-\big{(}r-1+o(1)\big{)}\log_{\frac{1}{1-p}}n.\]
For \(F=K_{1,t}\), the lower bound given in Corollary 3.3 is tight by a result in [16]. However, for graphs \(F\) satisfying the property that each edge \(uv\in E(F)\) with \(\max\{d(u),d(v)\}=r(F)\) is contained in a triangle, the lower bound can be significantly improved.
For any graph \(G\), define \(w(G)=\min_{xy\in E(G)}\{\max\{d(x),d(y)\}+|N(x)\cap N(y)|\}\). Cameron and Puleo [4] proved that
\[\operatorname{sat}(n,F)\geqslant\frac{w(F)-1}{2}n-\frac{w(F)^{2}-4w(F)+5}{2}\]
for any \(n\). Below, we give a lower bound on \(\operatorname{sat}(\mathbb{G}(n,p),F)\) in terms of \(w(F)\) which is asymptotically stronger than Corollary 3.3 for many graphs \(F\).
**Theorem 3.4**.: _For any constant \(p\in(0,1)\) and any graph \(F\), whp_
\[\operatorname{sat}\bigl{(}\mathbb{G}(n,p),F\bigr{)}\geqslant\frac{w(F)-1}{2}n-O(\log n).\]
Proof.: If \(w(F)=1\), then there is nothing to prove. So, assume that \(w(F)\geqslant 2\). Let \(G\thicksim\mathbb{G}(n,p)\) and \(\ell=c\log n\), where \(c\) is given in Lemma 2.2. Assume that \(H\) is an arbitrary \(F\)-saturated subgraph of \(G\) whose vertices are labeled as \(u_{1},\ldots,u_{n}\) so that \(d_{H}(u_{1})\leqslant\cdots\leqslant d_{H}(u_{n})\). Let \(U=\{u_{1},\ldots,u_{\ell}\}\).
For \(i=1,\ldots,\ell\), let \(V_{i}=N_{H}(u_{i})\) and \(V=\bigcup_{i=1}^{\ell}V_{i}\). Also, for \(i=1,\ldots,\ell\), define \(W_{i}=N_{G}(u_{i})\setminus(U\cup V\cup W_{1}\cup\cdots\cup W_{i-1})\) and set \(W=\bigcup_{i=1}^{\ell}W_{i}\). If \(d_{H}(u_{\ell})\geqslant w(F)-1\), then
\[|E(H)|\geqslant\frac{\sum\limits_{i=\ell+1}^{n}d_{H}(u_{i})}{2}\geqslant\frac{ \big{(}w(F)-1\big{)}(n-\ell)}{2}\]
which concludes the assertion. So, we may assume that \(d_{H}(u_{\ell})\leqslant w(F)-2\). Then, \(|V|\leqslant\sum_{i=1}^{\ell}d_{H}(u_{i})\leqslant\ell(w(F)-2)\). Let \(R=V(G)\setminus(U\cup V\cup W)\). Note that \(R\) is the set of all vertices in \(V(G)\setminus U\) which are not adjacent to any vertex in \(U\) and so \(|R|\leqslant c\log n\) by Lemma 2.2. Let \(x\in W_{i}\) and let \(F^{\prime}\) be a copy of \(F\) in \(H+xu_{i}\). It follows from \(d_{H}(x)\geqslant d_{H}(u_{i})\) that \(d_{H}(x)\geqslant\max\{d_{F^{\prime}}(x),d_{F^{\prime}}(u_{i})\}-1\). Since \(N_{H}(x)\cap V\supseteq N_{F^{\prime}}(x)\cap N_{F^{\prime}}(u_{i})\), one concludes that
\[d_{H}(x)+|N_{H}(x)\cap V|\geqslant\max\big{\{}d_{F^{\prime}}(x),d_{F^{\prime}} (u_{i})\big{\}}-1+|N_{F^{\prime}}(x)\cap N_{F^{\prime}}(u_{i})|\geqslant w(F) -1.\]
Now, we may write
\[2|E(H)| \geqslant\sum_{x\in V}d_{H}(x)+\sum_{x\in W}d_{H}(x)\] \[\geqslant\sum_{x\in V}|N_{H}(x)\cap U|+\sum_{x\in V}|N_{H}(x)\cap W |+\sum_{x\in W}d_{H}(x)\] \[\geqslant|V|+\sum_{x\in W}|N_{H}(x)\cap V|+\sum_{x\in W}d_{H}(x)\] \[=|V|+\sum_{x\in W}\big{(}d_{H}(x)+|N_{H}(x)\cap V|\big{)}\] \[\geqslant|V|+\sum_{x\in W}\big{(}w(F)-1\big{)}\] \[=|V|+\big{(}w(F)-1\big{)}\big{(}n-|U|-|V|-|R|\big{)}\] \[=\big{(}w(F)-1\big{)}n-\big{(}w(F)-2\big{)}|V|-\big{(}w(F)-1\big{)} \big{(}\ell+|R|\big{)} \tag{1}\]
Since \(\ell=c\log n\), \(|V|\leqslant\ell(w(F)-2)\), and \(|R|\leqslant c\log n\), the result follows from (1).
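As an illustration of Theorem 3.4, let \(F=K_{r}\) with \(r\geqslant 3\). Every edge of \(K_{r}\) joins two vertices of degree \(r-1\) having \(r-2\) common neighbors, so \(w(K_{r})=2r-3\) and Theorem 3.4 yields whp \(\operatorname{sat}(\mathbb{G}(n,p),K_{r})\geqslant(r-2)n-O(\log n)\). This matches the leading term of \(\operatorname{sat}(n,K_{r})\), although, as recalled in the introduction, the true order of \(\operatorname{sat}(\mathbb{G}(n,p),K_{r})\) is \(n\log_{\frac{1}{1-p}}n\).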
## 4 Upper bound for bipartite graphs
In this section, we prove Theorem 1.1. Our proof is based on the construction suggested in [7] which, in turn, resembles the proof strategy of a general linear in \(n\) upper bound on \(\operatorname{sat}(n,F)\) from [14]. First, we present a useful observation which can be proved straightforwardly.
**Observation 4.1**.: _Let \(H\) be an \(F\)-free subgraph of \(G\). Then, there is an \(F\)-saturated subgraph of \(G\) which has \(H\) as a subgraph._
Below, we show how a general linear in \(n\) upper bound on \(\operatorname{sat}(n,F)\) can be derived from Observation 4.1. While we use the same construction as in [14], we formulate the proof in a different way in order to make the move to random settings smoother.
**Theorem 4.2** ([14]).: _Let \(F\) be a graph and \(S\) be an independent set in \(F\) with maximum possible size. Let \(b=|V(F)|-|S|-1\) and \(d=\min\{|N_{F}(x)\cap S|\,|\,x\in V(F)\setminus S\}\). Then,_
\[\operatorname{sat}(n,F)\leqslant\frac{2b+d-1}{2}n-\frac{b(b+d)}{2}.\]
Proof.: Let \(B\) be a subset of \(V(K_{n})\) of size \(b\) and let \(\overline{B}=V(K_{n})\setminus B\). Consider the spanning subgraph \(H_{0}\) of \(K_{n}\) obtained by deleting all edges whose both endpoints are in \(\overline{B}\). If there is a copy \(F^{\prime}\) of \(F\) in \(H_{0}\), then \(V(F^{\prime})\cap\overline{B}\) is an independent set of size
\[|V(F^{\prime})\cap\overline{B}|=|V(F^{\prime})|-|V(F^{\prime})\cap B|\geqslant| V(F)|-|B|=|S|+1,\]
a contradiction. This shows that \(H_{0}\) is \(F\)-free. Using Observation 4.1, there is an \(F\)-saturated subgraph of \(G\), say \(H\), with \(E(H)\supseteq E(H_{0})\). For every \(x\in\overline{B}\), we have \(|N_{H}(x)\cap\overline{B}|\leqslant d-1\), as otherwise the subgraph of \(H\) with the edge set \(E(H_{0})\cup E_{H}(\{x\},\overline{B})\) contains a copy of \(F\). Since
\[|E(H)|=|E(H_{0})|+\frac{1}{2}\sum_{x\in\overline{B}}|N_{H}(x)\cap\overline{B}|\leqslant|E(H_{0})|+\frac{(d-1)|\overline{B}|}{2},\]
the result follows.
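As an example, let \(F=K_{s,t}\) with \(t\geqslant s\) and take \(S\) to be the part of size \(t\), which is an independent set of maximum size. Then \(b=s-1\) and, since every vertex of the part of size \(s\) is adjacent to all of \(S\), we get \(d=t\). Hence Theorem 4.2 gives
\[\operatorname{sat}(n,K_{s,t})\leqslant\frac{2s+t-3}{2}n-\frac{(s-1)(s+t-1)}{2},\]
matching the leading term of the result of [3] quoted in the introduction.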
Let us now prove Theorem 1.1. Note that it is impossible to find a construction as in the proof of Theorem 4.2, since vertex degrees in the random graph equal \(np(1+o(1))\). Thus, instead of considering a single clique \(B\) with its common neighborhood, we will consider \(\Theta(\ln n)\) disjoint sets of constant sizes as well as their common neighborhoods. For the sake of convenience, we handle the case of \(F\) being a disjoint union of stars separately. This proves Theorem 1.1 for the case \(a=1\) and generalizes a result given in [16].
**Lemma 4.3**.: _Let \(p\in(0,1)\) be constant and let \(F\) be the disjoint union of stars \(K_{1,t_{1}},\ldots,K_{1,t_{k}}\) with \(k\geqslant 1\) and \(t_{1}\geqslant\cdots\geqslant t_{k}\geqslant 1\). Then, whp_
\[\operatorname{sat}\bigl{(}\operatorname{\mathbb{G}}(n,p),F\bigr{)}=\frac{t_{k }-1}{2}n-\bigl{(}t_{k}-1+o(1)\bigr{)}\log_{\frac{1}{1-p}}n.\]
Proof.: In view of Corollary 3.3, it suffices to prove the upper bound. Using Theorem 3.2, \(\alpha(\mathbb{G}(n,p))=(2+o(1))\log_{1/(1-p)}n\) whp. Let \(G\thicksim\mathbb{G}(n,p)\) and \(h=|V(F)|-1\). Fix an integer-valued function \(\ell=\ell(n)=(2+o(1))\log_{1/(1-p)}n\) such that \((n-h-\ell)(t_{k}-1)\) is even and \(\alpha(\mathbb{G}(n,p))\geqslant\ell\) whp. Also, let \(L\) be the disjoint union of \(K_{h}\) and an arbitrary regular graph on \(n-h-\ell\) vertices with degree \(t_{k}-1\). We know from a result of Alon and Furedi [2] that, for sufficiently small \(\varepsilon>0\), the graph \(\mathbb{G}(n-\ell,n^{-\varepsilon})\) contains a copy of \(L\) whp. Using the standard multiple-exposure technique, it implies that \(\mathbb{G}(n-\ell,p)\) does not contain a copy of \(L\) with probability at most \(\exp(-n^{\varepsilon+o(1)})\). Thus, by the union bound, whp there exists a subset \(S\subseteq V(G)\) with \(|S|=\ell\) such that \(S\) is an independent set in \(G\) and \(G[V(G)\setminus S]\) has a copy \(L^{\prime}\) of \(L\) as a subgraph. Denote by \(H\) the spanning subgraph of \(G\) with the edge set \(E(L^{\prime})\). It is easily seen that \(H\) is an \(F\)-saturated subgraph of \(G\) and
\[|E(H)|=\frac{(n-h-\ell)(t_{k}-1)}{2}+\binom{h}{2}=\frac{t_{k}-1}{2}n-\bigl{(}t_ {k}-1+o(1)\bigr{)}\log_{\frac{1}{1-p}}n\]
which completes the proof.
**Remark 4.4**.: Note that Lemma 4.3 for \(t_{k}=1\) could be strengthened as follows. If \(F\) is a graph with a connected component \(K_{2}\), then \(\operatorname{sat}(\operatorname{\mathbb{G}}(n,p),F)\leqslant\binom{|V(F)|-1}{ 2}\) whp. Conversely, if \(\operatorname{sat}(\operatorname{\mathbb{G}}(n,p),F)\) is bounded from above by a constant, then Corollary 3.3 forces \(F\) to have a connected component \(K_{2}\).
Proof of Theorem 1.1.: In view of Lemma 4.3, we may assume that \(a\geqslant 2\). Let \(G\sim G(n,p)\), \(b=1-p^{a-1}\), and \(\ell=\lfloor\log_{1/b}n^{2/3}\rfloor\). Without loss of generality, assume that \(|A_{1}|=\cdots=|A_{q}|>|A_{q+1}|\geqslant\cdots\geqslant|A_{k}|\) for some \(q\). Fix disjoint arbitrary \((a-1)\)-subsets \(V_{1},\ldots,V_{\ell}\) and \((a+1)\)-subsets \(V_{\ell+1},\ldots,V_{\ell+q-1}\) of \(V(G)\). Set \(V=\bigcup_{i=1}^{\ell}V_{i}\) and \(V^{\prime}=\bigcup_{i=\ell+1}^{\ell+q-1}V_{i}\). Let \(M_{i}=\bigcup_{j=1}^{i}N(V_{j})\) for any \(i\geqslant 1\). For \(i=1,\ldots,\ell+q-1\), define \(W_{i}=N(V_{i})\setminus(V\cup V^{\prime}\cup M_{i-1})\) and set \(W=\bigcup_{i=1}^{\ell}W_{i}\). Let \(R=V(G)\setminus(V\cup W)\). Note that \(R=(V(G)\setminus(V\cup M_{\ell}))\cup V^{\prime}\). Set \(V^{\prime\prime}=V^{\prime}\cap M_{\ell}\). A schematic of the structure of \(V(G)\) is illustrated in Figure 1.
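Before estimating the sizes of these sets, we record a minimal computational sketch of the decomposition (our illustration only, assuming the networkx package and taking \(q=1\) for simplicity, so that only the \((a-1)\)-sets \(V_{1},\ldots,V_{\ell}\) appear):

```python
import math
import random
import networkx as nx

def seed_decomposition(G, a, p):
    """Sketch of the sets V_i, W_i, R and the F-free seed graph H_0 (case q = 1)."""
    b = 1 - p ** (a - 1)
    n = G.number_of_nodes()
    ell = int(math.log(n ** (2 / 3)) / math.log(1 / b))  # floor(log_{1/b} n^{2/3})
    nodes = list(G.nodes)
    random.shuffle(nodes)
    V_sets = [nodes[i * (a - 1):(i + 1) * (a - 1)] for i in range(ell)]
    V = {u for Vi in V_sets for u in Vi}
    W_sets, covered = [], set()
    for Vi in V_sets:
        # W_i: common neighbors of V_i that are fresh, i.e. outside V and all earlier W_j
        Wi = set.intersection(*(set(G[u]) for u in Vi)) - V - covered
        W_sets.append(Wi)
        covered |= Wi
    R = set(G.nodes) - V - covered        # leftover vertices, of size O(n^{1/3}) whp
    H0 = nx.Graph()
    H0.add_nodes_from(G.nodes)
    for Vi, Wi in zip(V_sets, W_sets):    # E(H_0) is the union of the bipartite graphs (V_i, W_i)
        H0.add_edges_from((u, w) for u in Vi for w in Wi)
    return V_sets, W_sets, R, H0

# Example: G = nx.gnp_random_graph(3000, 0.5); seed_decomposition(G, a=2, p=0.5)
```

Here subtracting the previously covered vertices is equivalent to subtracting \(M_{i-1}\), since every vertex of \(M_{i-1}\setminus V\) already lies in some earlier \(W_{j}\).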
As \(|R\setminus V^{\prime\prime}|\thicksim\operatorname{Bin}(n-\ell(a-1),b^{ \ell})\), Lemma 2.1 implies that whp
\[|R|=b^{\ell}\big{(}n-\ell(a-1)\big{)}\big{(}1+o(1)\big{)}+|V^{\prime\prime}|\]
which gives that \(|R|=O(n^{1/3})\). Similarly, for all \(i\), whp
\[|W_{i}|=b^{i-1}(1-b)\big{(}n-\ell(a-1)-(q-1)(a+1)\big{)}\big{(}1+o(1)\big{)}\]
which yields that \(|W_{i}|=\Omega(n^{1/3})\) for \(i=1,\ldots,\ell+q-1\). In particular, \(|W_{i}|\geqslant\max\{|B_{1}|,\ldots,|B_{k}|\}+1\).
Let \(H_{0}\) be a spanning subgraph of \(G\) with \(E(H_{0})=\cup_{i=1}^{\ell+q-1}E_{G}(V_{i},W_{i})\). By the definition of \(a\), we conclude that \(H_{0}\) is \(F\)-free. Using Observation 4.1, there is an \(F\)-saturated subgraph of \(G\), say \(H\), with \(E(H)\supseteq E(H_{0})\). Now, we bound the number of edges of \(H\). We will use
\[|E(H)|=|E_{H}(V(G)\setminus W)|+|E_{H}(V(G)\setminus W,W)|+|E_{H}(W)|. \tag{2}\]
It follows from \(V(G)\setminus W=R\cup V\) that \(|V(G)\setminus W|=O(n^{1/3})\) and hence \(|E_{H}(V(G)\setminus W)|=O(n^{2/3})\). For every \(i\in\{1,\ldots,\ell\}\) and every \(x\in V(G)\setminus V_{i}\), we have \(|N_{H}(x)\cap W_{i}|\leqslant\delta-1\), as otherwise the bipartite subgraph of \(H\) with the edge set \(E(H_{0})\cup E_{H}(\{x\},W_{i})\) contains a copy of \(F\). Therefore,
\[|E_{H}(V(G)\setminus W,W)|=\sum_{i=1}^{\ell}|E_{H}(V(G)\setminus W,W_{i})|\] \[=\sum_{i=1}^{\ell}|E_{H}(V(G)\setminus(V_{i}\cup W),W_{i})|+\sum_{i=1}^{\ell}|E_{H}(V_{i},W_{i})|\] \[\leqslant\ell(\delta-1)|V(G)\setminus W|+\sum_{i=1}^{\ell}|E_{H}(W_{i},V_{i})|\] \[\leqslant O\left(n^{\frac{1}{3}}\log n\right)+(a-1)n. \tag{3}\]
Figure 1: The structure of \(V(G)\) described in the proof of Theorem 1.1.
It remains to estimate \(|E_{H}(W)|\). To do this, we write
\[|E_{H}(W)| =\sum_{i=1}^{\ell}\sum_{j=1}^{i-1}|E_{H}(W_{i},W_{j})|+\sum_{i=1}^ {\ell}|E_{H}(W_{i})|\] \[\leqslant\sum_{i=1}^{\ell}(i-1)(\delta-1)|W_{i}|+\sum_{i=1}^{ \ell}\frac{\delta-1}{2}|W_{i}|\] \[=\frac{\delta-1}{2}\sum_{i=1}^{\ell}(2i-1)|W_{i}|\] \[\leqslant\frac{\delta-1}{2}\sum_{i=1}^{\ell}(2i-1)b^{i-1}(1-b)n \big{(}1+o(1)\big{)}\] \[\leqslant\frac{\delta-1}{2}(1-b)n\big{(}1+o(1)\big{)}\sum_{i=1}^ {\ell}(2i-1)b^{i-1}\] \[=\frac{\delta-1}{2}(1-b)n\big{(}1+o(1)\big{)}\frac{1+b-(2\ell+1)b ^{\ell}+(2\ell-1)b^{\ell+1}}{(1-b)^{2}}\] \[\leqslant\frac{\delta-1}{2}\left(\frac{1+b}{1-b}\right)n\big{(}1 +o(1)\big{)}. \tag{4}\]
By (2)-(4), we conclude that
\[|E(H)| \leqslant\left(\frac{\delta-1}{2}\left(\frac{1+b}{1-b}\right)+a- 1\right)n\big{(}1+o(1)\big{)}\] \[=\left(\frac{\delta-1}{p^{a-1}}-\frac{\delta-2a+1}{2}+o(1)\right)n.\]
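Here the last equality uses that \(b=1-p^{a-1}\), so that \(\frac{1+b}{1-b}=\frac{2-p^{a-1}}{p^{a-1}}=\frac{2}{p^{a-1}}-1\).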
Since \(\operatorname{sat}(G,F)\leqslant|E(H)|\), the result follows.
## 5 Lower bound for \(K_{s,t}\)
In this section, we prove the two lower bounds in Theorem 1.2. We start with the bound that does not depend on \(p\), which is stated separately below. Let us recall that this bound generalizes the lower bound from [5] for \(F=K_{2,2}\), that is, \(\operatorname{sat}(\mathbb{G}(n,p),K_{2,2})\geqslant(\frac{3}{2}+o(1))n\) whp. However, our argument is simpler and resembles the argument used by Bohman, Fonoberova, and Pikhurko [3] for their asymptotic lower bound on \(\operatorname{sat}(n,K_{s,t})\).
**Theorem 5.1**.: _Let \(t\geqslant s\geqslant 2\) and \(p\in(0,1)\) be constants. Then, whp_
\[\operatorname{sat}\bigl{(}\mathbb{G}(n,p),K_{s,t}\bigr{)}\geqslant\left(\frac{2s+t-3}{2}+o(1)\right)n.\]
Proof.: Let \(G\sim\,\mathbb{G}(n,p)\) and let \(H\) be a \(K_{s,t}\)-saturated subgraph of \(G\) with minimum possible number of edges. Let \(V=V(G)\). By Theorem 1.1, we have that \(|E(H)|=O(n)\) whp. For the subsets
\[A =\Big{\{}x\in V\,\Big{|}\,d_{H}(x)\geqslant n^{\frac{1}{4}}\,\Big{\}}\,,\] \[B =\big{\{}x\in V\setminus A\,\big{|}\,|N_{H}(x)\cap A|\leqslant s-2 \big{\}},\] \[C =\big{\{}x\in V\,\big{|}\,d_{H}(x)\leqslant s+t-3\big{\}},\] \[D =V\setminus(A\cup B\cup C).\]
of \(V(H)\), we prove the following claims.
**Claim 5.2**.: _Whp \(|A|=O(n^{3/4})\)._
Proof.: Since \(|E(H)|\geqslant|A|n^{1/4}/2\), we have \(|A|=O(n^{3/4})\).
**Claim 5.3**.: _Whp \(|B|=O(n^{3/4})\)._
Proof.: Take any two vertices \(x,y\in B\) such that \(\{x,y\}\in E(G)\setminus E(H)\). The addition of \(xy\) to \(H\) creates a copy of \(K_{s,t}\) with vertex bipartition \(\{X,Y\}\) so that \(x\in X\) and \(y\in Y\). Since \(|N_{H}(x)\cap A|\leqslant s-2\), \(x\) has a neighbor \(y^{\prime}\in Y\setminus A\). Similarly, \(y\) has a neighbor \(x^{\prime}\in X\setminus A\). This shows that there is a path \(x,y^{\prime},x^{\prime},y\) of length three which connects \(x\) to \(y\) in \(H-A\). Therefore, every two vertices in \(B\) which are adjacent in \(G\) are connected in \(H-A\) by a path of length one or three. If \(|B|\leqslant n^{3/4}\), then we are done. Otherwise, by Lemma 2.3, we have
\[\frac{p}{2}\binom{|B|}{2}\leqslant|E(G[B])|\leqslant|E(H[B])|+|B|\left(n^{ \frac{1}{4}}n^{\frac{1}{4}}n^{\frac{1}{4}}\right)\leqslant\frac{|B|n^{\frac{1} {4}}}{2}+|B|n^{\frac{3}{4}}\]
which gives \(|B|=O(n^{3/4})\).
**Claim 5.4**.: _Whp \(|C|\leqslant\log^{2}n\)._
Proof.: By contradiction, assume that \(|C|>\log^{2}n\). Recall that the Ramsey number \(R_{s+t-3}(s+t)\) is the smallest positive integer \(m\) such that any coloring of the edges of \(K_{m}\) with \(s+t-3\) colors gives a monochromatic copy of \(K_{s+t}\). Using Lemma 2.5, \(C\) contains a clique \(C^{\prime}\) of size \(M=(s+t-2)R_{s+t-3}(s+t)\) in \(G\). We know that every graph \(\Gamma\) contains an independent set of size at least \(|V(\Gamma)|/(\Delta(\Gamma)+1)\). Since each vertex of \(C^{\prime}\) has degree at most \(s+t-3\) in \(H\), there is an independent set \(C^{\prime\prime}\subseteq C^{\prime}\) with \(|C^{\prime\prime}|\geqslant R_{s+t-3}(s+t)\) in \(H\). For each vertex \(x\in C^{\prime\prime}\), fix an arbitrary ordering of \(N_{H}(x)\) which we encode by a bijection \(f_{x}:N(x)\to\{1,\ldots,d_{H}(x)\}\). For each pair of distinct vertices \(x,y\in C^{\prime\prime}\) do the following. Fix a copy of \(K_{s,t}\) in \(H+xy\) with partition \(\{X,Y\}\) so that \(x\in X\) and \(y\in Y\). Note that \(Y\setminus\{y\}\subseteq N_{H}(x)\) and \(X\setminus\{x\}\subseteq N_{H}(y)\). Since \(f_{x}\) and \(f_{y}\) are injections into \(\{1,\ldots,s+t-3\}\) and \(|(X\setminus\{x\})\cup(Y\setminus\{y\})|=s+t-2\), there are \(x^{\prime}\in X\setminus\{x\}\) and \(y^{\prime}\in Y\setminus\{y\}\) with \(f_{y}(x^{\prime})=f_{x}(y^{\prime})\). Denote the integer \(f_{y}(x^{\prime})=f_{x}(y^{\prime})\) by \(a\). Clearly, \(x^{\prime}y^{\prime}\in E(H)\). Now, color the edge \(xy\) by \(a\). This defines an edge coloring of \(E(G[C^{\prime\prime}])\) with \(s+t-3\) colors. By Ramsey's theorem, there is an \((s+t)\)-subset \(C^{\prime\prime\prime}\subseteq C^{\prime\prime}\) such that all edges of \(G[C^{\prime\prime\prime}]\) have the same color, say \(c\). For every two distinct vertices \(x,y\in C^{\prime\prime\prime}\), as \(f_{x}^{-1}(c)\) and \(f_{y}^{-1}(c)\) are adjacent in \(H\), \(f_{x}^{-1}(c)\neq f_{y}^{-1}(c)\). So \(\{f_{x}^{-1}(c)\,|\,x\in C^{\prime\prime\prime}\}\) is a clique of order \(s+t\) in \(H\), which contradicts the \(K_{s,t}\)-freeness of \(H\), proving the claim.
Using Claims 5.2-5.4, we conclude that \(|D|=n-O(n^{3/4})\). Since every vertex in \(D\) has at least \(s-1\) neighbors in \(A\), we may choose \(s-1\) distinct edges for each vertex of \(D\). Put all these edges in
a set \(E_{1}\). Since any vertex in \(D\) has at least \(s+t-2\) neighbors in \(H\), we conclude that every vertex in \(D\) is incident to at least \(t-1\) edges in \(E(H)\setminus E_{1}\). Now, we have
\[|E(H)|\geqslant|E(D,V(H))|\geqslant(s-1)|D|+\frac{t-1}{2}|D|\geqslant\left( \frac{2s+t-3}{2}+o(1)\right)n.\qed\]
The second lower bound in Theorem 1.2 is stated below.
**Theorem 5.5**.: _Let \(t\geqslant s\geqslant 2\) and \(p\in(0,1)\) be constants. Then, whp_
\[\operatorname{sat}\bigl{(}\operatorname{\mathbb{G}}(n,p),K_{s,t}\bigr{)} \geqslant\left(\frac{t-s}{4p^{s-1}}+\frac{s-1}{2}+o(1)\right)n.\]
Proof.: If \(s=t\), then the assertion follows from Corollary 3.3. So, assume that \(t>s\). Let \(G\thicksim\operatorname{\mathbb{G}}(n,p)\) and let \(H\) be a \(K_{s,t}\)-saturated subgraph of \(G\) with minimum possible number of edges. Let \(V=V(G)\). By Theorem 1.1, we have \(|E(H)|=O(n)\). Consider the partition \(\{A,B,C\}\) of \(V\), where
\[A =\bigl{\{}x\in V\,\big{|}\,d_{H}(x)<\log n\bigr{\}},\] \[B =\Bigl{\{}x\in V\,\left|\,\log n\leqslant d_{H}(x)\leqslant\frac {n}{\log^{s+1}n}\right.\Bigr{\}}\,,\] \[C =\Bigl{\{}x\in V\,\left|\,d_{H}(x)>\frac{n}{\log^{s+1}n}\, \right.\Bigr{\}}\,.\]
For any \(y\in V\), set \(N_{y}=N_{H}(y)\) and
\[F_{y}=\bigl{\{}x\in V\,\big{|}\,|N_{H}(x,y)|\geqslant t-1\bigr{\}}.\]
Moreover, let
\[\mathcal{O}=\bigl{\{}Y\subseteq V\,\big{|}\,|Y|=s-1\text{ and }|N_{H}(Y)| \geqslant t\bigr{\}}.\]
Further, for any \(Y\in\mathcal{O}\), set \(N_{Y}=N_{H}(Y)\) and
\[F_{Y}=\bigl{\{}x\in V\,\big{|}\,|N_{H}(\{x\}\cup Y)|=t-1\bigr{\}}.\]
Finally, consider the partition \(\{\mathcal{A},\mathcal{B},\mathcal{C}\}\) of \(\mathcal{O}\), where
\[\mathcal{A} =\{Y\in\mathcal{O}\,|\,Y\cap A\neq\varnothing\},\] \[\mathcal{B} =\{Y\in\mathcal{O}\,|\,Y\cap A=\varnothing\text{ and }Y\cap B\neq \varnothing\},\] \[\mathcal{C} =\{Y\in\mathcal{O}\,|\,Y\subseteq C\}.\]
Since adding any edge \(xx^{\prime}\in E(G)\setminus E(H)\) to \(H\) creates a copy of \(K_{s,t}\), we conclude that \(E(G)\setminus E(H)\subseteq\bigcup_{Y\in\mathcal{O}}E_{G}(N_{Y},F_{Y})\). Therefore, using Lemma 2.3, we find that whp
\[\left|\bigcup_{Y\in\mathcal{O}}E_{G}(N_{Y},F_{Y})\right|\geqslant|E(G) \setminus E(H)|=\frac{n^{2}p}{2}\bigl{(}1+o(1)\bigr{)}. \tag{5}\]
Note that \(E_{G}(N_{Y},F_{Y})\subseteq E_{G}(N_{y},F_{y})\) for every \(y\in Y\), since \(N_{Y}\subseteq N_{y}\) and \(F_{Y}\subseteq F_{y}\). For every vertex \(y\in V\), by a double counting of the set \(\{(x,S)\,|\,x\in F_{y},S\subseteq N_{H}(x,y),\text{ and }|S|=s\}\), we derive that
\[|F_{y}|{t-1\choose s}\leqslant{|N_{y}|\choose s}(t-1).\]
It follows from \(t>s\) that \(|F_{y}|\leqslant|N_{y}|^{s}\). Hence, \(|F_{y}|\leqslant\log^{s}n\) for every \(y\in A\). This gives
\[\left|\bigcup_{Y\in\mathcal{A}}E_{G}(N_{Y},F_{Y})\right|\leqslant\left|\bigcup_ {y\in A}E_{G}(N_{y},F_{y})\right|\leqslant\sum_{y\in A}|N_{y}||F_{y}|\leqslant n (\log n)\log^{s}n=n\log^{s+1}n. \tag{6}\]
Since \(|E(H)|=O(n)\) whp, we get that \(|F_{y}\setminus A|=O(n/\log n)\) for each \(y\in V\) whp. Using this, we may write whp
\[\left|\bigcup_{Y\in\mathcal{B}}E_{G}(N_{Y},F_{Y})\right| \leqslant\left|\bigcup_{y\in B}E_{G}(N_{y},F_{y})\right|\] \[\leqslant\sum_{y\in B}|E_{G}(N_{y},F_{y})|\] \[\leqslant\sum_{y\in B}|E_{G}(N_{y},F_{y}\setminus A)|+\sum_{y \in B}|E_{G}(N_{y},F_{y}\cap A)|\] \[\leqslant\sum_{y\in B}|N_{y}||F_{y}\setminus A|+\sum_{y\in B}|N_ {y}||F_{y}\cap A|\] \[\leqslant O\left(\frac{n}{\log n}\right)\sum_{y\in B}|N_{y}|+ \frac{n}{\log^{s+1}n}\sum_{x\in A}|F_{x}\cap B|\] \[\leqslant O\left(\frac{n}{\log n}\right)|E(H)|+\frac{n}{\log^{s+1 }n}|A|\log^{s}n\] \[=O\left(\frac{n^{2}}{\log n}\right) \tag{7}\]
Since \(|E(H)|=O(n)\) whp, we deduce that \(|C|=O(\log^{s+1}n)\) whp and so \(|\mathcal{C}|\leqslant|C|^{s-1}=O(\log^{s^{2}-1}n)\) whp. Now, by setting \(\lambda=2\) in Corollary 2.4, we obtain that whp
\[\left|\bigcup_{Y\in\mathcal{C}}E_{G}(N_{Y},F_{Y})\right| \leqslant\sum_{Y\in\mathcal{C}}|E_{G}(N_{Y},F_{Y})|\] \[\leqslant\sum_{Y\in\mathcal{C}}\Bigl{(}3n\log^{2}n+p|N_{Y}||F_{Y} |\bigl{(}1+o(1)\bigr{)}\Bigr{)}\] \[\leqslant 3|\mathcal{C}|n\log^{2}n+\sum_{Y\in\mathcal{C}}p^{s}n|F_{Y} |\bigl{(}1+o(1)\bigr{)}\] \[\leqslant O\left(n\log^{s^{2}+1}n\right)+p^{s}n\left(\sum_{Y\in \mathcal{C}}|F_{Y}|\right)\bigl{(}1+o(1)\bigr{)}. \tag{8}\]
Therefore, by (5)-(8), we find that whp
\[\sum_{Y\in\mathcal{C}}|F_{Y}|\geqslant\frac{n}{2p^{s-1}}\bigl{(}1+o(1)\bigr{)}. \tag{9}\]
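Spelling out the bookkeeping behind (9) for the reader's convenience: by (5) and the partition \(\mathcal{O}=\mathcal{A}\cup\mathcal{B}\cup\mathcal{C}\), the bounds (6) and (7) remove only \(o(n^{2})\) edges, so (8) forces

\[p^{s}n\left(\sum_{Y\in\mathcal{C}}|F_{Y}|\right)\bigl(1+o(1)\bigr)\geqslant\left|\bigcup_{Y\in\mathcal{C}}E_{G}(N_{Y},F_{Y})\right|-O\left(n\log^{s^{2}+1}n\right)\geqslant\frac{n^{2}p}{2}\bigl(1+o(1)\bigr)-n\log^{s+1}n-O\left(\frac{n^{2}}{\log n}\right)-O\left(n\log^{s^{2}+1}n\right)=\frac{n^{2}p}{2}\bigl(1+o(1)\bigr),\]

and dividing by \(p^{s}n\) gives (9).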
Set
\[S=\bigcup_{\begin{subarray}{c}X,Y\in\mathcal{C}\\ X\neq Y\end{subarray}}N_{X}\cap N_{Y}.\]
Note that \(|S|\leqslant\binom{|\mathcal{C}|}{2}(t-1)=O(\log^{2s^{2}-2}n)\) whp. For every \(Y\in\mathcal{C}\), set \(M_{Y}=N_{Y}\setminus S\). Let
\[F^{\prime}=\left\{x\in\bigcup_{Y\in\mathcal{C}}F_{Y}\Bigg{|}\ |N_{x}\cap S|\geqslant s \right\}.\]
We claim that \(|F^{\prime}|\leqslant\binom{|S|}{s}(t-1)\). To see this, suppose otherwise. By the pigeonhole principle, there is a \(t\)-subset \(T\) of \(F^{\prime}\) such that \(|N_{H}(T)\cap S|\geqslant s\) which gives a copy of \(K_{s,t}\) in \(H\), a contradiction. This proves the claim which in turn implies that \(|F^{\prime}|=O(\log^{2s^{3}-2s}n)\). For every \(Y\in\mathcal{C}\), set \(F^{\prime}_{Y}=F_{Y}\setminus F^{\prime}\). Noting that the sets \(M_{Y}\) are mutually disjoint and \(F_{Y}\cap Y=\varnothing\) for every \(Y\in\mathcal{O}\), we may write whp
\[2|E(H)| =\sum_{Y\in\mathcal{C}}\sum_{x\in M_{Y}}d_{H}(x)+\sum_{x\notin\bigcup_{Y\in\mathcal{C}}M_{Y}}d_{H}(x)\] \[\geqslant\sum_{Y\in\mathcal{C}}\left(|E_{H}(M_{Y},F^{\prime}_{Y}\setminus M_{Y})|+2|E_{H}(M_{Y},F^{\prime}_{Y}\cap M_{Y})|+|E_{H}(M_{Y},Y)|\right)+(s-1)\left|V\setminus\bigcup_{Y\in\mathcal{C}}M_{Y}\right|\] \[\geqslant\sum_{Y\in\mathcal{C}}\left((t-s)|F^{\prime}_{Y}\setminus M_{Y}|+(t-s)|F^{\prime}_{Y}\cap M_{Y}|+(s-1)|M_{Y}|\right)+(s-1)\left(n-\sum_{Y\in\mathcal{C}}|M_{Y}|\right)\] \[=(s-1)n+\sum_{Y\in\mathcal{C}}(t-s)|F^{\prime}_{Y}|\] \[\geqslant(s-1)n+(t-s)\left(\left(\sum_{Y\in\mathcal{C}}|F_{Y}|\right)-|\mathcal{C}||F^{\prime}|\right)\] \[\geqslant(s-1)n+(t-s)\left(\frac{n}{2p^{s-1}}\bigl(1+o(1)\bigr)-O\left(\log^{2s^{3}+s^{2}-2s-1}n\right)\right)\] \[=\left(\frac{t-s}{2p^{s-1}}+s-1+o(1)\right)n,\]
where the last inequality follows from (9), completing the proof.
We point out here that Theorem 1.2 is concluded from Theorems 5.1 and 5.5.
**Remark 5.6**.: It is worth noting that using the proof of Theorem 5.1, one may improve the estimate on the number of edges of \(H\) in the last paragraph of proof of Theorem 5.5 to obtain
\[\operatorname{sat}\bigl{(}\operatorname{\mathbb{G}}(n,p),K_{s,t}\bigr{)}\geqslant\left(\frac{t-s}{4p^{s-1}}+s-1+o(1)\right)n.\]
For the sake of clarity of presentation we disregarded this improvement in the proof of Theorem 5.5.
|
2308.11286 | On Birkhoff sums that satisfy no temporal distributional limit theorem
for almost every irrational | Dolgopyat and Sarig showed that for any piecewise smooth function $f:
\mathbb{T} \to \mathbb{R}$ and almost every pair $(\alpha,x_0) \in \mathbb{T}
\times \mathbb{T}$, $S_N(f,\alpha,x_0) := \sum_{n =1}^{N} f(n\alpha + x_0)$
fails to fulfill a temporal distributional limit theorem. In this article, we
show that the two-dimensional average is in fact not needed: For almost every
$\alpha \in \mathbb{T}$ and all $x_0 \in \mathbb{T}$, $S_N(f,\alpha,x_0)$ does
not satisfy a temporal distributional limit theorem, regardless of centering
and scaling. The obtained results additionally lead to progress in a question
posed by Dolgopyat and Sarig. | Lorenz Frühwirth, Manuel Hauke | 2023-08-22T08:54:35Z | http://arxiv.org/abs/2308.11286v1 | # On Birkhoff sums that satisfy no temporal distributional limit theorem for almost every irrational
###### Abstract
Dolgopyat and Sarig showed that for any piecewise smooth function \(f:\mathbb{T}\to\mathbb{R}\) and almost every pair \((\alpha,x_{0})\in\mathbb{T}\times\mathbb{T}\), \(S_{N}(f,\alpha,x_{0}):=\sum_{n=1}^{N}f(n\alpha+x_{0})\) fails to fulfill a temporal distributional limit theorem. In this article, we show that the two-dimensional average is in fact not needed: For almost every \(\alpha\in\mathbb{T}\) and all \(x_{0}\in\mathbb{T}\), \(S_{N}(f,\alpha,x_{0})\) does not satisfy a temporal distributional limit theorem, regardless of centering and scaling. The obtained results additionally lead to progress in a question posed by Dolgopyat and Sarig.
## 1 Introduction and main results
Let \(X\) be a metric space, \(T:X\to X\) a Borel measurable map, \(f:X\to\mathbb{R}\) a measurable function and \(x_{0}\in X\). Then
\[S_{N}(f,T,x_{0})=\sum_{k=0}^{N-1}f\circ T^{k}(x_{0})\]
defines the Birkhoff sum of \(f\) over \(T\) at stage \(N\) with starting point \(x_{0}\). A pair \((T,f)\) is said to satisfy a temporal distributional limit theorem (TDLT) along the orbit of a fixed \(x_{0}\in X\) whenever there exist two sequences \((A_{M}(f,T,x_{0}))_{M\in\mathbb{N}}\), \((B_{M}(f,T,x_{0}))_{M\in\mathbb{N}}\) with \(\lim_{M\to\infty}B_{M}=\infty\), and a non-constant random variable \(Y\) such that
\[\lim_{M\to\infty}\frac{1}{M}\#\left\{1\leq N\leq M:\frac{S_{N}(f,T,x_{0})-A_{M} }{B_{M}}\leq a\right\}=\mathbb{P}[Y\leq a]. \tag{1}\]
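In computational terms, the left-hand side of (1) is simply an empirical distribution function over the time parameter \(N\); a minimal Python sketch (ours, purely illustrative):

```python
def temporal_cdf(S, M, A_M, B_M, a):
    """Empirical left-hand side of (1): the fraction of N in {1, ..., M}
    for which the centered and rescaled Birkhoff sum S(N) is at most a."""
    return sum((S(N) - A_M) / B_M <= a for N in range(1, M + 1)) / M
```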
For a more detailed introduction in this area, we refer the reader to [8] and especially to the survey article [13].
Motivated by various research areas such as Discrepancy theory (see, e.g., [5, 6, 19]) and the theory of "deterministic random walks" (see, e.g., [1, 3]), particularly interesting and well-studied objects are ergodic sums induced by the irrational rotation on the torus \(\mathbb{T}\) (see Section 2 for notation and precise definitions)
\[T_{\alpha}:\mathbb{T} \to\mathbb{T}\] \[x \mapsto x+\alpha,\]
where \(\alpha\notin\mathbb{Q}\). The corresponding sum \(S_{N}(f,\alpha,x_{0}):=S_{N}(f,T_{\alpha},x_{0})\) is often known as the Birkhoff sum of the irrational circle rotation.
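For concreteness, such Birkhoff sums are easy to evaluate numerically; the following small Python sketch (ours, with illustrative parameter choices) computes \(S_{N}(f,\alpha,x_{0})\) for the sawtooth \(f(x)=\{x\}-\frac{1}{2}\):

```python
import math

def birkhoff_sawtooth(alpha, x0, N):
    """S_N(f, T_alpha, x0) = sum_{k=0}^{N-1} f(x0 + k*alpha) with f(x) = {x} - 1/2."""
    total, x = 0.0, x0 % 1.0
    for _ in range(N):
        total += x - 0.5
        x = (x + alpha) % 1.0
    return total

alpha = math.sqrt(2) - 1  # a quadratic irrational
print([round(birkhoff_sawtooth(alpha, 0.3, N), 3) for N in (10, 100, 1000, 10000)])
```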
There are two different types of temporal limit laws, which we define by following the definition in [12] as "quenched" and "annealed". In the annealed case, the average is not only taken over \(N\) for fixed \(\alpha\), but a pair \((\alpha,N)\) is drawn uniformly at random from \(\mathbb{T}\times\{1,\ldots,M\}\) with \(M\to\infty\). Here, a recent
result of Dolgopyat and Sarig [12] shows that for \(f(x)=\{x\}-\frac{1}{2}\) and any \(x_{0}\in\mathbb{T}\), \(S_{N}(f,\alpha,x_{0})\) converges (after appropriate centering and scaling) in distribution to a Cauchy random variable. This resembles the behaviour found by Kesten [17] who showed that also the _spatial_ average (that is, \((\alpha,x_{0})\) is drawn uniformly at random whereas \(N\) is fixed) converges to a Cauchy distribution.
In the present article, we are dealing with the _quenched temporal_ case. This means we are investigating the pointwise behaviour of \(S_{N}(f,\alpha,x_{0})\) for fixed \(\alpha\in\mathbb{T}\) where we study TDLTs in the sense of (1). There are two prominent limit distributions such that a TDLT is satisfied: On the one hand, there are examples where a temporal _central_ limit theorem (TCLT) holds, that is, (1) is obtained with \(Y\) being a standard Gaussian random variable. Such results are known to hold for irrational circle rotations for specific irrationals \(\alpha\), starting points \(x_{0}\) and certain functions \(f\). For quadratic irrationals \(\alpha\), the existence of a TCLT was shown to hold for \(S_{N}(f,\alpha,0)\) when \(f(x)=\{x\}-1/2\), \(f(x)=\mathds{1}_{[0,\beta)}(x)-\beta\), \(\beta\in\mathbb{Q}\) or \(f(x)=\log|2\sin(\pi x)|\) (see [4, 5, 6, 7]). For the special case where \(\alpha=[0;a,a,a,\ldots],a\in\mathbb{N}\), Borda [7] showed that a TCLT for \(S_{N}(f,\alpha,0)\) holds for any function \(f\) of bounded variation. The case \(f(x)=\mathds{1}_{[0,\beta)}(x)-\beta\) was generalized to arbitrary orbits \(S_{N}(f,\alpha,x_{0}),x_{0}\in\mathbb{R}\) by Dolgopyat and Sarig [10] and further by Bromberg and Ulcigrai [8] to badly approximable \(\alpha\) under some Diophantine assumption (with respect to \(\alpha\)) on \(\beta\).
Note that the results on quadratic irrationals mentioned above do not say anything about typical \(\alpha\) since the set of badly approximable numbers (and thus in particular, of quadratic irrationals) is a set of Lebesgue measure \(0\). So a natural question is whether a TDLT can hold for almost all \(\alpha\in\mathbb{T}\) or at least for \(\alpha\) in a set of positive measure.
If \(f\) is a smooth function, the existence of a TDLT in the metric sense (i.e. for almost all \(\alpha\in\mathbb{T}\)) is immediately ruled out: If the Fourier coefficients of \(f\sim\sum_{n\in\mathbb{Z}}c_{n}e(nx)\) decay at rate \(c_{n}=O(1/n^{2})\) (which holds in particular for \(f\in C^{2}\)), then for almost all \(\alpha\in\mathbb{T}\) and all \(x_{0}\in\mathbb{R}\), \(S_{N}(f,\alpha,x_{0})\) is bounded (see [12, 16]). Therefore, a TDLT cannot hold because the scaling sequence \((B_{M})_{M\in\mathbb{N}}\) needs to be unbounded. Thus, the interesting functions to consider are those that lack smoothness such as functions that have discontinuities or singularities. Concerning functions with singularity, Borda [7] ruled out a central limit theorem for \(S_{N}(f,\alpha,0)\) for almost every \(\alpha\) where \(f(x)=\log(|2\sin(\pi x)|)\). In this article, however, we are not considering functions with singularities, but piecewise smooth functions with finitely many discontinuities (compare to, e.g., [11, 12, 15]).
**Definition 1.1** (Piecewise smooth functions).: _We call a function \(f:\mathbb{T}\to\mathbb{R}\) with \(\int_{\mathbb{T}}f(x)\,\mathrm{d}\mu(x)=0\) a piecewise smooth function if there exist \(\nu\geq 1\) and \(\{\gamma_{1},\ldots,\gamma_{\nu}\}\subseteq\mathbb{T}\) with \(0\leq\iota(\gamma_{1})<\ldots<\iota(\gamma_{\nu})<1\) (\(\iota\) denotes the canonical embedding \(\mathbb{T}\hookrightarrow[0,1)\), see Section 2) such that the following properties hold:_
* \(f\) _is differentiable on_ \(\mathbb{T}\setminus\{\gamma_{1},\ldots,\gamma_{\nu}\}\)_._
* \(f^{\prime}\) _extends to a function of bounded variation on_ \(\mathbb{T}\)_._
* _There exists an_ \(i\in\{1,\ldots,\nu\}\) _such that_ \(\lim_{\delta\to 0^{+}}\left[f(\gamma_{i}-\delta)-f(\gamma_{i}+\delta)\right]\neq 0\)_._
In [15], the authors examined the maximal oscillation of \(S_{N}(f,\alpha,x_{0})\) for \(f\) as in Definition 1.1 where an unexpected sensitivity on the interplay between the number-theoretic properties of \(x_{0},\gamma_{1},\ldots,\gamma_{\nu}\) and analytic properties of \(f\) was discovered.
Note that the class of functions from Definition 1.1 contains most of the examples mentioned above, such as \(f(x)=\{x\}-1/2\) or \(f(x)=\mathds{1}_{[\beta,\gamma]},\beta,\gamma\in\mathbb{T}\). Returning to the (non)-existence of TCLTs, the best currently known result for general piecewise smooth \(f\) was established in [11]:
**Theorem A**.: (Dolgopyat, Sarig, 2018). _Let \(f\) be a piecewise smooth function as in Definition 1.1. Then there exists a set \(\mathcal{E}\subset\mathbb{T}\times\mathbb{T}\) of full two-dimensional (Haar) measure such that for all \((\alpha,x_{0})\in\mathcal{E}\), \(S_{N}(f,\alpha,x_{0})\) does not satisfy a TDLT._
The aim of the present article is to show that the two-dimensional metric setup above is not necessary and a TDLT fails for almost every \(\alpha\) and _any_ initial point \(x_{0}\in\mathbb{T}\):
**Theorem 1**.: _Let \(f\) be a piecewise smooth function (see Definition 1.1). Then for (Haar-) almost all \(\alpha\in\mathbb{T}\) and for any \(x_{0}\in\mathbb{T}\) the following holds: Let \(N\) be uniformly distributed on \(\{1,\ldots,M\}\). Then the sequence of random variables \(\left(\frac{S_{N}(f,T_{\alpha},x_{0})-A_{M}}{B_{M}}\right)_{M\in\mathbb{N}}\) does not satisfy a distributional limit theorem in the sense of (1), regardless of how \((B_{M})_{M\in\mathbb{N}}\) and \((A_{M})_{M\in\mathbb{N}}\) are chosen._
**Remark**.: _Theorem 1 reveals that the set \(\mathcal{E}\) from Theorem A can be chosen as \(\mathcal{E}=\mathcal{A}\times\mathbb{T}\) where \(\mathcal{A}\) has full (\(1\)-dimensional) Haar measure. The techniques used in the proof of Theorem A in [11] only allow to make a statement about almost all pairs \((\alpha,x_{0})\in\mathbb{T}\times\mathbb{T}\), and we do not know whether adapting the method from [11] would allow to rule out the temporal limit theorem for every \(x_{0}\in\mathbb{T}\) and \(\alpha\) in a set \(\mathcal{A}\) (that does not depend on \(x_{0}\)) of full measure. Our method of proof takes a different approach and we do not use Fourier-analytic methods as it was done in [11, 12]._
For the special case of the sawtooth function \(s(x)=\{x\}-\frac{1}{2}\), Dolgopyat and Sarig showed in [12, Corollary 2.3] that for all starting points \(x_{0}\in\mathbb{T}\), there exists a set \(\mathcal{A}_{x_{0}}\subseteq\mathbb{T}\) with full Haar measure such that for all \(\alpha\in\mathcal{A}_{x_{0}}\), \(S_{N}(s,\alpha,x_{0})\) does not satisfy a TDLT. Again, Theorem 1 implies the stronger result that there exists a set \(\mathcal{A}\subseteq\mathbb{T}\) of full Haar measure such that, for all starting points \(x_{0}\in\mathbb{T}\) and all \(\alpha\in\mathcal{A}\), the associated Birkhoff sum \(S_{N}(s,\alpha,x_{0})\) does not satisfy a TDLT.
In [12, Corollary 2.3], Dolgopyat and Sarig were able to identify a certain family of distributions where each member is realized as a temporal limit along a suitably normalized subsequence of \(S_{N}(s,T_{\alpha},x_{0})\). In the same paper, the authors ask for a better understanding of general functions in the form of Definition 1.1. A comparable family of distributions appears in our method of proof (see (7) in Lemma 3.7) for all functions \(f\) in the form of Definition 1.1. For the special case \(f=\mathds{1}_{[0,a]}\), Dolgopyat and Sarig [10] showed that if \(N\) is not sampled uniformly from \(\{1,\ldots,M\}\), but \(N\sim\mathrm{Log}(\{1,\ldots,M\})\), \(S_{N}(\mathds{1}_{[0,a]},\alpha,0)\) does not satisfy a TDLT. However, even for the special case \(f=\mathds{1}_{[0,a]}\), the result of Theorem 1 was not yet established.
The rest of this paper is organized as follows. In Section 2, we fix notation and state all necessary standard results needed to prove Theorem 1. In Section 3.1, we decompose \(f\) into a linear combination of the sawtooth function and certain indicator functions (Proposition 3.1). Further, by using the metric theory of continued fractions, we obtain the almost sure existence of infinitely many (unusually) large partial quotients whose corresponding convergent denominators also satisfy additional properties (see Lemma 3.4 and Remark 3.5), a fact that might be of theoretical interest on its own. In Section 3.2, Lemma 3.7 establishes limit distributions of \(S_{N}(f,\alpha,x_{0})\) along certain subsequences of integers. Finally, we conclude the proof of Theorem 1 by showing that there are at least two such limit distributions that do not coincide.
## 2 Prerequisites
### Notation
Given two functions \(f,g:(0,\infty)\to\mathbb{R}\), we write \(f(t)=O(g(t)),f\ll g\) or \(g\gg f\) if \(\limsup_{t\to\infty}\frac{|f(t)|}{|g(t)|}<\infty\). Any dependence of the value of the limes superior above on potential parameters is denoted by appropriate subscripts. For two sequences \((a_{k})_{k\in\mathbb{N}}\) and \((b_{k})_{k\in\mathbb{N}}\) with \(b_{k}\neq 0\) for all \(k\in\mathbb{N}\), we write \(a_{k}\sim b_{k},k\to\infty\), if \(\lim_{k\to\infty}\frac{a_{k}}{b_{k}}=1\). We denote the characteristic function of a set \(A\) by \(\mathds{1}_{A}\) and understand the value of empty sums as \(0\). For \(A\subseteq\mathbb{N}\), we define the lower density of \(A\) as \(\liminf_{N\to\infty}\frac{1}{N}\#(A\cap[1,N])\).
To avoid confusion between elements on \(\mathbb{T}\simeq\mathbb{R}/\mathbb{Z}\) and on \(\mathbb{R}\), we use the following notation: We write \(\iota\colon\mathbb{T}\hookrightarrow[0,1)\) for the canonical embedding \(x+\mathbb{Z}\mapsto\{x\}:=x-\lfloor x\rfloor\) and let \(\|x\|:=\min\{\iota(x),1-\iota(x)\}\) denote the canonical norm on \(\mathbb{T}\). We will denote the normalized Haar measure on \(\mathbb{T}\) by \(\mu\). For \(a,b,x\in\mathbb{T}\), we understand \(\mathds{1}_{[a,b]}(x)\) as \(\mathds{1}_{[\iota(a),\iota(b)]}(\iota(x))\). For \(a\in\mathbb{T}\) and \(n\in\mathbb{N}\), we define as usual \(na:=\sum_{i=1}^{n}a\). If \(x\in\mathbb{R}\) and \(a\in\mathbb{T}\), we understand \(x+a\) as \(\iota^{-1}(\{x\})+a\in\mathbb{T}\).
Let \(X,Y\) be two real-valued random variables defined on a common probability space. If \(X\) and \(Y\) have the same distribution, we write \(X\stackrel{{ d}}{{=}}Y\). If \(X\) has the distribution \(\mu\) we write \(X\sim\mu\). For \(a,b\in\mathbb{R}\) with \(a<b\), we denote the uniform distribution on \([a,b]\) as \(U([a,b])\). When \(a,b\in\mathbb{N}_{0}\) with \(a<b\), \(U([a,b])\) is the (discrete) uniform distribution on \([a,b]\cap\mathbb{N}_{0}\).
### Continued fractions and Koksma's inequality
In this subsection, we recall several well-known results from the theory of continued fractions which are heavily used in the proof of Theorem 1. For a more detailed background, we refer the reader to classical literature such as [2, 20]. Every irrational \(\alpha\in[0,1)\) has a unique infinite continued fraction expansion denoted by \([0;a_{1},a_{2},\ldots]\) with convergents \(p_{k}/q_{k}:=[0;a_{1},\ldots,a_{k}]\) that satisfy the recursions
\[p_{k+1}=p_{k+1}(\alpha)=a_{k+1}(\alpha)p_{k}+p_{k-1},\qquad q_{k+1}=q_{k+1}( \alpha)=a_{k+1}(\alpha)q_{k}+q_{k-1},\quad k\in\mathbb{N},\]
with initial values \(p_{0}=0,\;p_{1}=1,\;q_{0}=1,\;q_{1}=a_{1}\). For the sake of brevity, we just write \(a_{k},p_{k},q_{k}\), although these quantities depend on \(\alpha\). Note that the convergents \(p_{k}/q_{k}\) satisfy the inequalities
\[\frac{1}{(a_{k+1}+2)q_{k}}\leq\delta_{k}:=(-1)^{k}(q_{k}\alpha-p_{k})\leq\frac {1}{a_{k+1}q_{k}},\quad k\geq 1. \tag{2}\]
Conversely, if \(|\alpha-p/q|<\frac{1}{2q^{2}}\), Legendre's Theorem implies that \(p/q\) is a convergent of \(\alpha\).
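The recursions and inequality (2) are easy to check numerically; a short Python sketch (ours; floating point suffices for the first few partial quotients):

```python
import math

def cf_data(alpha, K):
    """Partial quotients a_1..a_K of alpha in (0,1) and the convergents p_k/q_k,
    built from the recursions with p_0 = 0, p_1 = 1, q_0 = 1, q_1 = a_1."""
    a, x = [], alpha
    for _ in range(K):
        x = 1.0 / x
        a.append(int(x))
        x -= a[-1]
    p, q = [0, 1], [1, a[0]]
    for k in range(1, K):
        p.append(a[k] * p[k] + p[k - 1])
        q.append(a[k] * q[k] + q[k - 1])
    return a, p, q

alpha = math.pi - 3                # continued fraction [0; 7, 15, 1, 292, 1, 1, ...]
a, p, q = cf_data(alpha, 6)
for k in range(1, 6):              # verify inequality (2); a[k] is a_{k+1} (0-based list)
    delta = (-1) ** k * (q[k] * alpha - p[k])
    assert 1 / ((a[k] + 2) * q[k]) <= delta <= 1 / (a[k] * q[k])
```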
Since this article deals with almost sure behaviour, we also make use of the following classical results that arise from the well-studied area of the _metric_ theory of continued fractions:
* (Diamond and Vaaler [9]): For almost every \(\alpha\), \[\sum_{\ell\leq K}a_{\ell}-\max_{\ell\leq K}a_{\ell}\sim\frac{K\log K}{\log 2 },\quad K\to\infty.\] (3)
* (Khintchine and Levy, see, e.g., [20, Chapter 5, SS9, Theorem 1]): For almost every \(\alpha\), \[\log q_{k}\sim\frac{\pi^{2}}{12\log 2}k,\quad k\to\infty.\] (4)
At several points in the proof, we will make use of Koksma's inequality, which allows one to estimate the error between sums and the corresponding integrals. For more details about this topic and the closely related area of Discrepancy theory, we refer the reader to [18]. Denoting the discrepancy of a sequence \((y_{n})_{n\in\mathbb{N}}\subseteq\mathbb{T}\) at stage \(N\in\mathbb{N}\) by
\[D_{N}((y_{n})_{n\in\mathbb{N}}):=\sup_{0\leq a\leq b<1}\left|\frac{1}{N}\#\{1 \leq n\leq N:\iota(y_{n})\in[a,b]\}-(b-a)\right|\]
and the total variation of \(f:\mathbb{T}\to\mathbb{R}\) by \(\operatorname{Var}(f)\), Koksma's inequality is given by
\[\left|\sum_{i=1}^{N}f(y_{i})-N\int_{\mathbb{T}}f(x)\mathrm{d}\mu(x)\right| \leq\operatorname{Var}(f)ND_{N}((y_{n})_{n\in\mathbb{N}}).\]
In the special case where \((y_{n})_{n\in\mathbb{N}}\) is the Kronecker sequence \((n\alpha)_{n\in\mathbb{N}}\), we have the estimates
\[D_{q_{n}}((y_{n})_{n\in\mathbb{N}})\ll\frac{1}{q_{n}},\quad D_{N}((y_{n})_{n\in \mathbb{N}})\ll\frac{1}{N}\sum_{i=1}^{k}a_{i},\]
where \(k=k(N)\) is such that \(q_{k-1}\leq N<q_{k}\). Thus Koksma's inequality leads (in this particular case also known as Denjoy-Koksma inequality, see, e.g., [16]) to
\[\big{|}S_{q_{n}}(f,\alpha,x_{0})\big{|}\ll_{f}1,\quad\big{|}S_{N}(f,\alpha,x_{ 0})\big{|}\ll_{f}\sum_{i=1}^{k}a_{i}, \tag{5}\]
with the implied constant being uniform in \(x_{0}\).
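Numerically, the first bound in (5) is quite visible: along convergent denominators \(q_{n}\), the sawtooth sums stay uniformly bounded. A quick check (ours), assuming the helpers `birkhoff_sawtooth` and `cf_data` from the sketches above are in scope:

```python
import math

alpha = math.pi - 3
_, _, q = cf_data(alpha, 8)
print([round(birkhoff_sawtooth(alpha, 0.0, qn), 4) for qn in q[1:7]])
# Every entry is O(1) (bounded by Var(f)), in line with |S_{q_n}(f, alpha, x_0)| <<_f 1.
```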
## 3 Proof of Theorem 1
### Preparatory Lemmas
**Proposition 3.1**.: _Let \(f:\mathbb{T}\to\mathbb{R}\) be as in Definition 1.1. Let \(h:\mathbb{T}\to\mathbb{R}\) be defined as_
\[h(x)=\sum_{i=1}^{\nu}H_{i}\left(\iota(x)-\frac{1}{2}\right)+\sum_{i=1}^{\nu}H_{i}\left(\mathds{1}_{[0,\gamma_{i})}(x)-\iota(\gamma_{i})\right),\]
_where \(H_{i}:=\lim_{\delta\to 0^{+}}\bigl[f(\gamma_{i}-\delta)-f(\gamma_{i}+\delta)\bigr]\). Then, for almost every \(\alpha\in\mathbb{T}\), any \(N\in\mathbb{N}\) and any \(y\in\mathbb{T}\), we have_
\[S_{N}(f,\alpha,y)=S_{N}(h,\alpha,y)+O_{f}(1),\]
_with the implied constant only depending on \(f\)._
Proof.: This can be proven analogously to [15, Lemma 3.1]. A more detailed proof can be found in [12, Appendix A].
**Proposition 3.2**.: _(Duffin and Schaeffer, [14, Theorem 3]). Let \(A\subseteq\mathbb{N}\) be a set of positive lower density and \(\psi:\mathbb{N}\to[0,\infty)\) be a monotone decreasing function such that \(\sum\limits_{q=1}^{\infty}\psi(q)=\infty\). Then, for almost every \(\alpha\), there exist infinitely many coprime \((p,q)\in\mathbb{Z}\times A\) that satisfy \(\big{|}\alpha-\frac{p}{q}\big{|}<\frac{\psi(q)}{q}\)._
**Proposition 3.3**.: _(Gallagher, [21, Lemma 2]). Let \((I_{k})_{k\in\mathbb{N}}\subseteq\mathbb{T}\) be a sequence of intervals with \(\lim_{k\to\infty}\mu(I_{k})=0\). Further let \(c>0\) and \((U_{k})_{k\in\mathbb{N}}\) be a sequence of measurable sets that satisfy the following for all \(k\in\mathbb{N}\):_
* \(U_{k}\subseteq I_{k}\)_,_
* \(\mu(U_{k})\geq c\mu(I_{k})\)_._
_Then, \(\mu(\limsup_{k\to\infty}U_{k})=\mu(\limsup_{k\to\infty}I_{k})\)._
Combining the statements above, we can deduce the following result.
**Lemma 3.4**.: _Let \(A\subseteq\mathbb{N}\) be a set with positive lower density. Then, for almost every \(\alpha=[0;a_{1},a_{2},\ldots]\in\mathbb{T}\), there exists a sequence of even integers \((k_{j})_{j\in\mathbb{N}}\) such that \(q_{k_{j}}\in A\) for all \(j\in\mathbb{N}\) and \(\lim_{j\to\infty}\frac{\sum_{i=1}^{k_{j}}a_{i}}{a_{k_{j}+1}}=0\)._
Proof.: Let \(\psi(q)=\frac{1}{q\log q\log\log q\log\log\log q}\)1, then it holds that \(\sum_{q\in\mathbb{N}}\psi(q)=\infty\) as well as \(\psi(q)\leq 1\) for all \(q\in\mathbb{N}\).
Footnote 1: For convenience, we set \(\log x:=1\) if \(x\leq e\).
Let \((r_{k}/s_{k})_{k\in\mathbb{N}}\) be an enumeration of the rationals with \(s_{k}\in A\) and \(1\leq r_{k}\leq s_{k}-1\) with \(\gcd(r_{k},s_{k})=1\). We define
\[I_{k}:=\iota^{-1}\left(\frac{r_{k}}{s_{k}}-\frac{\psi(s_{k})}{s_{k}},\frac{r_{k}}{s_{k}}+\frac{\psi(s_{k})}{s_{k}}\right)\quad\text{and}\quad U_{k}:=\iota^{-1}\left[\frac{r_{k}}{s_{k}},\frac{r_{k}}{s_{k}}+\frac{\psi(s_{k})}{s_{k}}\right).\]
By Proposition 3.2, we have \(\mu(\limsup_{k\to\infty}I_{k})=1\). Since clearly \(U_{k}\subseteq I_{k}\) and \(\mu(U_{k})\geq\frac{1}{2}\mu(I_{k})\) for all \(k\in\mathbb{N}\), an application of Proposition 3.3 shows \(\mu(\limsup_{k\to\infty}U_{k})=1\). In other words, for almost all \(\alpha\in\mathbb{T}\), there are infinitely many coprime pairs \((p,q)\in\mathbb{N}\times A\) such that
\[0\leq\alpha-\frac{p}{q}<\frac{\psi(q)}{q}=\frac{1}{q^{2}\log q\log\log q\log\log\log q}. \tag{6}\]
By Legendre's Theorem, for \(q\geq 10\), the above is only possible if \(p/q\) is a convergent of \(\alpha\). Thus, the pairs \((p,q),q\geq 10\) that satisfy (6) form a subsequence \((p_{k_{j}},q_{k_{j}})_{j\in\mathbb{N}}\) of the sequence of convergents \((p_{k},q_{k})_{k\in\mathbb{N}}\). Since \(\alpha-\frac{p_{k_{j}}}{q_{k_{j}}}\geq 0\) for all \(j\in\mathbb{N}\), it follows by (2) that all \(k_{j}\) are even. Moreover, by construction of \(\psi\) and combining (2) and (4), we have \(a_{k_{j}+1}\gg k_{j}\log k_{j}\log\log k_{j}\). By (3) this implies that for almost every \(\alpha\), we have \(\sum\limits_{i=1}^{k_{j}}a_{i}=o\left(a_{k_{j}+1}\right)\).
**Remark 3.5**.: _By obvious modifications, the statement of Lemma 3.4 also holds when "even" is replaced by "odd". In Lemma 3.7, this would lead to an even larger class of limiting distributions that are realized as limits of certain Birkhoff sums along suitable subsequences. For our purpose of ruling out any TDLT, the stated version of Lemma 3.4 is sufficient._
**Proposition 3.6**.: _Let \(\beta_{1},\beta_{2},\ldots,\beta_{\nu}\in\mathbb{T}\setminus\{0\}\), \(\nu\in\mathbb{N}\). Then there exists \(\delta>0\) such that the set \(\{N\in\mathbb{N}:\forall 1\leq j\leq\nu:\|N\beta_{j}\|>\delta\}\) has positive lower density._
Proof.: We partition \(\{\beta_{i}\}_{i=1}^{\nu}\) into rational and irrational numbers. Without loss of generality, we may assume \(\iota(\beta_{1})=\frac{a_{1}}{b_{1}},\ldots,\iota(\beta_{k})=\frac{a_{k}}{b_{k}}\in\mathbb{Q}\) with \(a_{i},b_{i}\in\mathbb{N},\gcd(a_{i},b_{i})=1,b_{i}\geq 2\) since \(\beta_{i}\neq 0\) for \(i=1,\ldots,k\), and \(\iota(\beta_{k+1}),\ldots,\iota(\beta_{\nu})\notin\mathbb{Q}\). Let \(b_{\pi}:=\prod_{i=1}^{k}b_{i}\). Clearly, if \(N\equiv 1\pmod{b_{\pi}}\), then for all \(1\leq i\leq k\), \(b_{i}\nmid N\) and thus, \(\iota(N\beta_{i})\in\left\{\frac{1}{b_{i}},\ldots,\frac{b_{i}-1}{b_{i}}\right\}\), which is disjoint from \((0,\delta)\cup(1-\delta,1)\) if \(\delta\) is chosen sufficiently small. Since \(\{N\in\mathbb{N}:N\equiv 1\pmod{b_{\pi}}\}\) has positive lower density, it suffices to show that
\[\left\{M\in\mathbb{N}:\forall i\in\{k+1,\ldots,\nu\}:\|(Mb_{\pi}+1)\beta_{i}\|>\delta\right\}\]
has positive lower density. Since \(\iota(b_{\pi}\beta_{i})\notin\mathbb{Q}\) for all \(i=k+1,\ldots,\nu\), it follows that \(\left\{(Mb_{\pi}\beta_{i}+\beta_{i})\right\}_{M\in\mathbb{N}}\) is uniformly distributed on \(\mathbb{T}\). This immediately shows
\[\liminf_{N\to\infty}\frac{1}{N}\#\left\{M\leq N:\forall i\in\{k+1,\ldots,\nu\}:\|(Mb_{\pi}+1)\beta_{i}\|>\delta\right\}\geq 1-2\nu\delta>0,\]
provided \(\delta<\frac{1}{2\nu}\).
### Main Lemma and conclusion of the proof
**Lemma 3.7**.: _Let \(f(x)=\left(\sum_{i=1}^{\nu}H_{i}\right)\left(\iota(x)-\frac{1}{2}\right)+\sum_{i=1}^{\nu}H_{i}\left(\mathds{1}_{[0,\gamma_{i})}(x)-\iota(\gamma_{i})\right)\) where \(\gamma_{1},\ldots,\gamma_{\nu}\in\mathbb{T}\) are distinct. Then for almost every \(\alpha=[0;a_{1},a_{2},\ldots]\in\mathbb{T}\) and any \(x_{0}\in\mathbb{T}\), there exists an increasing sequence \((n_{\ell})_{\ell\in\mathbb{N}}\) such that the following holds:_
* _For every_ \(\ell\in\mathbb{N}\)_,_ \(q_{n_{\ell}}\) _is a denominator of a convergent of_ \(\alpha\)_,_ \(n_{\ell}\) _is even, and_ \(\sum_{i=1}^{n_{\ell}}a_{i}=o(a_{n_{\ell}+1})\) _as_ \(\ell\to\infty\)_._
* _There exist_ \(\overline{x_{0}}\in\mathbb{T}\) _and pairwise distinct_ \(\overline{\gamma_{1}},\ldots,\overline{\gamma_{\nu}}\in\mathbb{T}\setminus\{0\}\) _such that_ \(q_{n_{\ell}}x_{0}\to\overline{x_{0}}\) _and_ \(q_{n_{\ell}}\gamma_{i}\to\overline{\gamma_{i}}\) _in_ \(\mathbb{T}\)_, and, uniformly in_ \(c\in[0,1]\)_,_
\[\lim_{\ell\to\infty}\frac{S_{\lfloor ca_{n_{\ell}+1}\rfloor q_{n_{\ell}}}(f,\alpha,x_{0})}{a_{n_{\ell}+1}}=\left(\sum_{i=1}^{\nu}H_{i}\right)\left(\int_{0}^{c}\iota(y+\overline{x_{0}})\,\mathrm{d}y-\frac{c}{2}\right)+\sum_{i=1}^{\nu}H_{i}\left(\int_{0}^{c}\mathds{1}_{[0,\overline{\gamma_{i}}]}\left(y+\overline{x_{0}}\right)\,\mathrm{d}y-c\,\iota(\overline{\gamma_{i}})\right). \tag{7}\]

Proof.: By the linearity of \(f\mapsto S_{N}(f,\alpha,x_{0})\), it suffices to treat the sawtooth part and each indicator part separately. We first show that
\[\lim_{\ell\to\infty}\frac{S_{\lfloor ca_{n_{\ell}+1}\rfloor q_{n_{\ell}}}(s,\alpha,x_{0})}{a_{n_{\ell}+1}}=\int_{0}^{c}\iota(y+\overline{x_{0}})\,\mathrm{d}y-\frac{c}{2}, \tag{8}\]
with the convergence being uniform in \(c\in[0,1]\). Let \(\varepsilon>0\) be given. We will show that for any sufficiently large \(\ell\) and any integer \(u\) with \(0\leq u\leq\lfloor ca_{n_{\ell}+1}\rfloor\) that satisfies \(\left\|\frac{u}{a_{n_{\ell}+1}}+\overline{x_{0}}\right\|>\varepsilon\), we have
\[\left|\left(S_{(u+1)q_{n_{\ell}}}(s,\alpha,x_{0})-S_{uq_{n_{\ell}}}(s,\alpha,x_{0})\right)-\left(\left\{\frac{u}{a_{n_{\ell}+1}}+\iota(\overline{x_{0}})\right\}-\frac{1}{2}\right)\right|<\varepsilon. \tag{9}\]
For \(\ell\) large enough, we have
\[\left\|q_{n_{\ell}}x_{0}-\overline{x_{0}}\right\|<\varepsilon/10.\]
Now observe that
\[S_{(u+1)q_{n_{\ell}}}(s,\alpha,x_{0})-S_{uq_{n_{\ell}}}(s,\alpha,x_{0}) =S_{q_{n_{\ell}}}\left(s,\alpha,T_{\alpha}^{uq_{n_{\ell}}}(x_{0})\right)\] \[=\sum_{n=0}^{q_{n_{\ell}}-1}\left\{\iota\left((n+uq_{n_{\ell}})\alpha\right)+\iota(x_{0})\right\}-\frac{q_{n_{\ell}}}{2}\] \[=\sum_{n=0}^{q_{n_{\ell}}-1}\left\{n\frac{p_{n_{\ell}}}{q_{n_{\ell}}}+n\frac{\delta_{n_{\ell}}}{q_{n_{\ell}}}+u\delta_{n_{\ell}}+\iota(x_{0})\right\}-\frac{q_{n_{\ell}}}{2}\] \[=\sum_{n=0}^{q_{n_{\ell}}-1}\left\{n\frac{p_{n_{\ell}}}{q_{n_{\ell}}}+\frac{u/a_{n_{\ell}+1}}{q_{n_{\ell}}}+\frac{O(1/a_{n_{\ell}+1})}{q_{n_{\ell}}}+\iota(x_{0})\right\}-\frac{q_{n_{\ell}}}{2}\]
where \(\delta_{n_{\ell}}:=\iota(q_{n_{\ell}}\alpha)=\frac{1}{a_{n_{\ell}+1}q_{n_{\ell}}}\left(1+O\left(\frac{1}{a_{n_{\ell}+1}}\right)\right)\), which follows from (2) and the assumption that \(n_{\ell}\) is even. Since \(\gcd(p_{n_{\ell}},q_{n_{\ell}})=1\), we have
\[\sum_{n=0}^{q_{n_{\ell}}-1}\left\{n\frac{p_{n_{\ell}}}{q_{n_{\ell}}}+\frac{u/a_{n_{\ell}+1}}{q_{n_{\ell}}}+\frac{O(1/a_{n_{\ell}+1})}{q_{n_{\ell}}}+\iota(x_{0})\right\}\] \[=\sum_{j=0}^{q_{n_{\ell}}-1}\left\{\frac{j}{q_{n_{\ell}}}+\frac{u/a_{n_{\ell}+1}}{q_{n_{\ell}}}+\frac{O(1/a_{n_{\ell}+1})}{q_{n_{\ell}}}+\frac{\lfloor q_{n_{\ell}}\iota(x_{0})\rfloor}{q_{n_{\ell}}}+\frac{\iota(q_{n_{\ell}}x_{0})}{q_{n_{\ell}}}\right\}\] \[=\sum_{j=0}^{q_{n_{\ell}}-1}\left\{\frac{j}{q_{n_{\ell}}}+\frac{u/a_{n_{\ell}+1}+\iota\left(q_{n_{\ell}}x_{0}\right)}{q_{n_{\ell}}}+\frac{O(1/a_{n_{\ell}+1})}{q_{n_{\ell}}}\right\}\] \[=\sum_{j=0}^{q_{n_{\ell}}-1}\left\{\frac{j}{q_{n_{\ell}}}+\frac{u/a_{n_{\ell}+1}+\iota\left(\overline{x_{0}}\right)}{q_{n_{\ell}}}+\frac{O(1/a_{n_{\ell}+1})}{q_{n_{\ell}}}+\frac{R_{\varepsilon}}{q_{n_{\ell}}}\right\},\]
where \(R_{\varepsilon}:=\iota(q_{n_{\ell}}x_{0})-\iota(\overline{x_{0}})\), which satisfies \(|R_{\varepsilon}|\leq\frac{\varepsilon}{10}\) by the choice of \(\ell\). For all integers \(u\) with \(0\leq u\leq\lfloor ca_{n_{\ell}+1}\rfloor\) such that \(\left\|\frac{u}{a_{n_{\ell}+1}}+\overline{x_{0}}\right\|>\varepsilon\), we have
\[S_{(u+1)q_{n_{\ell}}}(s,\alpha,x_{0})-S_{uq_{n_{\ell}}}(s,\alpha,x_{0}) =S_{q_{n_{\ell}}}\left(s,\alpha,T_{\alpha}^{uq_{n_{\ell}}}(x_{0})\right)\] \[=\sum_{j=0}^{q_{n_{\ell}}-1}\left\{\frac{j}{q_{n_{\ell}}}+\frac{u/a_{n_{\ell}+1}+\iota\left(\overline{x_{0}}\right)}{q_{n_{\ell}}}+\frac{O(1/a_{n_{\ell}+1})}{q_{n_{\ell}}}+\frac{R_{\varepsilon}}{q_{n_{\ell}}}\right\}-\frac{q_{n_{\ell}}}{2}\] \[=\sum_{j=0}^{q_{n_{\ell}}-1}\left(\frac{j}{q_{n_{\ell}}}+\frac{\left\{u/a_{n_{\ell}+1}+\iota\left(\overline{x_{0}}\right)\right\}}{q_{n_{\ell}}}+\frac{O(1/a_{n_{\ell}+1})}{q_{n_{\ell}}}+\frac{R_{\varepsilon}}{q_{n_{\ell}}}\right)-\frac{q_{n_{\ell}}}{2}\] \[=\left\{u/a_{n_{\ell}+1}+\iota(\overline{x_{0}})\right\}-\frac{1}{2}+O(1/a_{n_{\ell}+1})+R_{\varepsilon},\]
which proves (9). Clearly,
\[\#\left\{0\leq u\leq\lfloor ca_{n_{\ell}+1}\rfloor:\left\|\frac{u}{a_{n_{\ell}+1 }}+\overline{x_{0}}\right\|<\varepsilon\right\}\leq 2\varepsilon a_{n_{\ell}+1}+2\]
and by the Denjoy-Koksma inequality (see (5)), we have
\[|S_{(u+1)q_{n_{\ell}}}(s,\alpha,x_{0})-S_{uq_{n_{\ell}}}(s,\alpha,x_{0})|\ll 1,\]
for any \(0\leq u\leq a_{n_{\ell}+1}-1\). Thus,
\[S_{\lfloor ca_{n_{\ell}+1}\rfloor q_{n_{\ell}}}(s,\alpha,x_{0}) =\sum_{u=0}^{\lfloor ca_{n_{\ell}+1}\rfloor-1}S_{(u+1)q_{n_{\ell}}}(s,\alpha,x_{0})-S_{uq_{n_{\ell}}}(s,\alpha,x_{0})\] \[=\sum_{u=0}^{\lfloor ca_{n_{\ell}+1}\rfloor-1}\left(\left\{u/a_{n_{\ell}+1}+\iota(\overline{x_{0}})\right\}-\frac{1}{2}+O(\varepsilon)+O\left(1/a_{n_{\ell}+1}\right)\right)+O\left(\varepsilon a_{n_{\ell}+1}\right)\] \[=a_{n_{\ell}+1}\left(\int_{0}^{c}\iota\left(y+\overline{x_{0}}\right)\,\mathrm{d}y-\frac{c}{2}+O(\varepsilon)\right)+O(1),\]
where the implied constants in the \(O\)-terms depend neither on \(c\) nor on \(\varepsilon\). In the last line, we used Koksma's inequality to compare sum and integral. With \(\varepsilon\to 0\), (8) follows.
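The limit (8) can also be observed numerically. In the exact-arithmetic sketch below (ours; the partial quotients are an illustrative choice), \(\alpha=[0;1,2,5000,1,1,1,1,1]\) has \(q_{2}=3\) and a huge \(a_{3}=5000\) at the even index \(n=2\); with \(x_{0}=0\) (so \(\overline{x_{0}}=0\)) and \(c=\frac{1}{2}\), the right-hand side of (8) is \(c^{2}/2-c/2=-0.125\):

```python
from fractions import Fraction

def cf_value(pq):
    """Exact value of the finite continued fraction [0; a_1, ..., a_K]."""
    x = Fraction(0)
    for a in reversed(pq):
        x = 1 / (Fraction(a) + x)
    return x

def S_sawtooth_exact(alpha, N):
    total, x = Fraction(0), Fraction(0)   # starting point x_0 = 0
    for _ in range(N):
        total += x % 1 - Fraction(1, 2)   # f(x) = {x} - 1/2
        x += alpha
    return total

a3 = 5000
alpha = cf_value([1, 2, a3, 1, 1, 1, 1, 1])
N = (a3 // 2) * 3                          # floor(c * a_3) * q_2 with c = 1/2
print(float(S_sawtooth_exact(alpha, N) / a3))   # approximately -0.125
```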
Next, we fix \(i\in\{1,\ldots,\nu\}\) and show that
\[\lim_{\ell\to\infty}\frac{S_{\lfloor ca_{n_{\ell}+1}\rfloor q_{n_{\ell}}}(\mathds{1}_{[0,\gamma_{i})},\alpha,x_{0})}{a_{n_{\ell}+1}}=\int_{0}^{c}\mathds{1}_{[0,\overline{\gamma_{i}}]}\left(y+\overline{x_{0}}\right)\,\mathrm{d}y-c\,\iota(\overline{\gamma_{i}}), \tag{10}\]
with the convergence being uniform in \(c\in[0,1]\). For convenience, we will drop the index \(i\) in the following, that is, we set \(\gamma:=\gamma_{i},\overline{\gamma}:=\overline{\gamma_{i}}\). Since \(\overline{\gamma}\neq 0\), we take \(\varepsilon>0\) such that \(\varepsilon\leq\|\overline{\gamma}\|\). Further, let \(\ell\) be large enough such that \(\left|\frac{\lfloor ca_{n_{\ell}+1}\rfloor}{a_{n_{\ell}+1}}-c\right|\leq\frac{1}{a_{n_{\ell}+1}}<\varepsilon/10\) uniformly in \(c\in[0,1]\). Moreover, for \(\ell\) large enough, we have
\[\left\|q_{n_{\ell}}\gamma-\overline{\gamma}\right\|<\varepsilon/10,\quad \left\|q_{n_{\ell}}x_{0}-\overline{x_{0}}\right\|<\varepsilon/10.\]
We will show that for any \(\ell\) sufficiently large and any \(0\leq u\leq\lfloor ca_{n_{\ell}+1}\rfloor\) that satisfies \(\left\|\frac{u}{a_{n_{\ell}+1}}+\overline{x_{0}}-\overline{\gamma}\right\|>\varepsilon\) and \(\left\|\frac{u}{a_{n_{\ell}+1}}+\overline{x_{0}}\right\|>\varepsilon\), we have
\[\left|\left(S_{(u+1)q_{n_{\ell}}}(\mathds{1}_{[0,\gamma)},\alpha,x_{0})-S_{uq_{n_{\ell}}}(\mathds{1}_{[0,\gamma)},\alpha,x_{0})\right)-\left(\mathds{1}_{[0,\overline{\gamma}]}\left(u/a_{n_{\ell}+1}+\overline{x_{0}}\right)-\iota(\overline{\gamma})\right)\right|<\varepsilon.\]
To prove this, observe that
\[S_{(u+1)q_{n_{\ell}}}(\mathds{1}_{[0,\gamma)},\alpha,x_{0})-S_{uq_{n_{\ell}}}(\mathds{1}_{[0,\gamma)},\alpha,x_{0}) =S_{q_{n_{\ell}}}\left(\mathds{1}_{[0,\gamma)},\alpha,T_{\alpha}^{uq_{n_{\ell}}}(x_{0})\right)\] \[=\#\left\{0\leq n\leq q_{n_{\ell}}-1:\iota\left(n\alpha+uq_{n_{\ell}}\alpha+x_{0}\right)\in[0,\iota(\gamma)]\right\}-\iota(\gamma)q_{n_{\ell}}\] \[=\#\left\{0\leq n\leq q_{n_{\ell}}-1:\iota\left(n\frac{p_{n_{\ell}}}{q_{n_{\ell}}}+n\frac{\delta_{n_{\ell}}}{q_{n_{\ell}}}+u\delta_{n_{\ell}}+x_{0}\right)\in[0,\iota(\gamma)]\right\}-\iota(\gamma)q_{n_{\ell}}\] \[=\#\left\{0\leq n\leq q_{n_{\ell}}-1:\iota\left(n\frac{p_{n_{\ell}}}{q_{n_{\ell}}}+\frac{u/a_{n_{\ell}+1}}{q_{n_{\ell}}}+\frac{O(1/a_{n_{\ell}+1})}{q_{n_{\ell}}}+x_{0}\right)\in[0,\iota(\gamma)]\right\}\] \[\qquad-\iota(\gamma)q_{n_{\ell}}.\]
Since \(\gcd(p_{n_{\ell}},q_{n_{\ell}})=1\), we have
\[\#\left\{0\leq n\leq q_{n_{\ell}}-1:\iota\left(n\frac{p_{n_{\ell}}}{q_{n_{\ell}}}+\frac{u/a_{n_{\ell}+1}}{q_{n_{\ell}}}+\frac{O(1/a_{n_{\ell}+1})}{q_{n_{\ell}}}+x_{0}\right)\in[0,\iota(\gamma)]\right\}\] \[=\#\left\{0\leq n\leq q_{n_{\ell}}-1:\left\{n\frac{p_{n_{\ell}}}{q_{n_{\ell}}}+\frac{u/a_{n_{\ell}+1}}{q_{n_{\ell}}}+\frac{O(1/a_{n_{\ell}+1})}{q_{n_{\ell}}}+\frac{\lfloor q_{n_{\ell}}\iota(x_{0})\rfloor}{q_{n_{\ell}}}+\frac{\iota(q_{n_{\ell}}x_{0})}{q_{n_{\ell}}}\right\}\in\left[0,\frac{\lfloor q_{n_{\ell}}\iota(\gamma)\rfloor}{q_{n_{\ell}}}+\frac{\iota(q_{n_{\ell}}\gamma)}{q_{n_{\ell}}}\right]\right\}\] \[=\#\left\{0\leq j\leq q_{n_{\ell}}-1:\left\{\frac{j}{q_{n_{\ell}}}+\frac{\left\{u/a_{n_{\ell}+1}+\iota(\overline{x_{0}})\right\}}{q_{n_{\ell}}}+\frac{O(1/a_{n_{\ell}+1})}{q_{n_{\ell}}}+\frac{R_{\varepsilon}}{q_{n_{\ell}}}\right\}\in\left[0,\frac{\lfloor q_{n_{\ell}}\iota(\gamma)\rfloor}{q_{n_{\ell}}}+\frac{\iota(q_{n_{\ell}}\gamma)}{q_{n_{\ell}}}\right]\right\}\] \[=\#\left\{0\leq j\leq q_{n_{\ell}}-1:\left\{\frac{j}{q_{n_{\ell}}}+\frac{\left\{u/a_{n_{\ell}+1}+\iota(\overline{x_{0}})\right\}}{q_{n_{\ell}}}+\frac{O(1/a_{n_{\ell}+1})}{q_{n_{\ell}}}+\frac{R_{\varepsilon}}{q_{n_{\ell}}}\right\}\in\left[0,\frac{\lfloor q_{n_{\ell}}\iota(\gamma)\rfloor}{q_{n_{\ell}}}+\frac{\iota(\overline{\gamma})}{q_{n_{\ell}}}+\frac{S_{\varepsilon}}{q_{n_{\ell}}}\right]\right\},\]
where \(R_{\varepsilon}:=\iota(q_{n_{\ell}}x_{0})-\iota(\overline{x_{0}})\) and \(S_{\varepsilon}:=\iota(q_{n_{\ell}}\gamma)-\iota(\overline{\gamma})\). By the choice of \(\ell\), we have \(|R_{\varepsilon}|,|S_{\varepsilon}|\leq\frac{\varepsilon}{10}\leq\frac{\|\overline{\gamma}\|}{10}\). Moreover, let \(\ell\) be large enough such that \(|O(1/a_{n_{\ell}+1})|\leq\frac{\varepsilon}{10}\) and since \(\left\|\frac{u}{a_{n_{\ell}+1}}+\overline{x_{0}}\right\|>\varepsilon\), we can drop the fractional part in the previous expression.
We now distinguish two cases: First, consider \(u\) with \(\left\{u/a_{n_{\ell}+1}+\iota(\overline{x_{0}})\right\}\leq\iota(\overline{ \gamma})\). Then using our assumption \(\left\|\frac{u}{a_{n_{\ell}+1}}+\overline{x_{0}}-\overline{\gamma}\right\|>\varepsilon\), it follows that \(\left\{u/a_{n_{\ell}+1}+\iota(\overline{x_{0}})\right\}-\iota(\overline{ \gamma})\leq-\varepsilon\) and thus
\[0\leq\left\{u/a_{n_{\ell}+1}+\iota(\overline{x_{0}})\right\}+R_{\varepsilon} +O(1/a_{n_{\ell}+1})-S_{\varepsilon}<\iota(\overline{\gamma}).\]
This implies
\[\#\left\{0\leq j\leq q_{n_{\ell}}-1:\frac{j}{q_{n_{\ell}}}+\frac{\left\{u/a_{n _{\ell}+1}+\iota(\overline{x_{0}})\right\}}{q_{n_{\ell}}}+\frac{O(1/a_{n_{\ell }+1})}{q_{n_{\ell}}}+\frac{R_{\varepsilon}}{q_{n_{\ell}}}\in\left[0,\frac{ \left\lfloor q_{n_{\ell}}\iota(\gamma)\right\rfloor}{q_{n_{\ell}}}+\frac{\iota (\overline{\gamma})}{q_{n_{\ell}}}+\frac{S_{\varepsilon}}{q_{n_{\ell}}} \right]\right\}=\left\lfloor q_{n_{\ell}}\iota(\gamma)\right\rfloor+1. \tag{11}\]
Similarly, if \(\left\{u/a_{n_{\ell}+1}+\iota(\overline{x_{0}})\right\}>\iota(\overline{ \gamma})\), then
\[\#\left\{0\leq j\leq q_{n_{\ell}}-1:\frac{j}{q_{n_{\ell}}}+\frac{\left\{u/a_{ n_{\ell}+1}+\iota(\overline{x_{0}})\right\}}{q_{n_{\ell}}}+\frac{O(1/a_{n_{\ell}+1}) }{q_{n_{\ell}}}+\frac{R_{\varepsilon}}{q_{n_{\ell}}}\in\left[0,\frac{\left\lfloor q _{n_{\ell}}\iota(\gamma)\right\rfloor}{q_{n_{\ell}}}+\frac{\iota(\overline{ \gamma})}{q_{n_{\ell}}}+\frac{S_{\varepsilon}}{q_{n_{\ell}}}\right]\right\}= \left\lfloor q_{n_{\ell}}\iota(\gamma)\right\rfloor. \tag{12}\]
Note that
\[\iota(\gamma)q_{n_{\ell}}=\left\lfloor q_{n_{\ell}}\iota(\gamma)\right\rfloor+ \iota(\overline{\gamma})+T_{\varepsilon},\]
where \(T_{\varepsilon}=\iota(q_{n_{\ell}}\gamma)-\iota(\overline{\gamma})\). We choose \(\ell\) large enough such that \(|T_{\varepsilon}|\leq\frac{\varepsilon}{10}\). Combining this with (11) and (12) yields
\[\left|\left(S_{(u+1)q_{n_{\ell}}}(\mathds{1}_{[0,\gamma)},\alpha,x_{0})-S_{uq_{n_{\ell}}}(\mathds{1}_{[0,\gamma)},\alpha,x_{0})\right)-\left(\mathds{1}_{[0,\overline{\gamma}]}\left(u/a_{n_{\ell}+1}+\overline{x_{0}}\right)-\iota(\overline{\gamma})\right)\right|=|T_{\varepsilon}|<\varepsilon,\]
for any \(0\leq u\leq\left\lfloor ca_{n_{\ell}+1}\right\rfloor\) that satisfies \(\left\|\frac{u}{a_{n_{\ell}+1}}+\overline{x_{0}}-\overline{\gamma}\right\|>\varepsilon\) and \(\left\|\frac{u}{a_{n_{\ell}+1}}+\overline{x_{0}}\right\|>\varepsilon\). Clearly,
\[\#\left\{0\leq u\leq\left\lfloor ca_{n_{\ell}+1}\right\rfloor:\left\|\frac{u}{ a_{n_{\ell}+1}}+\overline{x_{0}}-\overline{\gamma}\right\|\leq\varepsilon\text{ or }\left\|\frac{u}{a_{n_{\ell}+1}}+\overline{x_{0}}\right\|\leq\varepsilon\right\}\leq 4 \varepsilon a_{n_{\ell}+1}+4\]
and thus analogously to above, we obtain
\[S_{\lfloor ca_{n_{\ell}+1}\rfloor q_{n_{\ell}}}(\mathds{1}_{[0,\gamma)},\alpha,x_{0}) =\sum_{u=0}^{\lfloor ca_{n_{\ell}+1}\rfloor-1}S_{(u+1)q_{n_{\ell}}}(\mathds{1}_{[0,\gamma)},\alpha,x_{0})-S_{uq_{n_{\ell}}}(\mathds{1}_{[0,\gamma)},\alpha,x_{0})\] \[=\sum_{u=0}^{\lfloor ca_{n_{\ell}+1}\rfloor-1}\left(\mathds{1}_{[0,\overline{\gamma}]}\left(u/a_{n_{\ell}+1}+\overline{x_{0}}\right)-\iota(\overline{\gamma})\right)+O(\varepsilon a_{n_{\ell}+1})+O(1)\] \[=a_{n_{\ell}+1}\left(\int_{0}^{c}\left(\mathds{1}_{[0,\overline{\gamma}]}\left(y+\overline{x_{0}}\right)-\iota(\overline{\gamma})\right)\mathrm{d}y\right)+O(\varepsilon a_{n_{\ell}+1})+O(1).\]
With \(\varepsilon\to 0\), (10) follows. Combining (8) and (10), we obtain statement (7), which finishes the proof.
Proof of Theorem 1.: We assume that there exist normalizing sequences \((A_{M})_{M\in\mathbb{N}}\) and \((B_{M})_{M\in\mathbb{N}}\) with \(A_{M}\in\mathbb{R},\,B_{M}>0\) and \(B_{M}\to\infty\) such that
\[\lim_{M\to\infty}\frac{S_{N}(f,\alpha,x_{0})-A_{M}}{B_{M}}\overset{d}{=}X, \tag{13}\]
where \(N\sim U([1,M])\) and \(X\) is a random variable with a non-degenerate distribution, i.e. \(X\) attains at least two different values with positive probability. By Proposition 3.1 and since \(B_{M}\to\infty\), we can assume that \(f\) is of the form
\[f(x)=\left(\iota(x)-\frac{1}{2}\right)\sum_{i=1}^{\nu}H_{i}+\sum_{i=1}^{\nu}H_{i}\left(\mathds{1}_{[0,\gamma_{i})}(x)-\iota(\gamma_{i})\right)\]
where \(H_{i}\in\mathbb{R}\). Let \((n_{\ell})_{\ell\in\mathbb{N}}\) be the sequence of integers from Lemma 3.7 and, for some \(c\in(0,1]\), define \(M_{\ell}:=\lfloor ca_{n_{\ell}+1}\rfloor q_{n_{\ell}}+q_{n_{\ell}}-1\). Clearly, any \(N\in[0,M_{\ell}]\) has a unique representation of the form \(N=b_{\ell}q_{n_{\ell}}+N^{\prime}\) where \(0\leq b_{\ell}\leq\lfloor ca_{n_{\ell}+1}\rfloor\) and \(0\leq N^{\prime}\leq q_{n_{\ell}}-1\). It follows immediately from the definition that we can decompose the Birkhoff sum as
\[S_{N}(f,\alpha,x_{0}) =S_{b_{\ell}q_{n_{\ell}}}(f,\alpha,x_{0})+S_{N^{\prime}}\left(f,\alpha,T_{\alpha}^{b_{\ell}q_{n_{\ell}}}(x_{0})\right)\] \[=S_{b_{\ell}q_{n_{\ell}}}(f,\alpha,x_{0})+S_{N^{\prime}}(f,\alpha,x_{0}+b_{\ell}q_{n_{\ell}}\alpha).\]
Applying the Denjoy-Koksma inequality (see (5)) shows that
\[|S_{N^{\prime}}(f,\alpha,x_{0}+b_{\ell}q_{n_{\ell}}\alpha)|\ll_{f}\sum_{i=1}^{ n_{\ell}}a_{i},\]
which by the properties of \((n_{\ell})_{\ell\in\mathbb{N}}\) implies that
\[\frac{S_{N^{\prime}}(f,\alpha,x_{0}+b_{\ell}q_{n_{\ell}}\alpha)}{a_{n_{\ell}+ 1}}=o(1),\quad\ell\to\infty.\]
If \(N_{\ell}\sim U([0,M_{\ell}])\), then it is easy to see that
\[N_{\ell}\stackrel{{ d}}{{=}}b_{\ell}q_{n_{\ell}}+N^{\prime},\]
where \(b_{\ell}\sim U([0,\lfloor ca_{n_{\ell}+1}\rfloor])\), \(N^{\prime}\sim U([0,q_{n_{\ell}}-1])\) and \(b_{\ell}\) and \(N^{\prime}\) are independent. Hence,
\[\frac{S_{N_{\ell}}(f,\alpha,x_{0})}{a_{n_{\ell}+1}}\stackrel{{ d}}{{=}}\frac{S_{b_{\ell}q_{n_{\ell}}}(f,\alpha,x_{0})}{a_{n_{ \ell}+1}}+o(1).\]
Thus we get for any \(x\in\mathbb{R}\)
\[\frac{1}{M_{\ell}}\#\left\{1\leq N\leq M_{\ell}:\frac{S_{N}(f,\alpha,x_{0})}{a_{n_{\ell}+1}}\leq x\right\} =\frac{1}{M_{\ell}}\#\left\{0\leq N\leq M_{\ell}:\frac{S_{N}(f,\alpha,x_{0})}{a_{n_{\ell}+1}}\leq x\right\}+o(1)\] \[=\mathbb{P}\left[\frac{S_{N_{\ell}}(f,\alpha,x_{0})}{a_{n_{\ell}+1}}\leq x\right]+o(1)\] \[=\mathbb{P}\left[\frac{S_{b_{\ell}q_{n_{\ell}}}(f,\alpha,x_{0})}{a_{n_{\ell}+1}}\leq x+o(1)\right]+o(1)\] \[=\mathbb{P}\left[\frac{S_{\lfloor U_{c}a_{n_{\ell}+1}\rfloor q_{n_{\ell}}}(f,\alpha,x_{0})}{a_{n_{\ell}+1}}\leq x+o(1)\right]+o(1),\]
where \(U_{c}\sim U([0,c])\). In the last line, we used that
\[\mathbb{P}\left[\frac{S_{b_{\ell}q_{n_{\ell}}}(f,\alpha,x_{0})}{a_{n_{\ell}+1}}\leq y\right]=\mathbb{P}\left[\frac{S_{\lfloor U_{c}a_{n_{\ell}+1}\rfloor q_{n_{\ell}}}(f,\alpha,x_{0})}{a_{n_{\ell}+1}}\leq y\right]+o(1),\]
uniformly in \(y\in\mathbb{R}\). Moreover, by Lemma 3.7 we get the (almost sure) limit
\[\lim_{\ell\to\infty}\frac{S_{\lfloor U_{c}a_{n_{\ell}+1}\rfloor q_{n_{\ell}}}(f,\alpha,x_{0})}{a_{n_{\ell}+1}}=g(U_{c}),\]
where, for \(x\in[0,1]\),
\[g(x):=\left(\sum_{i=1}^{\nu}H_{i}\right)\left(\int_{0}^{x}\iota(y+\overline{x_{0}})\,\mathrm{d}y-\frac{x}{2}\right)+\sum_{i=1}^{\nu}H_{i}\left(\int_{0}^{x}\mathds{1}_{[0,\overline{\gamma_{i}}]}\left(y+\overline{x_{0}}\right)\,\mathrm{d}y-x\,\iota(\overline{\gamma_{i}})\right).\]
Since \(g(U_{c})\) has a continuous distribution, this implies that
\[\lim_{\ell\to\infty}\frac{1}{M_{\ell}}\#\left\{1\leq N\leq M_{\ell}:\frac{S_{N }(f,\alpha,x_{0})}{a_{n_{\ell}+1}}\leq x\right\}=\mathbb{P}\left[g(U_{c})\leq x \right].\]
Now let \(\tilde{A}_{M}:=0\) and \(\tilde{B}_{M}:=\frac{M}{q_{n(M)}}\), where \(q_{n(M)}\leq M<q_{n(M)+1}\). We have shown in the previous argument that, for any \(c\in(0,1]\) and for \((n_{\ell})_{\ell\in\mathbb{N}}\) as before, we have
\[\lim_{\ell\to\infty}\frac{S_{\lfloor U_{c}a_{n_{\ell}+1}\rfloor q_{n_{\ell}}}(f,\alpha,x_{0})-\tilde{A}_{M_{\ell}}}{\tilde{B}_{M_{\ell}}}\overset{d}{=}cg(U_{c}).\]
By the convergence of types theorem (see, e.g., [22, Theorem 14.2]) and since the limit in (13) also holds along every subsequence tending to infinity, there exist quantities \(B_{c}>0\) and \(A_{c}\in\mathbb{R}\) such that for any \(c\in(0,1]\) we have
\[cg(U_{c})\overset{d}{=}B_{c}X+A_{c}.\]
This implies that for any \(0<c_{1},c_{2}\leq 1\), we can write
\[g(U_{c_{1}})\overset{d}{=}B(c_{1},c_{2})g(U_{c_{2}})+A(c_{1},c_{2}), \tag{14}\]
where \(B(c_{1},c_{2})>0\) and \(A(c_{1},c_{2})\in\mathbb{R}\).
We now collect a few properties of the function \(g(x)\) for \(x\in[0,1]\). First, we note that \(g(0)=g(1)=0\). Further, \(g\) is differentiable except at the points of the form \(-\overline{x}_{0}\) and \(\overline{\gamma}_{i}-\overline{x}_{0}\) (read in \([0,1]\) via \(\iota\)), and \(g\) is non-constant. To see the latter, we fix \(\delta>0\) small enough such that \(\delta<\min_{i=2,\ldots,\nu}\|\overline{\gamma}_{1}-\overline{\gamma_{i}}\|\) (which is possible because \(\overline{\gamma_{1}}\neq\overline{\gamma_{i}}\) for all \(i=2,\ldots,\nu\)). We then get
\[g^{\prime}\left(\iota(\overline{\gamma}_{1}-\overline{x}_{0})-\frac{\delta}{2}\right)-g^{\prime}\left(\iota(\overline{\gamma}_{1}-\overline{x}_{0})+\frac{\delta}{2}\right)=H_{1}-\delta\left(\sum_{i=1}^{\nu}H_{i}\right).\]
By the choice of \(f\), there exists at least one \(H_{i}\neq 0\); thus, we may assume \(H_{1}\neq 0\). Since \(\delta\) can be chosen arbitrarily small, it follows that \(g^{\prime}\) is not constant and hence \(g\) is not constant. Hence, locally to the right of \(0\), \(g(x)\) is either monotonically increasing or monotonically decreasing. In the following, we discuss the case where \(g(x)\) is increasing; the case where \(g(x)\) is decreasing can be handled analogously. It follows that there exist \(\varepsilon\in(0,1)\) and \(\delta\in(0,1]\) with \(\varepsilon<\delta\) and the following properties: The function \(g\) is increasing on \([0,\varepsilon]\) with \(g(\varepsilon)>0\). On \([\varepsilon,\delta]\), \(g\) is decreasing and \(0\leq g(\delta)<g(\varepsilon)\).
[Figure: illustration of the argument above; clearly, \(g([0,\varepsilon])=g([0,\delta])\).]
Using (14) we infer
\[g(U_{\varepsilon})\stackrel{{ d}}{{=}}B(\varepsilon,\delta)g(U_{ \delta})+A(\varepsilon,\delta).\]
However, by the choice of \(\varepsilon\) and \(\delta\), we have \(g([0,\varepsilon])=g([0,\delta])\), which immediately implies that \(A(\varepsilon,\delta)=0\) and \(B(\varepsilon,\delta)=1\). By construction, we have
\[\mathbb{P}\left[g(U_{\varepsilon})>g(\delta)\right]<\mathbb{P}\left[g(U_{ \delta})>g(\delta)\right],\]
which is an immediate contradiction to \(g(U_{\varepsilon})\stackrel{{ d}}{{=}}g(U_{\delta})\).
### Acknowledgements
We would like to thank Bence Borda for many valuable discussions. LF and MH are supported by the Austrian Science Fund (FWF) Project P 35322 _Zufall und Determinismus in Analysis und Zahlentheorie_.
|
2306.16045 | OpenNDD: Open Set Recognition for Neurodevelopmental Disorders Detection | Since the strong comorbid similarity among NDDs, e.g., between attention-deficit
hyperactivity disorder and autism spectrum disorder (ASD), can interfere with
accurate diagnosis, identifying unknown classes within NDDs is extremely crucial and
challenging. We design a novel open set recognition framework for
ASD-aided diagnosis (OpenNDD), which trains a model by combining an autoencoder
and adversarial reciprocal points learning to distinguish in-distribution and
out-of-distribution categories as well as identify ASD accurately. Considering
the strong similarities between NDDs, we present a joint scaling method by
Min-Max scaling combined with Standardization (MMS) to increase the differences
between classes for better distinguishing unknown NDDs. We conduct the
experiments on the hybrid datasets from Autism Brain Imaging Data Exchange I
(ABIDE I) and THE ADHD-200 SAMPLE (ADHD-200) with 791 samples from four sites
and the results demonstrate the superiority on various metrics. Our OpenNDD
achieves promising performance, where the accuracy is 77.38%, AUROC is 75.53%
and the open set classification rate is as high as 59.43%. | Jiaming Yu, Zihao Guan, Xinyue Chang, Shujie Liu, Zhenshan Shi, Xiumei Liu, Changcai Yang, Riqing Chen, Lanyan Xue, Lifang Wei | 2023-06-28T09:28:33Z | http://arxiv.org/abs/2306.16045v2 | # OpenNDD: Open Set Recognition for Neurodevelopmental Disorders Detection
###### Abstract
Neurodevelopmental disorders (NDDs) are a highly prevalent group of disorders that exhibit strong clinical and behavioral similarities, which makes it very challenging to accurately distinguish different NDDs such as autism spectrum disorder (ASD) and attention-deficit hyperactivity disorder (ADHD). Moreover, there are no reliable physiological markers for NDDs diagnosis, which relies solely on psychological evaluation criteria. However, it is crucial to prevent misdiagnosis and underdiagnosis through intelligent assisted diagnosis, which is closely tied to the subsequent treatment. In order to relieve these issues, we propose a novel open set recognition framework for NDDs screening and detection, which is the first application of open set recognition in this field. It combines an autoencoder and adversarial reciprocal points open set recognition to accurately identify known classes as well as recognize classes never encountered. Considering the strong similarities between different NDDs, we present a joint scaling method called MMS to distinguish unknown disorders. To validate the feasibility of our presented method, we design a reciprocal opposition experiment protocol on the hybrid datasets from Autism Brain Imaging Data Exchange I (ABIDE I) and THE ADHD-200 SAMPLE (ADHD-200) with 791 samples from four sites, and the results demonstrate the superiority of our method on various metrics. Our OpenNDD has achieved promising performance, where the accuracy is 77.38%, AUROC is 75.53% and the open set classification rate is as high as 59.43%.
Keywords: Neurodevelopmental disorders, Open set recognition, ASD, Adversarial reciprocal points, MMS.
## 1 Introduction
Neurodevelopmental disorders (NDDs) are a group of early-onset disorders affecting brain development and function, which are characterized by wide genetic and clinical variability with high prevalence [1]. They include autism spectrum disorder (ASD), intellectual disabilities, attention-deficit hyperactivity disorder (ADHD), communication disorders, specific learning disorders, and motor disorders, among others [2, 3]. Evidence has shown that NDDs have overlapping phenotypes, frequently co-occur, and share multiple genetic causes, which implies strong similarities between NDDs [4, 5, 6, 7, 8, 9, 10]. For instance, Rommelse et al. have recognized considerable clinical, genetic, and neuropsychological overlap between ASD and ADHD, though ASD and ADHD are considered as distinct disorders in the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) [11]. Sokolova et al. have shown that 22% to 83% of children with ASD have symptoms that satisfy the DSM-5 criteria for ADHD; vice versa, 30% to 65% of children with ADHD have clinically significant symptoms of ASD [12]. Therefore, although ASD and ADHD are considered as two different disorders, some behaviors of ASD may at times be misdiagnosed as ADHD, and some patients with ASD may present comorbid ADHD symptoms, easily leading to misdiagnosis and underdiagnosis. Diagnosing ASD or other NDDs relies solely on psychological criteria, and there are no pathological biomarkers [13]. Considerable work on exploring NDDs has been done with traditional methods [14, 15]. However, these methods assume by default that the target subject is either an ASD subject or a typically developing (TD) subject. This is clearly unreasonable because there are many kinds of mental disorders, and we cannot merely categorize patient types as ASD or TD. Thus, an ideal open system should also reasonably screen for unknown diseases. Fortunately, open set recognition (OSR) has recently achieved great success in visual recognition tasks. Aiming to simultaneously classify the seen classes and identify the unseen classes as 'unknown', OSR can not only distinguish among the training classes, but also indicate whether a subject comes from an unknown category [16].
Inspired by the above observations, we introduce Adversarial Reciprocal Points Learning (ARPL) based open set recognition for NDDs-aided diagnosis to alleviate misdiagnosis and underdiagnosis among similar NDDs, such as ASD and ADHD. Compared with other OSR methods [17, 18, 19], the reciprocal-point-based method elaborates the open space risk from the perspective of multiclass integration and models the latent open space of each known class in the feature space. Based on reciprocal points with an adversarial margin constraint between two known categories, the classification framework diminishes both the open space risk and the empirical classification risk [20].
In this paper, we design an Auto Encoder network (AE) combining with the ARPL to extract brain functional connectivity (FC) networks for NDDs detection. It is the first application of OSR in the field of NDDs screening and detection. To broaden the distinction between in-distribution (ID) data like ASD subjects or TD subjects and out-of-distribution (OOD) data such as ADHD subjects, we propose a joint scaling method by Min-Max scaling combined with Standardization (MMS), which significantly improves the differences between the ID and OOD data. And the Maximum Mean Discrepancy (MMD), a domain adaptation method, is used to make a distinction between TD
subjects and ASD subjects by reducing the variability among TD subjects. Moreover, we design a Reciprocal Opposition Experiment (ROE) to verify the feasibility and robustness of our proposed method. The same four sites of open hybrid datasets from ABIDE I and ADHD-200 are used for experimental verification and evaluation in our experiments demonstrates the superiority on various metrics.
## 2 Method
### Problem Formalization
Let \(D_{L}=\{(x_{1},y_{1}),\ldots,(x_{n},y_{n})\}\) be a training dataset of \(n\) subjects that includes \(K\) known categories, where \(y_{i}\in\{1,\ldots,K\}\) represents the category of \(x_{i}\). There are \(m\) test subjects in the testing dataset \(D_{T}=\{t_{1},\ldots,t_{m}\}\), where \(t_{i}\) belongs to categories \(\{1,\ldots,K,K+1,\ldots,K+U\}\) and \(U\) is the quantity of unknown categories. The deep embedding space of a certain category \(k\) is designated as \(S_{k}\) and its corresponding open space is designated as \(O_{k}\). For the purpose of formalizing and effectively managing the open space risk, \(O_{k}\) is decomposed into two subspaces: the positive open space from other known categories, \(O_{k}^{pos}\), and the remaining infinite unknown space, the negative open space \(O_{k}^{neg}\). Namely, \(O_{k}=O_{k}^{pos}\cup O_{k}^{neg}\). In this paper, the subjects in \(D_{L}^{k}\subseteq S_{k}\) come from a certain category \(k\), subjects in \(D_{L}^{\neq k}\subseteq O_{k}^{pos}\) come from the other known categories, and subjects in the unknown dataset \(D_{U}\subseteq O_{k}^{neg}\) come from categories other than those in \(D_{L}\), where \(K=2\) denotes the two known categories of ID and \(U=1\) denotes an unknown category of OOD.
Figure 1: The framework of our method. Time series are extracted from the fMRI of all subjects, which is used for MMS after computing functional connectivity. In training stage, the model is trained by AE and the features obtained from MMS are used as the input data. In testing stage, the trained model is used to obtain predictions and the input features extracted from OOD and ID subjects for testing.
### Overview
The goal of our architecture is to accurately identify known classes and unknown classes. An overview of the architecture is given in **Fig. 1**. Corresponding to each fMRI image, the time series are first extracted from the anatomical brain regions of interest (ROIs) with the Anatomical Automatic Labeling (AAL) atlas. MMS, which includes min-max scaling and standardization, is applied to the features obtained from the computed FC. It can enlarge the differences between the ID and OOD data. In the training stage, the ID subjects for training are then used to train the model in a self-supervised manner to obtain feature embeddings. In addition, a domain adaptation approach, MMD, is applied to reduce the differences between TD subjects from ABIDE I and ADHD-200. The model is trained by the AE, with the features obtained from MMS on the ID subjects for training as the input data. The total loss of the AE model is composed of the MMD loss and the ARPL loss. The testing stage differs from the training stage in that the input data include both the ID subjects for testing and the OOD subjects. The trained model is then used to obtain predictions from the input features extracted from the OOD and ID subjects for testing.
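A minimal sketch of the MMS joint scaling on the vectorized FC features (ours; the per-feature axis and the min-max-then-standardize order are our assumptions, as the paper does not fix them here):

```python
import numpy as np

def mms(features):
    """Joint scaling: per-feature min-max scaling to [0, 1],
    followed by per-feature standardization (zero mean, unit variance)."""
    lo, hi = features.min(axis=0), features.max(axis=0)
    scaled = (features - lo) / np.where(hi > lo, hi - lo, 1.0)
    mu, sigma = scaled.mean(axis=0), scaled.std(axis=0)
    return (scaled - mu) / np.where(sigma > 0, sigma, 1.0)

# Example: 791 subjects, upper-triangular entries of a 116x116 AAL FC matrix.
rng = np.random.default_rng(0)
fc_features = rng.normal(size=(791, 116 * 115 // 2))
print(mms(fc_features).shape)
```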
### Open Set Recognition for NDDs Detection
To structure an open set recognition framework for NDDs detection, we employ ARPL in our model, which formulates the open space risk from the perspective of multi-class integration and models the latent open space for each known class in the feature space. By utilizing ARPL, we can diminish the open space risk along with the empirical classification risk [20].
**Reciprocal Points for Classification.** The potential space of the sub-dataset \(D_{L}^{\neq k}\cup D_{U}\) is represented by the reciprocal point (RP) \(P^{k}\) of a particular category \(k\). Hence, the subjects in \(O_{k}\) should be closer to \(P^{k}\) than the subjects in \(S_{k}\):

\[\max\left(\varsigma(D_{L}^{\neq k}\cup D_{U},P^{k})\right)\leq d,\forall d\in \varsigma(D_{L}^{k},P^{k}) \tag{1}\]
where \(\varsigma(\cdot,\cdot)\) computes the distances between all subjects of two sets. According to Eq. (1), we can classify a subject by comparing its embedded feature with the RPs of the known categories: the framework assesses the distance between the embedding and each category's RP to ascertain which category the subject belongs to. Finally, the softmax function is applied to normalize the classification probability. The learning of \(\theta\) is achieved by minimizing the RP classification loss, based on the negative log-probability of the true category \(k\):
\[L_{c}(x;\theta,P)=-\log p(y=k|x,C,P) \tag{2}\]
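To make the classification rule concrete, the following minimal numpy sketch (our illustration, not the authors' released code) computes the probability in Eq. (2) as a softmax over the distances between the embedded feature \(C(x)\) and each category's RP; using one reciprocal point per category and the squared Euclidean distance as \(d_{e}\) are simplifying assumptions.

```python
import numpy as np

def rp_classification_probs(c_x, reciprocal_points):
    """Softmax over distances d_e(C(x), P^k): per Eq. (1), samples of
    category k lie FAR from their own reciprocal point, so a larger
    distance to P^k yields a higher probability for category k."""
    d_e = np.sum((reciprocal_points - c_x) ** 2, axis=1)  # (K,)
    logits = d_e - d_e.max()                              # numerical stability
    return np.exp(logits) / np.exp(logits).sum()

def rp_classification_loss(c_x, reciprocal_points, true_k):
    """L_c of Eq. (2): negative log-probability of the true category."""
    return -np.log(rp_classification_probs(c_x, reciprocal_points)[true_k])
```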
In addition to classifying the known categories, an advantage of minimizing Eq. (2) is that it separates the known and unknown spaces by maximizing the distance between the RPs of the categories and their corresponding training subjects as follows:
\[\underset{f\in H}{\operatorname{argmax}}\{\varsigma(D_{L}^{k},P^{k})\} \tag{3}\]
Even though Eq. (2) and Eq. (3) contribute to maximizing the interval between the closed space \(S_{k}\) and the open space \(O_{k}\), \(O_{k}\) itself is not constrained by Eq. (2), which means the open space risk still remains.
**Adversarial Margin Constraint.** To reduce the open space risk, the Adversarial Margin Constraint (AMC) is introduced to constrain the open space [20]. The total open space risk can be restricted once the open space risk of each known category is constrained. To separate \(S_{k}\) and \(O_{k}\) to a larger extent, the open space \(O_{k}\) has to be limited so that the extent of the open set can be confirmed. Our goal is to reduce the open space risk of each known category by limiting the open space \(O_{k}\) to a finite range, i.e., keeping the maximum distance between unknown data and the RP below \(R\): \(\max(\varsigma(D_{L}^{\neq k}\cup D_{U},P^{k}))\leq R\), where \(R\) is a learnable margin. Obviously, it is almost impossible to govern the open space risk by limiting the open space directly, as there are a great number of unknown subjects in \(D_{U}\). Taking into account that the spaces \(S_{k}\) and \(O_{k}\) are mutually complementary, the open space risk can be constrained indirectly by restricting the distance between the subjects from \(S_{k}\) and the RP \(P^{k}\) to be less than \(R\) as follows:
\[L_{o}(x;\theta,P^{k},R^{k})=\max(d_{e}(C(x),P^{k})-R,0) \tag{4}\]
Concretely, minimizing Eq. (4) together with the classification loss \(L_{c}\) is equivalent to making \(\varsigma(D_{L}^{\neq k}\cup D_{U},P^{k})\) in Eq. (1) as small as possible relative to \(R\). In such multi-category interactions, the known categories restrain each other. On the one hand, the distance between category \(k\) and its RP increases owing to the optimization of the classification loss in Eq. (2). On the other hand, category \(k\) is bounded by the other RPs \(P^{\neq k}\) as follows:
\[\underset{f\in H}{\operatorname{argmin}}\{\max(\{\varsigma(D_{L}^{k},P^{\neq k})-R\}\cup\{0\})\} \tag{5}\]
Through the adversarial mechanism between Eq. (3) and Eq. (5), each known category is maximally pushed to the margin of the limited feature space, keeping it away from its potential unknown space. Hence, we can predict known categories \(y_{i}\in\{1,\ldots,K\}\) correctly and reject unknown ones \(t_{j}\in\{K+1,\ldots,K+U\}\).
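A minimal sketch of the AMC loss in Eq. (4) is given below; the squared Euclidean distance for \(d_{e}\) and the weighting of the combined objective are our assumptions, since the text does not fix them here.

```python
import numpy as np

def amc_loss(c_x, p_k, R):
    """L_o of Eq. (4): penalize a category-k embedding that drifts
    farther than the learnable margin R from its reciprocal point P^k,
    which indirectly bounds the open space O_k."""
    d_e = np.sum((c_x - p_k) ** 2)  # d_e(C(x), P^k)
    return max(d_e - R, 0.0)

# Assumed combination: total ARPL objective L = L_c + lambda_o * L_o;
# the full model additionally adds the MMD loss of Section 2.4.
```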
### Joint Min-Max Scaling and Standardization (MMS) with MMD
NDDs exhibit strong clinical similarities, making it challenging to differentiate the FC data of different disorders. Thus, we present MMS to enlarge the differences between the ID and OOD subjects, with the aim of differentiating ID data (TD subjects and ASD subjects) from OOD data (ADHD subjects). While FC is fed into the AE for training and prediction in ASD diagnosis, conventional classifiers struggle to differentiate between ID and OOD data, primarily due to the influence of similar OOD data. To address this limitation, we perform MMS on the FC before applying ARPL:
\[\begin{split} M_{min-max}&=a+\frac{(M-M_{min})(b-a)}{M_{max}-M_{min}}\\ D&=\frac{M_{min-max}-\bar{M}_{min-max}}{M_{std}}\end{split} \tag{7}\]
where \(M\) is a matrix of FC whose values are mapped into the \([a,b]\) interval; we set \(a=-1\) and \(b=1\). Applying Min-Max scaling to \(M\) yields the rescaled matrix \(M_{min-max}\). Subsequently, \(D\) is calculated from \(M_{min-max}\), its mean \(\bar{M}_{min-max}\), and its standard deviation \(M_{std}\), so that the feature embedding \(D\) of each subject has zero mean and unit standard deviation.
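A minimal numpy sketch of Eq. (7) is shown below; whether the statistics are computed globally or per subject is not fixed by the text, so global statistics are assumed.

```python
import numpy as np

def mms(fc, a=-1.0, b=1.0):
    """Joint Min-Max scaling and Standardization (Eq. (7)) of the FC
    values M, mapped into [a, b] and then standardized to zero mean
    and unit standard deviation."""
    m_scaled = a + (fc - fc.min()) * (b - a) / (fc.max() - fc.min())
    return (m_scaled - m_scaled.mean()) / m_scaled.std()
```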
Addressing data heterogeneity is another challenge that our work tackles. To mitigate the impact of having two different subtypes of TD subjects in the ADHD-200 dataset [21], we incorporate MMD into our framework, inspired by [22]. MMD is a domain adaptation metric that measures the discrepancy between the data distributions of the source and target domains. It maps the variables into a higher-dimensional space through a mapping function and computes the difference between the expectations of the two mapped distributions, i.e., the mean discrepancy; taking the supremum of this mean discrepancy over the function class yields the MMD value.
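For illustration, a kernel form of MMD is sketched below (the RBF kernel and the bandwidth are assumptions; formally MMD takes the supremum over a function class, which the kernel trick realizes in closed form).

```python
import numpy as np

def rbf_mmd2(x, y, sigma=1.0):
    """Squared MMD between source features x (e.g., TD subjects from
    ABIDE I) and target features y (TD subjects from ADHD-200).
    x: (n, d) array; y: (m, d) array."""
    def gram(a, b):
        sq = ((a ** 2).sum(1)[:, None] + (b ** 2).sum(1)[None, :]
              - 2.0 * a @ b.T)           # pairwise squared distances
        return np.exp(-sq / (2.0 * sigma ** 2))
    return gram(x, x).mean() + gram(y, y).mean() - 2.0 * gram(x, y).mean()
```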
## 3 Experimental Settings
### Dataset and Experimental Details
Since different preprocessing pipelines result in data heterogeneity [24], it is necessary for NDDs detection to choose data processed by the same pipeline. To validate and compare our approach with baseline models, we construct multi-site hybrid fMRI datasets from ABIDE I and ADHD-200, both preprocessed by the NeuroImaging Analysis Kit (NIAK) pipeline in the Preprocessed Connectomes Project (PCP) [23]. We collect 791 subjects, including 470 TD subjects, 144 ASD subjects and 177 ADHD subjects. Considering that the domain shift across sites [24] in the hybrid datasets would otherwise reduce the difficulty of our OpenNDD task, we retain only the four sites KKI, NYU, OHSU and PITT with no missing values to maintain the degree of difficulty. For the OSR task, the numbers of ID and OOD subjects should be close to 1:1 [20]. We therefore design a novel cross-validation scheme to address the imbalance among TD, ASD and ADHD subjects (470, 144 and 177, respectively): we divide the TD subjects into three random parts so that the ratio of the three categories is close to 1:1:1 [20], and each TD part is combined with the ASD subjects and ADHD subjects in 5 cross-validation experiments (a minimal sketch is given below). Thus, there are 15 cross-validation experiments over which we report the mean and standard deviation.
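```python
import numpy as np

def build_runs(td_ids, asd_ids, adhd_ids, seed=0):
    """Sketch of the 15-run protocol: TD subjects are split into three
    random parts (bringing the TD:ASD:ADHD ratio close to 1:1:1); each
    part is combined with the ASD and ADHD subjects in 5 cross-validation
    runs. How train/test folds are drawn inside each run is an
    assumption of this illustration."""
    rng = np.random.default_rng(seed)
    td_parts = np.array_split(rng.permutation(td_ids), 3)
    return [(part, asd_ids, adhd_ids, fold)
            for part in td_parts for fold in range(5)]  # 3 x 5 = 15 runs
```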
Since this is the first application of OSR for NDDs screening and detection, we regard as the baseline the method that introduces ARPL directly to NDDs open set detection. We compare the proposed framework with this baseline through ablation experiments on seven metrics: accuracy (ACC), AUROC, open set classification rate (OSCR), specificity (SPE), sensitivity (SEN), AUIN and AUOUT [20, 25], where AUROC (ability to distinguish ID and OOD data), OSCR (ability of open set classification), AUIN (ability to distinguish ID data) and AUOUT (ability to distinguish OOD data) are OSR metrics. The ablation studies aim to validate the contributions of MMS and MMD to the performance of our OpenNDD. Regarding implementation details, we train the model with a batch size of 16 over 100 epochs. The momentum stochastic gradient descent (Momentum SGD) optimizer is used for classifier training [26]. The learning rate of the classifier starts at 0.01 and decreases by a factor of 0.1 every 30 epochs during training.
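The optimization setup can be sketched as below; the momentum value (0.9) and the placeholder model are assumptions, as the text only names Momentum SGD.

```python
import torch

model = torch.nn.Linear(116 * 115 // 2, 2)  # placeholder: upper-triangular AAL FC -> 2 classes
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(100):          # batches of size 16 elided
    # ... forward pass, total loss = ARPL loss + MMD loss, backward ...
    optimizer.step()
    scheduler.step()              # lr *= 0.1 every 30 epochs
```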
### Experimental Results
The results of the ablation experiments on seven metrics for the four sites are shown in **Table 1**. Each result is obtained over 15 cross-validation experiments. The general experiments take ASD subjects and TD subjects as the ID data and ADHD subjects as the OOD data. Our framework significantly outperforms the baseline method and achieves promising performance on all metrics except SPE. Owing to the presence of two different subtypes of TD subjects in ADHD-200 [21], all frameworks tend to show larger variations on the SPE and SEN metrics. As shown in **Table 1**, the SPE metric tends to be considerably lower than the SEN metric, which implies that the discrimination ability for TD subjects is lower than that for ASD subjects. According to [20], the goal is to distribute all known classes around the periphery of the bounded embedding space and to confine unknown classes to the internal bounded space, as our outcomes in **Fig. 2** show. The figure combines histograms and scatter diagrams of the OOD and ID subjects for testing, in which the TD subjects (blue scatters) are distributed everywhere. The majority of TD subjects and ASD subjects (red scatters) are pushed to the two sides as much as possible, while the middle part is predominantly ADHD subjects (green scatters). These results demonstrate that our proposed method brings a significant improvement in distinguishing ID and OOD samples and establishes a benchmark for open-set classification accuracy.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{**Metrics (\%)**} & \multicolumn{3}{c|}{**General experiments**} & \multicolumn{3}{c}{**ROEs**} \\ \cline{2-7} & MMS (w/o) & \begin{tabular}{c} MMS (w) \\ MMD (w/o) \\ \end{tabular} & \begin{tabular}{c} MMS (w) \\ MMD (w) \\ \end{tabular} & MMS (w/o) & \begin{tabular}{c} MMS (w) \\ MMD (w/o) \\ \end{tabular} &
\begin{tabular}{c} MMS (w) \\ MMD (w) \\ \end{tabular} \\ \hline
**ACC** & 76.27\(\pm\)3.94 & 76.84\(\pm\)4.92 & **77.38\(\pm\)5.92** & 72.93\(\pm\)5.23 & 72.63\(\pm\)4.80 & **73.23\(\pm\)5.22** \\ \hline
**AUROC** & 17.88\(\pm\)4.63 & 63.31\(\pm\)6.87 & **75.53\(\pm\)6.01** & 20.48\(\pm\)8.61 & 71.70\(\pm\)6.62 & **74.95\(\pm\)5.69** \\ \hline
**OSCR** & 18.11\(\pm\)4.69 & 52.41\(\pm\)6.92 & **59.43\(\pm\)6.98** & 18.26\(\pm\)7.40 & 56.02\(\pm\)6.49 & **57.02\(\pm\)6.31** \\ \hline
**SPE** & **69.90\(\pm\)8.54** & 69.12\(\pm\)8.72 & 66.35\(\pm\)8.69 & **66.65\(\pm\)10.74** & 64.00\(\pm\)7.02 & 63.65\(\pm\)7.34 \\ \hline
**SEN** & 83.26\(\pm\)7.17 & 85.13\(\pm\)4.14 & **89.51\(\pm\)6.41** & 78.31\(\pm\)6.62 & 80.35\(\pm\)8.64 & **81.87\(\pm\)7.25** \\ \hline
**AUIN** & 46.13\(\pm\)1.83 & 75.25\(\pm\)5.12 & **83.83\(\pm\)4.71** & 55.77\(\pm\)4.73 & 86.41\(\pm\)3.19 & **86.54\(\pm\)3.78** \\ \hline
**AUOUT** & 23.79\(\pm\)1.08 & 46.21\(\pm\)6.26 & **57.77\(\pm\)7.54** & 19.63\(\pm\)1.72 & 45.52\(\pm\)8.85 & **51.55\(\pm\)7.58** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results of our ablation experiments. General experiments (ASD subjects and TD subjects as the ID data, ADHD subjects as the OOD data) and ROEs (ADHD subjects and TD subjects as the ID data, ASD subjects as the OOD data) on four sites (KKI, NYU, OHSU and PITT) of the hybrid datasets from ABIDE I and ADHD-200.
To further verify the feasibility of our method, we design an effective experimental protocol, the ROE; the comparative experiments on seven metrics are also shown in **Table 1**. ROEs take ADHD subjects and TD subjects as the ID data and ASD subjects as the OOD data. Similar to the general experiments, all metrics are significantly enhanced except for SPE. The consistency of the two experimental results also validates the robustness of our proposed method.
## 4 Discussion and Conclusions
The goal of our architecture is to accurately identify known classes and unknown classes for NDDs screening and detection. We have noticed that the accuracy of the open set can be significantly boosted by improving the accuracy of the closed set, as shown in the literature [16]. Meanwhile, there are two subtypes of normal individuals in ADHD-200 [21], which may lead to intra-class variance and degrade the SPE metric. We leave the exploration and validation of these two issues for future work.
Figure 2: Illustrations of the 6 best results among the 15 cross-validation experiments. Blue, red and green represent TD subjects, ASD subjects and ADHD subjects, respectively. We divide the TD subjects into three random parts, and each part is crossed with the ASD and ADHD subjects in 5 cross-validation experiments, yielding 15 cross-validation experiments in total.
Extending prior studies applying OSR to visual tasks [16, 17, 18, 19, 20], our proposed framework OpenNDD is the first application of OSR in the field of NDDs screening and detection. The experimental results prove that our method can distinguish known and unknown categories as well as identify known NDDs accurately. A limitation is that we solely choose subjects from the same four sites with no missing values while ignoring group differences such as age and gender. Nevertheless, our framework demonstrates robustness and feasibility, which is of great significance for better assisting the clinical diagnosis of NDDs.
|
2303.02836 | Blockchain-Empowered Lifecycle Management for AI-Generated Content
(AIGC) Products in Edge Networks | The rapid development of Artificial IntelligenceGenerated Content (AIGC) has
brought daunting challenges regarding service latency, security, and
trustworthiness. Recently, researchers presented the edge AIGC paradigm,
which effectively optimizes the service latency by distributing AIGC services to edge
devices. However, AIGC products are still unprotected and vulnerable to
tampering and plagiarization. Moreover, as a kind of online non-fungible
digital property, the free circulation of AIGC products is hindered by the lack
of trustworthiness in open networks. In this article, for the first time, we
present a blockchain-empowered framework to manage the lifecycle of edge AIGC
products. Specifically, leveraging fraud proof, we first propose a protocol to
protect the ownership and copyright of AIGC, called Proof-of-AIGC. Then, we
design an incentive mechanism to guarantee the legitimate and timely executions
of the funds-AIGC ownership exchanges among anonymous users. Furthermore, we
build a multi-weight subjective logic-based reputation scheme, with which AIGC
producers can determine which edge service provider is trustworthy and reliable
to handle their services. Through numerical results, the superiority of the
proposed approach is demonstrated. Last but not least, we discuss important
open directions for further research. | Yinqiu Liu, Hongyang Du, Dusit Niyato, Jiawen Kang, Zehui Xiong, Chunyan Miao, Xuemin, Shen, Abbas Jamalipour | 2023-03-06T02:06:13Z | http://arxiv.org/abs/2303.02836v1 | # Blockchain-Empowered Lifecycle Management for AI-Generated Content (AIGC) Products in Edge Networks
###### Abstract
The rapid development of Artificial Intelligence-Generated Content (AIGC) has brought daunting challenges regarding service latency, security, and trustworthiness. Recently, researchers presented the edge AIGC paradigm, which effectively optimizes the service latency by distributing AIGC services to edge devices. However, AIGC products are still unprotected and vulnerable to tampering and plagiarization. Moreover, as a kind of online non-fungible digital property, the free circulation of AIGC products is hindered by the lack of trustworthiness in open networks. In this article, for the first time, we present a blockchain-empowered framework to manage the lifecycle of edge AIGC products. Specifically, leveraging fraud proof, we first propose a protocol to protect the ownership and copyright of AIGC, called Proof-of-AIGC. Then, we design an incentive mechanism to guarantee the legitimate and timely executions of the funds-AIGC ownership exchanges among anonymous users. Furthermore, we build a multi-weight subjective logic-based reputation scheme, with which AIGC producers can determine which edge service provider is trustworthy and reliable to handle their services. Through numerical results, the superiority of the proposed approach is demonstrated. Last but not least, we discuss important open directions for further research.
AI-Generated Content (AIGC), Blockchain, Edge Networks, Circulation, Reputation.
## I Introduction
As an emerging technique, Artificial Intelligence-Generated Content (AIGC) has attracted significant attention from both academia and industry [1]. Instead of manually generating the content, AIGC enables the automatic creation (e.g., writing an essay, composing a song, and drawing a picture) using machine learning techniques such as Generative Adversarial Networks (GAN) and diffusion models. Consequently, we can acquire massive high-quality multimodal content while significantly saving on the required labor. Since 2014, AIGC has experienced rapid development and has been widely adopted in 3D gaming, voice assistants, video processing, etc. [2].
However, the current centralized AIGC framework suffers from high service latency. For instance, to generate an image on the _Hugging Face_ platform ([https://huggingface.co/spaces](https://huggingface.co/spaces)) using the _Stable Diffusion_ model, users have to wait for 40-60 seconds. The reasons are twofold. Firstly, AIGC inference is complicated and time-consuming. In the above example, the Stable Diffusion model creates images from scratch by gradually conducting denoising operations, which takes around 20-30 seconds. Moreover, the queueing latency is also considerable (20-30 seconds in our example) since massive service requests congest one central server.
Recently, researchers have presented the idea of edge AIGC, which deploys AIGC generation services on edge devices [1]. By distributing services to numerous edge devices which are close to users, service latency can be effectively reduced. Meanwhile, the robustness gets increased due to the elimination of single-point-failure. Moreover, users can customize AIGC services, e.g., sharing their background, locations, or characters with edge devices to generate personalized content accordingly. Finally, since the users directly communicate with edge devices, personal information can be protected from leakage. Although enjoying these advantages, the following challenges exist in deploying edge AIGC.
* As digital property on the Internet, AIGC products are vulnerable to tampering and plagiarization (the tampering and plagiarization are shown in Section III).
* The economic system of AIGC is complicated. Without a mechanism guaranteeing that all the participants can benefit from AIGC circulation and obtain their deserved revenue legitimately, the generation, distribution, and trading of AIGC products will be discouraged.
* Recall that the generation services become distributed in edge AIGC. Therefore, as Edge Service Providers (ESPs) show significant heterogeneity in terms of model configuration and service quality, the users can hardly select reliable ESPs for their tasks.
Fortunately, blockchain provides available solutions for these issues. As a distributed ledger, blockchain can construct trustworthiness among anonymous participants by maintaining an immutable and traceable history [3]. Moreover, smart contracts make blockchain programmable, enabling the on-chain deployment of arbitrarily complex mechanisms (e.g., two-phase locks and incentive mechanisms). Consequently, the status and trading of AIGC products can be monitored
on-chain, eliminating the security and trustworthiness problems. In 2022, Oben AI published the proposal of _AIGC chain ([https://www.aigecchain.io/about](https://www.aigecchain.io/about))_, which allows users to contribute resources for training distributed AIGC models and acquiring rewards. As the first blockchain for AIGC, however, this project is still under development and far from completing the whole ecosystem. Moreover, it only uses blockchain as a crowdsourcing platform for generating AIGC, while the distribution and trading of AIGC are unprotected.
In this article, we propose blockchain-empowered AIGC product lifecycle management in edge networks. Specifically, we first define the "_AIGC product lifecycle_" and discuss four major concerns regarding lifecycle management. To help AIGC products defend against malicious attacks, a Proof-of-AIGC mechanism is proposed, using fraud proofs to deal with plagiarization. Given the complex economic system of AIGC, we further equip our framework with an on-chain incentive mechanism based on Hash Time Lock (HTL) [4]. With guaranteed and timely revenue issuance, the circulation of AIGC can be motivated and incentivized. Finally, noticing the heterogeneity of ESPs, we enable AIGC producers to select ESPs based on their accumulated reputation, which is modeled by the Multi-weight Subjective Logic (MWSL) method [5]. _To the best of our knowledge, this is the first work discussing the issues and solutions of AIGC product lifecycle management._ Our contributions are summarized as follows:
* We present Proof-of-AIGC mechanism. Different from Proof-of-X (e.g., Proof-of-Semantics [6]), a challenge scheme is implemented, thus deregistering plagiarized AIGC products and protecting users' copyright.
* We propose an incentive mechanism with one-way incentives and two-way guarantees. The former encourages users to participate in managing the AIGC product lifecycle, and the latter ensures the atomic executions of AIGC trading, i.e., fund-ownership exchanges.
* We design a reputation-based ESP selection strategy. By calculating and sharing reputation, users can easily quantify the trustworthiness of numerous heterogeneous ESPs and assign their tasks to the most reliable one.
## II AIGC: Current Progress, Lifecycle Management, and Concerns
In this section, we first review the development of AIGC. Then, we show the AIGC product lifecycle in edge networks. Finally, important security and circulation concerns existing in the AIGC product lifecycle are discussed.
### _Development of AIGC_
AIGC is an emerging generation paradigm after Professional-Generated Content and User-Generated Content. As the name suggests, the development of AIGC is driven by progress in AI research. Before 2010, machines could hardly generate high-quality content due to the limited capability of deep learning models. Since 2014, various generative neural networks have been presented, such as GANs and variational autoencoders. Consequently, AIGC entered a period of rapid development. In 2020, OpenAI published the _Generative Pretrained Transformer-3_ (GPT-3) model, supporting multiple text generation tasks, e.g., machine translation and report creation [7]. Two years later, the diffusion-based _DALL-E-2_ model was presented. Based on the text description given by users, DALL-E-2 can automatically generate high-quality realistic images. Apart from text-to-text and text-to-image generation, AIGC is widely adopted in video processing, gaming, voice assistants, etc. Moreover, it is regarded as a building block for many revolutionary techniques, including Web3, the metaverse, digital twins, and even the future 7G [8].
### _AIGC Product Lifecycle Management in Edge Networks_
Traditionally, AIGC models are operated by centralized servers, such as the _Hugging Face_ platform. In this case, massive users send requests to the central server, wait in line, and receive the services. Researchers attempt to deploy AIGC services in edge networks to avoid request congestion and optimize service latency. Compared with central servers, edge devices also have enough computing resources to conduct AIGC inference and are closer to users. Therefore, users can communicate with the devices with lower transmission latency. Moreover, since AIGC services are distributed to multiple edge devices, the waiting latency can be significantly decreased. Nonetheless, the current research only covers the generation of AIGC products. As a kind of non-fungible online property, like NFTs [9], each AIGC product has its ownership, copyright, and value. Accordingly, the protection and management of AIGC products should cover their whole lifecycle. Next, we define the concept of the "AIGC product lifecycle".
The entire AIGC product lifecycle has three phases, namely generation, distribution, and trading (see Steps 1 - 3 in Fig. 1). Taking text-to-image generation as an example, the primary process of each phase is described below.
* **Generation:** Producers, with insufficient physical resources, pack prompts, i.e., interesting and accurate text descriptions, and requirements into a request and send it to ESPs (Step 1 ). Edge devices serve as ESPs, providing AIGC generation services for clients using local well-trained AIGC models (Step 2 ). Since AIGC generation is time-consuming and consumes computing resources, ESPs can claim fees from producers.
* **Distribution:** After generation, the producers acquire the ownership of the AIGC products. Consequently, they have the right to distribute these products to social media or AIGC platforms through edge networks (Step 3 ).
* **Trading:** Since AIGC products are regarded as a novel kind of non-fungible digital properties, they can be traded. The trading process can be modelled as a fund-ownership exchange between two parties.
During such a lifecycle, several issues are yet to be addressed. As shown in Fig. 1, firstly, the ownership and copyright of AIGC products are vulnerable on the Internet. Meanwhile, the producers also encounter problems in choosing reliable ESPs. Finally, the legitimate trading of AIGC products among anonymous participants is unsolved. In the following part, we discuss these concerns in detail.
### _Security Concerns_
Since AIGC products are published on open networks, various kinds of attacks threaten them [10]. Here, we illustrate two crucial attacks targeting the AIGC products, namely the tampering of ownership and the plagiarization of AIGC. Note that other attacks, such as denial-of-service and injection, can also destroy AIGC [11]. Nevertheless, since they are general-purpose attacks and have been well-elaborated, we do not cover them in this article.
#### Ii-C1 Tampering of Ownership
Taking text-to-image AIGC as an example, Steps 1 - 3 in Fig. 1 illustrate its lifecycle. To conduct ownership tampering, attackers generally deploy many robots to closely monitor the Internet and promptly find high-quality AIGC products (Step 4 ). After selecting the victim image, the attacker, assisted by its robots, distributes massive messages to re-publish the image, pretending that the image is its original work (Step 5 ). Since the attacker can broadcast information more rapidly, consumers have a high probability of first reading the information offered by the attacker. If so, the ownership of the victim image can be regarded as successfully tampered with.
#### Ii-C2 Plagiarization of AIGC
Compared with ownership tampering, the plagiarization of AIGC is harder to detect. In this case, the attacker does not directly claim ownership of the victim image. Instead, it downloads the high-quality victim image, makes some slight revisions (e.g., adding noise or changing the colors of some objects), and publishes it as a brand-new AIGC product (Steps 6 - 3 ). Since such revision is much easier and cheaper than generating AIGC images from scratch, the attacker can make significant profits. Moreover, it can even repeat this strategy, i.e., use one original image to generate a series of duplicates with little difference, further increasing its gains.
### _Circulation Concerns_
Apart from security concerns, to realize the free circulation of AIGC, we also encounter two challenges.
#### Ii-D1 Heterogeneity of ESPs
The lifecycle of every AIGC product starts from generation, i.e., using well-trained AIGC models to create content based on producers' requirements. Nonetheless, ESPs in edge networks show great heterogeneity in model and service quality. Taking Fig. 1 as an example, one ESP is equipped with _Stable Diffusion_[12], the state-of-the-art AIGC model. The training of Stable Diffusion is called forward diffusion, i.e., smoothly perturbing the original image data by adding noise. The corresponding training time exceeds 150,000 hours on 256 Nvidia A100 GPUs, at a cost of US$600,000. In contrast, another ESP in Fig. 1 only has a simple GAN model. The quality of the content generated by these two ESPs differs significantly. However, since ESPs may lie to producers, producers cannot determine which ESPs are trustworthy.
#### Ii-D2 Issuance of the Deserved Revenue
Nowadays, we are experiencing the evolution from Web2 to Web3. In the Web3 era, everyone owns the content he/she generates. Correspondingly, all contributions to maintaining, distributing, and enriching the community should be rewarded. Nevertheless, ensuring that all the deserved revenue is issued in a timely manner is challenging, especially in the AIGC scenario, whose economic system is complex. Given the high costs of AIGC generation, the computing power and time invested by ESPs should be rewarded with fees. Meanwhile, producers are only willing to pay if they are guaranteed to receive the AIGC products on time. Likewise, AIGC trading also involves a two-way guarantee of whether the producer and consumer can obtain the funds and AIGC ownership, respectively. However, on the public Internet, the two parties of a transaction can hardly build trustworthiness. Such a concern might discourage producers from distributing and trading products, thus blocking the free circulation of AIGC products.
From the above discussion, we can observe that the difficulty of AIGC lifecycle management originates from two issues, i.e., i) the intrinsic vulnerability of AIGC as a kind of digital non-fungible property and ii) the lack of trustworthiness on the Internet. Fortunately, as an immutable ledger and trust maker, blockchain can effectively solve these two issues.
## III Blockchain-Empowered AIGC Lifecycle Management
### _Framework Overview_
The proposed blockchain-based framework for AIGC product lifecycle management is shown in Fig. 2. In the following
Fig. 1: The AIGC product lifecycle and its important concerns.
part, we introduce this framework in terms of stakeholders, blockchain platform, and on-chain mechanisms.
#### Iii-B1 Stakeholders
The entire AIGC product lifecycle in edge networks involves four types of stakeholders in total, namely producers, ESPs, consumers, and attackers.
* **Producer:** Producers initialize the lifecycle of an AIGC product. Due to resource limitations, they only propose prompts (e.g., interesting and accurate text descriptions in text-to-image AIGC) and then request ESPs to complete the generation tasks. After the generation, they become the first owners of the resulting products and have the right to publish and sell them.
* **ESP:** ESPs (e.g., edge servers) have enough resources to store well-trained AIGC models and generate content (see Fig. 2, Part A). Therefore, they can provide content generation services for producers. Given the complexity of AIGC generation, ESPs can charge producers based on the time and computing power that they invest in the tasks.
* **Consumer:** After distribution, the AIGC product will be viewed by numerous people, some of whom may buy it. Such viewers are called consumers. During the lifecycle of an AIGC product, it might experience multiple times of trading with different consumers.
* **Attacker:** Attackers can launch ownership tampering and AIGC plagiarization to disturb the normal operations of AIGC products and make profits.
#### Iii-B2 Blockchain Platform
In our framework, blockchain has two major functions: i) providing a traceable and immutable ledger and ii) supporting on-chain mechanisms. To this end, every phase of the AIGC product lifecycle is recorded by transactions, whose basic format is _Trans (Sender, Receiver, Payload, Timestamp, Signature)_ (a minimal sketch of this structure is given below). Note that the payload differs depending on the specific type of event. Transactions are packed into blocks and submitted to the blockchain network, a distributed Peer-to-Peer (P2P) network. The participants of the P2P network, named full nodes, conduct a consensus mechanism for block verification. Finally, valid blocks can be appended to the ledger and saved by all full nodes in parallel. Since everyone preserves a ledger copy, attackers have to revise at least 50% of the copies to tamper with the history, which is almost impossible. In addition, we can easily trace any historical event by traversing the ledger. Moreover, to support complex on-chain mechanisms, a Turing-complete smart contract engine is deployed.
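```python
from dataclasses import dataclass

@dataclass
class Trans:
    """Basic on-chain transaction format (a Python illustration with
    assumed field types); the payload differs depending on the specific
    type of event."""
    sender: str       # public-key address of the sender
    receiver: str     # public-key address of the receiver
    payload: dict     # event-specific content
    timestamp: float
    signature: bytes  # sender's signature over the other fields
```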
Among all participants, ESPs serve as full nodes and are responsible for message synchronization, block verification, and ledger storage. Given their resource limitations, producers and consumers act as clients, relying on ESPs to access the blockchain services. Note that the consensus mechanism in our blockchain is delegated Proof-of-Stake [13], in which ESPs deposit stakes and take turns creating blocks. In this case, attackers need to manipulate 50% of the ESPs to launch 51% attacks. Moreover, the deposited stakes will be locked if malicious attacks are detected.
#### Iii-B3 On-chain Mechanism
The framework is equipped with three on-chain mechanisms for different purposes. Firstly, we design the Proof-of-AIGC mechanism to defend against plagiarization (see Fig. 2, Part B). To protect the funds-AIGC ownership exchange, we further implement an incentive mechanism based on HTL (see Fig. 2, Part C). Finally, we present the reputation-based ESP selection, which effectively schedules AIGC gen
Fig. 2: The blockchain-empowered framework for AIGC product lifecycle management. Part A represents the AIGC models operated by ESPs. Parts B, C, and D illustrate the Proof-of-AIGC (demonstrated in Section III-B), incentive mechanism (demonstrated in Section III-C), and reputation-based ESP selection (demonstrated in Section IV), respectively.
eration tasks among ESPs (see Fig. 2, Part D).
### _Proof of AIGC_
As shown in Fig. 2, Part B, the Proof-of-AIGC consists of two phases, namely proof generation and challenge.
#### Iii-B1 Proof Generation
Proof generation intends to register AIGC products on blockchain. We still take text-to-image AIGC as an example. For generating an image, the producer first sends a corresponding request to an ESP (ESP selection strategy is discussed in Section IV). The request format is _(Text description, service fee, expected time)_. After receiving the service request, the ESP checks its availability and decides whether to accept the task. If the expected time and service fee are acceptable, it conducts a handshake with the producer (see Fig. 2, Part B). Then, the image creation can be conducted by the ESP, using well-trained AIGC models.
After generating the image, the ESP initializes a transaction \(Trans_{AIGC}^{Gen}\)_(Sender, Receiver, Payload, Timestamp, Signature)_. The payload format is _(Product index, Metadata, Challenge expiration)_, in which _Product index_ is calculated by a hash function and serves as the unique identity of the AIGC product. _Metadata_ contains the basic information of the AIGC product. This transaction goes through verification and is recorded by the blockchain. Finally, the ESP sends the image to the producer, with a copy of \(Trans_{AIGC}^{Gen}\). \(Trans_{AIGC}^{Gen}\) can be regarded as a proof, which not only registers the AIGC product but also claims its ownership by setting _Receiver_ to the producer's address (a minimal sketch of this registration step is given below). Given the immutability of the blockchain ledger, the concerns about ownership tampering can be effectively addressed. Next, we demonstrate the challenge mechanism that helps producers defend against AIGC plagiarization.
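```python
import hashlib
import time

def make_gen_proof(esp_addr, producer_addr, image_bytes,
                   metadata, challenge_expiration, sign):
    """Sketch of Trans_Gen construction. The SHA-256 choice for the
    product index and the `sign` placeholder for the ESP's signing
    routine are assumptions of this illustration. Ownership is claimed
    by setting Receiver to the producer's address."""
    payload = {
        "product_index": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,
        "challenge_expiration": challenge_expiration,
    }
    body = (esp_addr, producer_addr, payload, time.time())
    return body + (sign(repr(body).encode()),)
```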
#### Iii-B2 Challenge
Proof-of-AIGC follows the principle of fraud proof. In other words, our blockchain assumes that all AIGC products are original work in the proof generation phase. However, the information recorded in \(Trans_{AIGC}^{Gen}\) enables producers to challenge any on-chain AIGC product that they believe copies their own work. If the challenge succeeds, the duplicate will be deregistered, thus protecting the copyright of the real producer. Next, we illustrate the challenge workflow.
Suppose that the producer has created and published an AIGC product (called original product). Then, it surfs the Internet and finds an AIGC product which is significantly similar to its own work (called duplicate). In this case, it can initialize the challenge process by sending a transaction \(Trans_{AIGC}^{Chall}\) with the payload _(Product\({}_{1}\), Product index\({}_{1}\), Product\({}_{2}\), Product index\({}_{2}\), Pledge deposit)_. Here, _Product\({}_{1}\)_ (_Product\({}_{2}\)_) and _Product index\({}_{1}\)_ (_Product index\({}_{2}\)_) represent the content and indexes of the original product (duplicate), respectively. We consider that the duplicates will also be registered on blockchain because consumers will only buy the AIGC products with clear proof. After receiving \(Trans_{AIGC}^{Chall}\), the ESPs will conduct the following four steps:
* **Step 1: Fetch the proofs**. The \(Trans_{AIGC}^{Gen}\) of both the original product and the duplicate will be fetched from local ledger. Recall that the format of \(Trans_{AIGC}^{Gen}\) is _(Sender, Receiver, Payload, Timestamp, Signature)_.
* **Step 2: Check the identity of the challenger**. The ESPs verify challenger's signature in \(Trans_{AIGC}^{Chall}\) using _Receiver_ public key in \(Trans_{AIGC}^{Gen}\). If signature verification is successful, it can prove that the challenger is indeed the owner of the original product.
* **Step 3: Measure the similarity between the original product and the duplicate.** Firstly, the ESPs conduct hash operations on _Product\({}_{1}\)_ and _Product\({}_{2}\)_ and check whether the hashes match _Product index\({}_{1}\)_ and _Product index\({}_{2}\)_, respectively. If so, they conduct the similarity measurement using three well-established metrics, namely image histogram, perceptual hash, and difference hash (a minimal sketch of this check is given after this list). Note that the metrics can be changed for other AIGC scenarios.
* **Step 4: Check the results**. If the similarity level exceeds the threshold in any two metrics, the challenge can be regarded as successful. Otherwise, the challenge fails.
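The similarity check of Steps 3-4 could look as follows; the threshold values are assumed for illustration, and the third-party `imagehash` library is used here as one possible realization of the perceptual and difference hashes.

```python
from PIL import Image
import imagehash  # third-party perceptual-hashing library

def challenge_check(path_original, path_suspect,
                    hist_thr=0.9, phash_thr=10, dhash_thr=10):
    """Return True when any two of the three metrics indicate that the
    suspect image duplicates the original (Step 4)."""
    im1 = Image.open(path_original).convert("RGB")
    im2 = Image.open(path_suspect).convert("RGB")
    # Histogram similarity in [0, 1] via normalized intersection.
    h1, h2 = im1.histogram(), im2.histogram()
    hist_sim = sum(min(a, b) for a, b in zip(h1, h2)) / max(sum(h1), 1)
    # Hash distances: a smaller Hamming distance means more similar.
    phash_d = imagehash.phash(im1) - imagehash.phash(im2)
    dhash_d = imagehash.dhash(im1) - imagehash.dhash(im2)
    votes = [hist_sim >= hist_thr, phash_d <= phash_thr,
             dhash_d <= dhash_thr]
    return sum(votes) >= 2  # challenge succeeds on any two metrics
```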
If the challenge succeeds, the ESPs create and send a transaction \(Trans_{AIGC}^{Dereg}\) with the payload _(Product index\({}_{2}\), Pledge deposit, Similarity)_, where _Similarity_ is defined as a three-element tuple _(histogram, phash, dhash)_. \(Trans_{AIGC}^{Dereg}\) aims to deregister the duplicate by pointing out its product index. Moreover, it unlocks the pledge deposit provided by the challenger. Recall that the challenge is initialized by \(Trans_{AIGC}^{Chall}\), whose _Sender_ is obviously the challenger's address. However, the _Receiver_ address of \(Trans_{AIGC}^{Chall}\) does not belong to any participant. Instead, it is a special system account for locking the pledge deposit provided by the challenger. The motivation for requiring a pledge deposit is to restrict challengers from launching challenges arbitrarily, since the challenge process places an extra burden on the blockchain. Nonetheless, the deposit can be waived if the challenge happens before the pre-defined _Challenge expiration_ in \(Trans_{AIGC}^{Gen}\). For example, if the original product is registered in the 5th block and _Challenge expiration_ is 20, the challenge will be free from the 6th to the 20th block. From the 21st block, the challenger can only withdraw its deposit if it successfully proves that a duplicate is mistakenly registered on the blockchain. Otherwise, the locked pledge deposit will be regarded as a service fee and used to reward the next block creator.
### _Incentive Mechanism_
The economic system of AIGC is complicated because it accommodates different stakeholders, which conduct transactions with each other frequently. Thus, we should guarantee that: i) all the stakeholders can be incentivized to manage the AIGC lifecycle; ii) the funds-AIGC ownership exchanges can be conducted legitimately without repudiation. To this end, an on-chain incentive mechanism is presented.
#### Iii-C1 One-way Incentives
One-way incentives are automatically issued to the ESPs which maintain the ledger and provide blockchain services. Recall that our blockchain adopts delegated Proof-of-Stake as the consensus mechanism, where ESPs take turns to generate new blocks. During each round of block generation, the generator can include a coinbase transaction to reward itself. The _Sender_ and _Receiver_ addresses of such coinbase transactions are the system account and generator's public key address, respectively. For the specific reward value,
it can be set according to the target system inflation rate. Note that there is no transaction fee in our incentive mechanism. Hence, the block generator simply packs pending transactions in a first-come-first-served manner.
#### Iii-C2 Two-way Guarantee
As mentioned before, during both AIGC generation and trading, there exist two-way exchanges between fund and ownership. However, people might hesitate to conduct such exchanges, since they cannot guarantee that the other party will strictly follow its promise. To build mutual trust and facilitate AIGC circulation, we design a two-way guarantee protocol using HTL (Hash Time Lock) as a part of our incentive mechanism.
Take the two-way exchange in the AIGC generation phase as an example. In this case, the ESP grants the producer the ownership of its AIGC product, and the producer pays the pre-configured service fee. To do so, we implement a smart contract with two atomic operations named lock and release. As shown in Fig. 2, Part C, during the handshake process described in Section III-B, the producer creates a randomness \(R\) and sends its hash \(H(R)\) to the ESP. When \(Trans^{Gen}_{AIGC}\) is recorded on the blockchain, a corresponding contract instance \(C_{1}\) is created by the ESP immediately. \(C_{1}\) calls the lock function to lock the ownership stored in \(Trans^{Gen}_{AIGC}\) using \(H(R)\); only the one holding \(R\) can release the lock. Meanwhile, the ESP sends a payment reminder to the producer. Receiving the bill, the producer sends a payment transaction with the payload _(Balance)_, where _Balance_ should equal the pre-configured service fee. Then, it also initializes its own contract instance \(C_{2}\), which locks the funds in the payment transaction by \(H(R)\). At this point, both the funds and the AIGC ownership are on-chain.
Then, the secure exchange between funds and AIGC ownership can be conducted. Firstly, the producer unlocks the AIGC ownership by calling the release operation of \(C_{1}\) with the input \(R\). \(C_{1}\) checks whether the hash of \(R\) matches \(H(R)\) and unlocks \(Trans^{Gen}_{AIGC}\) if it does. Since this process exposes \(R\) to \(C_{1}\), the owner of \(C_{1}\), i.e., the ESP, can also release the funds locked by \(C_{2}\) using \(R\). To prevent participants from intentional delays, we further add an expiration to the smart contract: if a party fails to unlock the property on time, the lock becomes permanent and the corresponding transaction is discarded. Clearly, such a protocol guarantees the atomic and timely execution of the exchange process.
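The lock/release pair can be sketched as follows (a simplified Python illustration of the contract logic, not deployable contract code):

```python
import hashlib
import time

class HashTimeLock:
    """A property (AIGC ownership in C1, funds in C2) is locked under
    H(R) and can be released only by revealing R before the expiration;
    afterwards the lock becomes permanent and the transaction is
    discarded."""

    def __init__(self, h_r: str, expiration: float, prop):
        self.h_r, self.expiration, self.prop = h_r, expiration, prop

    def release(self, r: bytes):
        if time.time() > self.expiration:
            raise RuntimeError("lock expired: transaction discarded")
        if hashlib.sha256(r).hexdigest() != self.h_r:
            raise ValueError("wrong preimage R")
        return self.prop  # releasing C1 exposes R on-chain

# Flow: the producer creates R and shares H(R); the ESP locks the
# ownership in C1 and the producer locks the funds in C2 under the same
# H(R). Unlocking C1 reveals R, with which the ESP then releases C2.
```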
### _Security Analysis_
Recall that in Section II, we point out four concerns of the AIGC product lifecycle, namely ownership tampering, AIGC plagiarization, non-guaranteed exchanges, and ESP heterogeneity. Firstly, our Proof-of-AIGC registers every AIGC product on-chain. In this case, even though attackers can distribute massive fake messages to "claim" ownership, they can hardly launch 51% attacks (as mentioned in Section III-A) or tamper with the registration history preserved by all full nodes. Additionally, the challenge scheme provides a standard procedure for producers to defend against AIGC plagiarization and retrieve their copyright. Furthermore, the incentive mechanism guarantees that all funds-AIGC ownership exchanges are conducted strictly following the pre-confirmed contracts. Next, we address the final concern, i.e., ESP heterogeneity, by presenting a reputation-based ESP selection.
## IV Reputation-Based ESP Selection
### _Problem Statement_
Recall that AIGC services are distributed to numerous edge devices in our framework. Hence, each producer can access multiple heterogeneous ESPs simultaneously. In this case, selecting a reliable ESP for the specific task becomes a problem. Traditionally, producers can select the most familiar ESP, i.e., the one with which they have traded the most times, to minimize the potential risk. However, such strategies may lead to an imbalanced workload among ESPs, thus increasing the service latency on busy ESPs. Meanwhile, the computing resources of idle ESPs will be wasted.
To solve this problem, we implement a reputation-based ESP selection scheme in our framework. Specifically, it sorts all available ESPs according to their reputation, which is calculated by Multi-weight Subjective Logic (MWSL) [5]. We intend to achieve three goals: i) helping producers select the most reliable ESP for each AIGC generation task; ii) balancing the workload among multiple ESPs, thereby reducing the overall service latency; iii) encouraging ESPs to complete the assigned tasks timely and honestly, since a negative reputation will directly affect their profits.
### _Reputation Based on Multi-weight Subjective Logic_
As shown in Fig. 2, Part D, producers select ESPs by the following steps: i) calculate the reputation of all available ESPs, ii) sort candidate ESPs according to their latest reputation, and iii) assign the AIGC generation task to the ESP with the highest reputation. Note that the item \(ESP_{1}\) is marked red because it denies the service request. In this case, the
Fig. 3: The reputation calculation process (from the perspective of producer \(P_{1}\)) and the illustration of AIGC services.
producer traverses the reputation table and re-sends the request to the next candidate, i.e., \(ESP_{3}\). Next, we demonstrate the reputation calculation based on MWSL.
As shown in Fig. 3, MWSL utilizes the term "opinion" to denote the basic item for reputation calculation. Suppose that our edge AIGC system has three producers (\(P_{1}\)-\(P_{3}\)) and three ESPs (\(ESP_{1}\)-\(ESP_{3}\)). Firstly, for a given producer, say \(P_{1}\), if it has direct interactions with these ESPs, \(P_{1}\)'s evaluation of them is called local opinions. Meanwhile, considering that \(P_{2}\) and \(P_{3}\) may also have experience interacting with these ESPs, their evaluation should also be taken into account. From the perspective of \(P_{1}\), the evaluations of \(ESP_{1}\)-\(ESP_{3}\) from \(P_{2}\) and \(P_{3}\) are called recommended opinions. Here, an interaction refers to the entire process from sending the service request, to confirming the AIGC generation order, to acquiring the AIGC products. An opinion is defined as a three-element vector [\(p,n,u\)], where \(p\) and \(n\) represent the proportions of _positive_ and _negative_ interactions among all interaction attempts, respectively. \(u\) (from 0 to 1) indicates the uncertainty level between producer and ESP. According to MWSL, \(u\) is set manually according to the communication quality.
Although recommended opinions make the reputation calculation more comprehensive, the hidden subjectivity might affect its fairness. For instance, if \(P_{2}\) once suffered unexpectedly high latency from \(ESP_{1}\), it may regard all subsequent interactions as negative. To mitigate the effect of subjectivity, for each producer, say \(P_{1}\), an overall opinion averaging all the received recommended opinions is generated. Moreover, since \(P_{2}\) and \(P_{3}\) have different familiarity degrees with the ESPs, the weights of their recommended opinions also differ. The detailed reputation calculation process is:
* **Step 1: Generate local opinions**. Every producer updates its local opinion for every ESP (see Step 1 in Fig. 3).
* **Step 2: Synchronize information.** Producers share the latest local opinions. Assisted by blockchain, they can pack their opinions into transactions for secure sharing.
* **Step 3: Calculate the overall opinion.** Each producer collects all received recommended opinions and averages them as the overall opinion. Note that the opinions are weighted before averaging. For any recommended opinion from \(P_{n}\) about \(ESP_{n}\), the weight is \(\alpha_{1}\times Familiarity+\alpha_{2}\times Value\). \(Familiarity\) is defined as the number of historical interactions between \(P_{n}\) and \(ESP_{n}\), and \(Value\) equals the total service fee of these interactions. Finally, \(\alpha_{1}\) and \(\alpha_{2}\) are two weighting factors satisfying \(\alpha_{1}\) + \(\alpha_{2}\) = 1. Notably, the more interactions have been conducted, the larger the weight.
* **Step 4: Calculate reputation.** Every producer combines its local opinion with the overall opinion and obtains the final opinion [\(p_{fin},n_{fin},u_{fin}\)]. The corresponding equation is shown in Fig. 3. Finally, reputation is measured by \(p_{fin}\) + \(u_{fin}\)\(\times\)\(n_{fin}\) (a minimal sketch of Steps 3-4 follows this list).
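The following sketch illustrates Steps 3-4 together with the Softmax-based selection used in the experiments of Section IV-C; simple averaging is assumed for combining the local and overall opinions, as the exact rule is given only in Fig. 3.

```python
import numpy as np

def reputation(local, recommended, familiarity, value,
               alpha1=0.35, alpha2=0.65):
    """Opinions are [p, n, u] vectors. Each recommended opinion is
    weighted by alpha1*Familiarity + alpha2*Value and averaged into the
    overall opinion; reputation is then p_fin + u_fin * n_fin."""
    w = alpha1 * np.asarray(familiarity) + alpha2 * np.asarray(value)
    overall = (w[:, None] * np.asarray(recommended)).sum(0) / w.sum()
    p_fin, n_fin, u_fin = (np.asarray(local) + overall) / 2.0
    return p_fin + u_fin * n_fin

def select_esp(reps, rng=None):
    """Sample an ESP with Softmax probabilities over reputation."""
    rng = rng or np.random.default_rng()
    e = np.exp(np.asarray(reps) - np.max(reps))
    return rng.choice(len(reps), p=e / e.sum())
```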
After the reputation calculation, producers take Steps 2 - 3 in Fig. 2, Part D, and select an ESP. Clearly, our reputation scheme achieves all the design goals. Firstly, it quantifies the trustworthiness of ESPs, so producers can easily determine which ESP is more reliable. In addition, producers no longer need to rely only on the most familiar ESP, thereby alleviating potential service congestion. Finally, since the reputation records are stored on-chain and visible to all participants, ESPs are encouraged to provide high-quality AIGC services to maximize their profits.
### _Numerical Results_
To prove the validity of the proposed methods, we implement a demo of our AIGC lifecycle management framework and deploy the reputation-based ESP selection on it1. As shown in Fig. 3, the testbed consists of three ESPs (served by three virtual machines on an Apple MacBook Pro with an 8-Core Intel Core i9 CPU and an AMD Radeon Pro 5500M GPU) and three producers (served by iPhones). The AIGC services are supported by the _Draw Things_ application ([https://drawthings.ai/](https://drawthings.ai/)). Factors \(\alpha_{1}\) and \(\alpha_{2}\) are set as 0.35 and 0.65, respectively. Additionally, \(u_{loc}\) is fixed to 0.515. Each producer marks an interaction as "negative" if the ESP fails to return the AIGC proof within the pre-confirmed time. The service quality (i.e., the probability of receiving positive opinions) of \(ESP_{1}\), \(ESP_{2}\), and \(ESP_{3}\) is 95%, 70%, and 55%, respectively. Finally, after acquiring the ESPs' reputation, producers utilize the _Softmax_ function to determine the probability of selecting each ESP.
Footnote 1: [https://github.com/Lancelot1998/AIGCLifecycleManagement](https://github.com/Lancelot1998/AIGCLifecycleManagement)
Firstly, Fig. 4 illustrates the reputation trends of the three ESPs. During the 1st-14th rounds, all ESPs accumulate reputation. Given its high service quality, the reputation of \(ESP_{1}\) quickly reaches the top and stays stable, while \(ESP_{2}\) and \(ESP_{3}\) gradually increase their reputation by providing more positive interactions. From the 15th round, we let \(ESP_{1}\) intentionally delay the AIGC services. Correspondingly, its reputation drops dramatically, since more negative interactions are reported. In
Fig. 4: The reputation trends of three ESPs (from the perspective of a random producer).
Fig. 5: The total number of assigned tasks of three ESPs.
contrast, since \(ESP_{2}\) and \(ESP_{3}\) acquire the chance to handle more tasks, their reputation keeps increasing. We conclude that the proposed reputation scheme can effectively quantify the trustworthiness of ESPs. In this way, the producers can easily judge which ESP is the most reliable. On the other hand, ESPs are also supervised to keep performing honestly.
Then, Fig. 5 shows the ESPs' workload under different ESP selection methods. Here, we suppose that all three producers request AIGC services with the same frequency, and we let them randomly select ESPs during the first 5 rounds. From the 6th round, two ESP selection methods are tested, namely the traditional method and the proposed reputation-based method. Recall that, traditionally, producers tend to assign tasks to their most familiar ESPs. As a result, the workload among ESPs is imbalanced, causing long service latency. As shown in Fig. 5, most AIGC generation tasks congest at \(ESP_{3}\), while the computing power of \(ESP_{2}\) is wasted. Assisted by reputation, producers can quantitatively evaluate the trustworthiness of ESPs and no longer need to rely on their empirical judgement. Consequently, the workload among ESPs is effectively balanced. Note that since there is no similar work regarding blockchain-empowered AIGC in the literature, we do not set a baseline to compare with.
## V Future Direction
### _Blockchain-Based AIGC Governance_
The rapid development of AIGC greatly enriches Internet content, but it also brings _deepfake_[14]. Deepfake refers to synthetic media in which a person in an existing image or video is replaced with someone else's likeness. According to The Sentinel ([https://thesentinel.ai/](https://thesentinel.ai/)), the number of deepfake videos online jumped from 14,678 in 2019 to 145,277 in 2021. Moreover, leveraging advanced AIGC models, such as GANs and autoencoders, deepfakes are becoming more and more realistic and harder to identify. Given the security properties of blockchain, it can help defend AIGC against deepfake. For example, a distributed governance organization can be deployed on-chain to conduct AIGC supervision and deepfake identification. However, since identifying deepfake requires off-chain knowledge, how to effectively bridge blockchain and physical AIGC is worth exploring.
### _Distributed AIGC Model Training_
This article mainly focuses on the AIGC product lifecycle. The AIGC model construction lifecycle, including model training, fine-tuning, and inference, is also a meaningful research topic. For instance, since the training of diffusion models is time-consuming and resource-intensive, new algorithms and frameworks for distributed AIGC model training are worth studying. In this way, the computing power of the entire edge network can be exploited, significantly improving the training speed. Meanwhile, blockchain can be applied to protect the security of the training process and to fairly reward the users who contribute their resources.
### _Metaverse_
AIGC is a building block for metaverse, since it can create numerous multimodal content for rendering immersive and realistic virtual worlds [15]. For example, the text-to-3D AIGC allows machines to collect the background, locations, and characters of users, thereby generating personalized avatars in the metaverse environment. Although such a process brings high QoE and immersiveness, some sensitive personal information might be leaked. Since blockchain has shown great strength in protecting data storage and sharing, the metaverse-oriented AIGC storage, access control, and sharing based on blockchain technique are also worth investigating.
## VI Conclusion
In this article, we first review the progress of AIGC and its deployment in edge networks, and point out four major concerns of the AIGC product lifecycle. We then present a blockchain-empowered framework realizing lifecycle management for AIGC products. Specifically, Proof-of-AIGC addresses the ownership tampering and plagiarization of AIGC products. Additionally, an incentive mechanism is proposed to encourage AIGC circulation. Moreover, we design a reputation scheme to help producers select reliable ESPs, with numerical results proving its validity. Last but not least, we discuss future directions regarding the combination of blockchain and AIGC.
---

# Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction

Zitao Chen, Karthik Pattabiraman · 2023-07-04 · [arXiv:2307.01610](http://arxiv.org/abs/2307.01610)
###### Abstract
Machine learning (ML) models are vulnerable to _membership inference attacks_ (MIAs), which determine whether a given input is used for training the target model. While there have been many efforts to mitigate MIAs, they often suffer from limited privacy protection, large accuracy drop, and/or requiring additional data that may be difficult to acquire.
This work proposes a defense technique, HAMP that can achieve both strong membership privacy and high accuracy, without requiring extra data. To mitigate MIAs in different forms, we observe that they can be unified as they all exploit the ML model's _overconfidence in predicting training samples_ through different proxies. This motivates our design to _enforce less confident prediction by the model_, hence forcing the model to behave similarly on the training and testing samples. HAMP consists of a novel training framework with high-entropy soft labels and an entropy-based regularizer to constrain the model's prediction while still achieving high accuracy. To further reduce privacy risk, HAMP uniformly modifies all the prediction outputs to become low-confidence outputs while preserving the accuracy, which effectively obscures the differences between the prediction on members and non-members.
We conduct extensive evaluation on five benchmark datasets, and show that HAMP provides consistently high accuracy and strong membership privacy. Our comparison with seven state-of-the-art defenses shows that HAMP achieves a superior privacy-utility trade off than those techniques.
Footnote †: Network and Distributed System Security (NDSS) Symposium 2024, 26 February - 1 March 2024, San Diego, CA, USA. ISBN 1-891562-93-2. [https://dx.doi.org/10.14722/ndss.2024.23014](https://dx.doi.org/10.14722/ndss.2024.23014), www.ndss-symposium.org
## I Introduction
Machine learning (ML) models are often trained with sensitive or private user data such as clinical records [22], financial information [31] and personal photos [21]. Unfortunately, ML models can also unwittingly leak private information [37, 10, 43, 12, 4]. One prominent example is _membership inference attacks_ (MIAs) [37, 30, 48, 38, 27, 47, 3], which determine whether a given input was used to train the target model. Hence, MIAs constitute a fundamental threat to data privacy. For instance, by knowing that an individual's clinical record was used to train a hospital's diagnostic model, the adversary can directly infer his/her health status.
MIAs exploit the ML model's differential behaviors on members and non-members [37, 30, 48, 27, 38, 8, 3]. _Members_ are the samples used to train the model (i.e., training samples) and _non-members_ are the samples not used for training (e.g., testing samples). Existing MIAs can be divided into score-based [37, 30, 17, 48, 38, 3] and label-only attacks [8, 27], where the former requires access to the model's _output score_ indicating the class probability, while the latter needs only the prediction label. These attacks all seek to learn distinctive statistical features from the model's predictions in different ways, such as training an attack inference model [30, 37], computing metrics like prediction loss [48] and entropy [37, 38], or using Gaussian likelihood estimate [3].
Defenses against MIAs can be categorized into provable and practical defenses. _Provable_ defenses provide provable guarantees through differential privacy (DP) [2], but they often incur severe accuracy degradation. _Practical_ defenses, instead, offer empirical membership privacy with the goal of maintaining high model accuracy [29, 41, 36, 19]. However, existing defenses still suffer from the following limitations: (1) limited privacy protection [19, 29]; (2) large accuracy drop [2, 41, 29]; (3) requiring additional public datasets that may not always be available in practice [32, 36]. To the best of our knowledge, no technique satisfies all these constraints, though they may address individual issues, e.g., high model accuracy but with limited privacy protection [19]; or strong privacy but with significant accuracy loss [2].
**Our Approach.** This paper proposes a practical defense called HAMP that can achieve both **H**igh **A**ccuracy and **M**embership **P**rivacy without requiring additional data. Existing MIAs employ diverse approaches in inferring membership, e.g., score-based MIAs may exploit prediction loss or entropy [48, 38, 30] while label-only MIAs [8, 27] can leverage adversarial robustness. Despite the different manifestations of these attacks, we identify a common exploitation thread among them - they are all learning to distinguish whether the model is _overly confident_ in predicting the training samples via different proxies. Our defense is therefore to _reduce the model's overconfident prediction on training samples while preserving the model's prediction performance_, which can simultaneously reduce membership leakage (from different MIAs) and maintain model accuracy.
HAMP consists of a training- and testing-time defense.
_Training-time defense_. Our key idea is to explicitly enforce the model to be less confident in predicting training samples during training. We first identify that the prevailing use of _hard labels_ in common training algorithms is one of the main factors that lead to the model's excessive confidence in predicting training samples. Hard labels assign 1 to the ground-truth label class and 0 elsewhere. The model is trained to produce outputs that match the labels, i.e., near 100% probability for the ground-truth class and 0% otherwise. On the other hand, a non-member sample that is not seen during training, is usually predicted with lower confidence, and can hence be distinguished by the adversary from member samples.
We therefore propose a new training framework that gets rid of hard labels and instead uses (1) _High-entropy soft labels_, which are soft labels with high entropy that assign a much lower probability to the ground-truth class and non-zero probability for other classes. This explicitly enforces the model to make less confident prediction on training samples. (2) HAMP also consists of an _entropy-based regularizer_, which is to penalize the model for predicting any high-confidence outputs via regularizing the prediction entropy during training.
The proposed training framework is able to significantly reduce the model's overconfident prediction and improve membership privacy, without (severely) degrading the model accuracy. Section III-B explains how it prevents privacy leakage from different sources (output scores and prediction labels). On the other hand, stronger membership privacy can also be achieved (e.g., by increasing the strength of regularization), but it would be at the cost of accuracy, which is undesirable as both privacy and accuracy are important considerations. This motivates our testing-time defense, whose goal is to gain higher membership privacy without degrading accuracy.
_Testing-time defense_. We propose to uniformly modify _all_ the outputs (from members and non-members) into low-confidence outputs, without changing the prediction labels. Our idea is to leverage the output scores from the _randomly-generated samples_, which are often predicted with low confidence due to the high dimensionality of the input space.
In our defense, all the values in each output score are replaced by those from random samples, and we keep the relative ordering of different classes unchanged to maintain the same prediction labels (e.g., a dog image is still predicted as a dog but with different output scores). Both the high-confidence outputs (on training samples) and low-confidence outputs (on testing samples) are uniformly replaced by such low-confidence outputs from random samples. This further reduces the membership leakage from the output scores.
**Evaluation.** We evaluate HAMP on five benchmark datasets (Purchase100, Texas100, Location30, CIFAR100 and CIFAR10), and perform a comprehensive evaluation on a total of nine diverse MIAs (including the state-of-the-art LiRA attack [3]).
We compare HAMP with seven leading defenses: AdvReg [29], MemGuard [19], SELENA [41], DMP [36], Label Smoothing (LS) [40], Early-stopping [38], and DP-SGD [2].
An ideal privacy defense should offer strong protection for both members and non-members. Therefore, we follow Carlini et al. [3] to use attack true positive rate (TPR) controlled at low false positive rate (FPR), and attack true negative rate (TNR) at low false negative rate (FNR) to evaluate membership privacy. The former metric evaluates the privacy protection for members, and the latter for non-members.
**Contributions.** We summarize our contributions below.
* Develop a novel training framework with high-entropy soft labels and an entropy-based regularizer to enforce less confident prediction by the model, which can significantly mitigate diverse MIAs and incur minimal accuracy drop.
* Propose a novel testing time defense technique to modify all the output scores into low-confidence outputs, which further improves membership privacy without degrading accuracy.
* Integrate the training and testing framework as HAMP, and conduct rigorous evaluation under a wide range of attacks on five different datasets. We compare HAMP against seven leading defenses and show that HAMP outperforms existing defenses by achieving a superior privacy-utility trade off.
Fig. 1 summarizes the results of HAMP versus other defenses. We find that existing defenses often bias towards either privacy (e.g., DP-SGD) or utility (e.g., MemGuard). In contrast, HAMP is able to provide strong membership privacy for both members and non-members, and preserve model accuracy. HAMP reduces the attack TPR @0.1% FPR by 94% and the attack TNR @0.1% FNR by 97% respectively, with only 0.46% accuracy loss on average. This represents a much better privacy-utility trade off than other defenses.
## II Background
### _Machine Learning Primer_
This work focuses on supervised training for classification problems. A ML model can be expressed as a function \(F_{\theta}:X\to Y\), where \(X\in\mathbb{R}^{d}\) denotes the input space and \(Y\in\mathbb{R}^{k}\) the output space, and \(F\) is parameterized by weights \(\theta\). During training, the network is given a training set \((x,y)\in D_{tr}\), where \(y\) is the ground-truth label. \(y\) is commonly expressed in the one-hot encoding format, where the ground-truth class is indicated with 1 and 0 elsewhere. The training objective is to minimize the prediction loss on the training set:

\[\min_{\theta}\frac{1}{|D_{tr}|}\sum_{x\in D_{tr}}\mathcal{L}(F_{\theta}(x),y), \tag{1}\]

where \(|D_{tr}|\) denotes the size of the training set, and \(\mathcal{L}\) the prediction loss such as the cross-entropy loss. The model's output \(F_{\theta}(x)\) indicates the probability of \(x\) belonging to each class, with \(\sum_{j=0}^{k-1}F_{\theta}(x)_{j}=1\).

Fig. 1: Privacy and utility evaluation of each defense (results averaged across datasets). Negative accuracy delta means an accuracy drop compared with the undefended models. DP-SGD is reported at \(\epsilon=4\). HAMP _simultaneously_ achieves strong membership privacy (for both members and non-members) and high prediction accuracy, hence providing a better privacy-utility trade off than existing defenses.
To prevent the model from overfitting on the training set, a separate validation set different from \(D_{tr}\) is commonly used to serve as an unbiased proxy of the testing set. One can use the accuracy on the validation set to assess how good the model will be when evaluated on test data and prevent overfitting.
Hereafter, we refer to \(F\) as the trained model \(F_{\theta}\), \(F(x)\) as the output score of \(F\) on \(x\), and \(D_{te}\) as the test set.
### _Threat Model_
_Attacker_. Following prior work [19, 41, 29], we assume a black-box adversary who can query the target ML model with any input and observe the prediction output. The adversary's goal is to infer the membership of the training samples \((x,y)\in D_{tr}\) for a given model \(F\). Like previous defenses [29, 41, 36], we assume a strong adversary with the knowledge of half of the training members and an equal number of non-members. Further, we assume the adversary has full knowledge of the defense technique and can therefore train shadow models in the same way as the target model is trained, which facilitates a strong adversary in evaluating the defenses.
_Defender_. We assume the defender has a private set \(D_{tr}\) and his/her goal is to train a model that can both achieve high classification accuracy and protect against MIAs. We do not assume the defender has access to any additional data.
### _Membership Inference Attacks_
The attack model \(h(x,y,F(x))\rightarrow[0,1]\) outputs the membership probability. We refer to \(D_{tr}^{A},D_{te}^{A}\) as the set of members and non-members that are known to the adversary. The adversary's goal is to find a \(h\) that can best distinguish between \(D_{tr}^{A}\) and \(D_{te}^{A}\). The empirical gain of the attack can be measured as:
\[\sum_{(x,y)\in D_{tr}^{A}}\frac{h(x,y,F(x))}{|D_{tr}^{A}|}+\sum_{(x,y)\in D_{te }^{A}}\frac{1-h(x,y,F(x))}{|D_{te}^{A}|} \tag{2}\]
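For concreteness, a minimal sketch of how this empirical gain could be computed for a given attack model `h` (the names here are illustrative, not from the paper's artifact):

```python
import numpy as np

def attack_gain(h, members, non_members):
    """Empirical MIA gain of attack model h, as in Eq. (2).

    h(x, y, score) -> membership probability in [0, 1];
    members / non_members: lists of (x, y, score) triples known to
    the adversary, i.e., samples from D_tr^A and D_te^A."""
    gain_members = np.mean([h(x, y, s) for (x, y, s) in members])
    gain_non = np.mean([1.0 - h(x, y, s) for (x, y, s) in non_members])
    return gain_members + gain_non  # 2.0 corresponds to a perfect attack
```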
We categorize existing MIAs into _score-based_ and _label-only_ attacks as follows.
_Score-based MIAs:_ This class of attacks either trains an inference model to infer membership [30, 37] or computes custom metrics such as prediction loss [48] to derive a threshold for distinction.
_NN-based attack_[30, 37] trains a neural network (NN) \(A\) to distinguish the target model's predictions on members and non-members: \(A:F(x)\rightarrow[0,1],x\in[D_{tr}^{A},D_{te}^{A}]\). By querying the target model with \(D_{tr}^{A},D_{te}^{A}\), the resulting outputs \((F(D_{tr}^{A}),1)\), \((F(D_{te}^{A}),0)\) form the training set for \(A\). In addition to output scores, other features like the ground-truth labels and prediction loss can also be used to train the inference model.
_Loss-based attack_[48] is based on the observation that the prediction loss on training samples is often lower than that on testing samples, as the loss on training samples is explicitly minimized during training. Specifically, the adversary can query the target model with \(D_{tr}^{A}\), and obtain the average loss on \(D_{tr}^{A}\) as the threshold \(\tau=\frac{1}{|D_{tr}^{A}|}\sum_{(x,y)\in D_{tr}^{A}}\mathcal{L}(F_{\theta}(x),y)\). Any sample with loss lower than \(\tau\) is considered a member.
_Entropy-based attack_[37, 48] leverages that the output score of a training sample should be close to the one-hot encoded label, and hence its prediction entropy should be close to 0, which is lower than that on testing samples. Prediction entropy of a sample can be computed as \(-\sum_{j}F(x)_{j}\text{log}(F(x)_{j})\), where \(j\) is the class index.
_Modified-entropy-based attack_[38] is an enhanced version of the entropy-based attack by computing the following metric: \(-(1-F(x)_{y})\text{log}(F(x)_{y})-\sum_{j\neq y}F(x)_{j}\text{log}(1-F(x)_{j})\). This attack improves by taking into account class-dependent thresholds, as well as the ground truth label \(y\), which is shown to achieve higher attack effectiveness.
_Confidence-based attack_[48, 38] exploits the observation that the prediction confidence on training samples \(F(x)_{y}\) is often higher than that on testing samples. The attack threshold can be derived similar to the entropy-based attacks, and samples predicted with high confidence are deemed as members.
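The loss-, entropy-, modified-entropy- and confidence-based attacks above all reduce to thresholding a scalar statistic of the output score; a compact numpy sketch of these statistics (function names are ours):

```python
import numpy as np

EPS = 1e-12  # guards against log(0)

def loss_metric(score, y):
    # cross-entropy loss on the ground-truth class (lower => member)
    return -np.log(score[y] + EPS)

def entropy_metric(score):
    # prediction entropy (lower => member)
    return -np.sum(score * np.log(score + EPS))

def modified_entropy_metric(score, y):
    # Song et al.'s class-aware variant (lower => member)
    m = -(1.0 - score[y]) * np.log(score[y] + EPS)
    for j in range(len(score)):
        if j != y:
            m -= score[j] * np.log(1.0 - score[j] + EPS)
    return m

def confidence_metric(score, y):
    # confidence on the ground-truth class (higher => member)
    return score[y]
```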
_Likelihood Ratio Attack (LiRA)_[3] is a state-of-the-art attack that can successfully infer membership when calibrated at low false positive rates. In LiRA, the adversary trains N shadow models, half of which are trained with the target sample (called IN models) and the remaining half without it (called OUT models). It then fits two Gaussian distributions to approximate the output distributions of the IN and OUT models (a logit-scaling step is applied to the confidence values so that the outputs approximately follow a Gaussian). Finally, LiRA performs a parametric likelihood-ratio test to infer membership (e.g., a sample is deemed a member if its output is estimated to come from the IN models with high probability).
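A compact sketch of LiRA's per-sample hypothesis test, assuming the IN/OUT shadow-model confidences for the target sample have already been collected (a simplification of the full attack in [3]):

```python
import numpy as np
from scipy.stats import norm

def logit_scale(p, eps=1e-12):
    # logit scaling so that confidences are approximately Gaussian
    p = np.clip(p, eps, 1.0 - eps)
    return np.log(p) - np.log(1.0 - p)

def lira_score(conf_target, conf_in, conf_out):
    """Likelihood ratio that the target sample is a member.

    conf_target: the target model's confidence on the true class;
    conf_in / conf_out: confidences from shadow models trained
    with / without the sample."""
    z = logit_scale(conf_target)
    z_in = logit_scale(np.asarray(conf_in))
    z_out = logit_scale(np.asarray(conf_out))
    l_in = norm.pdf(z, loc=z_in.mean(), scale=z_in.std() + 1e-8)
    l_out = norm.pdf(z, loc=z_out.mean(), scale=z_out.std() + 1e-8)
    return l_in / (l_out + 1e-30)  # large ratio => likely a member
```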
_Label-only MIAs:_ These attacks exploit training members' _higher_ degree of robustness to different perturbations (like adversarial perturbations, random noise), and develop different proxies to distinguish the degree of robustness by members and non-members.
_Prediction-correctness attack_[48] is the baseline label-only attack that simply determines any samples that are correctly classified as members. This attack is effective when the training accuracy is higher than the testing accuracy.
_Boundary attack_[8, 27] is based on the observation that it is easier to perturb a testing sample to change the prediction label than a training sample. This is because testing samples are often closer to the decision boundary and therefore more susceptible to perturbations. Using common attacks such as CW2 attack [5], the adversary measures the magnitude of perturbation needed to perturb \(x\in[D_{tr}^{A},D_{te}^{A}]\), based on which \(\tau\) can be derived. A sample is deemed as a member if the amount of perturbation needed to change the prediction label is higher than \(\tau\) (i.e., more difficult to be perturbed).
The adversary can also inject random noise to the samples (instead of adversarial perturbations), which is more efficient and useful in the cases where constructing the adversarial sample is difficult (e.g., for inputs with binary features) [8].
_Augmentation attack_[8] makes use of the samples' robustness to data augmentation and the idea is that training samples are often more resilient to data augmentation than testing samples. For instance, if an image was used to train a model, it should still be classified correctly when it is slightly translated. For each input \(x\), the adversary first generates multiple augmented versions of \(x\), and computes how many of them are correctly classified. Based on the classification outcome, the adversary trains an attack inference model to predict whether or not \(x\) is a member.
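A sketch of the membership signal used by the augmentation attack, assuming placeholder `model(x)` (returns a predicted label) and `augment(x)` (e.g., a small random translation) routines:

```python
def augmentation_signal(model, augment, x, y, n_aug=10):
    """Fraction of augmented variants of x still classified correctly;
    training members tend to score higher. The adversary collects this
    signal on known members/non-members and fits a small attack
    classifier or a simple threshold on it."""
    correct = sum(int(model(augment(x)) == y) for _ in range(n_aug))
    return correct / n_aug
```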
### _Defenses against MIAs_
This section presents an overview of representative defenses against MIAs (a comprehensive survey of existing defenses is in Section VI).
_Adversarial regularization (AdvReg)_[29] trains the model to both achieve good model performance and protection against a shadow MIA adversary. During training, the defender first trains an attack inference model that tries to maximize the MIA gain, after which the protected model is trained to minimize the MIA gain and maximize the classification accuracy. This is instantiated as a min-max game in [29].
_Distillation for membership privacy (DMP)_[36]. Shejwalkar et al. propose DMP to defend against MIAs based on knowledge distillation. The idea is to distill the knowledge from an undefended model (trained on a private dataset) into a new public model using a new reference set. Privacy protection is enabled by thwarting the access of the public model to the private dataset as the public model is trained on a separate reference set. Such a reference set can be curated by assuming the availability of a public dataset or by using synthetic data. We consider the latter since we do not assume access to external data. This is because in many domains such as healthcare, the training data is private/proprietary, and thus such a public dataset may not be available. We hence consider a more realistic scenario in which the defender has no access to external data (similar to [41]).
_SELf ENsemble Architecture (SELENA)_[41]. SELENA also uses knowledge distillation. Its key idea is to partition the private dataset into different subsets and train a sub model on each subset (another technique with a similar idea is proposed in [9]). For each sub model, there exists a subset of the private dataset that was not used in its training, i.e., a "reference set" for that sub model. Each sub model assigns output scores on its "reference set", which constitute the knowledge to be distilled. The knowledge from the ensemble of sub models is finally distilled into a new public model.
_Early stopping_[38, 6]. As the training proceeds, the model tends to overfit the training data and become susceptible to MIAs. Early stopping is a general solution for reducing overfitting [6] by training models for fewer epochs. Song et al. [38] find that this is useful in mitigating MIAs, and we follow suit and include it as a benchmark defense mechanism.
_Differential privacy (DP) based defenses_[2]. DP-based defenses leverage the formal framework of differential privacy to achieve rigorous privacy guarantee. This is done via injecting noise to the learning objective during training such as DP-SGD that adds noise to the gradients [2]. However, DP-based defenses often produce models with considerable accuracy drop, resulting in a poor privacy-utility tradeoff.
_MemGuard_[19]. Jia et al. propose to defend against MIAs via obfuscating the prediction scores. The idea is to fool the MIA adversary by constructing a noise vector to be added to the output score (analogous to constructing adversarial samples), making the outputs on members and non-members indistinguishable to the adversary.
_Label Smoothing_[40]. LS is a common regularization technique to improve model accuracy by using soft labels. LS replaces the one-hot label with a mixture of the one-hot label and the uniform distribution, using a smoothing intensity parameter. E.g., with three classes, a smoothing intensity of 0.3 turns the label into 80% cat, 10% dog, 10% frog; a smoothing intensity of 0.6 yields 60% cat, 20% dog, 20% frog. LS trains with different smoothing intensities to produce a model with high accuracy.
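For reference, with \(k\) classes and smoothing intensity \(s\), the LS label assigns \(1-s+s/k\) to the true class and \(s/k\) elsewhere; a minimal sketch (the function name is ours):

```python
import numpy as np

def ls_soft_label(y_true, k, s):
    # uniform-mixture label smoothing, e.g., k=3, s=0.3 -> [0.8, 0.1, 0.1]
    label = np.full(k, s / k)
    label[y_true] = 1.0 - s + s / k
    return label
```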
Both LS and HAMP use soft labels in their training, but they are two techniques built with different principles that require different soft labels. LS is used to improve model performance, which necessitates training with _low_-entropy soft labels. Unlike LS, HAMP consists of _high_-entropy soft labels, an entropy-based regularizer and a novel testing-time defense (details in the next section), which is to improve membership privacy while preserving model accuracy. This consequently results in the different privacy implications by the two techniques: LS improves model performance but the resulting model still suffers from _high_ MIA risk [20], while HAMP consistently contributes to very _low_ MIA risk. We refer to detailed comparison in Section IV-G.
## III Methodology
The main insight behind HAMP in mitigating diverse MIAs is to identify a common exploitation thread among different MIAs. HAMP is designed to overcome this exploitation so that it can defend against different MIAs regardless of their specific approaches. We first explain how existing MIAs can be unified via a common thread in Section III-A, and then discuss how we build HAMP to overcome this exploitation.
### _Overconfident Prediction Leads to Membership Leakage_
While existing MIAs employ diverse approaches to infer membership, we unify them by viewing them all as exploiting the model's overconfidence in predicting training samples. We explain below how different attacks can be viewed as different forms to quantify whether a model is overly confident in predicting a specific sample, in order to infer its membership.
Score-based MIAs leverage the prediction scores to infer membership through different proxies. The model's overconfident prediction on training samples can be exposed through high confidence scores [48], low prediction entropy [37, 38], low prediction loss [48], or using a neural network [37, 30]. For boundary and augmentation attacks, samples predicted with high confidence can be viewed as exhibiting high robustness against adversarial perturbations and data augmentation. Training samples can therefore be identified by the adversary based on whether they are more resilient to adversarial perturbation [8, 27] or data augmentation [8].
_What leads to the model's overconfidence in predicting training samples?_ As mentioned before, common training algorithms make use of the one-hot hard labels to minimize the prediction loss. Minimizing the training objective function (1) is equivalent to encouraging the model to produce outputs that are consistent with the labels, i.e., 100% for the ground-truth class and 0% for any other classes.
While training with hard labels has achieved success in a broad class of classification problems, we find that it undesirably contributes to the model's overconfidence in predicting training samples, which eventually leads to membership leakage. For example, on Purchase100, the difference between the average prediction confidence on training and testing samples is \(>\)25%, which means the model is much more confident in predicting training samples. Such differential behavior can be identified by the adversary to obtain \(>\)14% attack TPR @0.1% FPR. This indicates training with one-hot hard labels undesirably enables the adversary to identify a large fraction of training samples with very low false positives (and similarly identifying testing samples with low false negatives). This inspires our design principle of enforcing less confident prediction to mitigate MIAs, based on which we introduce a novel training and testing defense that can achieve both strong membership privacy and high model accuracy.
### _Overview_
Fig. 2 shows an overview of HAMP. It has two parts.
**Training-time defense**. Inspired by the observation in Section III-A, our main idea is to _train the model to produce less confident prediction even on training samples_, thereby enforcing the model to behave similarly on training and testing samples. We achieve this by two innovations: (1) replacing the hard labels with _high-entropy soft labels_; and (2) introducing an _entropy-based regularizer_.
The first step is to generate soft labels with high entropy from the hard labels. These high-entropy soft labels explicitly induce the model to produce less confident output during training by assigning a much lower probability for the ground-truth class. For instance, a hard label of [0, 1] can be changed into a soft label of [0.4, 0.6], which guides the model to predict the ground-truth class with 60% probability (instead of 100%). The probability of each class is determined by an _entropy threshold_ parameter, and a higher threshold generates a soft label with higher entropy (e.g., [0.5, 0.5] has the highest entropy) - details in the next section. The ground-truth class remains the same so that the model can learn correctly, e.g., a dog image is still trained to be predicted as a dog.
Second, we introduce an entropy-based regularizer to penalize the model for predicting any output with low entropy. Prediction entropy measures the prediction uncertainty, and can be used to regularize the confidence level of the prediction, e.g., low entropy indicates high-confidence output, and can be mitigated by the proposed regularizer to become low-confidence output.
The high-entropy soft labels encourages the model to produce outputs consistent with the labels, while the regularization term allows the model to produce any low-confidence outputs, even if the outputs do not closely match the labels. Both components are important for HAMP to mitigate overconfident prediction and achieve strong membership privacy.
**How HAMP's training-time defense mitigates membership leakage from different sources?** There are two sources leading to membership leakage, and we discuss below how HAMP can reduce leakage from both sources.
_Output scores_. With the high-entropy soft labels and entropy-based regularizer, HAMP forces the model to produce output scores on training samples with higher entropy (i.e., lower confidence), which resemble the output scores on testing samples. E.g., on Purchase100, the average prediction entropy on members and non-members are 0.389 and 0.576 on the undefended model, which are 4.485 and 4.490 on the HAMP model. HAMP therefore reduces the entropy difference by 31x (from 0.187 to 0.006) and effectively enforces the output scores on members and non-members to be indistinguishable (more details in Appendix B). Some score-based MIAs leverage both output scores _and_ label information (e.g., [38, 30]) and we explain next how HAMP prevents membership leakage from the labels.
_Prediction labels_. HAMP's training-time defense mitigates privacy leakage from the prediction labels by pushing the training samples closer to the decision boundary, so that training samples lie _similarly close_ to the decision boundary as the testing samples. We next use the boundary and augmentation attacks to explain (both attacks exploit label information in different manners to infer membership).
Boundary attack exploits the training samples' higher adversarial robustness than testing samples. Without HAMP, the adversary can discern that the training samples require more perturbations than the testing samples. With HAMP however, training samples are predicted with lower confidence, and therefore it takes a similar amount of perturbation to perturb training and testing samples. For instance, on CIFAR100, the average amount of perturbation to perturb the training samples on the undefended model is 0.342, and 0.226 on the testing samples. With HAMP, the perturbation on the training samples become 0.289 and 0.234 on the testing samples, which effectively reduces the perturbation difference between training and testing samples by \(>\)53%. This means the members and non-members become indistinguishable from the perspective of their adversarial robustness.
Augmentation attack exploits the training samples' higher resistance to data augmentation, i.e., the augmented variants of training samples are _more likely_ to be classified correctly.
Fig. 2: Overview of our training- and testing-time defense.
Performing data augmentation on the original samples can be viewed as drawing neighboring variants around the original samples in the sample space. Since the training samples are closer to the decision boundary under HAMP, their augmented variants are more likely to cross the decision boundary, and hence be classified _incorrectly_, which is similar to how testing samples would behave. We also evaluate the model's performance on the inputs added with random augmentations. We find HAMP mainly reduces the performance on the augmented training samples (from 64.38% to 55.12%), and the performance on the augmented testing samples remain similar before and after HAMP (46.12% and 46.36%). This reduces the accuracy difference between members and non-members from 18.26% to 8.76% (a 52% reduction), and enables them to exhibit similar resistance to data augmentation.
HAMP's training-time framework is able to reduce the model's overconfident prediction on training samples _without_ compromising the model's performance, i.e., strong membership privacy and prediction accuracy. Nevertheless, membership privacy can be further improved such as by pushing the training samples closer to the decision boundary, but at the cost of accuracy, which is undesirable. In light of this, we introduce a testing-time output modification defense that can attain higher membership privacy _without_ degrading accuracy.
**Testing-time defense**. Our idea is to modify all the output scores to become low-confidence scores, hence making the output scores from members and non-members less distinguishable. The key observation that underpins the testing-time defense is that _randomly-generated samples are often predicted with low confidence, and the low-confidence output scores can be used for output modification_. Specifically, we first uniformly generate random samples, which are highly unlikely to be part of the training set due to the high dimensionality of the input space (e.g., the entire Texas100 dataset contains only \(67,330\) samples while the input space has \(2^{6170}\) samples). As these random samples are unlikely to be members of the training set, they are often predicted by the model with low confidence. We then replace all the entries in each output score with those from random samples, where the replacement is to keep the predicted labels unchanged (all top-\(k\) labels) and modify the output scores only. In essence, HAMP returns only the ordering of the confidence scores and the ordering is represented by the random output scores arranged in a specific order.
The random samples _do not_ have any prerequisites (e.g., they do not need to come from a specific distribution, nor do they need to produce a specific prediction label), as long as they are valid inputs (e.g., pixel values are in [0, 255]).
In HAMP, the high-confidence outputs on members and low-confidence outputs on non-members, all become low-confidence outputs after being modified. This significantly increases the difficulty for the adversary to identify differential behaviors on members and non-members.
In Section V-A, we perform detailed ablation study to show that all three defense components in HAMP are crucial in achieving strong membership privacy and preserving high model accuracy. We next explain HAMP in details.
### _Training-time Defense_
_Generating high-entropy soft labels_. The first step is to generate high-entropy soft labels for training, where the class probabilities in the soft labels are controlled by an _entropy threshold_ parameter, denoted as \(\gamma\). The entropy of a soft label \(y^{\prime}\) can be calculated as:
\[\mathbb{H}(y^{\prime})=-\sum_{j=0}^{k-1}y^{\prime}_{j}*\text{log}(y^{\prime}_ {j}) \tag{3}\]
A soft label with uniform probability on each dimension has the highest entropy, based on which we choose a smaller entropy threshold. For a \(k\)-class classification problem, our goal is to find a \(y^{\prime}\) given \(\gamma\) such that,
\[\mathbb{H}(y^{\prime})\geq\gamma\mathbb{H}(y),y=\{\frac{1}{k},...\frac{1}{k} \}^{k},\gamma\in[0,1], \tag{4}\]
where \(y^{\prime}\) has the highest probability on its ground-truth class, and the probabilities on the remaining dimension are the same. For a hard label \(y\) whose ground-truth class is \(j_{truth}\) (\(k\) classes in total), the resulting soft label becomes:
\[\forall y^{\prime}_{j}\in y^{\prime},y^{\prime}_{j}=\begin{cases}p&\text{if } \;j=j_{truth}\\ (1-p)/(k-1)&\text{if }\;j\neq j_{truth}\end{cases} \tag{5}\]
\(p\) is the probability on the ground-truth class, and a larger \(\gamma\) indicates higher prediction entropy, which leads to a smaller \(p\) (i.e., smaller probability on the ground-truth class).
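Since \(\mathbb{H}(y^{\prime})\) decreases monotonically in \(p\) on \([1/k,1)\), the largest \(p\) satisfying Eq. (4) can be found numerically, e.g., by bisection. A minimal sketch of this construction (our own illustration, not the authors' code; assumes \(k\geq 2\)):

```python
import numpy as np

def soft_label(y_true, k, gamma, tol=1e-6):
    """Soft label per Eq. (5) whose entropy satisfies Eq. (4).

    Picks the largest ground-truth probability p (i.e., the most
    confident label) whose entropy is still >= gamma * log(k)."""
    target = gamma * np.log(k)  # gamma * entropy of the uniform label

    def entropy(p):
        q = (1.0 - p) / (k - 1)
        return -(p * np.log(p) + (k - 1) * q * np.log(q))

    lo, hi = 1.0 / k, 1.0 - 1e-9  # entropy decreases over this interval
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if entropy(mid) >= target:
            lo = mid  # still enough entropy; try a more confident label
        else:
            hi = mid
    label = np.full(k, (1.0 - lo) / (k - 1))
    label[y_true] = lo
    return label
```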
_Entropy-based regularization_. In addition, we introduce an entropy-based regularizer that measures the prediction entropy during training, and penalizes predictions that have low entropy, as such predictions indicate high-confidence output and may be exploited by the adversary.
Finally, the overall training objective can be formulated as:
\[\mathcal{L}_{\text{KL}}(F_{\theta}(x),y)=\sum_{j=0}^{k-1}y_{j}\text{log}(\frac {y_{j}}{F_{\theta}(x)_{j}}), \tag{6}\]
\[\min_{\theta}[\mathcal{L}_{\text{KL}}(F_{\theta}(X_{tr}),Y^{\prime}_{tr})-\alpha\mathbb{H}(F_{\theta}(X_{tr}))], \tag{7}\]
where \(Y^{\prime}_{tr}\) denotes the high-entropy soft labels, \(\mathcal{L}_{\text{KL}}\) the Kullback-Leibler divergence loss, and \(\alpha\) controls the strength of regularization. Our goal is to train the model to mitigate the overconfident prediction on training samples while maintaining high prediction accuracy. We achieve this by using a large \(\gamma\) to train the model with soft labels of high entropy, and an \(\alpha\) to regularize the prediction entropy. Section IV-A explains how to select the parameters \(\gamma,\alpha\) in HAMP (\(p\) in Equation 5 is determined by \(\gamma\)).
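A PyTorch-style sketch of the combined objective in Eq. (7), under our reading of the paper (tensor shapes and names are illustrative):

```python
import torch.nn.functional as F_nn

def hamp_loss(logits, soft_labels, alpha):
    """KL divergence to the soft labels minus alpha times the
    prediction entropy, as in Eq. (7).

    logits: (batch, k) raw model outputs;
    soft_labels: (batch, k) high-entropy soft labels."""
    log_probs = F_nn.log_softmax(logits, dim=1)
    probs = log_probs.exp()
    kl = F_nn.kl_div(log_probs, soft_labels, reduction="batchmean")
    entropy = -(probs * log_probs).sum(dim=1).mean()
    return kl - alpha * entropy  # minimizing this maximizes entropy

# typical usage inside a training loop (model/optimizer assumed):
#   loss = hamp_loss(model(x_batch), y_soft_batch, alpha=0.01)
#   loss.backward(); optimizer.step()
```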
### _Testing-time Defense_
The testing-time defense uniformly modifies the runtime outputs to achieve stronger privacy without jeopardizing accuracy. We first generate uniform random samples \(x_{rand}\), e.g., for Purchase100 with binary features, each feature is assigned 0 or 1 with equal probability. For each runtime input \(x\in[D_{tr},D_{te}]\), all the entries in \(F(x)\) (that indicate the probability for each class) are replaced by those in \(F(x_{rand})\), and the resulting output is denoted as \(F^{x_{rand}}(x)\). The replacement modifies only the entries in \(F(x)\) while ensuring \(F(x)\) and \(F^{x_{rand}}(x)\) give the same prediction labels. For example, let \(x\in[D_{tr},D_{te}]\) with \(F(x)=[0.85,0.05,0.1]\), and \(x_{rand}\) with \(F(x_{rand})=[0.2,0.3,0.5]\); then the final output produced by the model becomes \(F^{x_{rand}}(x)=[0.5,0.2,0.3]\). This enforces the model to produce low-confidence outputs on both members and non-members, and reduces privacy leakage.
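A numpy sketch of this output replacement (illustrative; the authors' repository contains the actual implementation):

```python
import numpy as np

def modify_output(score, rand_score):
    """Replace the entries of `score` with those of `rand_score`
    while keeping the full ranking (all top-k labels) unchanged.

    E.g., score=[0.85, 0.05, 0.1], rand_score=[0.2, 0.3, 0.5]
          -> [0.5, 0.2, 0.3]"""
    out = np.empty_like(score)
    # the i-th smallest random entry goes to the position of the
    # i-th smallest entry of the original score
    out[np.argsort(score)] = np.sort(rand_score)
    return out
```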
**Overall Algorithm.** Algorithm 1 gives the overall algorithm of HAMP. \(\gamma\) and \(\alpha\) are the two parameters in HAMP that regulate the confidence level of the model's prediction, e.g., a high entropy threshold or strong regularization can enforce the model to become less confident in prediction. Line 2 generates a template high-entropy soft label \(y^{\prime}\), which is then used to generate a soft label for each of the hard labels. The condition in Line 3 ensures that the ground-truth labels remain unchanged so that the model can learn the correct labels.
At test time, each output is replaced by the output score of a random sample. The condition \(\text{argsort}(F^{x_{rand}}(x))=\text{argsort}(F(x))\) in Line 13 ensures that both \(F^{x_{rand}}(x)\) and \(F(x)\) give the same labels (all top-\(k\) labels, not just the top-1 label). Line 11 and Line 12 are independent of each other, and hence can be executed in parallel to facilitate faster runtime inference (overhead evaluation in Appendix F).
```
Input: (X_tr, Y_tr) ∈ D_tr: training set;
       γ: entropy threshold;
       α: strength of regularization;
       F: an initialized ML model;

 1: function Training((X_tr, Y_tr), γ, α, F)
 2:     Generate high-entropy soft label y' given γ
 3:     Generate Y'_tr from Y_tr using y', where ∀(Y_tr[i], Y'_tr[i]) ∈ (Y_tr, Y'_tr),
            argmax(Y_tr[i]) = argmax(Y'_tr[i])      /* ground-truth class unchanged */
 4:     for number of training epochs do
 5:         Minimize (7) using Stochastic Gradient Descent
 6:     end for
 7:     return F
 8: end function
 9:
10: function Testing(F, x)
11:     Generate F(x)
12:     Generate random uniform sample x_rand and F(x_rand)
13:     Generate F^{x_rand}(x) by replacing F(x) with F(x_rand), where
            argsort(F^{x_rand}(x)) = argsort(F(x))  /* top-k labels unchanged */
14:     return F^{x_rand}(x)
15: end function
```
**Algorithm 1** Training and testing phase of HAMP
## IV Evaluation
### _Experimental Setup_
**Datasets.** We consider five common benchmark datasets, and we describe them below.
**Purchase100**[37] includes 197,324 shopping records of customers, each with 600 binary features indicating whether a specific item is purchased. The goal is to predict the customer's shopping habits (100 different classes in total).
**Texas100**[37] contains 67,330 hospital discharge records, each containing 6,170 binary features indicating whether the patient has a particular symptom or not. The data is divided into 100 classes, and the goal is to predict the treatment given the patient's symptoms.
**Location30**[37] contains the location "check-in" records of different individuals. It has 5,010 data records with 446 binary features, each of which corresponds to a certain location type and indicates whether the individual has visited that particular location. The goal is to predict the user's geosocial type (30 classes in total).
**CIFAR100**[23] is an image classification dataset that has 60,000 images in 100 object classes. Each image has a size of 32\(\times\)32\(\times\)3.
**CIFAR10**[23] is similar to CIFAR100 that also contains 60,000 images but with 10 different object classes.
We follow [36] to use the fully-connected (FC) networks on Purchase100, Texas100 and Location30, and a DenseNet-12 [16] on CIFAR100 and CIFAR10 (Appendix H conducts evaluation on more network architectures, including ResNet-18 [14], MobileNet [15] and ShuffleNet [50]). Purchase100 is trained with 20,000 samples, Texas100 with 15,000 samples, Location30 with 1,500 samples, CIFAR100 and CIFAR10 are with 25,000 samples. Section V-B reports additional experiments on more training sizes (from 2,500 to 50,000).
**Attacks.** We consider all nine attacks as in Section II-C. For NN-based attack, we use the black-box NSH attack from Nasr et al. [30], which uses the model loss, logit values from the target model, and the ground-truth label to train an attack inference model. We consider the loss-based attack from Yeom et al. [48] and confidence-, entropy- and modified-entropy-based attacks as in Song et al. [38]. For LiRA [3], we train 128 shadow models for each defense (64 IN and OUT models each), where each shadow model is trained following the _same_ procedure as the targeted defense (as per our threat model). E.g., for HAMP, this means the shadow model is trained with the same high-entropy soft labels and the entropy-based regularization as the defense model, and the shadow model also performs the same output modification as HAMP does.
We consider the boundary and augmentation attacks from Choquette et al. [8]. For the boundary attack on the two image datasets, we use the CW2 attack [5] to generate adversarial samples and derive the perturbation magnitude threshold to distinguish members and non-members. Likewise, for the other three non-image datasets that contain binary features, we compute the sample's robustness to random noise instead of adversarial perturbation. For each sample \(x\), we generate hundreds of noisy variants of \(x\), and the number of correctly classified noisy variants of \(x\) is used to determine a threshold that best distinguishes between members and non-members. For augmentation attack, we consider image translation as the augmentation method, and we similarly consider different degrees of translation to find the best attack.
**HAMP configuration.**\(\gamma,\alpha\) are the two parameters in configuring HAMP (for generating high-entropy soft labels and controlling the strength of regularization respectively). We perform grid search to select the parameters (\(\gamma\in[0.5,0.99],\alpha\in[0.0001,0.5]\)), and select the one with small train-validation gap and high validation accuracy. We also conduct evaluation to study how HAMP's performance varies under different parameters (please see Appendix E).
For the testing-time defense, we generate random samples (e.g., random pixels in [0, 255]) and perform output modification as in Section III-D. There are no other requirements.
Our code is available at [https://github.com/DependableSystemsLab/MIA_defense_HAMP](https://github.com/DependableSystemsLab/MIA_defense_HAMP).
**Related defenses.** We consider seven major defenses: AdvReg [29], MemGuard [19], DMP [36], SELENA [41], Early stopping [38, 6], Label Smoothing (LS) [40] and DP-SGD [2]. We follow the original work to set up the defenses unless otherwise stated (more details in Appendix A).
**Evaluation metrics**. An ideal privacy defense should provide strong protection for both members and non-members, for which we follow the best practice [3] to consider (1) _attack true positive rate_ (TPR) evaluated at 0.1% false positive rate (FPR), which evaluates the protection for members, and (2) _attack true negative rate_ (TNR) at 0.1% false negative rate (FNR), which quantifies the protection for non-members.
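Concretely, following Carlini et al. [3], the TPR at a fixed low FPR can be read off from per-sample membership scores; a small utility sketch (our own helper, assuming higher scores indicate "member"):

```python
import numpy as np

def tpr_at_fpr(member_scores, non_member_scores, fpr=0.001):
    """Attack TPR at the threshold whose FPR on non-members is ~fpr.

    The symmetric TNR @ low FNR is obtained by negating the scores
    and swapping the two arguments."""
    non_member_scores = np.sort(non_member_scores)
    # threshold above which only a `fpr` fraction of non-members fall
    idx = int(np.ceil((1.0 - fpr) * len(non_member_scores))) - 1
    thr = non_member_scores[idx]
    return float(np.mean(np.asarray(member_scores) > thr))
```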
**Result organization.** Table I reports the model accuracy for every defense. Fig. 3 compares each defense in terms of its membership privacy and model utility. Each defense is evaluated with multiple attacks, and we report the ones that achieve the highest attack TPR or TNR (detailed results for each attack are in Appendix K). Fig. 4 presents the average attack AUC (area under the curve) for each defense, and the full ROC curves are in Appendix J. We leave the comparison with early stopping to Appendix D due to space constraints. Section V-A presents an ablation study, and Appendix F reports training and inference overhead evaluation. We next discuss the results by comparing HAMP with other defenses.
### _Comparison with Undefended Models_
**HAMP significantly reduces the MIA risk against both members and non-members.** Compared with the undefended models, HAMP achieves significantly lower attack TPR and TNR. The average attack TPR on the undefended model is 13.48%, which is reduced to 0.8% by HAMP (a 94.1% reduction). Similarly, HAMP reduces the attack TNR by 97%, from 19.89% to 0.59%. This effectively thwarts the adversary in inferring members or non-members from the target model.
In addition, we find that NN-based attack yields the highest attack TPR on the undefended models in many cases (as in Fig. 3), and we explain the reason in Appendix G.
**HAMP achieves strong membership privacy while preserving high model accuracy.** Across the five diverse datasets, HAMP is able to consistently produce models with comparable accuracy as the undefended models. HAMP has an accuracy drop of at most 1.1% (on Location30), and the average accuracy drop by HAMP is only 0.46%.
### _Comparison with MemGuard [19]_
**Both MemGuard and HAMP are capable of preserving model accuracy**. MemGuard does not incur any accuracy drop since it is a post-processing technique, and does not change the prediction label. Likewise, HAMP only incurs a minor accuracy drop of 0.46%.
**HAMP achieves considerably stronger membership privacy than MemGuard.** MemGuard offers very limited privacy protection because MemGuard only modifies the output scores without changing the prediction labels, which cannot prevent privacy leakage from the label information. On the contrary, HAMP consists of a training-time defense that can mitigate membership leakage from both output scores and label information (explained in Section III-B), and achieves much stronger membership privacy than MemGuard. The average attack TPR on MemGuard is 6.7%, which is 8.4x relative to that of HAMP. Similarly, the attack TNR by MemGuard is 10.9%, which is 18.3x relative to that of HAMP.
### _Comparison with AdvReg [29]_
**HAMP outperforms AdvReg with higher model accuracy and stronger membership privacy**. In terms of accuracy, HAMP consistently achieves higher accuracy than AdvReg. AdvReg incurs an average 7.45% accuracy drop, while HAMP incurs only 0.46% (94% lower than AdvReg).
In terms of privacy, HAMP outperforms AdvReg with both much lower attack TPR and TNR. The attack TPR is 1.70% with AdvReg and 0.8% with HAMP, which translate to an 87% and 94% reduction from those of the undefended models, respectively. Similarly, AdvReg reduces the attack TNR by 90%, while HAMP reduces it by 97%.

TABLE I: Model accuracy for each defense. Accuracy delta measures the accuracy difference with the undefended model.

| Dataset | Defense | Training Acc (%) | Testing Acc (%) | Acc Delta (%) |
|---|---|---|---|---|
| Purchase100 | Undefended | 99.36 | 80.85 | 0.00 |
| | MemGuard | 99.36 | 80.85 | 0.00 |
| | AdvReg | 93.97 | 75.75 | -5.10 |
| | DP-SGD | 61.06 | 54.05 | -26.80 |
| | LS | 99.54 | 85.60 | +4.75 |
| | SELENA | 85.19 | 76.50 | -4.35 |
| | HAMP | 91.12 | 81.15 | +0.30 |
| CIFAR100 | Undefended | 86.21 | 59.56 | 0.00 |
| | MemGuard | 86.21 | 59.56 | 0.00 |
| | AdvReg | 55.78 | 44.36 | -15.20 |
| | DMP | 53.37 | 47.52 | -12.04 |
| | LS | 88.80 | 63.24 | +3.68 |
| | SELENA | 62.15 | 57.64 | -1.92 |
| | HAMP | 68.44 | 58.92 | -0.64 |
| Location30 | Undefended | 99.56 | 57.40 | 0.00 |
| | MemGuard | 99.56 | 57.40 | 0.00 |
| | AdvReg | 69.70 | 48.20 | -9.20 |
| | DP-SGD | 36.37 | 28.00 | -29.40 |
| | DMP | 92.81 | 54.30 | -3.10 |
| | SELENA | 67.41 | 55.80 | -1.60 |
| | HAMP | 78.22 | 56.30 | -1.10 |
| CIFAR10 | Undefended | 98.72 | 86.72 | 0.00 |
| | MemGuard | 98.72 | 86.72 | 0.00 |
| | AdvReg | 86.73 | 82.16 | -4.56 |
| | DMP | 91.08 | 85.56 | -1.16 |
| | SELENA | 86.86 | 84.52 | -2.20 |
| | HAMP | 95.88 | 86.28 | -0.44 |
| Texas100 | Undefended | 76.79 | 54.80 | 0.00 |
| | MemGuard | 76.79 | 54.80 | 0.00 |
| | AdvReg | 62.76 | 51.60 | -3.20 |
| | DP-SGD | 43.08 | 39.47 | -15.33 |
| | DMP | 46.92 | 43.07 | -11.73 |
| | LS | 75.52 | 56.33 | +1.53 |
| | SELENA | 58.58 | 53.60 | -1.20 |
| | HAMP | 68.56 | 54.40 | -0.40 |

Average accuracy delta across datasets: Undefended 0.00, MemGuard 0.00, AdvReg -7.45, DP-SGD -23.84, LS +2.42, DMP -7.01, SELENA -2.25, HAMP -0.46.
### _Comparison with DMP [36]_
DMP [36] uses generative adversarial networks (GANs) trained on the private dataset to produce synthetic data as the reference set for knowledge distillation. We follow Shejwalkar et al. [36] to train the two image datasets on DC-GAN [34]. The defender can generate unlimited data from the GAN, and hence he/she can create a reference set that is larger than the original training set. Therefore, we use 150K synthetic samples to train the model with higher accuracy (we do not consider more synthetic images as the improvement is negligible).

Fig. 3: **Attack TPR @ 0.1% FPR** (first two rows) and **Attack TNR @ 0.1% FNR** (last two rows) on different datasets. The legend indicates the attack that yields the highest attack TPR/TNR. Negative prediction accuracy delta means an accuracy drop compared with the undefended models. DP-SGD is reported at \(\epsilon=4\), and it is not evaluated on CIFAR100 and CIFAR10 due to its significant accuracy drop (similar to DMP on Purchase100). LS is not evaluated on CIFAR10 and Location30, as LS did not lead to accuracy improvement there. _Overall, HAMP offers strong privacy protection for both members and non-members while preserving high model accuracy, thereby yielding a superior privacy-utility trade off over other defenses._
For the three datasets with binary features, we use CT-GAN [45] for modeling tabular data. We use 100K synthetic samples for Texas100, 10k for Location30. We do not consider Purchase100 as it incurs significant accuracy drop (over 30%). To validate that synthetic samples are useful for the domain task, we compare the performance of the models trained with GAN-generated synthetic data and those with random data (i.e., all features are randomly selected as 0 or 1 with equal probability) using Texas100. We find that models trained with random data only achieve accuracy from 5.8% to 14.8%; while those with GAN-generated data achieve over 40% accuracy.
**HAMP outperforms DMP by consistently achieving strong privacy protection with high model accuracy across different datasets**. In terms of membership privacy, we find that DMP achieves strong results in many (but not all) cases: it achieves an average attack TPR of 0.44% and TNR of 0.38% on Texas100, CIFAR100 and CIFAR10, where HAMP achieves 0.9% TPR and 0.65% TNR (DMP is slightly better). However, DMP's performance does not generalize across datasets. For instance, on Location30, DMP suffers from a much higher attack TPR of 7.26% and TNR of 23.33%. This is because the model is trained with limited data (1,500 samples), and the GAN is _not_ able to generate diverse data that differ from the original training data. As a result, the teacher model assigns high confidence to the synthetic data, from which the student model learns to predict the training members with high confidence, eventually leading to high MIA risk. To validate this, we compare the difference between the prediction confidence on members and non-members of the DMP models. On Location30, the average difference is \(>\)30%, while it is only \(<\)5% on the other datasets, which is why DMP exhibits poor privacy protection on Location30. On the same dataset, HAMP yields a low TPR of 0.89% and TNR of 0.59%, and this trend is consistent across datasets.
In terms of accuracy, DMP suffers from different degrees of accuracy loss that are much higher than HAMP's. DMP incurs \(>\)30% accuracy loss on Purchase100 (as mentioned earlier), \(\sim\)12% accuracy drop on Texas100 and CIFAR100, 3.1% on Location30, and 1.2% on CIFAR10 (smaller accuracy loss as CIFAR10 has 10 classes only). In contrast, HAMP incurs average accuracy drop of \(<\)0.5% (at most 1.1%), which is significantly better than DMP.
### _Comparison with SELENA [41]_
**Both SELENA and HAMP achieve similarly strong privacy protection**. On average, HAMP has a slightly better membership privacy than SELENA, but neither technique has consistently better membership privacy overall (Fig. 3). The attack TPR of SELENA is \(0.53\%\sim 1.72\%\), with an average of 1.1%, and that of HAMP is \(0.4\%\sim 1.2\%\), with an average of 0.8%. They are able to reduce the attack TPR by 92% (SELENA) and by 94% (HAMP). In addition, the attack TNR of SELENA is \(0.42\%\sim 3.7\%\), with an average of 1.7%, and that of HAMP is \(0.44\%\sim 0.77\%\), with an average of 0.6%. This translates to a TNR reduction of 91% (SELENA) and 97% (HAMP), respectively.
**While providing comparable privacy benefits, HAMP outperforms SELENA by having lower accuracy loss, hence providing a better privacy-utility trade off**. The largest accuracy drop by SELENA is 4.4% and that by HAMP is only 1.1%. On average, SELENA incurs a 2.25% accuracy drop, while HAMP incurs a much smaller drop of 0.46%. Moreover, our additional experiment in Section V-B shows that HAMP continues to outperform SELENA with much lower accuracy drop when evaluated on a variety of different training sizes (2.2%\(\sim\)5.2% by SELENA and 0.04%\(\sim\)0.98% by HAMP).
### _Comparison with Label Smoothing (LS) [40]_
**Though LS is able to improve model accuracy, the model trained with LS still suffers from _high_ MIA risk. In contrast, the model trained with HAMP can maintain high model accuracy and exhibit very _low_ MIA risk**. For LS, we follow prior work by Kaya et al. [20] to train with different smoothing intensities from 0.01 to 0.995, and select the model with the highest accuracy (we omit CIFAR10 and Location30 as LS did not lead to accuracy improvement). We first discuss the qualitative difference between LS and HAMP, and then quantitatively compare their privacy risk.
While LS and HAMP both use soft labels in their training, they are built for different purposes that require different soft labels. LS is used as a regularization technique to improve model accuracy, which necessitates training with _low_-entropy soft labels, and it increases the accuracy by 2.4% on average. However, the resulting model still suffers from high MIA risk, as LS causes the model to overfit on the smoothed labels and exhibit discernible behavior on the training samples [20]. In contrast, HAMP is built to improve membership privacy; it consists of _high_-entropy soft labels, an entropy-based regularizer and a novel testing-time defense that force the model to make less confident predictions, and to behave similarly on the training and testing samples.
To quantitatively compare the different soft labels used by both techniques, we measure the soft label entropy in LS and HAMP, and find that the label entropy in HAMP is considerably higher than that in LS, and is 4x\(\sim\)50x relative to that in LS (average 9x). This contributes to the low membership privacy risk by HAMP, unlike LS.
The average attack TPR on the LS models is 5.1%, 7.1x relative to that by HAMP (on the same datasets). The attack TNR on LS is 6.3x relative to that by HAMP (we observe a similar trend even when we train LS with other smoothing intensities that have comparable accuracy improvement - see Appendix C). Moreover, our results reveal that LS may amplify the MIA risk and render the model _more vulnerable_ than the undefended model. On Texas100, LS increases the attack TPR from 3.87% (on the undefended model) to 5.61%, which increases the MIA risk against training members by 45%. This suggests that LS may constitute a hidden privacy risk for the practitioners (a similar finding was identified recently by Kaya et al. [20]). On the contrary, HAMP consistently leads to low MIA risk and outperforms LS with significantly better membership privacy.

Fig. 4: Average attack AUC by each defense (detailed results for each dataset can be found in Appendix I).
### _Comparison with DP-SGD [2]_
We use the canonical implementation of DP-SGD using Pytorch Opacus [1]. We first consider a fixed privacy budget \(\epsilon=4\) as per Tang et al. [41], and then evaluate DP-SGD with different values of \(\epsilon\).
#### IV-H1 DP-SGD with fixed \(\epsilon=4\).
In this setting, the average attack TPR and TNR of the DP-SGD models are 0.36% and 0.3%, respectively, both of which are the lowest among all the defenses we evaluated. In comparison, HAMP yields 0.8% attack TPR and 0.6% TNR, which are slightly higher than DP-SGD's. However, DP-SGD suffers from considerable accuracy loss, with an average loss of 23.84%, while HAMP incurs a significantly smaller loss of 0.46%.
#### IV-H2 DP-SGD with different \(\epsilon\).
We next evaluate DP-SGD by considering different noise_multipliers and clipping norms. We consider Purchase100, on which we used a noise_multiplier of 1.7 and a clipping norm of 1, for \(\epsilon=4\) in the earlier evaluation. We select different noise_multiplier values of 0.0 (no noise injected), 0.1 (\(\epsilon=12069.1\)), 0.5 (\(\epsilon=62.5\)) and 0.9 (\(\epsilon=10.9\)); and clipping norm values of 1, 5 and 10, totalling 12 different configurations. We report the results in Fig. 5.
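For concreteness, the snippet below is a minimal sketch of how one of these configurations could be set up with Opacus; the toy model and data are placeholders, and the specific `noise_multiplier`/`max_grad_norm` pair merely mirrors one point of the sweep described above.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Placeholder model and data standing in for a Purchase100-style classifier
model = nn.Sequential(nn.Linear(600, 128), nn.ReLU(), nn.Linear(128, 100))
optimizer = optim.SGD(model.parameters(), lr=0.1)
train_loader = DataLoader(
    TensorDataset(torch.randn(256, 600), torch.randint(0, 100, (256,))),
    batch_size=64)

privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=0.5,  # one point of the sweep described above
    max_grad_norm=1.0,     # the per-sample gradient clipping norm
)
```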
Reducing the amount of injected noise and using a larger clipping norm allows DP-SGD to provide empirical privacy protection (but with a very large provable bound of \(\epsilon\)), and reduces the amount of accuracy loss. For instance, by using a clipping norm of 10 _without_ injecting any noise, DP-SGD is able to reduce the accuracy loss to be \(<\)1%, which can also reduce the attack TPR by 73% (from 14.37% to 3.86%), and the attack TNR by 36% (from 14.62% to 9.36%). Nevertheless, this performance is still considerably inferior to that of HAMP, which can reduce the attack TPR and TNR by 97.2% and 96.7%, respectively.
Using a tighter clipping norm or injecting more noise can improve the membership privacy even more, but this comes at the cost of accuracy loss (the earlier result has negligible accuracy loss). For example, by using a small clipping norm of 1, the attack TPR can be reduced to 0.67% and attack TNR to 0.62%. However, this results in 8.2% accuracy loss. Increasing the noise_multiplier can further reduce privacy leakage, e.g., using a noise_multiplier value of 0.5 can reduce the attack TPR to 0.5% and attack TNR to 0.49% (and with a large \(\epsilon\) of 62.5), which are comparable to the 0.4% TPR and 0.44% TNR values by HAMP. However, DP-SGD degrades the accuracy by 13.6%, while HAMP incurs negligible accuracy drop.
Therefore, training a model with a small amount of noise or with a tight clipping norm is also a viable defense against MIAs, though it still incurs much larger accuracy loss than HAMP and results in large provable bounds \(\epsilon\).
## V Discussion
### _Ablation Study_
HAMP consists of three components, and we perform a detailed ablation study to investigate the effectiveness of each of these components - this includes a total of six configurations. We present the results in Table II.
The second to fourth rows in Table II show the results on models using a single component in HAMP. For instance, training with high-entropy soft labels alone is able to produce a model with similar accuracy as the undefended model (trained with the one-hot hard labels), and reduce the attack TPR from 14.37% to 4.76%, and attack TNR from 14.62% to 4.22%. This also validates our earlier observation in Section III-A that training with one-hot hard labels could lead to high MIA risk, and that the proposed high-entropy soft labels can be used to mitigate it. However, this is not enough, as the model still suffers from relatively high TPR and TNR. We observe similar trends in the other two settings where we either train with the entropy-based regularizer alone, or directly perform output modification on the undefended model.
Strengthening the model with more defense components can further reduce the MIA risk while preserving model accuracy. For example, training with high-entropy soft labels and the entropy-based regularizer (fifth row in Table II) achieves a low TPR of 1.86% and a low TNR of 1.07%. We observe a similar trend even if we change to different configurations, as in the sixth and seventh rows in Table II, both of which exhibit better privacy protection than models equipped with a single component. Furthermore, we find that the resulting model continues to maintain high model accuracy, which means the different defense components in HAMP can be used together to improve membership privacy without jeopardizing model accuracy. Finally, the full defense consisting of all three defense components, as in HAMP, exhibits the best privacy protection while maintaining competitive model accuracy.

TABLE II: Ablation study on different components of HAMP: 1. High-entropy soft labels; 2. Entropy-based regularizer; 3. Testing-time output modification.

| Defense component | Training accuracy | Testing accuracy | Attack TPR @0.1% FPR | Attack TNR @0.1% FNR |
| --- | --- | --- | --- | --- |
| None (undefended) | 99.36 | 80.85 | 14.37 | 14.62 |
| 1 | 94.58 | 81.75 | 4.76 | 4.22 |
| 2 | 98.06 | 81.10 | 3.39 | 4.19 |
| 3 | 99.36 | 80.85 | 8.51 | 5.34 |
| 1 + 2 | 91.12 | 81.15 | 1.86 | 1.07 |
| 1 + 3 | 94.58 | 81.75 | 0.82 | 1.23 |
| 2 + 3 | 98.06 | 81.10 | 2.90 | 3.76 |
| 1 + 2 + 3 (HAMP) | 91.12 | 81.15 | 0.40 | 0.44 |

Fig. 5: Results on DP-SGD under different clipping norms \(\in[1,5,10]\), and noise_multipliers \(\in[0.0,0.1,0.5,0.9]\).
### _Evaluation on Different Training Sizes_
This section reports additional experiments where we vary the size of the training set. We evaluate six different training-set sizes on Purchase100, which is the largest dataset in our evaluation and allows us to comprehensively evaluate a wide range of sizes, namely: 2,500, 5,000, 7,500, 10,000, 30,000, and 50,000 (up to a 20x difference). We trained 64 shadow models in the LiRA attack for each defense, totaling over 2,300 different shadow models. Fig. 6 shows the results.
**We find that even when evaluated under a broad range of training sizes, HAMP consistently achieves superior performance on both privacy protection and model utility.** The average attack TPR on the undefended model is 24.7% and attack TNR 22.9%. MemGuard achieves an average attack TPR of 13% and attack TNR 17.4%, both of which are significantly higher than the 1.3% and 1.5% by HAMP. AdvReg incurs an average accuracy loss of 6.3% while HAMP incurs only 0.2%. HAMP also outperforms AdvReg with better privacy protection: AdvReg reduces the attack TPR by 83% and attack TNR by 76.1%, while HAMP reduces them by 94.8% and 93.4%, respectively. LS improves the accuracy by 3.2%, but it still suffers from high MIA risk: its attack TPR and TNR are 8x and 4.1x relative to that of HAMP. Both SELENA and HAMP have similarly strong membership privacy: the average attack TPR on SELENA is 1.2%, and 1.3% on HAMP; the attack TNR are 1.4% and 1.5%, respectively. Under a similar privacy protection, HAMP still outperforms SELENA with a much lower accuracy drop. On average, SELENA degrades the accuracy by 3.97% (up to 5.2%), while HAMP degrades accuracy by only 0.15% (up to 0.98%).

Fig. 6: Defense evaluation on models trained with different amounts of training data. The first two rows evaluate attack TPR and the last two rows evaluate attack TNR. HAMP consistently achieves strong privacy protection while preserving model accuracy.
### _Evaluation against Data-poisoning-based MIA [42]_
Recent work by Tramer et al. [42] shows that a more capable adversary can significantly amplify the MIA risk through data poisoning. Therefore, we conduct an additional evaluation of whether HAMP can protect against such a more capable attack.
The Tramer et al. attack increases the membership leakage against target points, by poisoning the training set to transform the target points into outliers. Each target point is replicated \(n\) times with a wrong label, and these replicas are added as the poison samples. If the target point is a member in the training set, the model will be fooled into believing that the correctly-labeled target point is "mislabeled" (due to the presence of other poisoned replicas), which would have a large influence on the model's output and can be identified by the adversary.
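The following is a small sketch of the poisoning step as we understand it from the description above; the helper name and sampling details are our own illustrative choices.

```python
import random

def build_poison_set(targets, num_classes, n_replicas=8, seed=0):
    # Replicate each target point n_replicas times under a wrong label,
    # so a correctly-labeled member looks "mislabeled" to the model.
    rng = random.Random(seed)
    poison = []
    for x, y in targets:
        wrong = rng.choice([c for c in range(num_classes) if c != y])
        poison.extend([(x, wrong)] * n_replicas)
    return poison  # appended to the victim's training set
```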
We follow [42] to conduct the evaluation on CIFAR10, and select 250 random target points (containing both members and non-members), each replicated 8 times. We train 128 shadow models, which include a total of 32,000 target points. Without data poisoning, the adversary achieves 8.23% attack TPR and 10.15% attack TNR on the undefended model. These are increased to 52.44% and 24.52% after data poisoning, respectively. Even under such a powerful attack, HAMP is able to reduce the attack TPR from 52.44% to 0.34%, and attack TNR from 24.52% to 0.71%. Further, HAMP achieves such strong protection with a negligible accuracy drop of 0.6%.
### _Limitation_
First, HAMP requires re-training and hence incurs additional training overhead. Nevertheless, re-training is commonly required by many existing defenses [29, 36, 41], and training is a one-time effort prior to deployment. Further, our evaluation shows that HAMP incurs only a modest training overhead compared with other defenses (see Appendix F).
The second limitation is that HAMP's testing-time defense incurs an overhead in every inference, which may be undesirable for computations that have stringent real-time constraints. Nevertheless, HAMP incurs a low latency of only 0.04\(\sim\)0.38\(ms\) per inference. In comparison, MemGuard, the other defense that also performs post-processing modification, introduces a latency of 335.42\(\sim\)391.75\(ms\). In addition, this process randomizes the output scores, which may affect their usefulness. Nevertheless, we reduce the impact by ensuring that the prediction labels derived from the output scores remain unchanged (all top-k labels), and thus the model accuracy is unaffected. This can still provide meaningful information in the output scores without leaking membership privacy.
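As an illustration, a post-processing of this flavor can be sketched as follows; this is our own simplified rendering of a ranking-preserving output randomization, not HAMP's exact procedure.

```python
import numpy as np

def modify_output(scores):
    # Replace the output scores with random probabilities, then reorder the
    # random mass so the original label ranking (all top-k labels) survives.
    rand = np.sort(np.random.dirichlet(np.ones(len(scores))))[::-1]
    order = np.argsort(scores)[::-1]  # labels from most to least likely
    out = np.empty_like(rand)
    out[order] = rand                 # largest random mass -> top-1 label, etc.
    return out
```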
Finally, though HAMP empirically provides superior privacy-utility tradeoff, it does not offer provable guarantees. This is a limitation common to all practical defenses [29, 36, 41, 19]. Hence, a more capable adversary may mount stronger attacks, such as the data poisoning attack by Tramer et al. [42]. Our preliminary evaluation shows that HAMP still exhibits strong privacy protection and preserves model accuracy even under the presence of such a data-poisoning adversary, but we leave further investigation to future work.
## VI Related Work
_Membership inference attacks._ Depending on the adversary capabilities, MIAs can be divided into black-box [37, 48, 17, 38, 8, 47, 27] and white-box attacks [25, 18, 30]. The former has access only to the output of the target model, while the latter has visibility into information such as the internal model gradients to facilitate membership inference. Black-box MIA assumes a more realistic adversary, and is hence widely adopted in prior defense studies [19, 41, 29] (and in HAMP). Such attacks can be mounted by either shadow-training [37, 29, 48] or computing statistical metrics based on partial knowledge of the private dataset [38, 8, 27]. Many of those attacks require full or partial access to the output scores of the model, and may be defeated if the model only reveals the prediction label. This motivates a new class of attacks, called label-only attacks, which can be launched either with [8] or without [27] partial knowledge of the membership information. Carlini et al. [3] introduce the LiRA attack, which can succeed in inferring membership when controlled at low false positive or false negative rates, through a well-calibrated Gaussian likelihood estimate.
In addition to supervised classification, MIAs have also been explored in other domains, including contrastive learning [28], generative models [7, 13], federated learning [30], graph neural networks [51], and recommender systems [49].
_Defenses against membership inference attacks._ These defenses can be divided into provable and practical defenses. The former can provide rigorous privacy guarantees, such as DP-SGD [2] and PATE [32]. Nevertheless, these defenses often incur a severe accuracy drop when used with acceptable provable bounds [35, 33]. Another line of practical defenses aims to achieve empirical privacy without severely degrading accuracy. Common regularization techniques such as dropout [39] and weight decay [24] have been shown to reduce privacy leakage, but with limited effectiveness [37, 36]. Other defenses enforce specific optimization constraints during training to mitigate MIAs [29, 26], or perform output obfuscation [19, 46]. Knowledge distillation is used by different techniques to mitigate MIAs, including PATE [32], DMP [36], SELENA [41] and KCD [9]. However, existing defenses are often biased towards either privacy or utility. In contrast, HAMP achieves both strong membership privacy and high accuracy, which offers a much better privacy-utility trade off.
_Other privacy attacks_. In addition to membership privacy, common ML models are found to leak different private properties [43, 44, 11, 10, 12, 4]. Model extraction attacks can duplicate the functionality of a proprietary model [43, 44]. Model inversion attacks are capable of inferring critical information in the input features such as genomic information [11, 10]. Property inference attacks are constructed to infer sensitive properties of the training dataset [12].
## VII Conclusion
This work introduces HAMP, a defense against Membership Inference Attacks (MIAs) that can achieve both high accuracy and membership privacy. HAMP has two innovations: (1) a training framework that consists of high-entropy soft labels and an entropy-based regularizer; and (2) an output modification defense that uniformly modifies the runtime output. HAMP significantly constrains the model's overconfidence in predicting training samples, and forces the model to behave similarly on both members and non-members, thereby thwarting MIAs. Our evaluation shows that HAMP outperforms seven leading defenses by offering a better trade off between utility and membership privacy.
## Acknowledgment
This work was funded in part by the Natural Sciences and Engineering Research Council of Canada (NSERC), and a Four Year Fellowship and a Public Scholar Award from the University of British Columbia (UBC).
|
2301.12284 | Assertion Inferring Mutants | Specification inference techniques aim at (automatically) inferring a set of
assertions that capture the exhibited software behaviour by generating and
filtering assertions through dynamic test executions and mutation testing.
Although powerful, such techniques are computationally expensive due to a large
number of assertions, test cases and mutated versions that need to be executed.
To overcome this issue, we demonstrate that a small subset, i.e., 12.95% of the
mutants used by mutation testing tools is sufficient for assertion inference,
this subset is significantly different, i.e., 71.59% different from the
subsuming mutant set that is frequently cited by mutation testing literature,
and can be statically approximated through a learning based method. In
particular, we propose AIMS, an approach that selects Assertion Inferring
Mutants, i.e., a set of mutants that are well-suited for assertion inference,
with 0.58 MCC, 0.79 Precision, and 0.49 Recall. We evaluate AIMS on 46 programs
and demonstrate that it has comparable inference capabilities with full
mutation analysis (misses 12.49% of assertions) while significantly limiting
execution cost (runs 46.29 times faster). A comparison with randomly selected
sets of mutants, shows the superiority of AIMS by inferring 36% more assertions
while requiring approximately equal amount of execution time. We also show that
AIMS 's inferring capabilities are almost complete as it infers 96.15% of
ground truth assertions, (i.e., a complete set of assertions that were manually
constructed) while Random Mutant Selection infers 19.23% of them. More
importantly, AIMS enables assertion inference techniques to scale on subjects
where full mutation testing is prohibitively expensive and Random Mutant
Selection does not lead to any assertion. | Aayush Garg, Renzo Degiovanni, Facundo Molina, Mike Papadakis, Nazareno Aguirre, Maxime Cordy, Yves Le Traon | 2023-01-28T19:51:25Z | http://arxiv.org/abs/2301.12284v1 | # Assertion Inferring Mutants
###### Abstract
Specification inference techniques aim at (automatically) inferring a set of assertions that capture the exhibited software behaviour by generating and filtering assertions through dynamic test executions and mutation testing. Although powerful, such techniques are computationally expensive due to a large number of assertions, test cases and mutated versions that need to be executed. To overcome this issue, we demonstrate that a small subset, i.e., 12.95% of the mutants used by mutation testing tools is sufficient for assertion inference, this subset is significantly different, i.e., 71.59% different from the subsuming mutant set that is frequently cited by mutation testing literature, and can be statically approximated through a learning based method. In particular, we propose _AIMS_, an approach that selects _Assertion Inferring Mutants_, i.e., a set of mutants that are well-suited for assertion inference, with 0.58 MCC, 0.79 Precision, and 0.49 Recall. We evaluate _AIMS_ on 46 programs and demonstrate that it has comparable inference capabilities with full mutation analysis (misses 12.49% of assertions) while significantly limiting execution cost (runs 46.29 times faster). A comparison with randomly selected sets of mutants, shows the superiority of _AIMS_ by inferring 36% more assertions while requiring approximately equal amount of execution time. We also show that _AIMS_'s inferring capabilities are almost complete as it infers 96.15% of ground truth assertions, (i.e., a complete set of assertions that were manually constructed) while _Random Mutant Selection_ infers 19.23% of them. More importantly, _AIMS_ enables assertion inference techniques to scale on subjects where full mutation testing is prohibitively expensive and _Random Mutant Selection_ does not lead to any assertion.
## I Introduction
Software specifications aim at describing the software's intended behavior, and can be used to distinguish the corresponding correct/expected software behaviour from the incorrect/unexpected one. While these are typically described informally (e.g. API documentation), specifications become significantly more useful when expressed formally, in the form of executable constraints/assertions. Executable specifications are typically composed of code assertions for various program points, such as method preconditions and postconditions, that must hold true during the program execution. These are known to be useful in many software engineering tasks, e.g., test generation [14, 45], bug finding [30, 36] and automated debugging [31, 15, 39]. However, they are tedious to write and maintain, and as a result developers often avoid writing them [8, 49].
To address this issue, specification inference techniques aim at automatically inferring assertions for specific program points that capture the exhibited software behaviour [34, 35, 44]. These techniques evolve candidate assertions and use dynamic test executions to determine which of those assertions are consistent with the behaviours exhibited by a provided test suite, and mutation testing to discard ineffective/weak assertions that are unable to detect any artificially seeded fault (mutant), i.e., assertions never falsified during mutant's execution. Though powerful, these techniques are computationally expensive due to a large number of tests, assertions and mutant executions involved. The problem is further escalated when working with large programs as the number of mutants grows proportionally to the program size. For instance, state of the art technique SpecFuzzer [34] times out (requires more than 90 minutes to run) in programs with 180 lines of code.
To reduce the computational demands, it is imperative to limit the number of mutants involved, since fewer mutants yield fewer executions. Interestingly, we find that the majority of the mutants used by existing assertion inference techniques are redundant, meaning that discarding them does not impact the quality of inferred assertions. We denote as _Assertion Inferring Mutants_ the subset of mutants produced by a mutation testing tool that can be used to identify all the effective candidate assertions (i.e. those falsified during mutants' execution) that can also be identified as effective by the entire set of mutants.
We demonstrate that _Assertion Inferring Mutants_ represent 12.95% of the mutants supported by Major [26] (the mutation testing tool employed in previous studies), allowing for drastic assertion inference overhead reductions. At the same time, _Assertion Inferring Mutants_ are significantly different from subsuming mutants (which have been studied by the literature [21, 38]) with 71.59% of them not being subsuming. This means that subsuming mutant selection techniques are ineffective for assertion inference, as they would miss many assertions (48.53% according to our results).
We thus propose _AIMS1_, a learning-based technique to statically identify assertion inferring mutants given their contextual information. In particular, _AIMS_ learns the associations between mutants and their surrounding code with respect to the assertion inference task. This means that our learning scope is the area around the mutation point that identifies locally, the mutants that are most likely to be useful from those that are not.
Footnote 1: _Assertion Inferring Mutant Selector (AIMS)_
_AIMS_ operates at the lexical level, with a simple code pre-processing that represents mutants and their surrounding code as vectors of tokens with all user defined identifiers (e.g. variable names) replaced by predefined and predictable identifier names. This representation allows us to restrict the related vocabulary and the learning scope to a relatively small fixed size of tokens around the mutation points enabling inter-project predictions. Code embeddings extracted from an encoder-decoder architecture [27] that we train on code fragments, are extracted and learned with corresponding labels using a classifier [9].
We implement _AIMS_ and evaluate its ability to predict _Assertion Inferring Mutants_ on a large set of 46 programs, composed of 40 taken from previous studies [34, 35, 44] and 6 large Maven projects taken from GitHub to evaluate scalability. Our results demonstrate that _AIMS_ can statically select _Assertion Inferring Mutants_ with 0.79 Precision and 0.49 Recall, overall yielding 0.58 MCC2. At the same time, since _AIMS_ selects fewer mutants than previous work, it improves assertion inference scalability allowing it to run on all the projects we considered where previous work failed.
Footnote 2: _Matthews Correlation Coefficient_ (MCC) [50] is a reliable metric of the quality of prediction models [41], relevant when the classes are of different sizes, e.g., _12.95% Assertion Inferring Mutants_ in total (in comparison to 87.0% low utility mutants), for subjects in our dataset.
Surprisingly, by performing assertion inference based only on _AIMS_'s predicted mutants (instead of all mutants), we reduce assertion inference time (wall clock) by 46.29 times with only 12.49% of assertions missed. Additionally, when comparing with randomly selected sets of mutants (same number as those selected by _AIMS_), we observe a clear superiority of _AIMS_ in terms of effectiveness, i.e., _AIMS_ infers 36% more assertions while taking approximately the same amount of execution time as _Random Mutant Selection_.
Finally, we show that _AIMS_'s inferring capabilities are almost complete, as it infers 96.15% of ground truth assertions (i.e., the complete set of assertions that were manually validated), while _Random Mutant Selection_ infers 19.23% of them. More importantly, _AIMS_ enables assertion inference techniques to scale by allowing their operation on all 6 real-world subjects we selected, where full mutation testing is prohibitively expensive. In half of these subjects, _Random Mutant Selection_ does not lead to any assertion inference, and it is subsumed by _AIMS_ in the other half of the subjects.
To sum up, our paper makes the following contributions:
1. We show that effective assertion inference can be performed using only 12.95% of the mutants. We also show that this set of assertion inferring mutants is significantly different (i.e., 71.59% different) from the subsuming mutant set, a reference class of mutants frequently used by the mutation testing literature.
2. We propose _AIMS_, a static mutant selection technique that predicts _Assertion Inferring Mutants_ with good performance (0.58 MCC, 0.79 Precision, and 0.49 Recall). When performing assertion inference, _AIMS_ allows inferring, 46.29 times faster, the assertions that could be inferred with the full set of mutants, at the expense of 12.49% missed assertions. We also show that _AIMS_ is significantly more effective than the random and subsuming mutant selection baselines. That is, _AIMS_ infers 36% and 30% more of the assertions that can be inferred with the full mutant set than random and subsuming mutant selection, respectively.
3. We show that _AIMS_'s inferring capabilities are almost complete, as it is capable of inferring 96.15% of ground truth assertions (i.e., the complete set of assertions that were manually constructed to evaluate previous work). It should be noted that the alternative baseline approaches - _Subsuming Mutant Selection_ and _Random Mutant Selection_ - infer far fewer assertions, i.e., 67.31% and 19.23% of the ground truth assertions, respectively.
4. Finally, we show that _AIMS_ improves the scalability of SpecFuzzer by allowing it to run in programs for which it was not able to run before. Precisely, _AIMS_ allows assertion inference in cases where random selection fails (50% of the cases we tried).
## II Background & Related Work
### _Assertion Inference_
A code assertion is a logical expression capturing a property that should hold at a specific program location. It is often used as an executable description of the expected software behavior that has widespread applications in software design [33], software testing [4], and verification [13, 19]. Assertion inference is the problem of generating an assertion from existing software artifacts, e.g., documentation, source code, etc. It is closely related to the oracle problem [7], i.e., deciding whether or not a program execution is coherent with the desired behavior of the program. Assertion inference has many applications in software development including testing, e.g., to verify the expected outcome of a given test case [18]. Assertions can also capture program properties that should hold at specific program locations. In this paper, we focus on _postcondition_ based assertions that define the expected properties that must hold at the end of a given function's execution.

Fig. 1: Assertion Inference with Filtering via Mutation Analysis
Figure 1 depicts the process that existing assertion inference techniques ([34, 35, 44]) follow to infer assertions. First, based on the assertion generation approach utilized (e.g. GAssert [44] and EvoSpex [35] use evolutionary search algorithms, SpecFuzzer [34] uses fuzzing), a technique generates candidate assertions for a given program/function. Then, the program's test suite is executed to determine which of those assertions are consistent with the behaviours exhibited by the actual program. Lastly, the validated assertions (i.e., those that are consistent with the test suite executions) go through mutation analysis to filter out weak assertions. Here, a validated assertion that is also consistent with all the mutants' executions of a given program is considered weak, because it is unable to distinguish between the correct and a buggy program behaviour, and is hence discarded. The inferred assertions are the ones that are coherent with the actual program behaviour but do not satisfy the buggy program behaviour (i.e., kill at least 1 mutant).
### _Assertion Inference Techniques_
_Daikon_[16] is a dynamic technique that infers assertions by monitoring test executions. Given a program under analysis, Daikon requires a test suite in order to infer specifications for such program. It uses the test suite to exercise the program, monitors program states at various program points, and then considers a set of candidate assertions obtained by instantiating assertion patterns. Those assertions that are _not invalidated_ by any test at a given program point are reported as likely invariants at the program point.
_GAssert_[44] and _EvoSpex_[35] are assertion inference techniques based on evolutionary search algorithms. Similar to Daikon, these tools execute a test suite of the program under analysis and observe the execution to infer assertions that are consistent with the observations. The components of their evolutionary processes are specifically designed to handle their respective supported assertion languages, and thus, changing or extending the assertion languages implies redefining the corresponding evolutionary operators and other elements of the evolutionary processes, which is a non-trivial task.
_SpecFuzzer_[34] is a recently proposed assertion inference technique that _outperforms_ the previous techniques. It uses a combination of static analysis, grammar-based fuzzing, and mutation analysis to infer assertions. First, it uses a lightweight static analysis to produce a grammar for the assertion language, which is tuned to the software under analysis. Second, it uses a grammar-based fuzzer to generate candidate assertions from that grammar. Then, a dynamic detector determines which of those assertions are consistent with the behavior exhibited by a provided test suite. In the final step, which is consistent with the previous techniques, SpecFuzzer eliminates redundant and irrelevant assertions using a selection mechanism based on mutation analysis. A salient feature of SpecFuzzer is that developers can easily adjust the specifications produced by tuning the grammar as opposed to making changes in the tool.
### _Assertion Inferring Mutants_
_Candidate assertions_ that assertion inference techniques generate undergo a two-step filtering process (see Figure 1). In the first step, the test suite of a target class \(C\) falsifies the assertions that are invalid, i.e., are not satisfied by the legitimate program behaviour that the test suite executes. Though important for identifying _valid assertions_, such filtering is not enough, as it leaves room for _weak assertions_, i.e., assertions that are trivial to satisfy and would not trigger any error if the target class \(C\) had incorrect behaviour. For instance, a tautology such as \(assert(x>=y\mid\mid x<=y)\) is a valid proposition that cannot be falsified, but it is unlikely to be useful. In the case of SpecFuzzer [34], the fuzzer reports thousands of constraints (i.e., candidate assertions), and only a few are invalidated by the test suite. Such weak assertions are not useful and should be discarded.
Mutation analysis is used to discard weak assertions [34, 35, 44]. In general, the underlying idea is that valid assertions that are also coherent with every mutant's execution of target class \(C\) are weak because they represent properties that hold also for buggy versions of \(C\) (the mutants). On the contrary, assertions that do not hold for at least one mutant of \(C\), are useful because they are capable of distinguishing buggy versions of the code, aka mutants. We refer to such mutants that are killed by candidate assertions as _Assertion Inferring Mutants_.
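To make this criterion concrete, below is a small sketch of the mutation-analysis filtering step, assuming a hypothetical `holds_on(assertion, mutant)` oracle that runs the test suite against a mutant and reports whether the assertion was ever falsified.

```python
def filter_weak_assertions(valid_assertions, mutants, holds_on):
    # holds_on(a, m) is True when assertion a is satisfied by every test
    # execution of mutant m, i.e., m does not "kill" a.
    inferred, weak, inferring_mutants = [], [], set()
    for a in valid_assertions:
        killed = [m for m in mutants if not holds_on(a, m)]
        if killed:                      # a distinguishes at least one mutant
            inferred.append(a)
            inferring_mutants.update(killed)
        else:                           # a holds even on buggy versions
            weak.append(a)
    return inferred, weak, inferring_mutants
```

The mutants collected in `inferring_mutants` are exactly the ones this paper targets for selection.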
Despite its effectiveness in discarding weak assertions, mutation analysis suffers from scalability issues because many mutants can be generated from even a small piece of code, and most of these mutants are redundant. This adversely affects the overall performance of assertion inference techniques, especially on large subjects. To deal with this problem, we introduce _AIMS_, a _static_ technique that predicts _Assertion Inferring Mutants_ without requiring any dynamic analysis and aims to enhance the efficiency of assertion inference techniques.
### _Subsuming Mutants_
Mutation analysis is computationally expensive even beyond its use for assertion inference. This is mainly due to the large number of mutants that it introduces, all of which require analysis and execution. In traditional mutation testing - where the goal is to assess the ability of a test suite to "kill" mutants (i.e., to distinguish the observable behavior between the mutant and the original program) - one can reduce the number of mutants to analyze by identifying the _subsuming mutants_[3, 21, 28]. Given two mutants \(M_{1}\) and \(M_{2}\), \(M_{1}\) subsumes \(M_{2}\) if every test case \(T\) killing \(M_{1}\) also kills \(M_{2}\). Then, the
computational cost of mutation analysis can be reduced by identifying the minimal subset of subsuming mutants, such that any test suite able to kill these mutants can also kill the entire set of killable mutants (excluding mutants that are functionally equivalent to the original program and cannot be killed). Hence, practitioners can perform mutation testing efficiently by analyzing only subsuming mutants.
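Given a kill matrix mapping each mutant to the set of tests that kill it, subsumption can be computed directly, as in the sketch below (a simplified version that keeps all mutually subsuming mutants).

```python
def subsumes(kills, m1, m2):
    # m1 subsumes m2 when m1 is killable and every test killing m1 kills m2.
    return bool(kills[m1]) and kills[m1] <= kills[m2]

def subsuming_mutants(kills):
    # Keep the killable mutants that no other mutant strictly subsumes.
    killable = [m for m in kills if kills[m]]
    return {m for m in killable
            if not any(n != m and subsumes(kills, n, m)
                       and not subsumes(kills, m, n) for n in killable)}
```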
Given the potential of subsuming mutants in reducing mutation testing overheads, we investigate whether they are suitable for assertion inference (can help filter weak assertions). As we discuss in Section VII-A, subsuming mutants are not sufficient for the assertion inference task as their use results in losing almost half of the inferred assertions (compared to considering all mutants).
## III Illustrative Example
Figure 2 shows the mutants generated for the function _getFront()_ of class _QueueAr_, one of our test subjects. The graph depicts the mutants' subsumption hierarchy, which is a standard way to represent subsumption relations between a set of mutants generated for the same code. Here, nodes represent mutants of the function and every edge connects mutants to other mutants that the former subsume. In our example, mutant 39 subsumes mutants 2, 3 and 42. Mutually subsuming mutants are also represented in the same node - e.g. 40, 41 and 43. In this figure, we highlight in purple which mutants are subsuming mutants (at the top of the hierarchy) and in green which ones are _Assertion Inferring Mutants_.
We execute SpecFuzzer [34] to infer assertions for subject _QueueAr_getFront_ with its default configuration, i.e., by using all mutants available. SpecFuzzer infers _27_ assertions while the assertion filtering step via mutation analysis (rightmost part of Figure 1) took 91 minutes on our infrastructure (see Section VI). By contrast, using only subsuming mutants in the filtering step takes only 2.5 minutes (36.4 times faster) but would only produce 5 assertions.
These results confirm that, while reducing the number of mutants to analyze can improve the computational efficiency of the filtering process, subsuming mutants are not appropriate for this task. Intuitively, this is because the initial purpose of subsuming mutants is to minimize the number of tests needed to kill all mutants. In the context of assertion inference, one rather aims to infer all valid assertions that can distinguish the mutants from the original code, that is, to generate as many assertions as possible that capture the specific code properties. For instance, in our _QueueAr_getFront_ example, mutant 5 satisfies all valid assertions except for five of them. In other words, considering mutant 5 for analysis would result in the inference of only 5 assertions. On the other hand, mutant 6 filters 21 valid assertions, which means that considering mutant 6 for analysis would result in the inference of 21 assertions. Considering only subsuming mutants for analysis discards mutant 6, as it is subsumed by mutant 5, and hence results in losing 21 strong assertions that could have been inferred.
The above example demonstrates the difference between _Subsuming Mutants_ and _Assertion Inferring Mutants_, and the need for an approach that can efficiently identify the latter in order to save time on the mutation analysis step while maintaining the benefits of assertion inference. We propose _AIMS_, the first mutant selection method for assertion inference. Applying it with SpecFuzzer on the _QueueAr_getFront_ example, _AIMS_ predicts mutant 6 as an assertion inferring mutant and helps to infer 21 assertions (out of the 27 assertions obtained when using all mutants), for only a fraction of the computation time, i.e., 30 seconds (instead of the 91 minutes taken to analyze all mutants).
## IV Approach
The main objective of _AIMS_ is to predict whether a mutant (of a previously unseen piece of code) is likely to be assertion inferring. In order for our approach to be lightweight in terms of engineering and computational effort, we want _AIMS_ to be able to (a) learn relevant features of _Assertion Inferring Mutants_ without requiring manual feature definition, and (b) do so without costly dynamic analysis of mutant executions. To achieve this, we decompose our problem into two parts: learn a representation of mutants using code embedding techniques, and learn to predict, based on such embeddings, whether the represented mutants are _Assertion Inferring Mutants_.
### _Overview of_ AIMS
Figure 3 shows an overview of _AIMS_. We decompose our approach into three steps that we detail later in this section:
1. _Build a token representation: AIMS_ pre-processes the original code in order to remove irrelevant information and produce abstracted code, which is then tokenized to form a sequence of tokens. Each mutant is ultimately transformed into its corresponding token representation and undergoes the next step.
2. _Representation learning_: We train an encoder-decoder model to generate an embedding, aka vector representation of the mutant. This step is where _AIMS_ automatically learns the relevant features of mutants without requiring an explicit definition of these features.
3. _Classification: AIMS_ trains a classification model to classify the mutants (based on their embeddings) as _Assertion Inferring Mutants_ or not. The true labels used for training the model are obtained by running SpecFuzzer on the original code, and checking whether the mutants are _Assertion Inferring Mutants_ with respect to the candidate (and test-suite validated) assertions that SpecFuzzer generates.

Fig. 2: Mutant subsumption hierarchy for subject QueueAr_getFront showing the positions of _Assertion Inferring Mutants_ and _Subsuming Mutants_
It is interesting to note that the mutant representation learned by _AIMS_ does not depend on the particular set of assertions that SpecFuzzer (or any other assertion inference technique) would check against the mutant. _AIMS_ rather aims to learn properties of the mutants (and their surrounding context) that are generally useful for assertion inference. This is in line with the recent work on contextual mutant selection [11, 21, 25] that aims at selecting high utility mutants for mutation testing. This characteristic makes _AIMS_ applicable to pieces of code that it has not seen during training. In particular, our experiments reveal the capability of _AIMS_ to be effective on projects not seen during training. Certainly, the assertion inference technique that we use to build the true labels in the classification task is important, because this technique should produce a sufficiently large set of useful assertions - an essential condition for our classifier to provide relevant prediction results. We use SpecFuzzer [34] for its state of the art performance, i.e., SpecFuzzer outperforms the existing techniques (GAssert [44] and EvoSpex [35]) in assertion inference (SpecFuzzer infers 7 times and 15 times more assertions than GAssert and EvoSpex, respectively) and achieves better performance with respect to the ground truth, achieving better Recall and F1-score than the existing techniques.
### _Token Representation_
A major challenge in learning from raw source code is the huge vocabulary created by the abundance of identifiers and literals used in the code [2, 46, 47]. In our case, this large vocabulary may hinder _AIMS_'s ability to learn relevant features of _Assertion Inferring Mutants_. Thus, we first abstract original (non-mutated) source code by replacing user-defined entities (function names, variable names, and string literals) with generic identifiers that can be reused across the source code file. During this step, we also remove code comments. This pre-processing yields an abstracted version of the original source code, as the abstracted code snippet in Figure 3.
To perform the abstraction, we use the publicly available tool _src2abs_[46]. This tool first discerns the type of each identifier and literal in the source code. Then, it replaces each identifier and literal in the stream of tokens with a unique ID representing the type and role of the identifier/literal in the code. Each ID \(<\)TYPE\(>\_\#\) is formed by a prefix (i.e., \(<\)TYPE\(>\_\)), which represents the type and role of the identifier/literal, and a numerical ID (i.e., \(\#\)), which is assigned sequentially when reading the code. These IDs are reused when the same identifier/literal appears again in the stream of tokens. Although we use src2abs, as an alternative, one can use any utility that identifies user-defined entities and replaces them with reusable identifiers.
Next, to represent a mutant, we annotate the abstracted code with a mutation annotation on the statement where the mutation is to be applied. These annotations have the general shape "MST statement MSP MutationOperator", where MST and MSP denote mutation annotation start and stop, respectively, and these are followed by a MutationOperator that indicates the applied mutation operation (as shown in figure 3). We repeat the process for every mutant.
Fig. 3: Overview of _AIMS_: Source code is abstracted and annotated to represent a mutant, which is further flattened to create a space separated sequence of tokens. An encoder-decoder model is trained on token sequences to generate mutant embeddings. A classifier is trained on these embeddings and their corresponding labels (whether or not the mutant is assertion inferring). The trained classifier can then be used for label prediction of an unseen mutant.
Finally, we flatten every mutant (by removing newline, extra whitespace, and tab characters) to create a single space-separated sequence of tokens. Using these sequences, we intend to capture as much code as possible around the mutant without incurring an exponential increase in training time [20, 21, 46, 48]; we found a sequence length of 500 tokens to be a good fit for our task, as it does not exceed 24 hours of training time (wall clock) on a Tesla V100 GPU.
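A toy rendition of these pre-processing steps (abstraction, mutation annotation, and flattening into a space-separated token sequence) is sketched below; the real src2abs distinguishes identifier kinds (e.g. variables, methods, string literals), whereas this sketch uses a single `ID_#` kind and treats one token as the mutated statement for brevity.

```python
import re

JAVA_KEYWORDS = {"public", "private", "static", "int", "void",
                 "return", "if", "else", "for", "while", "class"}

def abstract_and_annotate(tokens, mutated_idx, operator):
    ids, out = {}, []
    for tok in tokens:
        # Replace user-defined identifiers with sequential, reusable IDs
        if re.fullmatch(r"[A-Za-z_]\w*", tok) and tok not in JAVA_KEYWORDS:
            ids.setdefault(tok, f"ID_{len(ids) + 1}")
            tok = ids[tok]
        out.append(tok)
    # Mark the mutated statement: "MST statement MSP MutationOperator"
    out[mutated_idx] = f"MST {out[mutated_idx]} MSP {operator}"
    return " ".join(out)  # flattened, space-separated token sequence
```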
### _Embedding Learning with Encoder-Decoder_
Our next step is to learn an embedding, aka vector representation (that can later be used to train a classification model), from the mutants' token representation. We develop an encoder-decoder model, a neural architecture commonly used in representation learning tasks [27]. The key principle of our encoder-decoder architecture is that the encoder transforms the token representation into an embedding and the decoder attempts to retrieve the original token representation from the encoded embedding. The learning objective is then to minimize the binary cross-entropy between the original token representation and the decoded one. Once the model training has converged, we can compute the embedding for any other mutant's token representation by feeding the latter into the encoder and retrieving the output.
We use a bi-directional Recurrent Neural Network (RNN) [10] to develop our encoder-decoder, as previous works on code learning have demonstrated the effectiveness of these models in learning useful representations from code sequences [5, 20, 21, 43]. We build _AIMS_ on top of _tf-seq2seq_[1], an established general-purpose encoder-decoder framework. We use a Gated Recurrent Units (GRU) network [12] to act as the RNN cell, which was shown to perform better than simpler alternatives (e.g. simple RNNs) both in software engineering and other learning tasks [21, 42]. To achieve good performance with acceptable model training time, we utilize AttentionLayerBahdanau [6] as our attention class, configured with a 2-layered AttentionDecoder and a 1-layered BidirectionalRNNEncoder, both with 256 units.
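For illustration, an analogous encoder can be written in a few lines of PyTorch; this is only a conceptual stand-in for the tf-seq2seq model described above (in particular, the attention decoder and the pooling into a fixed-size embedding are simplified).

```python
import torch.nn as nn

class MutantEncoder(nn.Module):
    # 1-layer bidirectional GRU with 256 units, mirroring the paper's encoder
    def __init__(self, vocab_size, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.gru = nn.GRU(hidden, hidden, num_layers=1,
                          bidirectional=True, batch_first=True)

    def forward(self, token_ids):          # token_ids: (batch, seq_len)
        states, _ = self.gru(self.embed(token_ids))
        return states.mean(dim=1)          # fixed-size mutant embedding
```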
To determine an appropriate number of training epochs for model convergence, we conducted a preliminary study involving a small validation set (independent of both the training and test sets used in our evaluation) where we monitor the model's performance in replicating (as output) the same mutant sequence provided as input. We continue training the model until its performance on the validation set no longer improves. We found 10 epochs for sequences up to a length of 500 tokens to be a good default for our validation sets.
### _Classifying_ Assertion Inferring Mutants
Next, we train a classification model to predict whether a mutant (represented through the embedding produced by the RNN encoder) is likely to be an _Assertion Inferring Mutant_. The learning objective here is to maximize the classification performance (which we mainly measure with the Matthews Correlation Coefficient (MCC), see Section VI-B). To obtain our true classification labels, we run an assertion inference technique (viz. SpecFuzzer) using all available mutants and exhaustively determine which mutants are assertion inferring. As for the classification model, we rely on _random forests_[9] because these are lightweight to train and have been shown to be effective in solving various software engineering tasks [24, 40]. We used standard parameters for the random forests, viz. we set the number of trees to 100, use Gini impurity for splitting, and set the number of features (i.e. embedding logits) to consider at each split to the square root of the total number of features.
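With scikit-learn, this classifier amounts to a few lines; the embedding and label arrays below are placeholders for the data produced by the previous steps.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder embeddings/labels standing in for the encoder's output
train_X, train_y = np.random.rand(200, 512), np.random.randint(0, 2, 200)
test_X = np.random.rand(50, 512)

clf = RandomForestClassifier(
    n_estimators=100,     # number of trees
    criterion="gini",     # Gini impurity for splitting
    max_features="sqrt",  # sqrt of the feature count considered per split
)
clf.fit(train_X, train_y)
pred = clf.predict(test_X)  # 1 = likely an Assertion Inferring Mutant
```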
Once the model training has converged, we can use the random forest to predict whether an unseen mutant is likely to be an _Assertion Inferring Mutant_. We make the mutant go through the pre-processing pipeline to obtain its abstract token representation, then feed it into the encoder-decoder architecture to retrieve its embedding, and finally input it into the classifier to obtain the predicted label (_Assertion Inferring Mutant_ or not).
## V Research Questions
We start our analysis by investigating whether _Assertion Inferring Mutants_ can be approximated by other sets of mutants, such as _randomly selected_ and _Subsuming mutants_, and contrast their performance with _AIMS_ in the context of assertion inference. We compare with random mutant selection since it is an untargeted method that is often superior to many mutant selection strategies [23, 51] and is considered by the literature as a strong baseline [11, 21, 29]. We also compare with subsuming mutants since they form the main objective of mutant selection [21, 29, 37], with numerous strategies targeting them [21, 22, 25, 32]. Hence, we check the effectiveness (completeness w.r.t. using all mutants) and efficiency (how much time is required) of SpecFuzzer [34], a state of the art assertion inference technique, when utilizing mutant subsets over all supported mutants. Therefore we ask:
**RQ1**: _Performance Evaluation:_ How effective and efficient is _AIMS_ in comparison to subsuming, randomly selected and all mutants baseline methods with respect to the assertion inference task?
For this task, we considered the dataset provided by _Molina et al._[34]. We re-executed SpecFuzzer on 40 subjects, initially without discarding any mutant, and later by selecting the mutants following _AIMS_ and our two baseline mutant selection techniques (subsuming and random mutant selection). In their work [34], _Molina et al._ carefully studied the subjects and manually produced corresponding (complete) _Ground Truth_ assertions capturing the intended behavior of the subjects. In our execution of SpecFuzzer, it was able to infer the ground truth assertions for 26 subjects when all mutants were considered for assertion inference. Hence, we also compared the effectiveness of all three mutant selection techniques (as explained in RQ1) in inferring _Ground Truth_ assertions. Thus, we ask:
**RQ2**: _Ground Truth Evaluation:_ How _AIMS_ compares with the subsuming and randomly selected mutants in terms of inferred ground truth assertions?
In the above questions, comparisons between the three mutant selection techniques were feasible because SpecFuzzer inferred assertions (at least one) when considering all mutants. Now, we investigate if _AIMS_'s predicted _Assertion Inferring Mutants_ can help SpecFuzzer to scale, i.e., if SpecFuzzer can infer assertions by considering only _AIMS_'s predicted mutants in scenarios where SpecFuzzer timed out during mutation analysis and was not able to infer any assertion when all mutants were considered for analysis. For this task, we conducted experiments on 6 subjects from GitHub (table I) where SpecFuzzer timed out. We also compared SpecFuzzer's performance when it considered _AIMS_'s predicted mutants vs an equal number of randomly selected mutants (state of the art in mutant selection). Hence, we ask:
**RQ3**: _Scalability Evaluation:_ Can _AIMS_ improve the scalability of assertion inference techniques?
## VI Experimental Setup
### _Data and Tools_
We selected 46 Java methods; 40 subjects that were used in previous studies [34, 35, 44] for evaluating performance in RQ1 and RQ2, and 6 larger subjects from GitHub for the scalability evaluation in RQ3. In their study, _Molina et al._[34] manually constructed _Ground Truth_ assertions capturing the intended behavior of these 40 subjects. We use these assertions to answer RQ2. Table I records the details of our dataset.
To perform mutation testing we used the Major [26] mutation testing tool, and to construct comprehensive test suites (and improve the chances of inferring true assertions) we used EvoSuite [17] and Randoop [36] to augment the developer test suites, similarly to what was done by previous work [34].
### _Prediction Performance Metrics_
_Assertion Inferring Mutants_ prediction modeling is a binary classification problem, thus it can result in four types of outputs: Given a mutant that is assertion inferring, if it is predicted as assertion inferring, then it is a true positive (TP); otherwise, it is a false negative (FN). Vice-versa, if a mutant does not infer any assertion and it is predicted as assertion inferring, then it is a false positive (FP); otherwise, it is a true negative (TN). From these, we can compute the traditional evaluation metrics such as _Precision_ and _Recall_, which quantitatively evaluate the prediction accuracy of prediction models.
\[\textit{Precision}=\frac{TP}{TP+FP}\quad\textit{Recall}=\frac{TP}{TP+FN}\]
Intuitively, _Precision_ indicates the ratio of correctly predicted positives over all the considered positives. _Recall_ indicates the ratio of correctly predicted positives over all actual positives. Yet, these metrics do not take into account the true negatives and can be misleading, especially in the case of imbalanced data. Hence, we complement these with the _Matthews Correlation Coefficient (MCC)_, a reliable metric of the quality of prediction models [50]. It is regarded as a balanced measure that can be used even when the classes are of very different sizes [41], e.g. _12.95% Assertion Inferring Mutants_ in total, for 40 subjects in our dataset (table I). _MCC_ is calculated as:
\[\textit{MCC}=\frac{TP\times TN-FP\times FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+ FN)}}\]
_MCC_ returns a coefficient between 1 and -1. An MCC value of 1 indicates a perfect prediction, while a value of -1 indicates a perfect inverse prediction, i.e., a total disagreement between prediction and reality. An MCC value of 0 indicates that the prediction performance is equivalent to random guessing.
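These metrics can be computed directly from the four counts, as in the short sketch below.

```python
from math import sqrt

def precision(tp, fp):
    return tp / (tp + fp) if tp + fp else 0.0

def recall(tp, fn):
    return tp / (tp + fn) if tp + fn else 0.0

def mcc(tp, tn, fp, fn):
    # Returns 0.0 when a marginal is empty, a common convention.
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```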
### _Experimental Procedure_
To answer our RQs, we executed SpecFuzzer to infer assertions for all subjects (Table I) with its default configuration, i.e., using all mutants to filter candidate assertions during the mutation testing step (Figure 1). We also determined the _Assertion Inferring Mutants_ and _Subsuming Mutants_ from SpecFuzzer execution logs for the 40 subjects used in RQ1 and RQ2. Once the mutants were labeled, we re-executed SpecFuzzer by employing the following 3 mutant selection techniques:
* _Subsuming Mutant Selection_. We execute SpecFuzzer by only considering subsuming mutants for mutation analysis and discarding the rest of the mutants from the original set.
* _AIMS_. We train models on _Assertion Inferring Mutants_ and perform k-fold cross validation (where k = 5) at the project level, i.e., we train on 32 subjects and evaluate/test on 8 subjects unseen during training. Once we get the predictions for all 40 subjects, we re-execute SpecFuzzer by only considering the predicted mutants and by discarding all other mutants from the original set.
* _Random Mutant Selection_. We randomly select an equal number of mutants (equal to the number of predicted mutants) from the original set of mutants and re-execute SpecFuzzer by only considering these randomly selected mutants and by discarding all others. We repeat this step 10 times to avoid reporting coincidental results. We report the median case results.
To answer _RQ1_, we compute the Prediction Performance Metrics of _AIMS_ in order to show its learning ability. This is a sanity check that our prediction modeling framework indeed manages to predict something well. However, prediction results do not reflect the end-task (assertion inference) performance, since mutants are not independent, i.e., there are large overlaps between the tests and assertions that lead to mutant kills. This means that subsuming or randomly selected mutants may perform similarly to _AIMS_. We thus measure the cost of the employed mutant selection technique, i.e., how many assertions are _not_ inferred (which are inferred when all mutants are considered), and the benefit gained, i.e., the improvement in assertion inference in terms of wall clock time.
To answer _RQ2_, we check the results of RQ1 and compare how many Ground Truth assertions SpecFuzzer infers with each mutant selection technique (Completeness). It should be noted that it was able to infer the ground truth assertions for 26 subjects out of the 40, when all mutants were considered
for mutation analysis. Hence, we analyze the results only for these 26 subjects.
To answer _RQ3_, i.e., whether _AIMS_'s predicted _Assertion Inferring Mutants_ can help SpecFuzzer to infer assertions for the 6 subjects where it was not able to infer any assertion (it timed out when all mutants were considered for analysis), we retrain _AIMS_ on all 40 subjects (with available labeled mutants) and predict likely _Assertion Inferring Mutants_ for these 6 subjects. We re-execute SpecFuzzer by only using the predicted mutants and by discarding all other mutants from the original set. Additionally, we randomly select mutants in a similar fashion as before (following the RQ1 experimental procedure) and re-execute SpecFuzzer accordingly to compare performance with _Random Mutant Selection_. Thus, to answer _RQ3_ we measure 1) in how many subjects the selected mutants lead to assertion inference, and 2) the ratio of assertion inferring mutants from the entire set of mutants.
## VII Experimental Results
### _Performance Evaluation (RQ1)_
_AIMS_ predicted _Assertion Inferring Mutants_ with a prediction performance of 0.58 MCC, 0.79 Precision, and 0.49 Recall. These values indicate that using _AIMS_ should yield significant improvements in terms of inferred assertions over baseline methods. Figure 4 shows a Venn diagram recording the distribution of the _Assertion Inferring Mutants_, _AIMS_, and _Subsuming_ mutant sets. The figure shows that a large number of _Assertion Inferring Mutants_ (450 out of 525) are not subsuming. At the same time, _AIMS_ detects almost half of them (258 out of 525), indicating relatively good performance.
Table II records SpecFuzzer's performance w.r.t. assertion inference by employing different mutant sets, i.e., _Subsuming Mutant Selection_, _AIMS_, and _Random Mutant Selection_. The results show that when SpecFuzzer uses _AIMS_'s predicted mutants, it infers 87.51% of total assertions, i.e., only 12.49% missed assertions (the cost of considering only _AIMS_'s predicted mutants), with 46.29 times faster mutation analysis (2.5 times faster than considering subsuming mutants). _AIMS_ enables SpecFuzzer to infer at least one assertion for all subjects (inferring all assertions for 23 subjects).
When SpecFuzzer uses the subsuming mutants, it infers 57.77% of total assertions. It infers all assertions for 5 subjects but fails to infer any for 7 subjects. Although it misses 42.23% of the assertions (the cost of considering only the subsuming mutants), it reaps the benefit of an improved mutant analysis time of 19.16 times faster than when using all mutants.
A similar improvement in mutation testing time is noted when SpecFuzzer uses randomly selected mutants, but it fails to infer 48.53% of total assertions. In two cases it infers all assertions, and for 2 other cases it fails to infer any assertion. _AIMS_ outperforms _Random Mutant Selection_ with a statistically significant3 sizeable difference.
Footnote 3: We compared the inferred assertion percentages using the Wilcoxon signed-rank test and obtained a \(p\)-value \(<5.39\mathrm{e}{-7}\) with _Random Mutant Selection_.
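The statistical comparison reported in the footnotes can be reproduced with a standard paired test; the per-subject percentages below are illustrative placeholders, not our measurements:

```python
from scipy.stats import wilcoxon

# Paired per-subject inferred-assertion percentages (illustrative numbers only).
aims_pct   = [100.0, 87.5, 92.3, 100.0, 75.0, 88.9, 100.0, 66.7]
random_pct = [ 50.0, 37.5, 61.5,  80.0, 25.0, 44.4,  70.0, 33.3]

stat, p_value = wilcoxon(aims_pct, random_pct)   # paired signed-rank test
print(f"W = {stat:.1f}, p = {p_value:.4f}")
```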
Answer to RQ1: _AIMS_ predicts _Assertion Inferring Mutants_ with a 0.58 MCC value. _AIMS_ enables SpecFuzzer to infer assertions for all subjects, running 46.29 times faster at the expense of 12.49% of the assertions. At the same time, _AIMS_ enables SpecFuzzer to infer 36% and 30% more assertions than _Random Mutant Selection_ and _Subsuming Mutant Selection_, respectively, while running 2.5 times faster than _Subsuming Mutant Selection_ and requiring similar execution (wall clock) time to _Random Mutant Selection_.
### _Completeness Evaluation (RQ2)_
Table III records SpecFuzzer's performance in ground truth assertion inference by employing different mutant selection techniques, i.e., _Subsuming Mutant Selection_, _AIMS_, and _Random Mutant Selection_. When SpecFuzzer considers only subsuming mutants, it infers 67.31% of all ground truth assertions (inferred without any mutant selection technique). It infers all ground truth assertions for 17 subjects but fails to infer any for 8 subjects. On considering _AIMS_'s predicted mutants, SpecFuzzer infers almost all ground truth assertions, i.e., 96.15%. _AIMS_'s predicted mutants enable SpecFuzzer to infer at least one ground truth assertion for all subjects except one. When SpecFuzzer considers randomly selected mutants, it infers 19.23% of all ground truth assertions. It infers all assertions for 5 subjects whereas it fails to infer any assertion for 21 subjects. _AIMS_ outperforms _Random Mutant Selection_ with a statistically significant4 sizeable difference.

Fig. 4: Mutant distribution
Footnote 4: We compared the inferred ground truth assertion percentages and obtained a \(p\)-value \(<7.74\mathrm{e}{-6}\) with _Random Mutant Selection_.
Answer to RQ2: _AIMS_'s predicted mutants enable SpecFuzzer to infer ground truth assertions for all subjects except one, inferring 96.15% of the total ground truth assertions, which is superior to both _Subsuming Mutant Selection_ (infers 67.31%) and _Random Mutant Selection_ (infers 19.23%).
### _Scalability Evaluation (RQ3)_
Table IV records SpecFuzzer's performance in inferring assertions when it employs _AIMS_ and _Random Mutant Selection_, for the subjects where mutation testing timed out. _AIMS_ selected 2.99% of the mutants from the entire mutant set. Among the predicted mutants, 83.33% are assertion inferring. When an equal number of mutants is selected using _Random Mutant Selection_, only 16.67% of the selected mutants are assertion inferring. When SpecFuzzer considers only _AIMS_'s predicted mutants for assertion filtering, it infers assertions for all subjects, as shown in Table IV, with a statistically significant5 sizeable difference. On the other hand, for 50% of the subjects (3 out of 6), SpecFuzzer fails to infer any assertion if it uses _Random Mutant Selection_.
Footnote 5: We compared the percentages of _Assertion Inferring Mutants_ among the selected mutants, using the Wilcoxon signed-rank test, and obtained a \(p\)-value \(<9.98\mathrm{e}{-6}\) with _Random Mutant Selection_.
## VIII Threats to Validity
_External Validity_: Threats may relate to the subjects we used. Although our evaluation spans projects of various sizes, the results may not generalize to other projects. We consider this threat of low importance since we have a large sample of subjects (40 subjects from previous studies [34, 35, 44] and 6 subjects from GitHub for the scalability evaluation). Moreover, our predictions are based on the local mutant context, which has been shown to be a determinant of mutants' utility [21, 25]. Other threats may relate to the assertion inference technique that we used for evaluation. This choice was made since SpecFuzzer is the current state of the art and operates similarly to other techniques (the main differences lie in the grammar used). We consider this threat of low importance since _AIMS_ deals with mutation testing, which is used in the same way by all assertion inference techniques [34, 35, 44], whose running time is directly impacted by the number of mutants involved. Nevertheless, in case other techniques require different predictions, one could re-train, tune, and use _AIMS_ for the specific method of interest, as we did here with SpecFuzzer.
_Internal Validity_: Threats may relate to the restriction that we impose on sequence length, i.e., a maximum of _500_ tokens. This was done to enable reasonable model training time, approximately _24_ hours to learn mutant embeddings on a Tesla V100 GPU. Other threats may be due to the use of _tf-seq2seq_[1] for learning mutant embeddings. This choice was made for simplicity, to use the related framework out of the box, similar to related studies [20, 46]. Other internal validity threats could relate to the test suites we used and the mutants considered as assertion inferring. To deal with this issue, we used well-tested programs and state-of-the-art tools (Evosuite [17] and Randoop [36]) to generate extensive pools of tests, as done by previous work [34, 35, 44]. This is also a typical process followed in mutation testing studies [21, 25, 29, 37]. To be more precise, our underlying assumption is that the extensive pool of tests used in our experiments is a valid approximation of the program's test executions.
_Construct Validity_: Our assessment metrics, i.e., mutation-filtered assertions inferred, ground truth assertions inferred, and time incurred during mutation analysis, may not reflect the actual cost/benefit values. These metrics are intuitive: the inferred assertions are the output of assertion inference techniques, and the time incurred during mutation testing is the wall clock time these techniques invest in filtering assertions. Overall, we mitigate these threats by following suggestions from the mutation testing and assertion inference literature, using state-of-the-art tools, and performing several simulations, which yielded consistent and stable results across our subjects.
## IX Conclusion
We presented _AIMS_, a method that learns to select _Assertion Inferring Mutants_ (a small subset of mutants suitable for assertion inference) from given mutant sets. Our experiments on 40 subjects show that _AIMS_ identified assertion inferring mutants with 0.58 MCC, 0.79 Precision, and 0.49 Recall. These predictions enable 46.29 times faster inference with minor effectiveness loss (12.49% fewer assertions) compared to the use of all mutants. Similarly, _AIMS_'s predictions infer 96.15% of the total ground truth assertions, which is 40% more than _Subsuming Mutant Selection_ and 5 times more than _Random Mutant Selection_. Moreover, _AIMS_ enables the assertion inference technique SpecFuzzer to scale to all our large subjects (by inferring assertions where SpecFuzzer previously failed due to timeouts), in comparison to _Random Mutant Selection_, which failed to infer any assertion for 50% of the large subjects.
|
2303.16843 | An Optimal Design Framework for Lasso Sign Recovery | Supersaturated designs investigate more factors than there are runs, and are
often constructed under a criterion measuring a design's proximity to an
unattainable orthogonal design. The most popular analysis identifies active
factors by inspecting the solution path of a penalized estimator, such as the
lasso. Recent criteria encouraging positive correlations between factors have
been shown to produce designs with more definitive solution paths so long as
the active factors have positive effects. Two open problems affecting the
understanding and practicality of supersaturated designs are: (1) do optimal
designs under existing criteria maximize support recovery probability across an
estimator's solution path, and (2) why do designs with positively correlated
columns produce more definitive solution paths when the active factors have
positive sign effects? To answer these questions, we develop criteria
maximizing the lasso's sign recovery probability. We prove that an orthogonal
design is an ideal structure when the signs of the active factors are unknown,
and a design constant small, positive correlations is ideal when the signs are
assumed known. A computationally-efficient design search algorithm is proposed
that first filters through optimal designs under new heuristic criteria to
select the one that maximizes the lasso sign recovery probability. | Jonathan W. Stallrich, Kade Young, Maria L. Weese, Byran J. Smucker, David J. Edwards | 2023-03-29T16:49:16Z | http://arxiv.org/abs/2303.16843v2 | # Optimal Supersaturated Designs for Lasso Sign Recovery
###### Abstract
Supersaturated designs, in which the number of factors exceeds the number of runs, are often constructed under a heuristic criterion that measures a design's proximity to an unattainable orthogonal design. Such a criterion does not directly measure a design's quality in terms of screening. To address this disconnect, we develop optimality criteria to maximize the lasso's sign recovery probability. The criteria have varying amounts of prior knowledge about the model's parameters. We show that an orthogonal design is an ideal structure when the signs of the active factors are unknown. When the signs are assumed known, we show that a design whose columns exhibit small, positive correlations is ideal. Such designs are sought after by the \(Var(s+)\)-criterion. These conclusions are based on a continuous optimization framework, which rigorously justifies the use of established heuristic criteria. From this justification, we propose a computationally-efficient design search algorithm that filters through optimal designs under different heuristic criteria to select the one that maximizes the sign recovery probability under the lasso.
_Keywords:_ Screening experiments; constrained-positive \(Var(s)\)-criterion; variable selection; \(UE(s^{2})\)-criterion; Gauss Dantzig selector
## 1 Introduction
A screening experiment aims to learn which \(k\) of \(p\) factors most influence the response variable with as few runs, \(n\), as possible. Achieving this goal is challenging because a small \(n\) induces bias on some or all of the \(2^{p}\) factorial effects. However, effective screening can still be performed if the model is sparse, i.e., it depends on only a few effects (Box and Hunter, 1961a,b; Mee, 2009; Xu et al., 2009; Mee et al., 2017). Supersaturated screening designs, or SSDs, push screening to its limits with \(n<p+1\) and assume a main effects model \(\mathbf{y}=\beta_{0}\mathbf{1}+\mathbf{X}\mathbf{\beta}+\mathbf{e}\) where \(\mathbf{X}\) is the \(n\times p\) design matrix with elements \(x_{ij}=\pm 1\), \(\mathbf{\beta}=(\beta_{1},\ldots,\beta_{p})^{T}\) is a sparse vector with \(k<p\) nonzero elements, and \(\mathbf{e}\sim N(\mathbf{0},\sigma^{2}\mathbf{I})\). Without loss of generality, assume \(\sigma^{2}=1\), making \(\mathbf{\beta}\) the vector of signed signal-to-noise ratios. Then the analysis goal is recovery of the support of \(\mathbf{\beta}\), denoted \(\mathcal{A}=\{j:|\beta_{j}|>0\}\), although we will be more interested in recovery of the sign vector of \(\mathbf{\beta}\), denoted \(\mathbf{z}\) with elements \(z_{j}=\mbox{sign}(\beta_{j})\).
The least-squares estimator for \(\beta\) is not unique for SSDs, complicating support/sign recovery. If there were designs with unique least-squares estimators, the ideal design would satisfy \(\mathbf{S}=\mathbf{L}^{T}\mathbf{L}=n\mathbf{I}_{p+1}\) where \(\mathbf{L}=(\mathbf{1}|\mathbf{X})\), as its \(\hat{\beta}_{j}\) would have the minimum possible variance across all designs. These orthogonal designs only exist when \(n=0\,(\mbox{mod }4)\). For arbitrary \(n\), a design can instead be selected by minimizing a variance-based criterion such as the \(D\)-criterion, \(|(\mathbf{L}^{T}\mathbf{L})^{-1}|\), or \(A\)-criterion, \(\mbox{tr}[(\mathbf{L}^{T}\mathbf{L})^{-1}]\). This optimal design framework based on least-squares is well-developed and tractable (Pukelsheim, 2006; Goos and Jones, 2011), but is not directly applicable for SSDs.
Most SSD construction methods focus on optimizing heuristic criteria that measure the proximity of \(\mathbf{S}\) to the ideal structure \(n\mathbf{I}_{p+1}\). Booth and Cox (1962) proposed the criterion \(E(s^{2})=\sum_{1<i<j\leq p}s_{ij}^{2}\) assuming balanced designs (i.e., \(\mathbf{X}^{T}\mathbf{1}=\mathbf{0}\)). Lin (1993) and Wu (1993) constructed designs to minimize a column-balanced version of this criterion (see also Nguyen, 1996; Li and Wu, 1997; Tang and Wu, 1997; Ryan and Bulutoglu, 2007); others have considered the
same criterion but without the balance constraint (Marley and Woods, 2010; Weese et al., 2015). Similarly, Jones and Majumdar (2014) proposed the _unconditional_\(E(s^{2})\)-criterion, or \(UE(s^{2})=\sum_{0\leq i<j\leq p}s_{ij}^{2}\) that includes the \(s_{0j}^{2}\) elements. Other criteria have been proposed that measure a design's quality in terms of subsets of columns (Deng et al., 1996, 1999; Jones et al., 2009); this approach is consistent with a stepwise or all-subsets analysis (Lin, 1995; Abraham et al., 1999; Westfall et al., 1998; Liu et al., 2007). Chen and Lin (1998) and Sarkar et al. (2009) investigated model support recovery properties of SSDs, but under a sequential least-squares framework. Li and Lin (2002) and Phoa et al. (2009) have advocated for SSDs to be analyzed under penalized estimation to induce sparse estimates. Heuristic criteria such as \(E(s^{2})\) stem from least-squares theory and so are not clearly optimized for such estimation.
This paper develops optimality criteria for SSDs that target maximizing the probability of sign recovery for the lasso estimator (Tibshirani, 1996):
\[(\hat{\beta}_{0},\boldsymbol{\hat{\beta}})=\operatorname*{arg\,min}_{\beta_{ 0},\boldsymbol{\beta}}\frac{1}{2n}||\boldsymbol{y}-\beta_{0}\boldsymbol{1}- \mathbf{X}\boldsymbol{\beta}||_{2}^{2}+\lambda\sum_{j=1}^{p}|\beta_{j}|,\]
where \(\lambda>0\). Denote the estimated support and sign by \(\hat{\mathcal{A}}=\{j:|\hat{\beta}_{j}|>0\}\) and \(\hat{\boldsymbol{z}}\) with elements \(\hat{z}_{j}=\operatorname*{sign}(\hat{\beta}_{j})\), respectively. Then \(P(\hat{\mathcal{A}}=\mathcal{A})\) and \(P(\hat{\boldsymbol{z}}=\boldsymbol{z})\) depend on the unknown parameter values and \(\lambda\). We first propose a local optimal design approach to maximize \(P(\hat{\boldsymbol{z}}=\boldsymbol{z})\) assuming the parameters' values are known and \(\lambda\) is fixed. We then develop criteria that relax the model assumptions and summarize \(P(\hat{\boldsymbol{z}}=\boldsymbol{z})\) across a range of \(\lambda\).
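For a given design and \(\boldsymbol{\beta}\), the sign recovery probability at a fixed \(\lambda\) can be approximated by simulation. The sketch below uses scikit-learn's lasso, whose objective matches the one above with \(\alpha=\lambda\) and an unpenalized intercept; for simplicity it omits the column centering and scaling introduced in Section 2:

```python
import numpy as np
from sklearn.linear_model import Lasso

def sign_recovery_prob(X, beta, lam, n_sim=2000, seed=0):
    """Monte Carlo estimate of P(sign(beta-hat) == sign(beta)) at fixed lambda."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    z = np.sign(beta)
    hits = 0
    for _ in range(n_sim):
        y = X @ beta + rng.standard_normal(n)   # sigma^2 = 1, beta_0 = 0 w.l.o.g.
        fit = Lasso(alpha=lam, fit_intercept=True, max_iter=10_000).fit(X, y)
        hits += np.array_equal(np.sign(fit.coef_), z)
    return hits / n_sim
```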
Some existing work has tried to bridge the gap between heuristic criteria and support
recovery of penalized estimators. Marley and Woods (2010), Draguljic et al. (2014), and Weese et al. (2015) have compared optimal SSDs under different criteria via simulation with the Gauss-Dantzig selector, or GDS (Candes and Tao, 2007), and found little difference between the SSDs in terms of support recovery. One notable exception are \(Var(s+)\)-optimal designs (Weese et al., 2017, 2021) that minimize
\[Var(s)=UE(s^{2})-UE(s)^{2}\text{ such that }\frac{UE^{*}(s^{2})}{UE(s^{2})}>c \text{ and }UE(s)>0\,\]
where \(UE^{*}(s^{2})\) is the optimal \(UE(s^{2})\) value for the given \(n\) and \(p\), and \(UE(s)=\sum_{0\leq i<j\leq p}s_{ij}\). The ideal design under \(Var(s+)\) has an \(\mathbf{S}\) whose off-diagonal \(s_{ij}\)'s have low variance and a small, positive mean. Simulation studies showed \(Var(s+)\)-optimal designs (with \(c=0.8\)) performed similarly to other optimal SSDs under the GDS, but when the nonzero elements of \(\mathbf{z}\) were all positive, the \(Var(s+)\)-optimal design had consistently better support recovery properties. Section 3 provides a rigorous justification for this phenomenon.
Singh and Stufken (2022) also noted the disconnect between the heuristic criteria and penalized estimation. They proposed a Pareto front optimization of two new heuristic criteria motivated by the restricted isometry property (Candes and Tao, 2007) and irrepresentable condition (Gai et al., 2013) of the GDS. These criteria are relaxations of orthogonality and an \(\mathbf{X}\) having such properties achieves desirable behavior of its corresponding GDS. The optimal designs of Singh and Stufken (2022) do not actually possess these two properties, and so may not maximize the probability of support/sign recovery.
The Dantzig selector and its statistical properties are closely related to the lasso estimator (Meinshausen et al., 2007; Lounici, 2008; Bickel et al., 2009; James et al., 2009; Asif and Romberg, 2010). Draguljic et al. (2014) show, in a screening design context, that the two perform similarly in terms of support recovery. The lasso is more mathematically tractable, making its optimal design framework more straightforward. Others have considered the role of \(\mathbf{X}\) in the statistical properties of the lasso, though not in the context
of SSDs. Using the lasso's KKT conditions, Zhao and Yu (2006) identified the strong irrepresentable condition (SIC) on \(\mathbf{X}\) for establishing support recovery consistency (Zhang and Huang, 2008; Jia and Rohe, 2015). Wainwright (2009) and Hastie et al. (2019) have studied random design construction where the rows of \(\mathbf{X}\) are independently generated from \(N(\mathbf{0},\Sigma)\). These random constructions are important to the overall theory of the lasso, but are inappropriate for SSDs.
Based on the lasso's SIC, Deng et al. (2013) constructed SSDs from nearly orthogonal Latin hypercube designs to minimize the off-diagonals of \(\mathbf{S}\) assuming factors have settings between \([0,1]\). This is essentially a construction technique for minimizing \(UE(s^{2})\) with more general factor settings, even though SSDs commonly assume fixed settings of \(\pm 1\) for practical purposes. Xing (2015) proposed a lasso SIC heuristic to construct optimal two-level SSDs. Huang et al. (2020) proposed a lasso optimal design theory that applies variance-based criteria to the approximate covariance matrix of the debiased lasso (Javanmard and Montanari, 2014), which is capable of performing inference via confidence intervals. Under the framework of approximate designs (i.e., \(n\rightarrow\infty\)) they note their criteria are not convex and give an equivalence theorem for establishing whether a design is locally optimal. They then propose an algorithmic construction for generating many local optimal approximate designs and implement a rounding procedure on the approximate design's replication weights to produce an exact design. Overall, their approach requires many stages of approximation, leading to a discrepancy between the approximate and exact design's covariance matrix.
The paper is organized as follows. Section 2 develops exact design optimality criteria targeting the lasso's sign recovery probability under different assumptions about the model. The criteria derive from the lasso's KKT conditions and the primal-dual witness technique (Wainwright, 2009). Section 3 presents a pseudo-approximate optimal design framework that targets the optimal correlation matrix of \(\mathbf{X}\) and justifies the ideal designs sought after
by heuristic criteria under different assumptions about \(\mathbf{z}\). Section 4 describes a computationally efficient algorithm for constructing exact optimal designs under our proposed criteria. We compare SSDs constructed under our new framework to designs constructed by Singh and Stufken (2022) in Section 5. We conclude the paper with a discussion in Section 6, describing important implications of our results and future work.
## 2 Exact Local Optimality Criteria
The lasso is often applied to the centered and scaled design matrix \(\mathbf{F}=(\mathbf{I}-\mathbf{P}_{1})\mathbf{X}\mathbf{V}^{-1/2}\) where \(\mathbf{P}_{1}=n^{-1}\mathbf{1}\mathbf{1}^{T}\) and \(\mathbf{V}\) is a diagonal matrix comprised of the diagonal elements of \(n^{-1}\mathbf{X}^{T}(\mathbf{I}-\mathbf{P}_{1})\mathbf{X}\). The analysis then targets support/sign recovery of \(\mathbf{\beta}^{*}=\mathbf{V}^{1/2}\mathbf{\beta}\). The diagonal elements of \(\mathbf{V}^{1/2}\) are nonnegative and bounded above by 1, making \(|\beta_{j}^{*}|\leq|\beta_{j}|\), and \(\text{sign}(\beta_{j}^{*})=\text{sign}(\beta_{j})\) when \(\mathbf{V}\) has all positive diagonal elements. The support, \(\mathcal{A}\), is estimated by the support of the lasso estimator
\[\hat{\mathbf{\beta}}^{*}=\operatorname*{arg\,min}_{\mathbf{\beta}^{*}}\frac{1}{2}\mathbf{ \beta}^{*T}\mathbf{C}\mathbf{\beta}^{*}-\frac{1}{n}\mathbf{y}^{T}\mathbf{F}\mathbf{\beta}^ {*}+\lambda\sum_{j=1}^{p}|\beta_{j}^{*}|\,\]
where \(\mathbf{C}=n^{-1}\mathbf{F}^{T}\mathbf{F}\) is the correlation matrix of the columns of \(\mathbf{X}\). We denote submatrices of \(\mathbf{X}\) and \(\mathbf{F}\) corresponding to column subsets \(\mathcal{T}\subseteq\{1,\dots,p\}\) by \(\mathbf{X}_{\mathcal{T}}\) and \(\mathbf{F}_{\mathcal{T}}\), respectively. For a \(p\times 1\) vector, \(\mathbf{v}\), \(\mathbf{v}_{\mathcal{T}}\) denotes the \(|\mathcal{T}|\times 1\) vector with the \(\mathcal{T}\) elements of \(\mathbf{v}\). For all other matrices, we will consider submatrices by selecting both rows and columns with two index sets \(\mathcal{U}\) and \(\mathcal{T}\). That is, for a matrix \(\mathbf{M}\), let \(\mathbf{M}_{\mathcal{U}\mathcal{T}}\) denote the submatrix of \(\mathbf{M}\) with rows and columns indexed by \(\mathcal{U}\) and \(\mathcal{T}\), respectively. For brevity, we will denote \(\mathbf{M}_{\mathcal{T}\mathcal{T}}=\mathbf{M}_{\mathcal{T}}\), which should not be confused with a subsetting of columns alone.
For \(\hat{\mathcal{A}}\) and \(\hat{\mathbf{z}}_{\hat{\mathcal{A}}}\), being the support and sign vector of \(\hat{\mathbf{\beta}}^{*}\), respectively, the following KKT
conditions hold (Zhao and Yu, 2006; Tibshirani, 2012):
\[\hat{\boldsymbol{\beta}}_{\hat{\mathcal{A}}}^{*}=\frac{1}{n}\mathbf{C}_{\hat{\mathcal{A}}}^{-1}\mathbf{F}_{\hat{\mathcal{A}}}^{T}\boldsymbol{y}-\lambda\mathbf{C}_{\hat{\mathcal{A}}}^{-1}\hat{\boldsymbol{z}}_{\hat{\mathcal{A}}}\, \tag{1}\]
\[\mathbf{0}<\hat{\mathbf{Z}}_{\hat{\mathcal{A}}}\hat{\boldsymbol{\beta}}_{\hat{\mathcal{A}}}^{*}\, \tag{2}\]
\[\lambda\mathbf{1}\geq\left|\mathbf{C}_{\hat{\mathcal{I}}\hat{\mathcal{A}}}\hat{\boldsymbol{\beta}}_{\hat{\mathcal{A}}}^{*}-\frac{1}{n}\mathbf{F}_{\hat{\mathcal{I}}}^{T}\boldsymbol{y}\right|\, \tag{3}\]
where \(\mathbf{C}_{\mathcal{\hat{A}}}^{-1}=n(\mathbf{F}_{\mathcal{\hat{A}}}^{T} \mathbf{F}_{\mathcal{\hat{A}}})^{-1}\), \(\hat{\mathbf{Z}}_{\mathcal{\hat{A}}}=\mathrm{Diag}(\hat{\mathbf{z}}_{\mathcal{ \hat{A}}})\), and \(\mathcal{\hat{I}}=\{j:\hat{\beta}_{j}=0\}\). For given \(\mathbf{X}\), \(\mathbf{y}\), and \(\mathbf{\beta}\), we can check whether the resulting \(\hat{\mathbf{\beta}}^{*}\) has the true support and sign vector by setting \(\mathcal{\hat{A}}=\mathcal{A}\) and \(\hat{\mathbf{z}}_{\mathcal{\hat{A}}}=\mathbf{z}_{\mathcal{A}}\) and checking the KKT conditions. If they hold, \(\hat{\mathbf{\beta}}^{*}\) recovers the sign, which is more stringent than recovering the support. The proposed criteria in this section rank designs according to the probability of sign recovery: \(P(\hat{\mathbf{z}}=\mathbf{z}\,|\,\mathbf{F},\,\mathbf{\beta})\). To calculate the probability, one must specify a \(\lambda>0\) and a \(\mathbf{\beta}\), so our proposed framework falls in the class of local optimal designs commonly employed with nonlinear models and maximum likelihood estimation (Silvey, 1980; Khuri et al., 2006; Yang and Stufken, 2009).
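The KKT check just described translates directly into code; a minimal NumPy sketch in our notation (with `A` an index array for the putative support and `z_A` its sign vector):

```python
import numpy as np

def kkt_sign_recovery(F, y, A, z_A, lam):
    """Check KKT conditions (1)-(3) with A-hat = A and z-hat_A = z_A.

    Returns True when the lasso solution at penalty `lam` recovers exactly
    this support and sign vector.
    """
    n, p = F.shape
    I = np.setdiff1d(np.arange(p), A)
    C_A_inv = n * np.linalg.inv(F[:, A].T @ F[:, A])
    # (1): stationarity gives the candidate active coefficients
    beta_A = C_A_inv @ F[:, A].T @ y / n - lam * C_A_inv @ z_A
    # (2): active coefficients must carry the assumed signs
    if not np.all(z_A * beta_A > 0):
        return False
    # (3): inactive coordinates must satisfy the subgradient bound
    resid = F[:, I].T @ (F[:, A] @ beta_A) / n - F[:, I].T @ y / n
    return bool(np.all(np.abs(resid) <= lam))
```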
Clearly \(P(\hat{\mathbf{z}}=\mathbf{z}\,|\,\mathbf{F},\,\mathbf{\beta})\) is a joint probability of two events. The first event follows from equations (1) and (2) and checks whether \(\hat{\mathbf{\beta}}_{\mathcal{A}}\) has the sign \(\mathbf{z}_{\mathcal{A}}\):
\[S_{\lambda}\,|\,\mathbf{F}_{\mathcal{A}},\,\boldsymbol{\beta}_{\mathcal{A}}=\left\{\mathbf{u}<\sqrt{n}\mathbf{Z}_{\mathcal{A}}\mathbf{V}_{\mathcal{A}}^{1/2}\boldsymbol{\beta}_{\mathcal{A}}\right\}\, \tag{4}\]
where \(\mathbf{u}\sim N(\lambda\sqrt{n}\,\mathbf{Z}_{\mathcal{A}}\mathbf{C}_{\mathcal{A} }^{-1}\mathbf{z}_{\mathcal{A}},\mathbf{Z}_{\mathcal{A}}\mathbf{C}_{\mathcal{A}}^ {-1}\mathbf{Z}_{\mathcal{A}})\). Note we define the event with respect to \(\mathbf{\beta}_{\mathcal{A}}\) rather than the design-dependent \(\mathbf{\beta}_{\mathcal{A}}^{*}\). The second event follows from equations (1) and (3) and coincides with all \(j\in\mathcal{I}\) having \(\hat{\beta}_{j}^{*}=0\):
\[I_{\lambda}\,|\,\mathbf{F},\,\mathbf{z}=\left\{|\mathbf{v}|\leq\lambda\sqrt{n}\mathbf{ 1}\right\}\, \tag{5}\]
where \(\boldsymbol{v}\sim N(\lambda\sqrt{n}\mathbf{C}_{\mathcal{I}\mathcal{A}}\mathbf{C}_{\mathcal{A}}^{-1}\boldsymbol{z}_{\mathcal{A}},\ \mathbf{C}_{\mathcal{I}}-\mathbf{C}_{\mathcal{I}\mathcal{A}}\mathbf{C}_{\mathcal{A}}^{-1}\mathbf{C}_{\mathcal{A}\mathcal{I}})\). The event depends on \(\boldsymbol{\beta}\) only through \(\boldsymbol{z}\).
**Remark 1** There is a tradeoff between the probabilities of (4) and (5) so both should be considered simultaneously. Deng et al. (2013) and Xing (2015) focus on heuristic criteria based on the lasso's SIC, which focuses on (5) and ignores (4).
**Remark 2** The probability of support recovery, \(P(\hat{\mathcal{A}}=\mathcal{A}\,|\,\mathbf{F},\,\boldsymbol{\beta})\), requires consideration of all possible \(2^{k}\) sign vectors that have the same \(0\) elements as \(\boldsymbol{z}\) but alternative \(\pm 1\) elements indexed by \(\mathcal{A}\). Defining \(\mathcal{Z}_{\mathcal{A}}\) as the set of all such sign vectors, we have \(P(\hat{\mathcal{A}}=\mathcal{A}\,|\,\mathbf{F},\,\boldsymbol{\beta})=\sum_{\tilde{\boldsymbol{z}}\in\mathcal{Z}_{\mathcal{A}}}P(\hat{\boldsymbol{z}}=\tilde{\boldsymbol{z}}\,|\,\mathbf{F},\,\boldsymbol{\beta})\). Each \(P(\hat{\boldsymbol{z}}=\tilde{\boldsymbol{z}}\,|\,\mathbf{F},\,\boldsymbol{\beta})\) may be calculated by replacing \(\boldsymbol{z}_{\mathcal{A}}\) and \(\boldsymbol{z}\) with \(\tilde{\boldsymbol{z}}_{\mathcal{A}}\) and \(\tilde{\boldsymbol{z}}\), respectively, in events (4) and (5).
**Remark 3** As \(|\beta_{j}^{*}|\to\infty\) for all \(j\in\mathcal{A}\), the probabilities of sign recovery and support recovery are equivalent. To see this, note that \(\mathbf{Z}_{\mathcal{A}}\mathbf{V}_{\mathcal{A}}^{1/2}\boldsymbol{\beta}_{\mathcal{A}}=\mathbf{V}_{\mathcal{A}}^{1/2}|\boldsymbol{\beta}_{\mathcal{A}}|\), so \(P(S_{\lambda}\,|\,\mathbf{F}_{\mathcal{A}},\,\boldsymbol{\beta}_{\mathcal{A}})\to 1\). For any other \(\tilde{\boldsymbol{z}}\in\mathcal{Z}_{\mathcal{A}}\), \(\tilde{\mathbf{Z}}_{\mathcal{A}}\mathbf{V}_{\mathcal{A}}^{1/2}\boldsymbol{\beta}_{\mathcal{A}}\) would have negative elements, causing the corresponding \(P(S_{\lambda}\,|\,\mathbf{F}_{\mathcal{A}},\,\boldsymbol{\beta}_{\mathcal{A}})\to 0\). Finally, as \(I_{\lambda}\) is independent of \(|\boldsymbol{\beta}_{\mathcal{A}}|\), \(P(\hat{\mathcal{A}}=\mathcal{A}\,|\,\mathbf{F},\,\boldsymbol{\beta})\to P(\hat{\boldsymbol{z}}=\boldsymbol{z}\,|\,\mathbf{F},\,\boldsymbol{\beta})\).
**Remark 4** We assume \(\mbox{\bf C}_{\mathcal{A}}^{-1}\) always exists, making \(\boldsymbol{u}\) nondegenerate, but \(\boldsymbol{v}\) will have a degenerate multivariate Normal distribution for SSDs because **C** cannot be full rank. The probability of this event can be calculated following some linear transformation of \(\boldsymbol{v}\).
### Criteria assuming known \(\boldsymbol{\beta}\)
For a fixed \(\lambda\) and known \(\boldsymbol{\beta}\), define the local optimality criterion
\[\phi_{\lambda}(\mbox{\bf X}\,|\,\mathbf{\beta})=P(\hat{\boldsymbol {z}}=\boldsymbol{z}\,|\,\mbox{\bf F},\,\mathbf{\beta})=P(S_{\lambda} \cap I_{\lambda}\,|\,\mbox{\bf F},\,\mathbf{\beta}). \tag{6}\]
A \(\phi_{\lambda}\)-optimal design is \(\mbox{\bf X}^{\ast}=\mbox{argmax}_{\mbox{\bf X}}\)\(\phi_{\lambda}(\mbox{\bf X}\,|\,\mathbf{\beta})\). This approach is impractical, particularly due to its perfect knowledge of \(\mathcal{A}\), but it is foundational for the more practical criteria that allow for uncertainty about \(\mathcal{A}\). The following is a fundamental result about the role of \(\boldsymbol{z}\) in \(\phi_{\lambda}(\mbox{\bf X}\,|\,\mathbf{\beta})\). Its proof and all future proofs may be found in the Supplementary Materials.
**Lemma 1**.: _For a given **X** and its **F**, consider a \(\boldsymbol{\beta}\) and its reflection \(-\mathbf{\beta}\). Then_
\[\phi_{\lambda}(\mbox{\bf X}\,|\,\mathbf{\beta})=P(S_{\lambda}\,|\, \mbox{\bf F}_{\mathcal{A}},\,\mathbf{\beta}_{\mathcal{A}})\times P(I_ {\lambda}\,|\,\mbox{\bf F},\boldsymbol{z})=\phi_{\lambda}(\mbox{\bf X}\,|\,- \mathbf{\beta})\,\]
_and an \(\mbox{\bf X}^{\ast}\) optimal for \(\phi_{\lambda}\) under \(\boldsymbol{\beta}\) is also optimal for \(\phi_{\lambda}\) under \(-\mathbf{\beta}\)._
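The product form in Lemma 1 makes \(\phi_{\lambda}\) computable as two separate multivariate normal probabilities. The sketch below assumes balanced \(\pm 1\) columns so that \(\mathbf{V}=\mathbf{I}\), and estimates the (possibly degenerate, cf. Remark 4) rectangle probability for \(\boldsymbol{v}\) by Monte Carlo:

```python
import numpy as np
from scipy.stats import multivariate_normal

def phi_lambda(F, beta, lam, n_mc=100_000, seed=1):
    """phi_lambda(X | beta) = P(S_lam) * P(I_lam), using Lemma 1's product form."""
    n, p = F.shape
    A = np.flatnonzero(beta)
    I = np.setdiff1d(np.arange(p), A)
    z_A = np.sign(beta[A])
    Z_A = np.diag(z_A)
    C = F.T @ F / n
    C_A_inv = np.linalg.inv(C[np.ix_(A, A)])
    # Event (4): u < sqrt(n) * |beta_A|  (taking V = I)
    mean_u = lam * np.sqrt(n) * Z_A @ C_A_inv @ z_A
    cov_u = Z_A @ C_A_inv @ Z_A
    p_S = multivariate_normal(mean_u, cov_u).cdf(np.sqrt(n) * np.abs(beta[A]))
    # Event (5): |v| <= lam * sqrt(n) elementwise; v may be degenerate for SSDs
    mean_v = lam * np.sqrt(n) * C[np.ix_(I, A)] @ C_A_inv @ z_A
    cov_v = C[np.ix_(I, I)] - C[np.ix_(I, A)] @ C_A_inv @ C[np.ix_(A, I)]
    rng = np.random.default_rng(seed)
    v = rng.multivariate_normal(mean_v, cov_v, size=n_mc, check_valid="ignore")
    p_I = np.mean(np.all(np.abs(v) <= lam * np.sqrt(n), axis=1))
    return p_S * p_I
```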
Lemma 1 establishes events \(S_{\lambda}\) and \(I_{\lambda}\) are independent and provides a design equivalence
result. The following equivalence theorem is more general and further simplifies the implementation of our framework:
**Theorem 1**.: _Let \(\mathbf{X}^{*}\) be a locally optimal design for \(\phi_{\lambda}\) under a \(\mathbf{\beta}\) where \(\mathbf{z}_{\mathcal{A}}=\mathbf{1}\). Consider an alternative \(\tilde{\mathbf{\beta}}=\tilde{\mathbf{Z}}\mathbf{\beta}\) where \(\tilde{\mathbf{Z}}\) is any diagonal matrix comprised of \(\pm 1\). Then \(\phi_{\lambda}(\mathbf{X}\,|\,\tilde{\mathbf{\beta}})=\phi_{\lambda}(\mathbf{X}\tilde{\mathbf{ Z}}\,|\,\mathbf{\beta})\) and \(\tilde{\mathbf{X}}^{*}=\mathbf{X}^{*}\tilde{\mathbf{Z}}\) is locally optimal for \(\phi_{\lambda}\) under \(\tilde{\mathbf{\beta}}\)._
A consequence of Theorem 1 is that, when discussing criteria involving a known sign vector, we can assume \(\mathbf{z}_{\mathcal{A}}=\mathbf{1}\) without loss of generality.
The following criterion treats \(\mathbf{z}_{\mathcal{A}}\) as an unknown quantity:
\[\phi_{\lambda}^{\pm}(\mathbf{X}\,|\,\mathbf{\beta})=\frac{1}{2^{k}}\sum_{\tilde{z }\in\mathcal{Z}_{\mathcal{A}}}\phi_{\lambda}(\mathbf{X}\,|\,\tilde{\mathbf{Z}}\mathbf{ \beta})=\frac{1}{2^{k}}\sum_{\tilde{z}\in\mathcal{Z}_{\mathcal{A}}}\phi_{ \lambda}(\mathbf{X}\tilde{\mathbf{Z}}\,|\,\mathbf{\beta}). \tag{7}\]
It is the expected probability of sign recovery assuming all \(\tilde{\mathbf{z}}\in\mathcal{Z}_{\mathcal{A}}\) are equally likely. Calculating all \(2^{k}\) probabilities can be computationally intensive, but Lemma 1 allows us to halve the number of computations. We state this as a corollary:
**Corollary 1**.: _For a fixed \(\mathcal{A}\) where \(|\mathcal{A}|=k\), let \(\mathcal{Z}_{\mathcal{A}}^{\pm}\) denote the subset of \(\mathcal{Z}_{\mathcal{A}}\) including all \(2^{k-1}\) unique \(\mathbf{z}\) up to reflection. Then \(\phi_{\lambda}^{\pm}(\mathbf{X}\,|\,\mathbf{\beta})=2^{-(k-1)}\sum_{\tilde{z}\in \mathcal{Z}_{\mathcal{A}}^{\pm}}\phi_{\lambda}(\mathbf{X}\tilde{\mathbf{Z}}\,|\,\mathbf{ \beta})\)._
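Enumerating \(\mathcal{Z}_{\mathcal{A}}^{\pm}\) is straightforward: fixing the first coordinate to \(+1\) picks exactly one representative from each \(\{\boldsymbol{z},-\boldsymbol{z}\}\) pair. A minimal sketch:

```python
from itertools import product

def signs_up_to_reflection(k):
    """One representative of each {z, -z} pair: fix the first coordinate to +1."""
    return [(1,) + tail for tail in product((1, -1), repeat=k - 1)]

assert len(signs_up_to_reflection(4)) == 2 ** 3  # 2^{k-1} vectors
```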
We now investigate designs that maximize \(\phi_{\lambda}\) and \(\phi_{\lambda}^{\pm}\). Knowledge of \(\mathcal{A}\) when constructing an optimal design under either criterion leads to a trivial construction for the columns \(\mathbf{X}_{\mathcal{I}}^{*}\).
**Proposition 1**.: _For a known \(\mathcal{A}\), there exists a local optimal design \(\mathbf{X}^{*}\) where \((\mathbf{I}-\mathbf{P}_{1})\mathbf{X}_{\mathcal{I}}^{*}=\mathbf{0}\), making \(\phi_{\lambda}(\mathbf{X}^{*}\,|\,\mathbf{\beta})=P(S_{\lambda}\,|\,\mathbf{F}_{\mathcal{A }},\mathbf{\beta}_{\mathcal{A}})\) and \(\phi_{\lambda}^{\pm}(\mathbf{X}^{*}\,|\,\mathbf{\beta})=2^{-(k-1)}\sum_{\tilde{z}\in \mathcal{Z}_{\mathcal{A}}^{\pm}}P(S_{\lambda}\,|\,\mathbf{F}_{\mathcal{A}}\tilde{ \mathbf{Z}},\mathbf{\beta}_{\mathcal{A}})\)._
Local optimal designs exploit knowledge about \(\mathcal{A}\) by completely confounding the columns of the inactive factors with the intercept, making \(P(I_{\lambda}\,|\,\mathbf{F},\,\mathbf{\beta})=1\). Finding the local optimal design then only considers \(\mathbf{X}_{\mathcal{A}}\) and the probability of the \(S_{\lambda}\) event(s). A design influences the probability of this event through \(\mathbf{u}\)'s mean vector and covariance matrix.
An orthogonal \({\bf X}_{\cal A}\) is a strong optimality candidate for \(\phi_{\lambda}^{\pm}\) as its \(P(S_{\lambda}\,|\,{\bf F}_{\cal A},\mathbf{\beta}_{\cal A})\) is invariant to \(\mathbf{z}_{\cal A}\). However, if \(\mathbf{z}_{\cal A}={\bf 1}\) is assumed known, then for any \({\bf X}_{\cal A}\) the bounds of integration for \(S_{\lambda}\) and the mean of \(u\) are proportional to \({\bf V}_{\cal A}^{1/2}\mathbf{\beta}_{\cal A}\) and \(\lambda{\bf C}_{\cal A}^{-1}{\bf 1}\), respectively. Hence, a design that maximizes \({\bf V}_{\cal A}^{1/2}\mathbf{\beta}_{\cal A}-\lambda{\bf C}_{\cal A }^{-1}{\bf 1}\) may maximize \(P(S_{\lambda}\,|\,{\bf F}_{\cal A},\mathbf{\beta}_{\cal A})\), although the covariance matrix also plays a role. For an orthogonal \({\bf X}_{\cal A}\), this difference is \(\mathbf{\beta}_{\cal A}-\lambda{\bf 1}\). We will concern ourselves with finding an \({\bf X}_{\cal A}\) with larger sign recovery probability than an orthogonal design by identifying such a design whose \({\bf V}_{\cal A}^{1/2}\) and \({\bf C}_{\cal A}\) satisfy
\[({\bf I}-{\bf V}_{\cal A}^{1/2})\mathbf{\beta}_{\cal A}\leq({\bf I}- {\bf C}_{\cal A}^{-1})\lambda{\bf 1}. \tag{8}\]
As \(({\bf I}-{\bf V}_{\cal A}^{1/2})\mathbf{\beta}_{\cal A}\geq{\bf 0}\), we need \(({\bf I}-{\bf C}_{\cal A}^{-1})\lambda{\bf 1}\geq{\bf 0}\). For even \(n\geq 6\), construct \({\bf X}_{\cal A}\) by choosing \(k\leq n-1\) columns from the \(n\times n\) matrix
\[\left(\begin{array}{c|c}2{\bf I}-{\bf J}&-{\bf J}\\ \hline{\bf J}&{\bf J}-2{\bf I}\end{array}\right)\, \tag{9}\]
where all \({\bf I}\) and \({\bf J}\) have size \(n/2\times n/2\). Then \({\bf V}_{\cal A}=(1-4/n^{2}){\bf I}\). Let \(k_{1}\) denote the number of columns chosen from the first \(n/2\) columns and \(k_{2}=k-k_{1}\) be the number from the remaining columns. If either \(k_{1}\) or \(k_{2}\) equals 0, \({\bf C}_{\cal A}\) will be completely symmetric with positive off-diagonal elements \(c=1-4n/(n^{2}-4)\). Thus \(({\bf I}-{\bf C}_{\cal A}^{-1})\lambda{\bf 1}\geq{\bf 0}\) and (8) becomes
\[\left(1-\frac{\sqrt{n^{2}-4}}{n}\right)\mathbf{\beta}_{\cal A}\leq \lambda\left(1-\frac{n^{2}-4}{kn^{2}-4k(n+1)+4n}\right){\bf 1}\.\]
When \(k_{1}\geq 1\) and \(k_{2}\geq 1\), \({\bf C}_{\cal A}\) and \({\bf C}_{\cal A}^{-1}\) have a \(2\times 2\) block partitioned form with completely symmetric, diagonal block matrices and constant off-diagonal block matrices. Then \({\bf C}_{\cal A}^{-1}{\bf 1}=(\xi_{1}{\bf 1}_{k_{1}}^{T},\xi_{2}{\bf 1}_{k_{2}}^{T}) ^{T}=\mathbf{\xi}\), and \({\bf C}_{\cal A}\mathbf{\xi}={\bf 1}\), which gives the equations
\[1 = \rho_{11}\xi_{1}+\rho_{12}\xi_{2}\] \[1 = \rho_{21}\xi_{1}+\rho_{22}\xi_{2}\,\]
where \(\rho_{ij}>0\) is the unique row sum for the corresponding \(k_{i}\times k_{j}\) block matrix of \({\bf C}_{\cal A}\).
Defining \(\tilde{k}_{i}=(n-2k_{i})\geq 0\), we have
\[\xi_{1}=\frac{\tilde{k}_{2}(n^{2}-4)}{(n-2)^{2}(k_{1}\tilde{k}_{2}+\tilde{k}_{1} k_{2})+\tilde{k}_{1}\tilde{k}_{2}}\geq 0\, \tag{10}\]
with equality if and only if \(k_{2}=n/2\). A similar expression holds for \(\xi_{2}\) with \(\tilde{k}_{1}\) in the numerator of (10), and \(\xi_{2}=0\) if and only if \(k_{1}=n/2\). Since the diagonal elements of \({\bf C}_{\cal A}\) are \(1\), \(\rho_{ii}\geq 1\) and it follows \({\bf 1}-{\boldsymbol{\xi}}>{\bf 0}\). Hence designs constructed in this way satisfy (8) for some combinations of \({\boldsymbol{\beta}}_{\cal A}\) and \(\lambda\), but whether this implies higher sign recovery probability depends on the covariance matrix of \({\boldsymbol{u}}\).
As a demonstration, for \(n=16\) and \(k=8\), we considered an orthogonal \({\bf X}_{\cal A}\) and two designs based on (9) where \(k_{1}=4\) and \(8\). For all designs, \({\bf C}_{\cal A}^{-1}{\bf 1}=\xi{\bf 1}\) where \(\xi=1\), \(0.1575\), and \(0.1607\) for the orthogonal, \(k_{1}=4\), and \(k_{1}=8\) designs, respectively. For the \(k_{1}=4\) and \(k_{1}=8\) designs, (8) is equivalent to \(\lambda^{-1}{\boldsymbol{\beta}}_{\cal A}\leq 53.92\times{\bf 1}\) and \(\lambda^{-1}{\boldsymbol{\beta}}_{\cal A}\leq 53.72\times{\bf 1}\), respectively. The three designs' \(\phi_{\lambda}\) and \(\phi_{\lambda}^{\pm}\) values were compared across a range of \(\lambda\) and three scenarios for \({\boldsymbol{\beta}}_{\cal A}\): \((0.3,0.4,\ldots,1)^{T}\), \({\bf 1}\), and \(3\times{\bf 1}\). The results for \(\phi_{\lambda}\) are shown in the left panels of Figure 1. For all scenarios, there is a range of large \(\lambda\) values where all designs have \(\phi_{\lambda}=0\), a middle range where the two proposed designs outperform the orthogonal design, and a range of small \(\lambda\) values where the orthogonal design is superior. The orthogonal design improves over the other two designs well before condition (8) is violated, due to the role of the covariance matrices. The improvement is negligible when \({\boldsymbol{\beta}}_{\cal A}=3\times{\bf 1}\), implying the covariance matrix is less important as the elements of \({\boldsymbol{\beta}}_{\cal A}\) increases. The results for \(\phi_{\lambda}^{\pm}\) are shown in the right panels of Figure 1, and clearly favor the orthogonal design.
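The construction in (9) and the quantities \(\mathbf{V}_{\mathcal{A}}\), \(\mathbf{C}_{\mathcal{A}}\), and \(\boldsymbol{\xi}=\mathbf{C}_{\mathcal{A}}^{-1}\mathbf{1}\) from this demonstration can be checked numerically. A sketch for the \(n=16\), \(k_{1}=k_{2}=4\) case, with one illustrative (assumed) choice of columns:

```python
import numpy as np

def design_from_9(n, cols):
    """Select columns `cols` (0-indexed) from the n x n block matrix in (9)."""
    h = n // 2
    I, J = np.eye(h), np.ones((h, h))
    M = np.block([[2 * I - J, -J], [J, J - 2 * I]])
    return M[:, cols]

n = 16
X_A = design_from_9(n, [0, 1, 2, 3, 8, 9, 10, 11])   # k1 = k2 = 4
F_A = X_A - X_A.mean(axis=0)                          # center the columns
V_diag = np.diag(F_A.T @ F_A / n)                     # should equal 1 - 4/n^2
C_A = np.corrcoef(X_A, rowvar=False)                  # correlation matrix C_A
xi = np.linalg.solve(C_A, np.ones(8))                 # C_A^{-1} 1, cf. (10)
print(V_diag.round(4), xi.round(4))
```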
### Relaxed local criteria
To make the criteria more practical, we introduce relaxations regarding the choice of \(\lambda\) and the assumptions of the underlying model. To reduce the dependency on \(\lambda\), we propose two summary measures of \(\phi_{\lambda}\) and \(\phi_{\lambda}^{\pm}\). The first summary takes the maximum sign recovery probability: \(\phi_{\max}({\bf X}\,|\,{\boldsymbol{\beta}})=\max_{\lambda>0}\phi_{\lambda}({ \bf X}\,|\,{\boldsymbol{\beta}})\). However, in practice one will perform some
tuning parameter selection strategy to choose a \(\lambda\), and so the benefits of a design that optimizes \(\phi_{\max}\) will only be realized if the strategy reliably picks the corresponding \(\lambda\) that maximizes \(\phi_{\lambda}\). Hence, we would like a design that maintains large probabilities for a wide range of \(\lambda\). It is common to calculate the lasso solution path with respect to \(\log(\lambda)\). Such a transformation stretches the region of \(\lambda\in(0,1)\), whose corresponding lasso estimates would receive the least amount of penalization. Therefore, we propose the criterion
\[\phi_{\Lambda}(\mathbf{X}\,|\,\boldsymbol{\beta})=\int_{0}^{\infty}\frac{ \phi_{\lambda}(\mathbf{X}\,|\,\boldsymbol{\beta})}{\lambda}\,d\lambda=\int_{ -\infty}^{\infty}\phi_{\exp(\omega)}(\mathbf{X}\,|\,\boldsymbol{\beta})d \omega\.\]
The definitions for \(\phi_{\max}^{\pm}(\mathbf{X}\,|\,\boldsymbol{\beta})\) and \(\phi_{\Lambda}^{\pm}(\mathbf{X}\,|\,\boldsymbol{\beta})\) are obvious.
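Numerically, \(\phi_{\Lambda}\) is a one-dimensional integral on the \(\log(\lambda)\) scale and \(\phi_{\max}\) a grid maximum; a sketch with an arbitrary (assumed) integration window, where `phi` is any callable \(\lambda\mapsto\) probability, such as the `phi_lambda` sketch above with the design and \(\boldsymbol{\beta}\) fixed:

```python
import numpy as np
from scipy.integrate import trapezoid

def phi_Lambda(phi, omega_grid=np.linspace(-8.0, 3.0, 121)):
    """Trapezoid-rule integral of phi(exp(omega)) d omega on the log scale."""
    vals = [phi(np.exp(w)) for w in omega_grid]
    return trapezoid(vals, omega_grid)

def phi_max(phi, omega_grid=np.linspace(-8.0, 3.0, 121)):
    """Grid approximation of the maximum of phi over lambda."""
    return max(phi(np.exp(w)) for w in omega_grid)
```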
Figure 1: Probability of sign recovery for \(n=16\) and \(k=8\) for an orthogonal design and two designs constructed from selecting columns from (9) with \(k_{1}=4\) and \(8\). The left and right panels correspond to \(\phi_{\lambda}\) and \(\phi_{\lambda}^{\pm}\), respectively.

Turning to relaxing assumptions about the model, recall \(\phi_{\lambda}^{\pm}\) already relaxed assumptions about \(\boldsymbol{z}_{\mathcal{A}}\). To relax specification of \(|\boldsymbol{\beta}_{\mathcal{A}}|\), we assume \(|\boldsymbol{\beta}_{\mathcal{A}}|=\beta\mathbf{1}\) where \(\beta\geq 1\) so that an active factor's effect is at least of the same magnitude as the noise. We also fix \(k\) and assume all supports of this size are equally likely. Denote this set of supports by \(\mathcal{A}_{k}\). We first focus on the criterion that incorporates uncertainty about \(\mathcal{A}\), but treats \(\boldsymbol{z}_{\mathcal{A}}\) as known. This resembles models where the \(Var(s+)\)-optimal designs performed best. Suppose the sign \(z_{j}\) depends only on whether \(j\in\mathcal{A}\) or \(\mathcal{I}\), and let \(\boldsymbol{z}^{*}\) be the \(p\times 1\) vector comprised of the elements \(z_{j}\) assuming \(j\in\mathcal{A}\). Following Theorem 1, we assume without loss of generality that \(\boldsymbol{z}^{*}=\boldsymbol{1}\) to identify the optimal design and then multiply the columns of the design by their actual corresponding \(z_{j}^{*}\). For a given \(\mathcal{A}\), let \(\mathbf{A}\) be the \(p\times p\) diagonal matrix of all zeroes except for the diagonal elements corresponding to \(\mathcal{A}\), which are set to \(1\). Then the true sign vector is \(\boldsymbol{z}=\mathbf{A}\boldsymbol{1}\) and the sign-dependent criterion for a given \(\beta\) is
\[\Phi_{\lambda}(\mathbf{X}\,|\,k,\beta)=\binom{p}{k}^{-1}\,\sum_{\mathcal{A} \in\mathcal{A}_{k}}\phi_{\lambda}(\mathbf{X}\,|\,\boldsymbol{\beta}=\beta \mathbf{A}\boldsymbol{1})\,\]
being the average probability across \(\mathcal{A}_{k}\) for a fixed \(\lambda\). The notation \(\Phi\) instead of \(\phi\) indicates consideration of all \(\mathcal{A}\in\mathcal{A}_{k}\). The sign-independent criterion, \(\Phi_{\lambda}^{\pm}(\mathbf{X}\,|\,k,\beta)\), is similarly defined. The two summary measures originally defined on \(\phi_{\lambda}\) and \(\phi_{\lambda}^{\pm}\) are straightforward to define on these new criteria and share similar notation, e.g., \(\Phi_{\max}(\mathbf{X}\,|\,k,\beta)\) and \(\Phi_{\Lambda}(\mathbf{X}\,|\,k,\beta)\). Computational details of the criteria may be found in the Supplementary Materials.
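Averaging over supports is then a direct loop; the sketch below reuses the `phi_lambda` sketch from Section 2 and enumerates all \(\binom{p}{k}\) supports, which is feasible for modest \(p\) and \(k\):

```python
from itertools import combinations
import numpy as np

def Phi_lambda(F, k, beta, lam):
    """Average of phi_lambda over all C(p, k) supports with beta_A = beta * 1.

    Known-sign case (z_A = 1); averaging the inner call over
    signs_up_to_reflection(k) instead gives the sign-independent version.
    """
    p = F.shape[1]
    supports = list(combinations(range(p), k))
    total = 0.0
    for A in supports:
        b = np.zeros(p)
        b[list(A)] = beta
        total += phi_lambda(F, b, lam)
    return total / len(supports)
```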
Optimizing any of the \(\Phi\)-type criteria is analytically and algorithmically challenging. Indeed, simply evaluating such criteria for a given \(\mathbf{X}\) can be cumbersome. We next discuss properties of these criteria and, motivated by approximate design theory (Pukelsheim, 2006), we propose a more tractable framework centered on finding optimal \(\mathbf{C}\) matrices, which we refer to as lasso information matrices. Section 4 will leverage the optimal forms of \(\mathbf{C}\) to efficiently identify optimal, or at least nearly-optimal, exact designs.
## 3 Optimal Lasso Information Matrices
The criteria from Section 2.2 assume \(|\boldsymbol{\beta}_{\mathcal{A}}|=\beta\boldsymbol{1}\) and evaluate an \(\mathbf{X}\) across all possible supports, and so are invariant to column permutations of \(\mathbf{X}\). Additionally, \(\Phi_{\lambda}^{\pm}\) and its summaries across \(\lambda\) are invariant to sign-flipping the columns of \(\mathbf{X}\). Traditional eigenvalue-based criteria, being functions of \(\mathbf{M}=\mathbf{X}^{T}(\mathbf{I}-\mathbf{P}_{1})\mathbf{X}\), also enjoy these two invariance properties, expressed as simultaneous row/column permutations and sign transformations
of \({\bf M}\). Indeed, these properties have been exploited to find optimal forms of \({\bf M}\) when the domain of the functions is expanded to all symmetric, positive definite matrices, denoted \({\cal M}\). For an \({\bf M}\in{\cal M}\), define its permutation-averaged form to be \(\overline{{\bf M}}=(p!)^{-1}\sum_{\mathbf{\Pi}\in{\cal P}}\mathbf{\Pi}\,{\bf M}\,\mathbf{\Pi}^{T}\) where \({\cal P}\) is the set of all \(p\times p\) permutation matrices. Then \(\overline{{\bf M}}\in{\cal M}\) is a completely symmetric matrix. Further averaging of \(\overline{{\bf M}}\) across all \(2^{p}\) sign transformations leads to a matrix proportional to the identity matrix, i.e., an \({\bf M}\) for an orthogonal design. Assuming the criterion is concave, the criterion value for \(\overline{{\bf M}}\) is greater than or equal to that for \({\bf M}\). Hence the search for the optimum \({\bf M}\) may be restricted to the class of completely symmetric matrices in \({\cal M}\).
Similar to defining eigenvalue-based criteria on \({\cal M}\), the criteria in Section 2 can be defined with respect to the design's \({\bf C}\) and \({\bf V}\) matrix. We can then cast the optimal design problem in terms of identifying an optimal pairing, \(({\bf C}^{*},{\bf V}^{*})\), and then try to identify a design having such matrices. This is essentially the approach taken by heuristic orthogonality criteria, presuming \({\bf C}={\bf I}\) is the optimal form. We will argue \({\bf C}={\bf I}\) is the optimal form for summaries of \(\Phi_{\lambda}^{\pm}({\bf X}\,|\,k,\beta)\), and that some \({\bf C}\) with all positive off-diagonals is the optimal form for summaries of the sign-dependent \(\Phi_{\lambda}({\bf X}\,|\,k,\beta)\).
The matrix \({\bf V}\) has diagonal elements between 0 and 1, and only contributes to event \(S_{\lambda}\). Clearly \({\bf V}={\bf I}\) maximizes \(P(S_{\lambda})\), so we fix \({\bf V}={\bf I}\) and optimize the criteria with respect to \({\bf C}\) across all symmetric, nonnegative definite matrices with 1's on the diagonal. If the criteria were concave with respect to all such matrices we could restrict our optimality search to completely symmetric matrices. We have thus far been unable to establish such a property. Our current investigations have led us to conjecture the criteria are log concave for certain combinations of \(\lambda\) and \(\beta\). We comment on this conjecture in a later example. Nonetheless, we will focus on identifying an optimal information matrix among all completely symmetric matrices \({\cal C}=\{{\bf C}=(1-c){\bf I}+c{\bf J}\,|\,-(k-1)^{-1}<c<1\}\). This makes the optimization problem more tractable, as it involves a single unknown value, \(c\), and the criteria, now being
invariant to \(\mathcal{A}\), are equal to \(\phi_{\lambda}\) and \(\phi_{\lambda}^{\pm}\), respectively.
The probabilities for events (4) and (5) for \(\textbf{C}\in\mathcal{C}\) are given in the following lemma:
**Lemma 2**.: _Let \(\textbf{V}=\textbf{I}\) and \(\textbf{C}\in\mathcal{C}\). Then for \(\mathcal{A}\) where \(|\mathcal{A}|=k\) and \(\boldsymbol{\beta}_{\mathcal{A}}\) with sign vector \(\boldsymbol{z}_{\mathcal{A}}\), \(P(S_{\lambda}\,|\,c,\boldsymbol{\beta}_{\mathcal{A}})=P(\boldsymbol{u}<\sqrt{ n}|\boldsymbol{\beta}_{\mathcal{A}}|)\) and \(P(I_{\lambda}\,|\,c,\boldsymbol{z}_{\mathcal{A}})=P(|\boldsymbol{v}|\leq\lambda \sqrt{n}\textbf{1})\) where_
\[\boldsymbol{u} \sim N\left(\frac{\lambda\sqrt{n}}{1-c}\left[\textbf{1}-z_{ \mathcal{A}}\gamma\boldsymbol{z}_{\mathcal{A}}\right],\ \frac{1}{1-c}\left[\textbf{I}_{k}-\gamma\boldsymbol{z}_{\mathcal{A}} \boldsymbol{z}_{\mathcal{A}}^{T}\right]\right)\] \[\boldsymbol{v} \sim N\left(\lambda\sqrt{n}z_{\mathcal{A}}\gamma\textbf{1},(1-c) \left[\textbf{I}+\gamma\textbf{J}\right]\right)\]
_with \(z_{\mathcal{A}}=\textbf{1}^{T}\boldsymbol{z}_{\mathcal{A}}\) and \(\gamma=c/(1+c(k-1))\). Moreover, if \(\boldsymbol{\beta}_{\mathcal{A}}\) does not depend on \(\mathcal{A}\), the probabilities of the two events are constant across all such \(\mathcal{A}\)._
The criterion \(\Phi_{\lambda}(\textbf{X}\,|\,k,\beta)\) meets the latter condition of Lemma 2 and assumes \(\boldsymbol{z}_{\mathcal{A}}=\textbf{1}\), leading to the random vectors
\[\boldsymbol{u}\sim N\left(\frac{\lambda\sqrt{n}}{1+c(k-1)}\textbf{1},\frac{1}{ 1-c}\left[\textbf{I}-\gamma\textbf{J}\right]\right),\qquad\boldsymbol{v}\sim N \left(\lambda\sqrt{n}k\gamma\textbf{1},(1-c)\left[\textbf{I}+\gamma\textbf{J} \right]\right)\.\]
The corresponding criterion defined with respect to \(\mathcal{C}\) and for a fixed \(\lambda\) is denoted
\[\psi_{\lambda}(c\,|\,k,\beta)=P(S_{\lambda}\,|\,c,\boldsymbol{\beta}_{ \mathcal{A}}=\beta\textbf{1})\times P(I_{\lambda}\,|\,c,\boldsymbol{z}_{ \mathcal{A}}=\textbf{1}). \tag{11}\]
The analog to \(\Phi_{\lambda}^{\pm}(\textbf{X}\,|\,k,\beta)\) is denoted
\[\psi_{\lambda}^{\pm}(c\,|\,k,\beta)=\frac{1}{2^{k-1}}\sum_{\tilde{\boldsymbol {z}}\in\mathcal{Z}_{\mathcal{A}}^{\pm}}P(S_{\lambda}\,|\,c,\boldsymbol{\beta} _{\mathcal{A}}=\beta\tilde{\boldsymbol{z}})\times P(I_{\lambda}\,|\,c, \boldsymbol{z}_{\mathcal{A}}=\tilde{\boldsymbol{z}}). \tag{12}\]
The summarized versions of (11) and (12) across \(\lambda\) will be denoted similarly to their exact design counterparts from Section 2, replacing \(\Phi\) with \(\psi\).
The criteria \(\psi_{\lambda}\) and \(\psi_{\lambda}^{\pm}\) involve a single variable, but are still challenging to optimize analytically. Numerical optimization of these criteria, however, is straightforward and computationally efficient. We demonstrate numerical optimization of these new criteria for the situation of \(p=10\), \(k=4\), \(\beta=2\), and \(n=10\). Figure 2 shows the contour plots of \(\psi_{\lambda}\) and \(\psi_{\lambda}^{\pm}\). The optimal \(c\) values for the corresponding \(\psi_{\Lambda}\) and \(\psi_{\Lambda}^{\pm}\) are \(0.14\) and \(0\), respectively. The resulting optimal **C** matrices then match the ideal forms for the \(Var(s+)\) and
\(UE(s^{2})\)-criterion. While these ideal \(\mathbf{C}\) are not possible for SSDs, this example provides further justification for these two heuristic SSD criteria for sign recovery under the lasso.
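The distributions above make \(\psi_{\lambda}(c)\) a one-dimensional function that is cheap to evaluate. A sketch of its computation and of the scalar search over \(c\), using Monte Carlo for the rectangle probability and an arbitrary illustrative \(\lambda\):

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import minimize_scalar

def psi_lambda(c, k, p, n, beta, lam, n_mc=50_000, seed=2):
    """psi_lambda(c | k, beta) of (11) for completely symmetric C, known sign."""
    g = c / (1 + c * (k - 1))                            # gamma from Lemma 2
    mean_u = lam * np.sqrt(n) / (1 + c * (k - 1)) * np.ones(k)
    cov_u = (np.eye(k) - g * np.ones((k, k))) / (1 - c)
    p_S = multivariate_normal(mean_u, cov_u).cdf(np.sqrt(n) * beta * np.ones(k))
    m = p - k                                            # inactive coordinates
    mean_v = lam * np.sqrt(n) * k * g * np.ones(m)
    cov_v = (1 - c) * (np.eye(m) + g * np.ones((m, m)))
    rng = np.random.default_rng(seed)                    # fixed seed: smooth in c
    v = rng.multivariate_normal(mean_v, cov_v, size=n_mc)
    p_I = np.mean(np.all(np.abs(v) <= lam * np.sqrt(n), axis=1))
    return p_S * p_I

# Scalar search over c at one (illustrative) lambda; cf. the psi_Lambda optimum
# c = 0.14 reported for p = 10, k = 4, beta = 2, n = 10.
res = minimize_scalar(lambda c: -psi_lambda(c, k=4, p=10, n=10, beta=2, lam=0.5),
                      bounds=(0.0, 0.9), method="bounded")
print(round(res.x, 3))
```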
If \(\psi_{\lambda}^{\pm}\) were concave in \(\mathcal{C}\) across all \(\lambda\), the matrix averaging technique would prove the optimality of \(c=0\), as \(\mathbf{C}=\mathbf{I}\) is the unique matrix after averaging across all permutations and sign transformations. Figure 2 shows that there are some \(\lambda\) for which \(c=0\) does not maximize \(\psi_{\lambda}^{\pm}\), and so the function cannot be concave or log-concave. For example, for \(\log(\lambda)>0.640\), \(\psi_{\lambda}^{\pm}\) is nearly \(0\) at \(c=0\) but takes a nonzero value for \(c\in(0.2,0.6)\). This phenomenon is due to the improved sign recovery for \(\boldsymbol{z}_{\mathcal{A}}=\mathbf{1}\) for these \(c\) values in this range of \(\log(\lambda)\). However, the summary criteria may still be concave or log concave even if \(\psi_{\lambda}^{\pm}\) is not for all \(\lambda\).
We now study the behavior of \(\psi_{\lambda}\) and \(\psi_{\lambda}^{\pm}\) in a neighborhood about \(c=0\). We were initially optimistic that the inequalities of Sidak (1968) would aid in our study, but their results assume a fixed mean. Direct application of the multivariate Leibniz integral rule gives the following lemma.
**Lemma 3**.: _For all \(\mathcal{A}\) where \(|\mathcal{A}|=k\) and \(\beta>0\), \(\frac{d}{dc}P(I_{\lambda}|c,\boldsymbol{z}_{\mathcal{A}}=\mathbf{1})\big{|}_{ c=0}=0\) for all \(\lambda\) and \(\frac{d}{dc}P(S_{\lambda}|c,\boldsymbol{\beta}_{\mathcal{A}}=\beta\mathbf{1}) \big{|}_{c=0}\geq 0\) for \(\lambda\) satisfying_
\[2\lambda_{n}\geq\frac{g(\tau_{n})}{G(\tau_{n})}\, \tag{13}\]
_where \(G(\cdot)\) and \(g(\cdot)\) represent the \(N(0,1)\) cumulative distribution function and probability
density function, respectively, with \(\lambda_{n}=\lambda\sqrt{n}\) and \(\tau_{n}=\sqrt{n}(\beta-\lambda)\)._
A direct consequence of Lemma 3 is that \(\psi_{\lambda}(0|k,\beta)\) can be improved upon by some \(c>0\). We state this important result as a theorem.
**Theorem 2**.: _For all \(\mathcal{A}\) where \(|\mathcal{A}|=k\), \(\frac{d}{dc}\psi_{\lambda}(c|k,\beta)\big{|}_{c=0}>0\) for \(\lambda\) satisfying (13). Hence there exists some \(c_{\lambda}>0\), where \(\psi_{\lambda}(c_{\lambda}|k,\beta)>\psi_{\lambda}(0|k,\beta)\)._
Applying this result to the situation shown in Figure 2 with known sign, inequality (13) holds for \(\log(\lambda)\geq-22.763\), which covers the entire region of \(\lambda\) values for which \(\psi_{\lambda}(c)>0\). It follows then that some \(c>0\) will maximize the summary measures \(\psi_{\Lambda}(c)\) or \(\psi_{\max}(c)\). Theorem 2 thus provides a mathematically rigorous justification that the ideal designs under the \(Var(s+)\)-criterion maximize the probability of sign recovery for known sign.
The following theorem establishes a similar justification for heuristic orthogonality measures under unknown signs.
**Theorem 3**.: _For all \(\mathcal{A}\) where \(|\mathcal{A}|=k\) and \(\beta>0\), \(\frac{d}{dc}\psi_{\lambda}^{\pm}(c|k,\beta)\big{|}_{c=0}=0\) for all \(\lambda\). Moreover, \(c=0\) is a local maximum for \(\psi_{\lambda}^{\pm}(c|k,\beta)\) when_
\[\begin{split}\frac{q}{\binom{k}{2}}\ \frac{\lambda_{n}g( \lambda_{n})}{G(\Delta_{n})}\left(k(1-\lambda_{n}^{2})+(q-1)\frac{\lambda_{n}g (\lambda_{n})}{G(\Delta_{n})}\right)\leq\\ \frac{g(\tau_{n})}{G(\tau_{n})}\left(\beta_{n}+\lambda_{n}+ \lambda_{n}^{2}\tau_{n}-\left[\frac{\beta_{n}^{2}-\lambda_{n}^{2}}{2}+\beta_{ n}\lambda_{n}\right]\frac{g(\tau_{n})}{G(\tau_{n})}\right)\end{split} \tag{14}\]
_where \(\beta_{n}=\beta\sqrt{n}\) and \(G(\Delta_{n})=G(\lambda_{n})-G(-\lambda_{n})\)._
Applying this result to the situation shown in Figure 2 with unknown sign, inequality (14) holds for \(\log(\lambda)\in[-0.988,0.640]\). For \(\log(\lambda)\) outside this region, there are clearly some \(c>0\) for which \(\psi_{\lambda}^{\pm}(c)>0\), although the probabilities are relatively small. These small probabilities do not influence \(\psi_{\max}^{\pm}\) and have minimal influence on \(\psi_{\Lambda}^{\pm}\), making \(c=0\) a global maximum for both criteria. We conjecture that generally \(c=0\) and some \(c>0\) are global maxima for the unknown and known sign criteria, respectively.
## 4 Exact Design Evaluation and Construction
Section 3 identified optimal forms of completely symmetric \(\mathbf{C}\) matrices under different assumptions about \(\mathbf{z}\). Unfortunately, no SSD exists whose \(\mathbf{C}\) achieves these forms because \(n<p+1\). Ideally, one would implement a design search algorithm that ranks SSDs according to a summary criterion of the \(\Phi_{\lambda}\)- or \(\Phi_{\lambda}^{\pm}\)-criterion. However, these criteria demand intense computations to evaluate a single SSD and search algorithms require many evaluations. Heuristic criteria such as \(UE(s^{2})\) and \(Var(s+)\), however, are computationally efficient and can help identify SSDs that are close to achieving such forms. We now describe an algorithmic construction that reconciles the rigorous but computationally-prohibitive criteria in Section 2 with the computationally-efficient heuristic criteria that were justified in Section 3.
Figure 3 demonstrates the relationship between heuristic measures and the sign recovery criteria across 100 designs with varying \(Var(s)\) and \(UE(s^{2})\) values for \(n=10\), \(p=12\), \(k=4\), and \(\beta=3\). Since \(Var(s+)\) is favorable when \(\mathbf{z}_{\mathcal{A}}=\mathbf{1}\), the left panel of Figure 3 compares \(Var(s)\) values to \(\Phi_{\Lambda}\), while the right panel compares \(UE(s^{2})\) values to \(\Phi_{\Lambda}^{\pm}\). There is a clear correlation between the heuristic measures and the sign recovery criteria, but Figure 3 shows that designs with the same heuristic measure can differ with respect to sign recovery probability. Hence improvements can be made by re-evaluating the optimal heuristic SSDs using the sign recovery criteria.
A computationally-friendly approach to constructing optimal SSDs in terms of our criteria is to first perform a search algorithm on many randomly generated SSDs using one or more heuristic criteria, retaining the best-performing SSDs. The retained designs can then be sifted again based on one of the sign recovery criteria. The design that performs best among these SSDs is then declared "optimal". We refer to this approach as the _Heuristic-Initiated Lasso Sieve_, or HILS, since it utilizes a sequential filter of designs based
on heuristic and lasso sign recovery criteria. Although HILS can significantly reduce the number of \(\Phi_{\lambda}\) evaluations compared to constructing an optimal design using, for instance, a coordinate exchange algorithm (Meyer and Nachtsheim, 1995; Weese et al., 2017), there are settings when even a more modest number of evaluations is not computationally feasible. We now discuss some ways to efficiently evaluate the \(\Phi_{\lambda}\) and \(\Phi_{\lambda}^{\pm}\)-criteria, and hence their corresponding summarized versions.
For a support size of \(k\) and design \(\mathbf{X}\), one full evaluation of \(\Phi_{\lambda}\) and \(\Phi_{\lambda}^{\pm}\) requires consideration of all \(\binom{p}{k}\) supports of size \(k\). For \(\Phi_{\lambda}^{\pm}\), an additional \(2^{k}\) computations are required for each \(\mathcal{A}\) to consider all sign vectors. Evaluating either criterion would benefit from a reduction in the number of supports considered. Randomly sampling from \(\mathcal{A}_{k}\) is intuitive, but may require large subsamples to be representative. As high correlations between two factors can result in diminished lasso support recovery performance due to low \(P(I_{\lambda}\,|\,\mathbf{F},\,\boldsymbol{z})\), a support sampling method that balances (or approximately balances) the number of times pairs of factors appear together in \(\mathcal{A}\) can be advantageous. Smucker and Drew (2015) utilized nearly balanced incomplete block designs (NBIBDs) to approximate a model
Figure 3: Sign recovery probabilities across all supports of size \(k=4\) compared to \(Var(s)\) and \(UE(s^{2})\) values for \(n=10\) and \(p=12\). The left panel compares \(Var(s)\) to \(\Phi_{\Lambda}\) and the right panel shows the relationship between \(UE(s^{2})\) and \(\Phi_{\Lambda}^{\pm}\).
space with only a relatively small number of blocks, in the context of model-robust optimal designs. Hence, we recommend implementing the NBIBD sampling method to adequately represent \(\mathcal{A}_{k}\) with between 64 and 128 supports for modest \(p\) and \(k\) values. We denote such a subset of supports by \(\tilde{\mathcal{A}}_{k}\).
From Lemma 1, for a fixed \(\mathcal{A}\), the probabilities are equal between \(\mathbf{z}_{\mathcal{A}}\) and \(-\mathbf{z}_{\mathcal{A}}\). Thus, only \(2^{k-1}\) sign vectors need be considered for the \(\Phi_{\lambda}^{\pm}\)-criterion. While this cuts the computation in half, prior knowledge from the practitioner can be leveraged to select an even smaller set of representative sign vectors to evaluate. For example, if there is no knowledge on sign direction, the most likely sign vectors are those with an equal, or nearly equal, number of \(\pm 1\). For \(k=10\), one would need to only consider, up to reflection, the 126 possible sign vectors having five \(+1\)'s and five \(-1\)'s rather than all \(2^{10-1}=512\) vectors. There may also be strong prior belief about the signs of some or all of the factors that can reduce the set of sign vectors to a manageable number, or even to a single element. We generally denote such a subset of sign vectors by \(\tilde{\mathcal{Z}}_{k}^{\pm}\).
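For concreteness, the reduction is easy to enumerate; a minimal Python sketch (illustrative only, not part of Algorithm 1):

```python
from itertools import product

def sign_vectors(k, balanced=False):
    """All +-1 vectors of length k up to the reflection z <-> -z (Lemma 1);
    optionally only the balanced ones (equal numbers of +1 and -1)."""
    reps = []
    for z in product((1, -1), repeat=k):
        if z[0] == -1:                 # keep one vector per reflection pair
            continue
        if balanced and sum(z) != 0:
            continue
        reps.append(z)
    return reps

print(len(sign_vectors(10)))                  # 512 = 2^(10-1)
print(len(sign_vectors(10, balanced=True)))   # 126, as in the text
```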
The HILS algorithm is formally stated in Algorithm 1 and specifically focuses on the \(UE(s^{2})\)- and \(Var(s+)\)-criteria as the initiating heuristic measures. The algorithm is easily generalized to include other heuristic measures if desired. Besides the obvious necessary inputs for the desired design size and the desired sign-recovery criterion, the algorithm requires the user to specify the number of initial designs for the construction algorithms for the two heuristic measures as well as the number of such final designs to retain. In the case of \(UE(s^{2})\), one can generate \(UE(s^{2})\)-optimal designs using the techniques described in Jones and Majumdar (2014). For the \(Var(s+)\)-criterion, one can also manipulate the threshold for the \(UE(s^{2})\) efficiency. Additionally, the required \(\Phi\)-criterion input represents the option of evaluating designs using the fixed \(\lambda\) measure (\(\Phi_{\lambda}\)), \(\Phi_{\Lambda}\), or \(\Phi_{\max}\) as well as deciding between \(\Phi\) and \(\Phi^{\pm}\). Optional inputs include \(\tilde{\mathcal{A}}_{k}\), \(\tilde{\mathcal{Z}}_{k}^{\pm}\), as well as a supplementary collection of designs, \(\mathcal{X}_{e}\), such as the Pareto-efficient designs by Singh and Stufken (2022).
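The sieve itself reduces to a two-stage filter. A minimal Python sketch follows; the callables and helper names are hypothetical stand-ins for the heuristic constructions and criterion evaluations of Algorithm 1, not an implementation of the supplementary R code:

```python
def hils(heuristic_searches, phi, extra_designs=(), n_keep=1):
    """Heuristic-Initiated Lasso Sieve (sketch).

    heuristic_searches : callables, each returning a list of candidate
        designs built under a heuristic criterion (e.g. UE(s^2)- or
        Var(s+)-based coordinate exchange runs)
    phi : callable scoring a design by the chosen summarized sign
        recovery criterion (Phi_lambda, Phi_Lambda, Phi_max, or a +- version)
    extra_designs : optional supplementary designs, e.g. Pareto-efficient
        designs
    """
    # Stage 1: cheap heuristic constructions generate the candidate pool.
    candidates = [X for search in heuristic_searches for X in search()]
    candidates += list(extra_designs)
    # Stage 2: re-rank the pool by the expensive sign recovery criterion.
    return sorted(candidates, key=phi, reverse=True)[:n_keep]
```

In the examples of Section 5, the first stage corresponds to the \(UE(s^{2})\)- and \(Var(s+)\)-based coordinate exchange runs.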
## 5 Examples
We now demonstrate optimal SSDs found under our criteria via HILS under two example scenarios. For each case, we compare the optimal SSD under HILS with the Pareto efficient designs (PEDs) of Singh and Stufken (2022) by inspecting the \(\Phi_{\lambda}\) and \(\Phi_{\lambda}^{\pm}\) curves as a function of \(\log(\lambda)\). The PEDs were generated assuming \(\mathbf{z}_{\mathcal{A}}=\mathbf{1}\) and were based on heuristic criteria corresponding to the Dantzig selector.
### Scenario 1: \(n=9\), \(p=10\)
The HILS algorithm was performed for \(k=3\) and \(\beta=3\) assuming unknown signs. In this example, it was feasible for \(\Phi_{\lambda}^{\pm}\) to average over all possible supports and sign vectors. The HILS design was generated from 100 random starting designs with \(m_{u}=50\) for the \(UE(s^{2})\)-criterion and \(m_{v}=50\) for the \(Var(s+)\)-criterion. The construction algorithm for the \(Var(s+)\)-criterion randomly selected \(UE(s^{2})\)-efficiency values from \(\{0.5,0.6,0.7\}\). The PED from Singh and Stufken (2022) was added as an extra design to evaluate. Two HILS-optimal designs were found using \(\Phi_{\max}^{\pm}\) and \(\Phi_{\Lambda}^{\pm}\), denoted by \(\text{HILS}_{\max}\) and \(\text{HILS}_{\Lambda}\), respectively.
Interestingly, both \(\text{HILS}_{\max}\) and \(\text{HILS}_{\Lambda}\) were \(Var(s+)\)- and \(UE(s^{2})\)-optimal. These two designs were instances of separate runs of the \(Var(s+)\) coordinate exchange algorithm and have different \(Var(s)\) values: \(\text{HILS}_{\max}\) had \(Var(s)=2.380\) with \(UE(s)=0.273\), while \(\text{HILS}_{\Lambda}\) had \(Var(s)=2.438\) with \(UE(s)=0.127\). These designs were similar in terms of both \(\Phi_{\max}^{\pm}\) (0.785426 for \(\text{HILS}_{\max}\) and 0.785424 for \(\text{HILS}_{\Lambda}\)) and \(\Phi_{\Lambda}^{\pm}\) (1.1216621 for \(\text{HILS}_{\max}\) and 1.1216624 for \(\text{HILS}_{\Lambda}\)).
The left panel of Figure 4 compares \(\Phi_{\lambda}^{\pm}\) between \(\text{HILS}_{\max}\), \(\text{HILS}_{\Lambda}\), and the PED. The differences in \(\Phi_{\lambda}^{\pm}\) between \(\text{HILS}_{\max}\) and \(\text{HILS}_{\Lambda}\) were on the order of \(10^{-5}\), so only one general HILS-optimal curve is shown. Clearly the HILS-optimal designs outperform the PED in terms of both \(\Phi_{\max}^{\pm}\) and \(\Phi_{\Lambda}^{\pm}\). Since the PED is constructed assuming \(\mathbf{z}_{\mathcal{A}}=\mathbf{1}\) and this scenario assumes no sign information, the performance of the HILS-optimal designs versus the PED was expected. The right panel of Figure 4 shows the contour plots of \(\psi_{\lambda}^{\pm}\), assuming \(\mathbf{C}\) could be completely symmetric. The probability is highest over a larger range of \(\log(\lambda)\) when \(c=0\), which is consistent with Theorem 3. It is unsurprising, then, that the two designs selected by HILS are \(UE(s^{2})\)-optimal, although the \(Var(s+)\)-criterion further improved performance.
As the PED was created assuming \(\mathbf{z}_{\mathcal{A}}=\mathbf{1}\), we compared it to the two HILS designs with \(\Phi_{\lambda}\). As the HILS-optimal designs were also \(Var(s+)\)-optimal, these designs were expected
Figure 4: The left plot shows \(\Phi_{\lambda}^{\pm}\) vs. \(\log(\lambda)\) for the HILS-optimal designs and the PED under scenario 1. The differences in \(\Phi_{\lambda}^{\pm}\) between \(\text{HILS}_{\max}\) and \(\text{HILS}_{\Lambda}\) were indistinguishable, so a general HILS-optimal curve is shown against the PED. The right plot shows the sign recovery probability contour plots if \(\mathbf{C}\) were completely symmetric.
to also perform well. The left panel of Figure 5 compares \(\Phi_{\lambda}\) for \(\text{HILS}_{\max}\), \(\text{HILS}_{\Lambda}\), and the PED. While the differences in \(\Phi_{\lambda}^{\pm}\) between \(\text{HILS}_{\max}\) and \(\text{HILS}_{\Lambda}\) were very small, there are visible differences in \(\Phi_{\lambda}\). Both of the HILS-optimal designs again outperform the PED, but in different ways. Only \(\text{HILS}_{\max}\) had a larger \(\Phi_{\max}\) than the PED, but both HILS designs had a larger \(\Phi_{\Lambda}\). The right panel in Figure 5 shows the sign recovery probability contour plots if \(\mathbf{C}\) were completely symmetric. The probability is highest over a larger range of \(\log(\lambda)\) when \(c>0\), which is consistent with Lemma 3.
### Scenario 2: \(n=14\), \(p=20\)
This scenario assumed known effect directions with \(k=5\) and \(\beta=3\). Only the \(Var(s+)\)-criterion was considered in the HILS algorithm, setting \(m_{v}=50\) with randomly selected \(UE(s^{2})\)-efficiency constraints from \(\{0.5,0.6,0.7,0.8\}\) and a minimum \(UE(s)\) value randomly selected from \(\{0,0.1\}\). The latter was done to further encourage positive off-diagonal values. While these are not exactly \(Var(s+)\) designs, they are similar and provide a wider candidate set of designs. The PED from Singh and Stufken (2022) was also evaluated. The evaluation of \(\Phi_{\max}\) and \(\Phi_{\Lambda}\) proved too computationally costly since there were \(\binom{20}{5}\) possible
Figure 5: The left plot shows \(\Phi_{\lambda}\) vs. \(\log(\lambda)\) for the HILS-optimal designs and the PED under scenario 1 with \(\boldsymbol{z}_{\mathcal{A}}=\mathbf{1}\). The right plot shows the sign recovery probability contour plots if \(\mathbf{C}\) were completely symmetric under scenario 1 with \(\boldsymbol{z}_{\mathcal{A}}=\mathbf{1}\). The optimal \(c\) value in this case is \(c=0.17\).
supports in \(\mathcal{A}_{k}\). Instead, \(\Phi_{\max}\) or \(\Phi_{\Lambda}\) were measured over a subset of \(\mathcal{A}_{k}\) selected using the NBIBD approach in Section 4. HILS selected the PED as best in terms of both \(\Phi_{\max}\) and \(\Phi_{\Lambda}\), but we also report the second best design found by HILS (a \(Var(s+)\)-optimal design with \(UE(s^{2})\)-efficiency 0.8). The left panel of Figure 6 shows \(\Phi_{\lambda}(\mathbf{X}|k,\beta)\) for the two designs and clearly shows the superiority of the PED.
From Lemma 3, it is expected that designs with small, nearly constant, positive \(UE(s)\) values will perform better in the known signs case. This is demonstrated by the right panel of Figure 6. The PED had \(Var(s)=5.861\) with \(UE(s)=0.590\) and \(UE(s^{2})\)-efficiency of 0.859. The \(Var(s+)\)-optimal design had \(Var(s)=5.240\) with \(UE(s)=0.838\) and \(UE(s^{2})\)-efficiency of 0.897. It is unclear from the heuristic measures alone which design will perform better. The PED had smaller \(UE(s)\) but larger \(Var(s)\), while the \(Var(s+)\)-optimal design had larger \(UE(s)\) but smaller \(Var(s)\). This highlights the potential downside of using the \(Var(s+)\)-criterion for choosing an SSD and also stresses the importance of considering not just \(Var(s+)\)-optimal designs in HILS, but also designs that are efficient in terms of \(Var(s+)\).
## 6 Discussion
The SSD literature has predominantly constructed designs by optimizing heuristic criteria that measure a design's proximity to a (nonexistent) orthogonal design. The criteria are tractable in their optimization, in that optimal designs can generally be constructed directly or algorithmically in a reasonable amount of time. However, these criteria are not directly tied to a screening analysis method, so there is no guarantee that the resulting analysis under an optimal design will have good statistical properties. This article resolves this disconnect by optimizing criteria based on the probability of sign recovery under the lasso, which is well-defined even when \(n-1<p\). Our major contributions are:
1. A local optimality criterion assuming known \(\mathbf{\beta}\) and fixed \(\lambda\). A trivial design that confounds all inactive factors with the intercept is shown to be optimal for sign recovery. An exact design construction is given that can improve the probability of sign recovery over an orthogonal design for some \(\lambda\). The design has positive and nearly constant pairwise correlations, following the ideal structure for the \(Var(s+)\)-criterion.
2. More practical criteria that relax the assumptions about \(\mathbf{\beta}\) and \(\lambda\). Such criteria are computationally intensive and hence difficult to optimize both analytically and algorithmically, requiring computations across all supports of a given size and potentially many sign vectors.
3. A study of the optimal form of \(\mathbf{C}\), the lasso information matrix, with and without known \(\mathbf{z}\). By conditioning on completely symmetric matrices, we arrive at a univariate optimization problem. The framework mimics the approximate design approach for least-squares analyses.
4. In the case of known \(\mathbf{z}\), we prove that across all completely symmetric matrices,
\(\mathbf{C}=\mathbf{I}\), i.e., an orthogonal design, is suboptimal for nearly all \(\lambda>0\). The optimal form instead takes on positive, constant off-diagonal elements. In the case of unknown sign information, \(\mathbf{C}=\mathbf{I}\) is shown to be a local maximum for a range of \(\lambda\). These optimal forms rigorously justify the \(UE(s^{2})\)- and \(Var(s+)\)-criterion in the cases of unknown and known \(\mathbf{z}\), respectively. This at least partially resolves the open problem of why \(Var(s+)\)-optimal designs achieved better screening properties in simulations.
5. From our justification of the heuristic criteria, an exact design construction algorithm, named the Heuristic-Initiated Lasso Sieve. Essentially, the proposed criteria are used as a secondary ranking of many high-quality designs generated under heuristic criteria. HILS reconciles the computationally-efficient heuristic criteria with the more statistically sound, but computationally prohibitive, sign recovery criteria.
Our work provides an alternative to using simulation to compare SSDs, which can be tedious and difficult to reproduce independently. In particular, there are at least two reasons to be skeptical of conclusions drawn from simulations in the context of SSDs. First, simulation can be misleading when subtly different versions of complicated statistical procedures are used. In our attempts to reproduce and compare the simulation studies of Singh and Stufken (2022) and Weese et al. (2021), we discovered the regularization methods used were sensitive to two different aspects of the procedure implementation. In particular, these papers used the Gauss-Dantzig selector with different ways of exploring the tuning parameter space. They also differed in how they implemented a secondary thresholding of the estimates which determines a subset of the active factors that should remain active. Typically this threshold removes estimates whose absolute magnitude is less than \(\sigma^{2}\), but thresholding in this way is of dubious reliability because there is no natural way to estimate \(\sigma^{2}\) and thus threshold levels are more or less arbitrary. The approach we present in this paper avoids these difficulties by eliminating the need for simulation at all, in a similar
way that closed-form power analysis procedures are routinely used in simple, replicated experimental design settings.
Another danger of the heuristic criteria is that there is no guarantee the optimal design has any statistical value. For example, there technically exists a \(UE(s^{2})\)-optimal design with \(n=2\) runs and \(p=100\) factors, but this design has no statistical value. Our criteria, however, do reflect statistical value and so can be used to give objective information about design quality. Experimenters can specify a minimum effect size of interest as well as an educated guess regarding the number of factors that are likely to have such an effect size and investigate the sign-recovery probabilities as a function of \(\lambda\). If the maximum average probability is close to 0, it is unlikely that the design can provide reliable sign recovery for the specified effect size. Such a low value could lead to a reconsideration of the runs budget, expected sparsity, and/or the size of effects that are deemed of interest to detect.
Our methodology allows an even deeper investigation, if desired. For example, we can determine whether a low average probability is due to an inability to reliably exclude all inactive effects (the \(I_{\lambda}\) event) or to reliably include all active effects (the \(S_{\lambda}\) event). In screening experiments, we are much more concerned with identifying the true effects than with allowing inactive effects through the filter. This suggests that the \(S_{\lambda}\) event is more important to the experimenter than the \(I_{\lambda}\) event.
There are many avenues of future research to be explored. First, it is important to establish the concavity (or log concavity) of the criteria that are summarized over \(\lambda\). We have already seen that such criteria are not concave for a fixed \(\lambda\). Second, new construction techniques for designs approaching the desired structure of the \(Var(s+)\)-criterion would be incredibly useful. Third, continuing with exact design construction, while our algorithm is faster than that of Singh and Stufken (2022), it is much slower than those for the heuristic criteria. More work is needed to make the design construction algorithms faster. Finally, we are currently extending the criteria to include the case where inactive factors have small, but
nonzero, effects as well as considering the case of criteria for a thresholded lasso.
**SUPPLEMENTARY MATERIAL**
Following the References are proofs of all results as well as computational details on the evaluation of our criteria. R code to identify optimal \(c\) values for the criteria in Section 3 and construct designs with the HILS algorithm, as well as code and designs to replicate the examples in Section 5, may be found at [https://github.com/hkyoung361/Lasso_Optimal_SSD](https://github.com/hkyoung361/Lasso_Optimal_SSD).
|
2303.07796 | Limit laws of maximal Birkhoff sums for circle rotations via quantum
modular forms | In this paper, we show how quantum modular forms naturally arise in the
ergodic theory of circle rotations. Working with the classical Birkhoff sum
$S_N(\alpha)=\sum_{n=1}^N (\{ n \alpha \}-1/2)$, we prove that the maximum and
the minimum as well as certain exponential moments of $S_N(r)$ as functions of
$r \in \mathbb{Q}$ satisfy a direct analogue of Zagier's continuity conjecture,
originally stated for a quantum invariant of the figure-eight knot. As a
corollary, we find the limit distribution of $\max_{0 \le N<M} S_N(\alpha)$ and
$\min_{0 \le N<M} S_N(\alpha)$ with a random $\alpha \in [0,1]$. | Bence Borda | 2023-03-14T11:11:18Z | http://arxiv.org/abs/2303.07796v1 | ###### Abstract
In this paper, we show how quantum modular forms naturally arise in the ergodic theory of circle rotations. Working with the classical Birkhoff sum \(S_{N}(\alpha)=\sum_{n=1}^{N}(\{n\alpha\}-1/2)\), we prove that the maximum and the minimum as well as certain exponential moments of \(S_{N}(r)\) as functions of \(r\in\mathbb{Q}\) satisfy a direct analogue of Zagier's continuity conjecture, originally stated for a quantum invariant of the figure-eight knot. As a corollary, we find the limit distribution of \(\max_{0\leq N<M}S_{N}(\alpha)\) and \(\min_{0\leq N<M}S_{N}(\alpha)\) with a random \(\alpha\in[0,1]\).
**Limit laws of maximal Birkhoff sums for circle rotations via quantum modular forms**
**Bence Borda**
Graz University of Technology
Steyrergasse 30, 8010 Graz, Austria
Email: [email protected]
**Keywords:** continued fraction, Gauss map, Ostrowski expansion, Farey fraction,
quadratic irrational, Kashaev invariant, Sudler product
**Mathematics Subject Classification (2020):** 37A50, 37E10, 11F37, 11K60
## 1 Introduction
The main goal of this paper is to introduce methods originally developed in connection with Zagier's quantum modular forms [22] to the ergodic theory of circle rotations. We demonstrate the power of these tools by considering the classical Birkhoff sum \(S_{N}(\alpha)=\sum_{n=1}^{N}(\{n\alpha\}-1/2)\), where \(\{\cdot\}\) denotes the fractional part function. The history of the sum \(S_{N}(\alpha)\) goes back a hundred years to Hardy and Littlewood [11, 12], Hecke [14] and Ostrowski [19], with the original motivation coming from Diophantine approximation, lattice point counting in triangles and analytic number theory. We have \(S_{N}(\alpha)=o(N)\) for any irrational \(\alpha\), but the precise behavior is rather delicate and depends on the Diophantine properties of \(\alpha\). It is enough to consider \(\alpha\in[0,1]\), and we shall focus on the case of a randomly chosen \(\alpha\).
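As a point of reference for what follows, \(S_{N}(\alpha)\) and its running extrema are elementary to compute; a minimal numerical sketch (floating point suffices for illustration):

```python
import random

def birkhoff_extremes(alpha, M):
    """Running maximum and minimum of S_N(alpha) = sum_{n<=N} ({n alpha} - 1/2)
    over 0 <= N < M."""
    s = smax = smin = 0.0
    for n in range(1, M):
        s += (n * alpha) % 1.0 - 0.5
        smax, smin = max(smax, s), min(smin, s)
    return smax, smin

print(birkhoff_extremes(random.random(), 10**6))
```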
Throughout, \(X\sim\mu\) denotes the fact that a random variable \(X\) has distribution \(\mu\), \(\mu\otimes\nu\) denotes the product measure of \(\mu\) and \(\nu\), and \(\stackrel{{ d}}{{\to}}\) denotes convergence in distribution. The standard stable law of stability parameter \(1\) and skewness parameter \(\pm 1\), denoted by \(\operatorname{Stab}(1,\pm 1)\), is the law with characteristic function \(\exp(-|x|(1\pm i\frac{2}{\pi}\mathrm{sgn}(x)\log|x|))\). The standard stable law of stability parameter \(1\) and skewness parameter \(0\) is in fact the standard Cauchy distribution with characteristic function \(\exp(-|x|)\) and density function \(1/(\pi(1+x^{2}))\), and will be denoted simply by "Cauchy".
The first distributional result is due to Kesten [17], who proved that if \((\alpha,\beta)\sim\mathrm{Unif}([0,1]^{2})\), then
\[\frac{\sum_{n=1}^{N}(\{n\alpha+\beta\}-1/2)}{\sigma\log N}\stackrel{{ d}}{{\to}}\text{Cauchy} \tag{1}\]
as \(N\to\infty\), with an explicit constant \(\sigma>0\). Note that in addition to \(\alpha\), the starting point \(\beta\) of the orbit is also chosen randomly, independently of \(\alpha\). Whether a similar limit law holds for a fixed value of \(\beta\) is still open. Dolgopyat and Sarig [9] showed, however, that for any fixed \(\beta\in\mathbb{R}\) and \((\alpha,N)\sim\mathrm{Unif}([0,1]\times\{1,2,\ldots,M\})\), the limit law (1) holds as \(M\to\infty\) with the different constant \(\sigma=\frac{1}{3\pi\sqrt{3}}\). Let us also mention a theorem of Beck [4] concerning \(\beta=0\), a fixed quadratic irrational \(\alpha\) and \(N\sim\mathrm{Unif}(\{1,2,\ldots,M\})\), in which case \((S_{N}(\alpha)-c_{1}\log N)/(c_{2}\sqrt{\log N})\) converges in distribution to the standard Gaussian with suitable constants \(c_{1}\in\mathbb{R}\) and \(c_{2}>0\) depending on \(\alpha\).
In this paper, we work with \(S_{N}(\alpha)=\sum_{n=1}^{N}(\{n\alpha\}-1/2)\) with the fixed starting point \(\beta=0\), and instead of choosing \(N\) randomly, we consider the extreme values \(\max_{0\leq N<M}S_{N}(\alpha)\) and \(\min_{0\leq N<M}S_{N}(\alpha)\) as well as certain exponential moments of the values \(S_{N}(\alpha)\), \(0\leq N<M\). Our main distributional result is a limit law for the joint distribution of the maximum and the minimum.
**Theorem 1**.: _Let \(\alpha\sim\mu\) with a Borel probability measure \(\mu\) on \([0,1]\) which is absolutely continuous with respect to the Lebesgue measure. Then_
\[\left(\frac{\max_{0\leq N<M}S_{N}(\alpha)-E_{M}}{\sigma_{M}},\frac{\min_{0\leq N <M}S_{N}(\alpha)+E_{M}}{\sigma_{M}}\right)\stackrel{{ d}}{{ \rightarrow}}\mathrm{Stab}(1,1)\otimes\mathrm{Stab}(1,-1)\qquad\text{as }M \rightarrow\infty,\]
_where \(E_{M}=\frac{3}{4\pi^{2}}\log M\log\log M+D_{\infty}\log M\) with some constant \(D_{\infty}\in\mathbb{R}\), and \(\sigma_{M}=\frac{3}{8\pi}\log M\)._
In particular,
\[\frac{\max_{0\leq N<M}S_{N}(\alpha)-E_{M}}{\sigma_{M}}\stackrel{{ d}}{{ \rightarrow}}\mathrm{Stab}(1,1)\qquad\text{and}\qquad\frac{\min_{0\leq N<M}S_ {N}(\alpha)+E_{M}}{\sigma_{M}}\stackrel{{ d}}{{\rightarrow}} \mathrm{Stab}(1,-1).\]
The fact that the limit distribution in Theorem 1 is a product measure means that the maximum and the minimum of \(S_{N}(\alpha)\) are asymptotically independent. The formulation as a joint limit law has the advantage that we immediately obtain limit laws for quantities such as \(\max-\min\) (the diameter of the range of \(S_{N}(\alpha)\), \(0\leq N<M\)), and for \((\max+\min)/2\) (the center of the range) as well:
\[\frac{\max_{0\leq N<M}S_{N}(\alpha)-\min_{0\leq N<M}S_{N}(\alpha)-B_{M}}{2 \sigma_{M}}\stackrel{{ d}}{{\rightarrow}}\mathrm{Stab}(1,1), \quad\frac{\max_{0\leq N<M}S_{N}(\alpha)+\min_{0\leq N<M}S_{N}(\alpha)}{2 \sigma_{M}}\stackrel{{ d}}{{\rightarrow}}\mathrm{Cauchy}\]
with \(B_{M}=2E_{M}+\frac{4}{\pi}(\log 2)\sigma_{M}\). Indeed, if \(X,Y\sim\mathrm{Stab}(1,1)\) are independent random variables, then \(-X\sim\mathrm{Stab}(1,-1)\), \(\frac{X+Y}{2}-\frac{2}{\pi}\log 2\sim\mathrm{Stab}(1,1)\) and \(\frac{X-Y}{2}\sim\mathrm{Cauchy}\), as can be easily seen from the characteristic functions. Theorem 1 similarly implies that
\[\frac{\max_{0\leq N<M}|S_{N}(\alpha)|-E_{M}}{\sigma_{M}}\stackrel{{ d}}{{\rightarrow}}\max\{X,Y\}\qquad\text{as }M\rightarrow\infty.\]
The cumulative distribution function of \(\max\{X,Y\}\) is simply the square of that of \(\mathrm{Stab}(1,1)\).
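Explicitly, writing \(\varphi(t)=\exp(-|t|(1+i\frac{2}{\pi}\mathrm{sgn}(t)\log|t|))\) for the characteristic function of \(\mathrm{Stab}(1,1)\), the identities used above follow from

\[\varphi_{\frac{X+Y}{2}}(t)=\varphi(t/2)^{2}=\varphi(t)e^{i\frac{2}{\pi}(\log 2)t}\qquad\text{and}\qquad\varphi_{\frac{X-Y}{2}}(t)=\varphi(t/2)\overline{\varphi(t/2)}=e^{-|t|},\]

where the first identity uses that halving the argument shifts \(\log|t|\) by \(-\log 2\), and the second uses \(\varphi(-t)=\overline{\varphi(t)}\).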
Limit laws of Birkhoff sums for circle rotations \(\sum_{n=1}^{N}f(n\alpha+\beta)\) with some of the parameters \(N,\alpha,\beta\) chosen randomly have also been established for other \(1\)-periodic functions \(f\), such as the indicator of a subinterval of \([0,1]\) extended with period \(1\), or smooth functions with a logarithmic or power singularity. We refer to [8] for an exhaustive survey. In an upcoming paper we will prove similar limit laws for the maximum and the minimum of \(\sum_{n=1}^{N}f(n\alpha)\) with \(f\) the indicator of a subinterval of \([0,1]\) extended with period \(1\), using methods unrelated to the present paper.
Our approach relies on continued fractions and Ostrowski's explicit formula for \(S_{N}(\alpha)\), see Lemma 9 below. We will actually work with \(S_{N}(r)\) with rational \(r\) instead of an irrational \(\alpha\), and eventually let \(r\) be a suitable best rational approximation to a random \(\alpha\). As the main ingredient in the proof of our limit laws, we will show that while \(\max_{0\leq N<q}S_{N}(r)\) and \(\min_{0\leq N<q}S_{N}(r)\) are rather complicated as functions of the variable \(r\in(0,1)\cap\mathbb{Q}\), the functions
\[h_{\infty}(r)=\max_{0\leq N<q}S_{N}(r)-\max_{0\leq N<q^{\prime}}S_{N}(T^{2}r) \qquad\text{and}\qquad h_{-\infty}(r)=\min_{0\leq N<q}S_{N}(r)-\min_{0\leq N<q^ {\prime}}S_{N}(T^{2}r)\]
have better analytic properties in the sense that they can be extended to almost everywhere continuous functions on \([0,1]\); see Figures 1 and 2 below. Here \(T^{2}\) is the second iterate of the Gauss map, and \(q\) resp. \(q^{\prime}\) denotes the denominator of \(r\) resp. \(T^{2}r\) in their reduced forms. This makes the
functions \(\max_{0\leq N<q}S_{N}(r)\) and \(\min_{0\leq N<q}S_{N}(r)\) close relatives of Zagier's quantum modular forms, an observation we believe to be of independent interest.
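All of the quantities above are easy to experiment with; a minimal Python sketch in exact rational arithmetic, so that the reduced denominators \(q\) and \(q^{\prime}\) are handled automatically:

```python
from fractions import Fraction

def S(N, r):
    """Birkhoff sum S_N(r) = sum_{n=1}^N ({n r} - 1/2), computed exactly."""
    return sum((n * r) % 1 - Fraction(1, 2) for n in range(1, N + 1))

def gauss(r):
    """Gauss map T r = {1/r} (with T 0 = 0) on rationals."""
    return Fraction(0) if r == 0 else (1 / r) % 1

def log_J_inf(r):
    """max_{0 <= N < q} S_N(r), where q is the reduced denominator of r."""
    s, best = Fraction(0), Fraction(0)
    for n in range(1, r.denominator):
        s += (n * r) % 1 - Fraction(1, 2)
        best = max(best, s)
    return best

def h_inf(r):
    """h_inf(r) = max_{N<q} S_N(r) - max_{N<q'} S_N(T^2 r)."""
    return log_J_inf(r) - log_J_inf(gauss(gauss(r)))
```

Replacing the maximum by the minimum gives \(h_{-\infty}\) in the same way.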
We argue that \(S_{N}(\alpha)\) shows a close similarity to \(\tilde{S}_{N}(\alpha)=\sum_{n=1}^{N}\log|2\sin(\pi n\alpha)|\), the Birkhoff sum with the \(1\)-periodic function \(\log|2\sin(\pi x)|\) having logarithmic singularities at integers. This similarity is not surprising considering that \(\tilde{S}_{N}(\alpha)\) and \(\pi S_{N}(\alpha)\) are the real and the imaginary part of the complex-valued Birkhoff sum \(\sum_{n=1}^{N}\log(1-e^{2\pi in\alpha})\), defined with the principal branch of the logarithm. Note that \(e^{\tilde{S}_{N}(\alpha)}=\prod_{n=1}^{N}|1-e^{2\pi in\alpha}|\) is the so-called Sudler product, a classical object in its own right introduced by Sudler [21] and Erdos and Szekeres [10]. Confirming a conjecture of Zagier, in a recent paper Aistleitner and the author [1] proved that while \(\max_{0\leq N<q}\tilde{S}_{N}(r)\) and \(\min_{0\leq N<q}\tilde{S}_{N}(r)\) exhibit complicated behavior, the functions
\[\tilde{h}_{\infty}(r)=\max_{0\leq N<q}\tilde{S}_{N}(r)-\max_{0\leq N<q^{ \prime}}\tilde{S}_{N}(Tr)\qquad\text{and}\qquad\tilde{h}_{-\infty}(r)=\min_{ 0\leq N<q}\tilde{S}_{N}(r)-\min_{0\leq N<q^{\prime}}\tilde{S}_{N}(Tr)\]
can be extended to almost everywhere continuous functions on \([0,1]\). The results of the present paper suggest that such behavior is more prevalent than the original scope of Zagier's continuity conjecture.
It is rather surprising that the functions \(h_{\pm\infty}\) and \(\tilde{h}_{\pm\infty}\) with such a pathological behavior hold the key to limit laws such as Theorem 1. Improving our earlier result [7, Theorem 10], in this paper we also prove that if \(\alpha\sim\mu\) with an absolutely continuous probability measure \(\mu\) on \([0,1]\), then
\[\frac{\max_{0\leq N<M}\tilde{S}_{N}(\alpha)-\tilde{E}_{M}}{\tilde{\sigma}_{M} }\stackrel{{ d}}{{\to}}\mathrm{Stab}(1,1)\qquad\text{as $M\to\infty$}, \tag{2}\]
where \(\tilde{E}_{M}=\frac{3\mathrm{Vol}(4_{1})}{\pi^{3}}\log M\log\log M+\tilde{D}_{ \infty}\log M\) and \(\tilde{\sigma}_{M}=\frac{3\mathrm{Vol}(4_{1})}{2\pi^{2}}\log M\), with
\[\mathrm{Vol}(4_{1})=4\pi\int_{0}^{5/6}\log|2\sin(\pi x)|\,\mathrm{d}x=2.02988\ldots\]
denoting the hyperbolic volume of the complement of the figure-eight knot (see Section 2) and some constant \(\tilde{D}_{\infty}\in\mathbb{R}\). The maximum and the minimum now determine each other via the relation
\[\max_{0\leq N<M}\tilde{S}_{N}(\alpha)+\min_{0\leq N<M}\tilde{S}_{N}(\alpha)= \log M+o(\log M)\qquad\text{in $\mu$-measure},\]
which easily follows from [2, Eq. (17)]. This immediately yields a limit law for \(\min_{0\leq N<M}\tilde{S}_{N}(\alpha)\) as well, and shows that in contrast to Theorem 1, the joint distribution of
\[\left(\frac{\max_{0\leq N<M}\tilde{S}_{N}(\alpha)-\tilde{E}_{M}}{\tilde{ \sigma}_{M}},\frac{\min_{0\leq N<M}\tilde{S}_{N}(\alpha)+\tilde{E}_{M}}{\tilde {\sigma}_{M}}\right)\]
converges to a probability measure supported on a straight line in \(\mathbb{R}^{2}\) instead of a product measure. The difference in the definition of \(h_{\pm\infty}\) and \(\tilde{h}_{\pm\infty}\) (second vs. first iterate of the Gauss map) and in the joint behavior of the maximum and the minimum (asymptotically independent vs. asymptotically deterministic) ultimately boils down to the fact that \(S_{N}(\alpha)\) is odd, whereas \(\tilde{S}_{N}(\alpha)\) is even in the variable \(\alpha\). See also [13, 18] for the asymptotics of \(\tilde{S}_{N}(\alpha)\) at a.e. \(\alpha\).
In contrast to random reals, for a badly approximable irrational \(\alpha\) we have \(S_{N}(\alpha)=O(\log N)\), and this is sharp since
\[\limsup_{N\to\infty}\frac{S_{N}(\alpha)}{\log N}>0\qquad\text{and}\qquad\liminf _{N\to\infty}\frac{S_{N}(\alpha)}{\log N}<0, \tag{3}\]
as shown by Ostrowski [19]. For a quadratic irrational \(\alpha\), we can say more: general results of Schoissengeier [20] on \(S_{N}(\alpha)\) immediately imply that
\[\max_{0\leq N<M}S_{N}(\alpha)=C_{\infty}(\alpha)\log M+O(1)\qquad\text{and} \qquad\min_{0\leq N<M}S_{N}(\alpha)=C_{-\infty}(\alpha)\log M+O(1) \tag{4}\]
with some explicitly computable constants \(C_{\infty}(\alpha)>0\) and \(C_{-\infty}(\alpha)<0\), and implied constants depending only on \(\alpha\). Note that \(C_{\infty}(\alpha)\) resp. \(C_{-\infty}(\alpha)\) is the value of the limsup resp. liminf in (3). For example, we have
\[C_{\pm\infty}(\sqrt{2})=\pm\frac{1}{8\log(1+\sqrt{2})},\quad C_{\infty}(\sqrt {3})=\frac{1}{4\log(2+\sqrt{3})},\quad C_{-\infty}(\sqrt{3})=-\frac{1}{12\log( 2+\sqrt{3})}.\]
Similar results hold for \(\tilde{S}_{N}(\alpha)\). For all badly approximable irrational \(\alpha\) we have \(\tilde{S}_{N}(\alpha)=O(\log N)\), and this is sharp since \(\limsup_{N\to\infty}\tilde{S}_{N}(\alpha)/\log N\geq 1\) for all (not necessarily badly approximable) irrationals [18]. For a quadratic irrational \(\alpha\), we similarly have [2]
\[\max_{0\leq N<M}\tilde{S}_{N}(\alpha)=\tilde{C}_{\infty}(\alpha)\log M+O(1) \qquad\text{and}\qquad\min_{0\leq N<M}\tilde{S}_{N}(\alpha)=\tilde{C}_{-\infty }(\alpha)\log M+O(1).\]
Here the constants \(\tilde{C}_{\infty}(\alpha)\geq 1\) and \(\tilde{C}_{-\infty}(\alpha)\leq 0\) are related by \(\tilde{C}_{\infty}(\alpha)+\tilde{C}_{-\infty}(\alpha)=1\), but their explicit value is known only for a few simple quadratic irrationals such as the golden mean or \(\sqrt{2}\) (in both cases \(\tilde{C}_{\infty}=1\) and \(\tilde{C}_{-\infty}=0\)). Thus, once again, the maximum and the minimum of \(\tilde{S}_{N}(\alpha)\) determine each other, unlike those of \(S_{N}(\alpha)\) for which the constants \(C_{\infty}(\alpha)\) and \(C_{-\infty}(\alpha)\) do not satisfy a simple relation. We refer to our earlier paper [7] for a central limit theorem for the joint distribution of \((S_{N}(\alpha),\tilde{S}_{N}(\alpha))\) with a fixed quadratic irrational \(\alpha\) and \(N\sim\text{Unif}(\{1,2,\ldots,M\})\).
We elaborate on the connection to quantum modular forms, and state our main related results in Section 2. The main limit laws, including more general forms of Theorem 1 and formula (2) together with analogue results for random rationals are stated in Section 3. The proofs are given in Sections 4, 5 and 6.
## 2 Connections to quantum modular forms
A quantum modular form is a real- or complex-valued function \(f\) defined on \(\mathbb{P}^{1}(\mathbb{Q})=\mathbb{Q}\cup\{\infty\}\) (except perhaps at finitely many points) which satisfies a certain approximate modularity relation under the action of \(\text{SL}(2,\mathbb{Z})\) with fractional linear transformations on \(\mathbb{P}^{1}(\mathbb{Q})\). Instead of stipulating \(f(\gamma r)=f(r)\) for any \(\gamma\in\text{SL}(2,\mathbb{Z})\) (true modularity), the functions \(h_{\gamma}(r)=f(\gamma r)-f(r)\) are required, roughly speaking, to enjoy better continuity/analyticity properties than \(f\) itself in the real topology on \(\mathbb{P}^{1}(\mathbb{Q})\) (approximate modularity). Most known examples of quantum modular forms come from algebraic topology or analytic number theory.
Given a parameter \(-\infty\leq p\leq\infty\), \(p\neq 0\) and a rational number \(r\) whose denominator in its reduced form is \(q\), define
\[\tilde{J}_{p}(r)=\left(\sum_{N=1}^{q-1}\prod_{n=1}^{N}|1-e^{2\pi inr}|^{p} \right)^{1/p}\qquad p\neq\pm\infty,0,\]
and
\[\tilde{J}_{\infty}(r)=\max_{0\leq N<q}\prod_{n=1}^{N}|1-e^{2\pi inr}|,\qquad \tilde{J}_{-\infty}(r)=\min_{0\leq N<q}\prod_{n=1}^{N}|1-e^{2\pi inr}|,\]
where \(\prod_{n=1}^{N}|1-e^{2\pi inr}|\) is the Sudler product. The function \(\tilde{J}_{p}(r)\) is \(1\)-periodic and even in the variable \(r\), and by [2, Proposition 2] it also satisfies the identity \(\tilde{J}_{-p}(r)=q/\tilde{J}_{p}(r)\).
The original motivation came from algebraic topology, as \(\tilde{J}_{2}^{2}\) is (an extension of) the so-called Kashaev invariant of the figure-eight knot \(4_{1}\). The asymptotics along the sequence of rationals \(r=1/q\), \(q\in\mathbb{N}\) is
\[\log\tilde{J}_{2}(1/q)=\frac{\operatorname{Vol}(4_{1})}{4\pi}q+\frac{3}{4}\log q -\frac{1}{8}\log 3+o(1)\qquad\text{as $q\to\infty$}, \tag{5}\]
where \(\operatorname{Vol}(4_{1})\) is the hyperbolic volume of the complement of the figure-eight knot [3]. A similar asymptotic result for the Kashaev invariant of general hyperbolic knots is known as the volume conjecture, with a full asymptotic expansion in \(q\) predicted by the arithmeticity conjecture. Both conjectures have been solved for certain simple hyperbolic knots such as the figure-eight knot, but are open in general.
Calling \(\tilde{J}_{2}\) "the most mysterious and in many ways the most interesting" example of a quantum modular form, Zagier [22] formulated several conjectures about its behavior under the action of \(\operatorname{SL}(2,\mathbb{Z})\) on its argument by fractional linear transformations, including a far-reaching generalization of (5) known as the modularity conjecture. Zagier's modularity conjecture has a more general form which applies to all hyperbolic knots, but it has only been solved for certain simple knots such as the figure-eight knot [6], and remains open in general. We refer to [6] for further discussion on the arithmetic properties of quantum invariants of hyperbolic knots.
Since the fractional linear maps \(r\mapsto r+1\) and \(r\mapsto-1/r\) generate the full modular group, and the first of these transformations acts trivially on the argument of \(\tilde{J}_{p}(r)\), the function
\[\tilde{h}_{p}(r)=\log\frac{\tilde{J}_{p}(r)}{\tilde{J}_{p}(-1/r)},\qquad r\in \mathbb{Q}\backslash\{0\}\]
is the key to understanding the action of \(\operatorname{SL}(2,\mathbb{Z})\). Observe that \(\tilde{h}_{-p}(r)=-\tilde{h}_{p}(r)\), hence it is enough to consider \(p>0\). Numerical evidence presented by Zagier suggests that \(\tilde{h}_{2}\) is continuous but not differentiable at every irrational, and that it has a jump discontinuity at every rational but is smooth as we approach a rational from one side. The continuity of \(\tilde{h}_{2}\) at all irrationals is now known as Zagier's continuity conjecture. Aistleitner and the author [1] proved that \(\tilde{h}_{p}\) can be extended to a function on \(\mathbb{R}\) which is continuous at every irrational \(\alpha=[a_{0};a_{1},a_{2},\ldots]\) such that \(\sup_{k\in\mathbb{N}}a_{k}=\infty\), thereby confirming Zagier's continuity conjecture almost everywhere. In the same paper it was further shown that
\[\tilde{h}_{p}(r)=\frac{\operatorname{Vol}(4_{1})}{4\pi r}+O\left(1+\log\frac {1}{r}\right),\qquad r\in(0,1)\cap\mathbb{Q} \tag{6}\]
with an implied constant depending only on \(p\) (but it is uniform once \(p\) is bounded away from \(0\)). Numerical experiments suggest that in fact
\[\tilde{h}_{p}(r)=\frac{\operatorname{Vol}(4_{1})}{4\pi r}+\frac{p+1}{2p}\log \frac{1}{r}+O(1),\qquad r\in(0,1)\cap\mathbb{Q}.\]
Note that in [1] these results were stated only for \(p=2\), but the proof works mutatis mutandis for all \(0<p\leq\infty\).
In this paper, we interpret \(\tilde{J}_{p}\) as a natural quantity related to the Birkhoff sum \(\tilde{S}_{N}(r)=\sum_{n=1}^{N}\log|2\sin(\pi nr)|\), and \(\tilde{h}_{p}\) as the key to understanding the action of the Gauss map \(T\) on the argument of \(\tilde{J}_{p}\). Recall that \(T:[0,1)\to[0,1)\) is defined as \(Tx=\{1/x\}\), \(x\neq 0\) and \(T0=0\), thus \(\tilde{h}_{p}(r)=\log(\tilde{J}_{p}(r)/\tilde{J}_{p}(Tr))\). We show that the Birkhoff sum \(S_{N}(r)=\sum_{n=1}^{N}(\{nr\}-1/2)\) yields a function \(J_{p}(r)\) which exhibits remarkable similarity to \(\tilde{J}_{p}(r)\), thus demonstrating that quantum modular behavior can also naturally arise in ergodic theory. It would be very interesting to find further examples of Birkhoff sums, either for circle rotations or more general dynamical systems, with a similarly rich arithmetic structure.
Given a parameter \(-\infty\leq p\leq\infty\), \(p\neq 0\) and a rational number \(r\) whose denominator in its reduced form is \(q\), we thus define
\[J_{p}(r)=\left(\sum_{N=0}^{q-1}e^{pS_{N}(r)}\right)^{1/p},\qquad p\neq\pm\infty,0,\]
and
\[J_{\infty}(r)=\max_{0\leq N<q}e^{S_{N}(r)},\qquad J_{-\infty}(r)=\min_{0\leq N <q}e^{S_{N}(r)}.\]
Note that these are perfect analogues of \(\tilde{J}_{p}(r)\) with \(S_{N}(r)\) playing the role of \(\tilde{S}_{N}(r)\). Using the fact that \(S_{N}(r)\) is \(1\)-periodic and odd in the variable \(r\), we immediately observe the identities \(J_{p}(r+1)=J_{p}(r)\) and \(J_{-p}(r)=1/J_{p}(-r)\). In order to reveal the arithmetic structure of \(J_{p}\), we introduce the function
\[h_{p}(r)=\log\frac{J_{p}(r)}{J_{p}(T^{2}r)},\qquad r\in[0,1)\cap\mathbb{Q},\]
where \(T^{2}\) is the second iterate of the Gauss map.
The analogue of (5) for \(J_{p}\) is completely straightforward. Indeed, for \(r=1/q\), \(q\in\mathbb{N}\), we have
\[S_{N}(1/q)=\sum_{n=1}^{N}\left(\frac{n}{q}-\frac{1}{2}\right)=\frac{N(N+1-q)} {2q},\qquad 0\leq N<q,\]
and it is an easy exercise to show that (cf. Lemma 15 below)
\[h_{p}(1/q)=\log J_{p}(1/q)=\left\{\begin{array}{ll}\frac{1}{p}\log\frac{2}{ 1-e^{-p/2}}+o(1)&\mbox{if $0<p<\infty$},\\ -\frac{q}{8}+\frac{1}{2p}\log\frac{2\pi q}{|p|}+\frac{1}{4}+o(1)&\mbox{if $-\infty<p<0$} \end{array}\right.\qquad\mbox{as $q\to\infty$}. \tag{7}\]
Since \(S_{N}(1/q)\), \(0\leq N<q\) attains its maximum at \(N=0,q-1\) and its minimum at \(N=\lfloor\frac{q-1}{2}\rfloor,\lceil\frac{q-1}{2}\rceil\), for \(p=\pm\infty\) we even have the explicit formulas
\[h_{\infty}(1/q)=\log J_{\infty}(1/q)=0\qquad\mbox{and}\qquad h_{-\infty}(1/q) =\log J_{-\infty}(1/q)=-\frac{q}{8}+\frac{1}{4}-\frac{1}{8q}\mathds{1}_{\{q \mbox{\tiny odd}\}}.\]
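Both closed forms are easy to confirm numerically; a short check reusing the functions S and log_J_inf from the exact-arithmetic sketch in the introduction:

```python
# h_inf(1/q) = 0 and the closed form for h_{-inf}(1/q), checked exactly
# (reuses Fraction, S and log_J_inf from the sketch in the introduction).
for q in range(2, 40):
    r = Fraction(1, q)
    assert log_J_inf(r) == 0
    m = min(S(N, r) for N in range(q))          # log J_{-inf}(1/q)
    assert m == -Fraction(q, 8) + Fraction(1, 4) - (q % 2) * Fraction(1, 8 * q)
```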
Figure 1: The function \(\log J_{\infty}(r)=\max_{0\leq N<q}S_{N}(r)\) evaluated at all reduced rationals in \([0,1]\) with denominator at most \(150\). The graph of \(\log J_{p}(r)\) with \(0<p<\infty\) looks very similar, whereas the graph of \(\log J_{-p}(r)=-\log J_{p}(-r)\) is obtained by reflections.
As a direct analogue of (6), we establish a far-reaching generalization of the asymptotics (7) to general rationals.
**Theorem 2**.: _For any \(-\infty\leq p\leq\infty\), \(p\neq 0\) and any \(r\in(0,1)\cap\mathbb{Q}\),_
\[h_{p}(r)=\left\{\begin{array}{ll}\mathds{1}_{\{Tr\neq 0\}}\left(\frac{1}{8Tr}+ \frac{1}{2p}\log\frac{1}{Tr}\right)+O\left(\max\{1,\frac{1}{p}\log\frac{1}{p} \}\right)&\mbox{if }p>0,\\ -\frac{1}{8r}+\frac{1}{2p}\log\frac{1}{r}+O\left(\max\{1,\frac{1}{|p|}\log \frac{1}{|p|}\}\right)&\mbox{if }p<0\end{array}\right.\]
_with a universal implied constant._
We can express Theorem 2 in terms of the continued fraction expansion \(r=[0;a_{1},a_{2},\ldots,a_{L}]\) of \(r\in(0,1)\cap\mathbb{Q}\) as
\[h_{p}(r)=\left\{\begin{array}{ll}\frac{a_{2}}{8}+\frac{1}{2p}\log a_{2}+O \left(\max\{1,\frac{1}{p}\log\frac{1}{p}\}\right)&\mbox{if }p>0,\\ -\frac{a_{1}}{8}+\frac{1}{2p}\log a_{1}+O\left(\max\{1,\frac{1}{|p|}\log\frac{ 1}{|p|}\}\right)&\mbox{if }p<0.\end{array}\right.\]
**Remark**.: In all our results, it does not matter which of the two possible continued fraction expansions we choose for a rational number. In particular, to avoid the tedious case distinction between the length of the continued fraction being \(L=1\) or \(L\geq 2\), we consider the second partial quotient of \(r=1/q=[0;q]=[0;q-1,1]\) (when \(Tr=0\)) to be well defined as \(a_{2}=1\).
Our next result concerns the continuity of \(h_{p}\) at irrationals, as an analogue of Zagier's continuity conjecture. For the sake of readability, from now on we use the notation
\[\varepsilon_{p}=\left\{\begin{array}{ll}2&\mbox{if }p>0,\\ 1&\mbox{if }p<0.\end{array}\right. \tag{8}\]
**Theorem 3**.: _Let \(-\infty\leq p\leq\infty\), \(p\neq 0\), and let \(\alpha\in(0,1)\) be an irrational whose continued fraction expansion \(\alpha=[0;a_{1},a_{2},\ldots]\) satisfies \(\sup_{k\in\mathbb{N}}a_{2k+\varepsilon_{p}}=\infty\). Then \(\lim_{r\to\alpha}h_{p}(r)\) exists and is finite. In particular, \(h_{p}\) can be extended to a function on \([0,1]\) which is continuous at every irrational \(\alpha\) which satisfies \(\sup_{k\in\mathbb{N}}a_{2k+\varepsilon_{p}}=\infty\)._
Recall that Lebesgue-a.e. \(\alpha\) satisfies \(\sup_{k\in\mathbb{N}}a_{2k}=\infty\) and \(\sup_{k\in\mathbb{N}}a_{2k+1}=\infty\). In particular, the extension of \(h_{p}\) is a.e. continuous. We conjecture that the condition \(\sup_{k\in\mathbb{N}}a_{2k+\varepsilon_{p}}=\infty\) can be removed, so that Theorem 3 holds for all (including badly approximable) irrationals.
Figure 2: The functions \(h_{\pm\infty}(r)\) evaluated at all reduced rationals in \([0,1)\) with denominator at most \(150\). The asymptotics \(1/(8Tr)\) resp. \(-1/(8r)\) in Theorem 2 give a close fit to the graphs.
In contrast, \(h_{p}\) has a different behavior at rational numbers. The left-hand limit for \(p>0\), and the right-hand limit for \(p<0\) exist and are finite at all rationals, and their values are explicitly computable.
**Theorem 4**.: _Let \(a/q\in(0,1)\) and \(a^{\prime}/q^{\prime}=T^{2}(a/q)\in[0,1)\) be reduced rationals, and set_
\[W_{p}(a/q)=\frac{1}{p}\log\frac{\sum_{N=0}^{q-1}e^{p(S_{N}(a/q)-\mathrm{sgn}(p )N/(2q))}}{\sum_{N=0}^{q^{\prime}-1}e^{p(S_{N}(a^{\prime}/q^{\prime})-\mathrm{ sgn}(p)N/(2q^{\prime}))}}+\frac{\lfloor q/a\rfloor(\mathrm{sgn}(p)2a-1)}{8qq^{ \prime}},\qquad p\neq\pm\infty,0,\]
_and_
\[W_{\infty}(a/q) =\max_{0\leq N<q}\left(S_{N}(a/q)-\frac{N}{2q}\right)-\max_{0 \leq N<q^{\prime}}\left(S_{N}(a^{\prime}/q^{\prime})-\frac{N}{2q^{\prime}} \right)+\frac{\lfloor q/a\rfloor(2a-1)}{8qq^{\prime}},\] \[W_{-\infty}(a/q) =\min_{0\leq N<q}\left(S_{N}(a/q)+\frac{N}{2q}\right)-\min_{0 \leq N<q^{\prime}}\left(S_{N}(a^{\prime}/q^{\prime})+\frac{N}{2q^{\prime}} \right)+\frac{\lfloor q/a\rfloor(-2a-1)}{8qq^{\prime}}.\]
1. _If_ \(-\infty\leq p<0\)_, then_ \(\lim_{r\to(a/q)^{+}}h_{p}(r)=W_{p}(a/q)\)_._
2. _If_ \(0<p\leq\infty\) _and_ \(a\neq 1\)_, then_ \(\lim_{r\to(a/q)^{-}}h_{p}(r)=W_{p}(a/q)\)_._
Note that we excluded the rationals \(1/q\) for \(p>0\). Since \(Tr\to\infty\) as \(r\to(1/q)^{-}\), Theorem 2 implies that in this case \(\lim_{r\to(1/q)^{-}}h_{p}(r)=\infty\). As for approaching a rational point from the opposite side, numerical experiments suggest that \(h_{p}\) is right-continuous for \(0<p\leq\infty\), and left-continuous for \(-\infty\leq p<0\) at all rationals not of the form \(1/q\).
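The explicit constants in Theorem 4 can likewise be checked exactly; a short computation (again reusing the introduction's sketch) reproduces the values \(W_{\infty}(3/8)=5/64\) and \(h_{\infty}(3/8)=1/8\) quoted in the caption of Figure 3 below:

```python
# Theorem 4 at a/q = 3/8, where T^2(3/8) = 1/2.
a, q = 3, 8
r = Fraction(a, q)
r2 = gauss(gauss(r))
q2 = r2.denominator
W = (max(S(N, r) - Fraction(N, 2 * q) for N in range(q))
     - max(S(N, r2) - Fraction(N, 2 * q2) for N in range(q2))
     + Fraction((q // a) * (2 * a - 1), 8 * q * q2))
assert W == Fraction(5, 64)                     # left-hand limit W_inf(3/8)
assert h_inf(r) == Fraction(1, 8)               # the value h_inf(3/8)
```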
In addition to the pathological limit behavior (continuity at irrationals but jumps at rationals), the functions \(h_{p}\) also seem to have a clear self-similar structure, which becomes visible after subtracting the asymptotics established in Theorem 2. A self-similar structure of \(\tilde{h}_{p}\) was numerically observed in [1, 6]. It would be very interesting to actually prove self-similarity, and to gain a deeper understanding of the functions \(h_{p}\) and \(\tilde{h}_{p}\).
Given \(\alpha\in\mathbb{R}\) and \(M\in\mathbb{N}\), as a generalization of \(J_{p}\) we define
\[J_{p,M}(\alpha)=\left(\sum_{N=0}^{M-1}e^{pS_{N}(\alpha)}\right)^{1/p},\qquad p \neq\pm\infty,0,\]
Figure 3: The functions \(h_{\infty}(r)\) and \(h_{2}(r)\) evaluated at all reduced rationals in the interval \([0.37,0.38]\) with denominator at most \(600\). At the point \(3/8=0.375\) the values are \(h_{\infty}(3/8)=1/8\) and \(h_{2}(3/8)=0.650008\ldots\). By Theorem 4, the left-hand limits at \(3/8\) are \(W_{\infty}(3/8)=5/64=0.078125\) and \(W_{2}(3/8)=0.640180\ldots\). The graphs suggest right-continuity at \(3/8\).
and
\[J_{\infty,M}(\alpha)=\max_{0\leq N<M}e^{S_{N}(\alpha)},\qquad J_{-\infty,M}( \alpha)=\min_{0\leq N<M}e^{S_{N}(\alpha)}.\]
Let \(\tilde{J}_{p,M}(\alpha)\) be defined the same way, with \(\tilde{S}_{N}(\alpha)\) instead of \(S_{N}(\alpha)\). Letting \(p_{k}/q_{k}\) denote the convergents to \(\alpha\), roughly speaking, for \(M\approx q_{k}\) we have \(J_{p,M}(\alpha)\approx J_{p}(p_{k}/q_{k})\) and \(\tilde{J}_{p,M}(\alpha)\approx\tilde{J}_{p}(p_{k}/q_{k})\).
The asymptotics of \(\tilde{J}_{p,M}(\alpha)\) as \(M\to\infty\) at various irrational \(\alpha\) was studied in detail in [2, 6]. In particular, for a quadratic irrational \(\alpha\) it was shown that
\[\log\tilde{J}_{p,M}(\alpha)=\tilde{C}_{p}(\alpha)\log M+O(\max\{1,1/|p|\})\]
with some constant \(\tilde{C}_{p}(\alpha)\) and an implied constant depending only on \(\alpha\). Moreover, the constants satisfy the relation \(\tilde{C}_{p}(\alpha)+\tilde{C}_{-p}(\alpha)=1\). In this paper, we establish a similar result for \(J_{p,M}(\alpha)\).
**Theorem 5**.: _For any \(-\infty\leq p\leq\infty\), \(p\neq 0\) and any quadratic irrational \(\alpha\),_
\[\log J_{p,M}(\alpha)=C_{p}(\alpha)\log M+O(\max\{1,1/|p|\})\]
_with some constant \(C_{p}(\alpha)\) and an implied constant depending only on \(\alpha\)._
Relation (4) is a special case of Theorem 5 with \(p=\pm\infty\). Note that \(0<C_{\infty}(\alpha)\leq C_{p}(\alpha)\) if \(p>0\), and \(C_{p}(\alpha)\leq C_{-\infty}(\alpha)<0\) if \(p<0\). Unlike \(C_{\pm\infty}(\alpha)\), we do not know how to compute \(C_{p}(\alpha)\) for finite \(p\), even for simple irrationals such as the golden mean.
Figure 4: Subtracting the asymptotics from \(h_{p}(r)\) reveals an interesting self-similar structure. Finite \(p\) values yield very similar graphs, but the cases \(p=\pm\infty\) look markedly different. The four depicted functions are evaluated at all reduced rationals in \([0,1)\) with denominator at most \(150\).
The constants \(C_{p}(\alpha)\) and \(\tilde{C}_{p}(\alpha)\) are closely related to the limit of the functions \(h_{p}\) and \(\tilde{h}_{p}\) at quadratic irrationals. As an illustration, consider \(\sqrt{3}-1=[0;\overline{1,2}]\), and let \(p_{k}/q_{k}\) denote its convergents. Then \(T^{2}(p_{k}/q_{k})=p_{k-2}/q_{k-2}\), hence by the definition of \(h_{p}\) and the fact that \(\log q_{k}\sim(k/2)\log(2+\sqrt{3})\),
\[\sum_{0\leq j<k/2}h_{p}(p_{k-2j}/q_{k-2j})=\log J_{p}(p_{k}/q_{k})=\log J_{p,q _{k}}(\sqrt{3})+O(1)=\frac{C_{p}(\sqrt{3})\log(2+\sqrt{3})}{2}k+O(1).\]
Thus if \(\lim_{r\to\sqrt{3}-1}h_{p}(r)\) exists, then its value must be \(C_{p}(\sqrt{3})\log(2+\sqrt{3})\). In particular, while we cannot establish the continuous extension of \(h_{\pm\infty}\) to \(\sqrt{3}-1\), we know that in case they can be continuously extended to that point, their values must be \(h_{\infty}(\sqrt{3}-1)=1/4\) and \(h_{-\infty}(\sqrt{3}-1)=-1/12\); this is in good accordance with the numerics. For a general quadratic irrational \(\alpha\), the constant \(C_{p}(\alpha)\) can be similarly expressed in terms of the limit of \(h_{p}\) at the points of the finite orbit of \(\alpha\) under \(T^{2}\), provided that these limits exist.
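This is easy to probe numerically; a small sketch (reusing h_inf from the introduction) evaluates \(h_{\infty}\) along the convergents of \(\sqrt{3}-1=[0;\overline{1,2}]\):

```python
def convergent(digits):
    """The rational [0; a_1, ..., a_L] with the given partial quotients."""
    r = Fraction(0)
    for a in reversed(digits):
        r = 1 / (a + r)
    return r

cf = [1, 2] * 5                                  # sqrt(3) - 1 = [0; 1,2,1,2,...]
for k in range(3, len(cf) + 1):
    print(k, float(h_inf(convergent(cf[:k]))))   # compare with the predicted 1/4
```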
## 3 Limit laws
Confirming a conjecture of Bettin and Drappeau [6], Aistleitner and the author [1] proved the following limit law for the value distribution of \(\tilde{J}_{p}(r)\) with a random rational \(r\); more precisely, for a randomly chosen element of \(F_{Q}=\{a/q\in[0,1]\,:\gcd(a,q)=1,\,1\leq q\leq Q\}\), the set of Farey fractions of order \(Q\). If \(a/q\sim\mathrm{Unif}(F_{Q})\), then for any \(0<p\leq\infty\),
\[\frac{\log\tilde{J}_{p}(a/q)-\tilde{E}_{p,q}}{\tilde{\sigma}_{q}}\stackrel{{ d}}{{\to}}\mathrm{Stab}(1,1)\qquad\text{as }Q\to\infty,\]
where \(\tilde{E}_{p,q}=\frac{3\mathrm{Vol}(4_{1})}{\pi^{3}}\log q\log\log q+\tilde{ D}_{p}\log q\) and \(\tilde{\sigma}_{q}=\frac{3\mathrm{Vol}(4_{1})}{2\pi^{2}}\log q\), with the constant
\[\tilde{D}_{p}=\frac{3\mathrm{Vol}(4_{1})}{\pi^{3}}\left(\log\frac{6}{\pi}- \gamma\right)+\frac{12}{\pi^{2}}\int_{0}^{1}\frac{\tilde{h}_{p}(x)-\frac{ \mathrm{Vol}(4_{1})}{4\pi}\lfloor 1/x\rfloor}{1+x}\,\mathrm{d}x. \tag{9}\]
Here \(\gamma\) denotes the Euler-Mascheroni constant. This was proved in [1] for \(p=2\), but the proof works mutatis mutandis for all \(0<p\leq\infty\). The identity \(\tilde{J}_{-p}(r)=q/\tilde{J}_{p}(r)\) mentioned in Section 2 means that \(\log\tilde{J}_{p}(a/q)+\log\tilde{J}_{-p}(a/q)=\log q\), and a limit law follows for \(-\infty\leq p<0\) as well.
In this paper, we show a similar limit law for \(J_{p}(r)\) with a random rational \(r\).
**Theorem 6**.: _Let \(a/q\sim\mathrm{Unif}(F_{Q})\). For any \(0<p\leq\infty\) and \(-\infty\leq p^{\prime}<0\),_
\[\left(\frac{\log J_{p}(a/q)-E_{p,q}}{\sigma_{q}},\,\frac{\log J_{p^{\prime}}( a/q)-E_{p^{\prime},q}}{\sigma_{q}}\right)\stackrel{{ d}}{{\to}}\mathrm{Stab}(1,1)\otimes\mathrm{Stab}(1,-1) \qquad\text{as }Q\to\infty,\]
_where, for any \(p\neq 0\), \(E_{p,q}=\mathrm{sgn}(p)\frac{3}{4\pi^{2}}\log q\log\log q+D_{p}\log q\) and \(\sigma_{q}=\frac{3}{8\pi}\log q\), with the constant_
\[D_{p}=-\mathrm{sgn}(p)\frac{3}{4\pi^{2}}\left(\gamma+\log\frac{\pi}{3}\right)+ \left\{\begin{array}{ll}\frac{6}{\pi^{2}}\int_{0}^{1}\frac{h_{p}(x)-\frac{ 1}{8}\lfloor 1/Tx\rfloor}{1+x}\,\mathrm{d}x&\text{if }p>0,\\ \frac{6}{\pi^{2}}\int_{0}^{1}\frac{h_{p}(x)+\frac{1}{8}\lfloor 1/x\rfloor}{1+x}\, \mathrm{d}x&\text{if }p<0.\end{array}\right. \tag{10}\]
In particular,
\[\frac{\log J_{p}(a/q)-E_{p,q}}{\sigma_{q}}\stackrel{{ d}}{{\to}} \mathrm{Stab}(1,\mathrm{sgn}(p))\qquad\text{as }Q\to\infty.\]
**Remark.** The identity \(J_{-p}(r)=1/J_{p}(1-r)\) and the fact that \(a/q\mapsto 1-a/q\) is a bijection of \(F_{Q}\) show that \(\log J_{-p}(a/q)\) and \(-\log J_{p}(a/q)\) are identically distributed. The previous limit law thus implies that \(E_{-p,q}=-E_{p,q}\), and consequently \(D_{-p}=-D_{p}\), a relation which is not immediate from the definition (10) of \(D_{p}\).
The main idea is to consider the telescoping sum \(\log\tilde{J}_{p}(r)=\sum_{j\geq 0}\tilde{h}_{p}(T^{j}r)\); note that \(\tilde{h}_{p}(0)=0\). Using the asymptotics (6) and the solution to Zagier's continuity conjecture, for \(0<p\leq\infty\) we can write \(\tilde{h}_{p}(r)=\frac{\operatorname{Vol}(4_{1})}{4\pi}a_{1}+\tilde{g}_{p}(r)\) with an a.e. continuous Lebesgue integrable function \(\tilde{g}_{p}(x)\). Letting \(a/q=[0;a_{1},a_{2},\dots,a_{L}]\) be a random fraction, we thus have
\[\log\tilde{J}_{p}(a/q)=\frac{\operatorname{Vol}(4_{1})}{4\pi}\sum_{j\geq 0}a_{j+ 1}+\sum_{j\geq 0}\tilde{g}_{p}(T^{j}(a/q)).\]
The first sum, with suitable centering and scaling, converges in distribution to \(\operatorname{Stab}(1,1)\), whereas the second sum, scaled by \(\log q\), converges in distribution to a constant. This leads to the limit law for \(\log\tilde{J}_{p}\).
We follow a similar strategy for \(J_{p}\). We consider the telescoping sum \(\log J_{p}(r)=\sum_{j\geq 0}h_{p}(T^{2j}r)\); note that \(h_{p}(0)=0\). Using Theorems 2 and 3, we can write \(h_{p}(r)=\operatorname{sgn}(p)a_{\varepsilon_{p}}/8+g_{p}(r)\) with an a.e. continuous Lebesgue integrable function \(g_{p}(x)\). Letting \(a/q=[0;a_{1},a_{2},\dots,a_{L}]\) be a random fraction, we thus have
\[\log J_{p}(a/q)=\left\{\begin{array}{ll}\frac{1}{8}\sum_{j\geq 0}a_{2j+2}+ \sum_{j\geq 0}g_{p}(T^{2j}(a/q))&\text{if }p>0,\\ -\frac{1}{8}\sum_{j\geq 0}a_{2j+1}+\sum_{j\geq 0}g_{p}(T^{2j}(a/q))&\text{if }p<0. \end{array}\right.\]
The main difference is that the main term in \(\log J_{p}(a/q)\) now depends only on the partial quotients with even resp. odd indices if \(p>0\) resp. \(p<0\). This explains the convergence of the joint distribution to a product measure in Theorem 6.
Classical mixing properties of the sequence of partial quotients lead to similar limit laws for random real numbers.
**Theorem 7**.: _Let \(\alpha\sim\mu\) with a Borel probability measure \(\mu\) on \([0,1]\) which is absolutely continuous with respect to the Lebesgue measure. For any \(0<p\leq\infty\),_
\[\frac{\log\tilde{J}_{p,M}(\alpha)-\tilde{E}_{p,M}}{\tilde{\sigma}_{M}}\overset {d}{\to}\operatorname{Stab}(1,1)\qquad\text{as }M\to\infty,\]
_where \(\tilde{E}_{p,M}=\frac{3\operatorname{Vol}(4_{1})}{\pi^{3}}\log M\log\log M+ \tilde{D}_{p}\log M\) and \(\tilde{\sigma}_{M}=\frac{3\operatorname{Vol}(4_{1})}{2\pi^{2}}\log M\), with the constant \(\tilde{D}_{p}\) defined in (9)._
Formula (2) is a special case of Theorem 7 with \(p=\infty\). Since
\[\log\tilde{J}_{p,M}(\alpha)+\log\tilde{J}_{-p,M}(\alpha)=\log M+o(\log M) \qquad\text{in $\mu$-measure},\]
a similar limit law holds for \(\log\tilde{J}_{p,M}(\alpha)\) with \(-\infty\leq p<0\).
**Theorem 8**.: _Let \(\alpha\sim\mu\) with a Borel probability measure \(\mu\) on \([0,1]\) which is absolutely continuous with respect to the Lebesgue measure. For any \(0<p\leq\infty\) and \(-\infty\leq p^{\prime}<0\),_
\[\left(\frac{\log J_{p,M}(\alpha)-E_{p,M}}{\sigma_{M}},\frac{\log J_{p^{\prime},M}(\alpha)-E_{p^{\prime},M}}{\sigma_{M}}\right)\overset{d}{\to}\operatorname {Stab}(1,1)\otimes\operatorname{Stab}(1,-1)\qquad\text{as }M\to\infty,\]
_where, for any \(p\neq 0\), \(E_{p,M}=\operatorname{sgn}(p)\frac{3}{4\pi^{2}}\log M\log\log M+D_{p}\log M\) and \(\sigma_{M}=\frac{3}{8\pi}\log M\), with the constant \(D_{p}\) defined in (10)._
Theorem 1 is a special case of Theorem 8 with \(p=\infty\) and \(p^{\prime}=-\infty\).
## 4 The function \(h_{p}\)
Throughout this section, we fix a real number \(\alpha\) and a parameter \(-\infty\leq p\leq\infty\), \(p\neq 0\), and define \(\varepsilon_{p}\) as in (8). If \(\alpha\in\mathbb{Q}\), we write its continued fraction expansion in the form \(\alpha=[a_{0};a_{1},a_{2},\ldots,a_{L}]\), and we let \(q\) be the denominator of \(\alpha\) in its reduced form. If \(\alpha\not\in\mathbb{Q}\), we write its continued fraction expansion in the form \(\alpha=[a_{0};a_{1},a_{2},\ldots]\), and set \(L=\infty\) and \(q=\infty\).
The convergents to \(\alpha\) are denoted by \(p_{\ell}/q_{\ell}=[a_{0};a_{1},a_{2},\ldots,a_{\ell}]\), \(0\leq\ell<L+1\). Any integer \(0\leq N<q\) can be uniquely written in the form \(N=\sum_{\ell=0}^{L-1}b_{\ell}(N)q_{\ell}\), where \(0\leq b_{0}(N)<a_{1}\) and \(0\leq b_{\ell}(N)\leq a_{\ell+1}\), \(1\leq\ell<L\) are integers which further satisfy the rule that \(b_{\ell+1}(N)=a_{\ell+2}\) implies \(b_{\ell}(N)=0\). This is the so-called Ostrowski expansion of \(N\) with respect to \(\alpha\), a special number system tailored to the circle rotation by \(\alpha\); in fact, it was first introduced in connection with \(S_{N}(\alpha)\) [19]. The Ostrowski expansion of course has finitely many terms; more precisely, if \(0\leq N<q_{K}\) with some integer \(0\leq K\leq L\), then \(N=\sum_{\ell=0}^{K-1}b_{\ell}(N)q_{\ell}\).
The distance from the nearest integer function is denoted by \(\|\cdot\|\). We will often use the fact that
\[\frac{1}{a_{\ell+1}+2}\leq q_{\ell}\|q_{\ell}\alpha\|\leq\frac{1}{a_{\ell+1}},\qquad 0\leq\ell<L,\]
except if \(\ell=0\) and \(a_{1}=1\); however, in the latter case \(b_{0}(N)=0\) for all \(0\leq N<q\), and \(\|q_{0}\alpha\|\) does not enter our formulas. Recall also the recursion \(q_{\ell+1}=a_{\ell+1}q_{\ell}+q_{\ell-1}\) with initial conditions \(q_{0}=1\), \(q_{1}=a_{1}\).
One of our main tools is an explicit formula for \(S_{N}(\alpha)\) due to Ostrowski [19] (see [4, p. 23] for a more recent proof).
**Lemma 9** (Ostrowski).: _Let \(0\leq N<q\) be an integer with Ostrowski expansion \(N=\sum_{\ell=0}^{L-1}b_{\ell}(N)q_{\ell}\). Then_
\[S_{N}(\alpha)=\sum_{\ell=0}^{L-1}(-1)^{\ell+1}b_{\ell}(N)\left(\frac{1-b_{ \ell}(N)q_{\ell}\|q_{\ell}\alpha\|}{2}-\|q_{\ell}\alpha\|\sum_{j=0}^{\ell-1}b_ {j}(N)q_{j}-\frac{\|q_{\ell}\alpha\|}{2}\right).\]
**Remark.** The alternating factor \((-1)^{\ell+1}\) in Ostrowski's explicit formula is related to the fact that \(S_{N}(\alpha)\) is an odd function in the variable \(\alpha\). An application of the second iterate of the Gauss map corresponds to shifting the partial quotients by two indices, leaving the factor \((-1)^{\ell+1}\) unchanged.
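As a concrete illustration, the following minimal Python sketch verifies Lemma 9 in exact rational arithmetic for a sample fraction. It assumes the definition \(S_{N}(\alpha)=\sum_{n=1}^{N}(\{n\alpha\}-\frac{1}{2})\), which is consistent with formula (13) below and with the oddness of \(S_{N}\) in \(\alpha\), and it computes the Ostrowski digits by the greedy algorithm, which automatically enforces the digit rules described above; the chosen fraction and all identifiers are illustrative.

```python
# Sanity check of Ostrowski's explicit formula (Lemma 9), assuming
# S_N(alpha) = sum_{n=1}^N ({n alpha} - 1/2); exact arithmetic throughout.
from fractions import Fraction

def denominators(a):
    # q_0 = 1, q_1 = a_1, q_{l+1} = a_{l+1} q_l + q_{l-1}
    q = [1, a[0]]
    for l in range(1, len(a)):
        q.append(a[l] * q[l] + q[l - 1])
    return q

def ostrowski_digits(N, q):
    # Greedy expansion N = sum_l b_l q_l; greediness enforces 0 <= b_0 < a_1,
    # 0 <= b_l <= a_{l+1}, and the rule (b_{l+1} = a_{l+2}  =>  b_l = 0).
    b = [0] * (len(q) - 1)
    for l in range(len(b) - 1, -1, -1):
        b[l], N = divmod(N, q[l])
    return b

def dist(x):  # ||x|| for a positive Fraction x
    f = x - int(x)
    return min(f, 1 - f)

def S_direct(N, alpha):
    return sum(n * alpha - int(n * alpha) - Fraction(1, 2) for n in range(1, N + 1))

def S_lemma9(N, alpha, q):
    b = ostrowski_digits(N, q)
    total = Fraction(0)
    for l, bl in enumerate(b):
        d = dist(q[l] * alpha)
        total += (-1) ** (l + 1) * bl * (
            (1 - bl * q[l] * d) / 2 - d * sum(b[j] * q[j] for j in range(l)) - d / 2
        )
    return total

a = [3, 1, 4, 1, 5]                 # alpha = [0; 3, 1, 4, 1, 5] = 35/134
alpha = Fraction(0)
for ai in reversed(a):
    alpha = 1 / (ai + alpha)
q = denominators(a)
assert q[-1] == alpha.denominator
assert all(S_direct(N, alpha) == S_lemma9(N, alpha, q) for N in range(q[-1]))
print("Lemma 9 verified for all 0 <= N <", q[-1])
```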
### 4.1 Local optimum
In this section, we "locally optimize" \(S_{N}(\alpha)\) by choosing a single Ostrowski digit \(b_{k}(N)\). Note that the \(\ell=k\) term in Ostrowski's explicit formula in Lemma 9 is
\[\frac{(-1)^{k+1}}{2}\cdot\frac{b_{k}(N)}{a_{k+1}}\left(1-\frac{b_{k}(N)}{a_{k +1}}\right)+O(1).\]
Given an odd resp. even index \(k\), we can thus expect a particularly large resp. small value of \(S_{N}(\alpha)\) when choosing \(b_{k}(N)\approx a_{k+1}/2\). Lemma 10 below quantifies how the value of \(S_{N}(\alpha)\) changes as we deviate from the optimal value \(a_{k+1}/2\). In particular, in Lemma 11 below we show that in the sum \(\sum_{N=0}^{q-1}e^{pS_{N}(\alpha)}\) with \(p>0\) resp. \(p<0\), the main contribution comes from the terms with \(b_{k}(N)\approx a_{k+1}/2\).
In the following lemma and in the sequel, we use the natural convention that \(b_{L}(N)<a_{L+1}\) automatically holds.
**Lemma 10**.: _Let \(0\leq N<q\) be an integer with Ostrowski expansion \(N=\sum_{\ell=0}^{L-1}b_{\ell}(N)q_{\ell}\), and let \(0\leq k<L\). Define \(b_{k}^{*}=\lfloor a_{k+1}/2\rfloor\), and_
\[N^{*}=\left\{\begin{array}{ll}N+(b_{k}^{*}-b_{k}(N))q_{k}&\mbox{if }b_{k+1}(N)<a_{k+2},\\ N+b_{k}^{*}q_{k}-q_{k+1}&\mbox{if }b_{k+1}(N)=a_{k+2}.\end{array}\right.\]
_Then_
\[S_{N^{*}}(\alpha)-S_{N}(\alpha)=(-1)^{k+1}\frac{(b_{k}^{*}-b_{k}(N))^{2}}{2a_{k+1} }+O\left(\frac{|b_{k}^{*}-b_{k}(N)|}{a_{k+1}}\right)\]
_with a universal implied constant._
**Proof.** Assume first that \(b_{k+1}(N)<a_{k+2}\). Then \(N^{*}\) is obtained from \(N\) by changing the Ostrowski digit \(b_{k}(N)\) to \(b_{k}^{*}\), and leaving all other Ostrowski digits intact. Applying Ostrowski's explicit formula in Lemma 9 to \(N\) and \(N^{*}\), we deduce
\[\begin{split} S_{N^{*}}(\alpha)-S_{N}(\alpha)=&(- 1)^{k+1}\left(b_{k}^{*}\frac{1-b_{k}^{*}q_{k}\|q_{k}\alpha\|}{2}-b_{k}(N)\frac{ 1-b_{k}(N)q_{k}\|q_{k}\alpha\|}{2}\right)\\ &+(-1)^{k}(b_{k}^{*}-b_{k}(N))\left(\|q_{k}\alpha\|\sum_{j=0}^{k- 1}b_{j}(N)q_{j}+\frac{\|q_{k}\alpha\|}{2}\right)\\ &+\sum_{\ell=k+1}^{L-1}(-1)^{\ell}b_{\ell}(N)\|q_{\ell}\alpha\|( b_{k}^{*}-b_{k}(N))q_{k}.\end{split} \tag{11}\]
By the rules of the Ostrowski expansion, here \(0\leq\sum_{j=0}^{k-1}b_{j}(N)q_{j}<q_{k}\). Therefore the second and the third line in (11) are negligible:
\[|b_{k}^{*}-b_{k}(N)|\left(\|q_{k}\alpha\|\sum_{j=0}^{k-1}b_{j}(N)q_{j}+\frac{ \|q_{k}\alpha\|}{2}\right)\ll\frac{|b_{k}^{*}-b_{k}(N)|}{a_{k+1}},\]
and
\[\left|\sum_{\ell=k+1}^{L-1}(-1)^{\ell}b_{\ell}(N)\|q_{\ell}\alpha\|(b_{k}^{*}- b_{k}(N))q_{k}\right|\leq|b_{k}^{*}-b_{k}(N)|q_{k}\sum_{\ell=k+1}^{L-1}\frac{1}{q_{ \ell}}\ll\frac{|b_{k}^{*}-b_{k}(N)|}{a_{k+1}}.\]
Note that \(b_{k}^{*}q_{k}\|q_{k}\alpha\|=1/2+O(1/a_{k+1})\) by the definition of \(b_{k}^{*}\). The polynomial \(F(x)=x(1-x)\) satisfies the identity \(F(x)-F(y)=(x-y)^{2}+(x-y)(1-2x)\), hence
\[F(b_{k}^{*}q_{k}\|q_{k}\alpha\|)-F(b_{k}(N)q_{k}\|q_{k}\alpha\|)=(b_{k}^{*}-b_ {k}(N))^{2}q_{k}^{2}\|q_{k}\alpha\|^{2}+O\left(\frac{|b_{k}^{*}-b_{k}(N)|q_{k} \|q_{k}\alpha\|}{a_{k+1}}\right),\]
and consequently in the first line in (11) we have
\[\begin{split} b_{k}^{*}\frac{1-b_{k}^{*}q_{k}\|q_{k}\alpha\|}{2} -b_{k}(N)\frac{1-b_{k}(N)q_{k}\|q_{k}\alpha\|}{2}&=\frac{F(b_{k} ^{*}q_{k}\|q_{k}\alpha\|)-F(b_{k}(N)q_{k}\|q_{k}\alpha\|)}{2q_{k}\|q_{k}\alpha \|}\\ &=\frac{(b_{k}^{*}-b_{k}(N))^{2}q_{k}\|q_{k}\alpha\|}{2}+O\left( \frac{|b_{k}^{*}-b_{k}(N)|}{a_{k+1}}\right)\\ &=\frac{(b_{k}^{*}-b_{k}(N))^{2}}{2a_{k+1}}+O\left(\frac{|b_{k}^{ *}-b_{k}(N)|}{a_{k+1}}\right).\end{split}\]
This finishes the proof in the case \(b_{k+1}(N)<a_{k+2}\).
Assume next that \(b_{k+1}(N)=a_{k+2}\). By the rules of the Ostrowski expansion, we necessarily have \(b_{k}(N)=0\), thus \(N^{*}\) is obtained from \(N\) by decreasing the digit \(b_{k+1}(N)=a_{k+2}\) by one, and changing \(b_{k}(N)=0\) to \(b_{k}^{*}\). We arrive at a legitimate Ostrowski expansion of \(N^{*}\); in particular, \(b_{\ell}(N^{*})=b_{\ell}(N)\) for all \(\ell\neq k,k+1\). Applying Ostrowski's explicit formula in Lemma 9 to \(N\) and
\(N^{*}\), we deduce
\[S_{N^{*}}(\alpha)-S_{N}(\alpha)=\] \[(-1)^{k+1}b_{k}^{*}\left(\frac{1-b_{k}^{*}q_{k}\|q_{k}\alpha\|}{2}- \|q_{k}\alpha\|\sum_{j=0}^{k-1}b_{j}(N)q_{j}-\frac{\|q_{k}\alpha\|}{2}\right)\] \[+(-1)^{k+2}\left((a_{k+2}-1)\frac{1-(a_{k+2}-1)q_{k+1}\|q_{k+1} \alpha\|}{2}-a_{k+2}\frac{1-a_{k+2}q_{k+1}\|q_{k+1}\alpha\|}{2}\right)\] \[+(-1)^{k+2}\left(-(a_{k+2}-1)\|q_{k+1}\alpha\|\left(\sum_{j=0}^{k- 1}b_{j}(N)q_{j}+b_{k}^{*}q_{k}+\frac{1}{2}\right)+a_{k+2}\|q_{k+1}\alpha\| \left(\sum_{j=0}^{k-1}b_{j}(N)q_{j}+\frac{1}{2}\right)\right)\] \[+\sum_{\ell=k+2}^{L-1}(-1)^{\ell}b_{\ell}(N)\|q_{\ell}\alpha\|(b_ {k}^{*}q_{k}-q_{k+1}).\]
Straightforward computation shows that the first line in the previous formula is \((-1)^{k+1}a_{k+1}/8+O(1)\), and all other lines are \(O(1)\).
**Lemma 11**.: _Let \(0\leq k<K\leq L\) be integers such that \(a_{k+1}\geq A\) with a large universal constant \(A>1\). If \(p\neq\pm\infty\) and \(k+1\equiv\varepsilon_{p}\pmod{2}\), then_
\[\sum_{0\leq N<q_{K}\atop|b_{k}(N)-a_{k+1}/2|>\max\{10,10/\sqrt{|p|}\}\sqrt{a_{ k+1}\log a_{k+1}}}e^{pS_{N}(\alpha)}\leq a_{k+1}^{-48\max\{|p|,1\}}\sum_{0 \leq N<q_{K}}e^{pS_{N}(\alpha)}.\]
_If \(k\) is odd, then_
\[\max_{0\leq N<q_{K}\atop|b_{k}(N)-a_{k+1}/2|>10\sqrt{a_{k+1}\log a_{k+1}}}e^{ S_{N}(\alpha)}\leq a_{k+1}^{-48}\max_{0\leq N<q_{K}}e^{S_{N}(\alpha)}.\]
_If \(k\) is even, then_
\[\min_{0\leq N<q_{K}\atop|b_{k}(N)-a_{k+1}/2|>10\sqrt{a_{k+1}\log a_{k+1}}}e^{ S_{N}(\alpha)}\geq a_{k+1}^{48}\min_{0\leq N<q_{K}}e^{S_{N}(\alpha)}.\]
**Proof.** We give a detailed proof in the case \(0<p<\infty\). The proof for \(-\infty<p<0\) is entirely analogous, whereas the claims on the maximum and the minimum follow from letting \(p\to\pm\infty\).
Assume thus that \(0<p<\infty\), and that \(k\) is odd. Set \(Z=\sum_{0\leq N<q_{K}}e^{pS_{N}(\alpha)}\), and consider the sets
\[H_{k}(b) =\{0\leq N<q_{K}\,:\,b_{k}(N)=b\},\] \[H_{k}^{*}(0) =\{0\leq N<q_{K}\,:\,b_{k}(N)=0,\ b_{k+1}(N)<a_{k+2}\},\] \[H_{k}^{**}(0) =\{0\leq N<q_{K}\,:\,b_{k}(N)=0,\ b_{k+1}(N)=a_{k+2}\}.\]
Let \(|b-a_{k+1}/2|>\max\{10,10/\sqrt{p}\}\sqrt{a_{k+1}\log a_{k+1}}\) and \(b\neq 0\). Then the map \(H_{k}(b)\to H_{k}(\lfloor a_{k+1}/2\rfloor)\), \(N\mapsto N+(\lfloor a_{k+1}/2\rfloor-b)q_{k}\) is injective, and by Lemma 10,
\[\sum_{N\in H_{k}(b)}e^{pS_{N}(\alpha)}\leq\sum_{N\in H_{k}(\lfloor a_{k+1}/2 \rfloor)}e^{p(S_{N}(\alpha)-(b-a_{k+1}/2)^{2}/(2.001a_{k+1}))}\leq a_{k+1}^{-49.9\max\{p,1\}}Z.\]
The map \(H_{k}^{*}(0)\to H_{k}(\lfloor a_{k+1}/2\rfloor)\), \(N\mapsto N+\lfloor a_{k+1}/2\rfloor q_{k}\) is injective, and by Lemma 10,
\[\sum_{N\in H_{k}^{*}(0)}e^{pS_{N}(\alpha)}\leq\sum_{N\in H_{k}(\lfloor a_{k+1}/2 \rfloor)}e^{p(S_{N}(\alpha)-(a_{k+1}/2)^{2}/(2.001a_{k+1}))}\leq e^{-pa_{k+1} /8.004}Z.\]
The map \(H_{k}^{**}(0)\to H_{k}(\lfloor a_{k+1}/2\rfloor)\), \(N\mapsto N+\lfloor a_{k+1}/2\rfloor q_{k}-q_{k+1}\) is injective, and by Lemma 10,
\[\sum_{N\in H_{k}^{**}(0)}e^{pS_{N}(\alpha)}\leq\sum_{N\in H_{k}(\lfloor a_{k+1}/2\rfloor)}e^{p(S_{N}(\alpha)-(a_{k+1}/2)^{2}/(2.001a_{k+1}))}\leq e^{-pa_{k+1}/8.004}Z.\]
Note that \(e^{-pa_{k+1}/8.004}\leq a_{k+1}^{-49.9\max\{p,1\}}\) provided that \(|0-a_{k+1}/2|>\max\{10,10/\sqrt{p}\}\sqrt{a_{k+1}\log a_{k+1}}\). As the number of possible values of \(b\) is at most \(a_{k+1}-1\), the previous three formulas lead to
\[\sum_{\begin{subarray}{c}0\leq N<q_{K}\\ |b_{k}(N)-a_{k+1}/2|>\max\{10,10/\sqrt{p}\}\sqrt{a_{k+1}\log a_{k+1}}\end{subarray}}e ^{pS_{N}(\alpha)}\leq(a_{k+1}+1)a_{k+1}^{-49.9\max\{p,1\}}Z\leq a_{k+1}^{-48 \max\{p,1\}}Z.\]
### 4.2 Factorization of \(J_{p}\)
In this section, we establish a factorization of \(\sum_{0\leq N<q_{K}}e^{pS_{N}(\alpha)}\) into a product of two sums up to a small error. The main point of Lemma 12 below is that the first main factor depends only on the first \(k\) partial quotients of \(\alpha\). In the special case of a rational \(\alpha\) and \(K=L\), we obtain a factorization of \(J_{p}(\alpha)\).
**Lemma 12**.: _Let \(0\leq k<K\leq L\) be integers such that \(a_{k+1}\geq A\max\{1,\frac{1}{|p|}\log\frac{1}{|p|}\}\) with a large universal constant \(A>1\). If \(p\neq\pm\infty\) and \(k+1\equiv\varepsilon_{p}\pmod{2}\), then_
\[\left(\sum_{0\leq N<q_{K}}e^{pS_{N}(\alpha)}\right)^{1/p}= \left(1+O\left(\sqrt{\frac{\log a_{k+1}}{\min\{1,|p|\}a_{k+1}}}\right)\right)\left(\sum_{0\leq N<q_{k}}e^{p(S_{N}(p_{k}/q_{k})+(-1)^{k}N/(2q_{k}))}\right)^{1/p}\] \[\times\left(\sum_{\begin{subarray}{c}0\leq N<q_{K}\\ b_{0}(N)=\cdots=b_{k-1}(N)=0\\ |b_{k}(N)-a_{k+1}/2|\leq\max\{10,10/\sqrt{|p|}\}\sqrt{a_{k+1}\log a_{k+1}}\end{subarray}}e^{pS_{N}(\alpha)}\right)^{1/p}.\]
_If \(k\) is odd, then_
\[\max_{0\leq N<q_{K}}e^{S_{N}(\alpha)}= \left(1+O\left(\sqrt{\frac{\log a_{k+1}}{a_{k+1}}}\right)\right)\max_{0\leq N<q_{k}}e^{S_{N}(p_{k}/q_{k})-N/(2q_{k})}\] \[\times\max_{\begin{subarray}{c}0\leq N<q_{K}\\ b_{0}(N)=\cdots=b_{k-1}(N)=0\\ |b_{k}(N)-a_{k+1}/2|\leq\max\{10,10/\sqrt{|p|}\}\sqrt{a_{k+1}\log a_{k+1}}\end{subarray}}e^{S_{N}(\alpha)}.\]
_If \(k\) is even, then_
\[\min_{0\leq N<q_{K}}e^{S_{N}(\alpha)}= \left(1+O\left(\sqrt{\frac{\log a_{k+1}}{a_{k+1}}}\right)\right)\min_{0\leq N<q_{k}}e^{S_{N}(p_{k}/q_{k})+N/(2q_{k})}\] \[\times\min_{\begin{subarray}{c}0\leq N<q_{K}\\ b_{0}(N)=\cdots=b_{k-1}(N)=0\\ |b_{k}(N)-a_{k+1}/2|\leq\max\{10,10/\sqrt{|p|}\}\sqrt{a_{k+1}\log a_{k+1}}\end{subarray}}e^{S_{N}(\alpha)}.\]
_All implied constants are universal._
We mention that the condition \(|b_{k}(N)-a_{k+1}/2|\leq\max\{10,10/\sqrt{|p|}\}\sqrt{a_{k+1}\log a_{k+1}}\) in the summations could be removed using a straightforward modification of Lemma 11, but we will not need this fact. We give the proof after a preparatory lemma.
**Lemma 13**.: _Let \(0\leq N<q\) be an integer with Ostrowski expansion \(N=\sum_{\ell=0}^{L-1}b_{\ell}(N)q_{\ell}\). Let \(0\leq k<L\), and set \(N_{1}=\sum_{\ell=0}^{k-1}b_{\ell}(N)q_{\ell}\) and \(N_{2}=\sum_{\ell=k}^{L-1}b_{\ell}(N)q_{\ell}\). Then_
\[S_{N}(\alpha)=S_{N_{1}}(\alpha)+S_{N_{2}}(\alpha)+(-1)^{k}b_{k}(N)\|q_{k} \alpha\|N_{1}+O\left(\frac{1}{a_{k+1}}\right)\]
_with a universal implied constant._
**Proof.** Apply Ostrowski's explicit formula in Lemma 9 to \(N\), and consider the sum over \(0\leq\ell\leq k-1\) and \(k\leq\ell<L\) separately. The sum over \(0\leq\ell\leq k-1\) is precisely \(S_{N_{1}}(\alpha)\). For \(k\leq\ell<L\) we have
\[\sum_{j=0}^{\ell-1}b_{j}(N)q_{j}=\sum_{j=0}^{k-1}b_{j}(N)q_{j}+\sum_{j=k}^{ \ell-1}b_{j}(N)q_{j}=N_{1}+\sum_{j=0}^{\ell-1}b_{j}(N_{2})q_{j},\]
hence
\[S_{N}(\alpha)=S_{N_{1}}(\alpha)+S_{N_{2}}(\alpha)+\sum_{\ell=k}^{L-1}(-1)^{ \ell}b_{\ell}(N)\|q_{\ell}\alpha\|N_{1}.\]
Since \(N_{1}<q_{k}\), the terms \(k+1\leq\ell<L\) in the previous formula satisfy
\[\left|\sum_{\ell=k+1}^{L-1}(-1)^{\ell}b_{\ell}(N)\|q_{\ell}\alpha\|N_{1} \right|\leq\sum_{\ell=k+1}^{L-1}\frac{q_{k}}{q_{\ell}}\ll\frac{1}{a_{k+1}},\]
and the claim follows.
**Proof of Lemma 12.** It is enough to prove the lemma for finite \(p\). The claims on the maximum and the minimum then follow from letting \(p\to\pm\infty\).
Lemma 11 shows that
\[\sum_{0\leq N<q_{K}}e^{pS_{N}(\alpha)}=\left(1+O\left(a_{k+1}^{-48\max\{|p|,1 \}}\right)\right)\sum_{\begin{subarray}{c}0\leq N<q_{K}\\ |b_{k}(N)-a_{k+1}/2|\leq\max\{10,10/\sqrt{|p|}\}\sqrt{a_{k+1}\log a_{k+1}} \end{subarray}}e^{pS_{N}(\alpha)}. \tag{12}\]
Let \(N_{1},N_{2}\) be as in Lemma 13. The map \(N\mapsto(N_{1},N_{2})\) is a bijection from
\[\left\{0\leq N<q_{K}\,:\,|b_{k}(N)-a_{k+1}/2|\leq\max\{10,10/\sqrt{|p|}\}\sqrt {a_{k+1}\log a_{k+1}}\right\}\]
to the product set
\[[0,q_{k})\times\left\{0\leq N<q_{K}:\begin{array}{c}b_{0}(N)=\cdots=b_{k-1}(N )=0,\\ |b_{k}(N)-a_{k+1}/2|\leq\max\{10,10/\sqrt{|p|}\}\sqrt{a_{k+1}\log a_{k+1}} \end{array}\right\}.\]
For every such \(N\),
\[(-1)^{k}b_{k}(N)\|q_{k}\alpha\|N_{1} =(-1)^{k}\frac{a_{k+1}}{2}\|q_{k}\alpha\|N_{1}+O\left(\max\{1,1/ \sqrt{|p|}\}\sqrt{a_{k+1}\log a_{k+1}}\|q_{k}\alpha\|q_{k}\right)\] \[=(-1)^{k}\frac{N_{1}}{2q_{k}}+O\left(\max\{1,1/\sqrt{|p|}\}\sqrt{ \frac{\log a_{k+1}}{a_{k+1}}}\right).\]
Therefore by Lemma 13,
\[S_{N}(\alpha)=S_{N_{1}}(\alpha)+S_{N_{2}}(\alpha)+(-1)^{k}\frac{N_{1}}{2q_{k} }+O\left(\sqrt{\frac{\log a_{k+1}}{\min\{1,|p|\}a_{k+1}}}\right),\]
and consequently
\[\sum_{\begin{subarray}{c}0\leq N<q_{K}\\ |b_{k}(N)-a_{k+1}/2|\leq\max\{10,10/\sqrt{|p|}\}\sqrt{a_{k+1}\log a_{k+1}}\end{subarray}} e^{pS_{N}(\alpha)} =\sum_{0\leq N<q_{k}}e^{p(S_{N}(\alpha)+(-1)^{k}N/(2q_{k}))}\] \[\times\sum_{\begin{subarray}{c}0\leq N<q_{K}\\ b_{0}(N)=\cdots=b_{k-1}(N)=0\\ |b_{k}(N)-a_{k+1}/2|\leq\max\{10,10/\sqrt{|p|}\}\sqrt{a_{k+1}\log a_{k+1}} \end{subarray}}e^{pS_{N}(\alpha)}\] \[\times\exp\left(O\left(|p|\sqrt{\frac{\log a_{k+1}}{\min\{1,|p|\}a _{k+1}}}\right)\right).\]
Substituting this in (12) gives
\[\left(\sum_{0\leq N<q_{K}}e^{pS_{N}(\alpha)}\right)^{1/p}= \left(1+O\left(\sqrt{\frac{\log a_{k+1}}{\min\{1,|p|\}a_{k+1}}}\right)\right)\left(\sum_{0\leq N<q_{k}}e^{p(S_{N}(\alpha)+(-1)^{k}N/(2q_{k}))}\right)^{1/p}\] \[\times\left(\sum_{\begin{subarray}{c}0\leq N<q_{K}\\ b_{0}(N)=\cdots=b_{k-1}(N)=0\\ |b_{k}(N)-a_{k+1}/2|\leq\max\{10,10/\sqrt{|p|}\}\sqrt{a_{k+1}\log a_{k+1}}\end{subarray}}e^{pS_{N}(\alpha)}\right)^{1/p}.\]
It remains to replace \(\alpha\) by \(p_{k}/q_{k}\) in the first main factor in the previous formula. For any \(1\leq n<q_{k}\), we have \(|n\alpha-np_{k}/q_{k}|=(n/q_{k})\|q_{k}\alpha\|<1/q_{k}\), and \(np_{k}/q_{k}\) is not an integer. In particular, there is no integer between \(n\alpha\) and \(np_{k}/q_{k}\), so
\[\{n\alpha\}-\left\{\frac{np_{k}}{q_{k}}\right\}=n\alpha-\frac{np_{k}}{q_{k}}= \frac{n}{q_{k}}(-1)^{k}\|q_{k}\alpha\|.\]
Therefore for any \(0\leq N<q_{k}\),
\[S_{N}(\alpha)-S_{N}(p_{k}/q_{k})=\sum_{n=1}^{N}\frac{n}{q_{k}}(-1)^{k}\|q_{k} \alpha\|=O\left(\frac{1}{a_{k+1}}\right). \tag{13}\]
Replacing \(\alpha\) by \(p_{k}/q_{k}\) thus introduces a negligible multiplicative error \(1+O(1/a_{k+1})\).
### 4.3 The matching lemma
Assume now that \(\alpha\in(0,1)\), and recall that we write its continued fraction expansion in the form \(\alpha=[0;a_{1},a_{2},\ldots,a_{L}]\) (if \(\alpha\in\mathbb{Q}\)) or \(\alpha=[0;a_{1},a_{2},\ldots]\) (if \(\alpha\not\in\mathbb{Q}\)), with convergents \(p_{\ell}/q_{\ell}=[0;a_{1},a_{2},\ldots,a_{\ell}]\). Let \(\alpha^{\prime}=T^{2}\alpha\), where \(T^{2}\) is the second iterate of the Gauss map \(T\). Then \(\alpha^{\prime}=[0;a_{3},a_{4},\ldots,a_{L}]\) if \(\alpha\in\mathbb{Q}\), with the convention that \(\alpha^{\prime}=0\) if \(L\leq 2\), and \(\alpha^{\prime}=[0;a_{3},a_{4},\ldots]\) if \(\alpha\not\in\mathbb{Q}\). Let \(q^{\prime}\) denote the denominator of \(\alpha^{\prime}\) in its reduced form if \(\alpha\in\mathbb{Q}\), and let \(q^{\prime}=\infty\) if \(\alpha\not\in\mathbb{Q}\). Let \(p^{\prime}_{\ell}/q^{\prime}_{\ell}=[0;a_{3},a_{4},\ldots,a_{\ell}]\), \(3\leq\ell<L+1\) and \(p^{\prime}_{2}=0\), \(q^{\prime}_{2}=1\) denote the convergents to \(\alpha^{\prime}\). The Ostrowski expansion of integers \(0\leq N<q^{\prime}\) with respect to \(\alpha^{\prime}\) will be written as \(N=\sum_{\ell=2}^{L-1}b^{\prime}_{\ell}(N)q^{\prime}_{\ell}\). Note that \(0\leq b^{\prime}_{2}(N)<a_{3}\) and \(0\leq b^{\prime}_{\ell}(N)\leq a_{\ell+1}\), \(3\leq\ell<L\).
Given an integer \(0\leq N<q\) with Ostrowski expansion \(N=\sum_{\ell=0}^{L-1}b_{\ell}(N)q_{\ell}\) with respect to \(\alpha\) such that \(b_{2}(N)<a_{3}\), define \(N^{\prime}=\sum_{\ell=2}^{L-1}b_{\ell}(N)q^{\prime}_{\ell}\). Note that this is a legitimate Ostrowski expansion with respect to \(\alpha^{\prime}\), that is, \(b^{\prime}_{\ell}(N^{\prime})=b_{\ell}(N)\) for all \(2\leq\ell<L\). The map \(N\mapsto N^{\prime}\), from \(\{0\leq N<q\,:\,b_{2}(N)<a_{3}\}\) to \([0,q^{\prime})\) is surjective but not injective (as it forgets the digits \(b_{0}(N)\) and \(b_{1}(N)\)), and provides a natural way to match certain terms of the sum \(\sum_{0\leq N<q}e^{pS_{N}(\alpha)}\) to terms of the sum \(\sum_{0\leq N<q^{\prime}}e^{pS_{N}(\alpha^{\prime})}\). By comparing \(S_{N}(\alpha)\) to \(S_{N^{\prime}}(\alpha^{\prime})\), the following "matching lemma" is a key ingredient in the study of the function \(h_{p}\).
**Lemma 14**.: _Let \(0\leq N<q\) be an integer with Ostrowski expansion \(N=\sum_{\ell=0}^{L-1}b_{\ell}(N)q_{\ell}\) with respect to \(\alpha\) such that \(b_{2}(N)<a_{3}\). Then_
\[S_{N}(\alpha)-S_{N^{\prime}}(\alpha^{\prime})=\sum_{\ell=0}^{1}(-1)^{\ell+1}b_{ \ell}(N)\left(\frac{1-b_{\ell}(N)q_{\ell}\|q_{\ell}\alpha\|}{2}-\|q_{\ell} \alpha\|\sum_{j=0}^{\ell-1}b_{j}(N)q_{j}-\frac{\|q_{\ell}\alpha\|}{2}\right)+ O(1).\]
_If in addition \(b_{0}(N)=\cdots=b_{k-1}(N)=0\) with some \(k\geq 2\), then_
\[S_{N}(\alpha)-S_{N^{\prime}}(\alpha^{\prime})=a_{1}\frac{(-1)^{k+1}p_{k}b_{k}( N)/a_{k+1}-(b_{k}(N)/a_{k+1})^{2}}{2q_{k}q_{k}^{\prime}}+O\left(\frac{1}{q_{k+1} ^{\prime}}\right).\]
_The implied constants are universal._
**Proof.** Since \(p_{\ell}^{\prime},q_{\ell}^{\prime}\) satisfy the same second order linear recursion of which \(p_{\ell},q_{\ell}\) are linearly independent solutions, they are linear combinations of \(p_{\ell},q_{\ell}\). Indeed, one readily checks that
\[p_{\ell}^{\prime}=(a_{1}a_{2}+1)p_{\ell}-a_{2}q_{\ell}\quad\text{and}\quad q_ {\ell}^{\prime}=q_{\ell}-a_{1}p_{\ell}\quad\text{for all $2\leq\ell<L+1$}. \tag{14}\]
Now let \(2\leq j\leq\ell<L\) be integers. We claim that if either \(\ell\geq 3\), or \(\ell=2\) and \(a_{3}>1\), then
\[\left|q_{j}\|q_{\ell}\alpha\|-q_{j}^{\prime}\|q_{\ell}^{\prime}\alpha^{\prime }\|\right|\leq\frac{2a_{1}}{q_{j+1}q_{\ell+1}^{\prime}}. \tag{15}\]
Set \(R=[a_{\ell+1};a_{\ell+2},\ldots,a_{L}]\) resp. \(R=[a_{\ell+1};a_{\ell+2},\ldots]\) if \(\alpha\in\mathbb{Q}\) resp. \(\alpha\not\in\mathbb{Q}\). A classical identity of continued fractions states that \(\|q_{\ell}\alpha\|=1/(Rq_{\ell}+q_{\ell-1})\) and \(\|q_{\ell}^{\prime}\alpha^{\prime}\|=1/(Rq_{\ell}^{\prime}+q_{\ell-1}^{\prime})\). Formula (14) thus leads to
\[q_{j}\|q_{\ell}\alpha\|-q_{j}^{\prime}\|q_{\ell}^{\prime}\alpha^{\prime}\|=a_ {1}\frac{Rq_{j}q_{\ell}\left(\frac{p_{j}}{q_{j}}-\frac{p_{\ell}}{q_{\ell}} \right)+q_{j}q_{\ell-1}\left(\frac{p_{j}}{q_{j}}-\frac{p_{\ell-1}}{q_{\ell-1}} \right)}{(Rq_{\ell}+q_{\ell-1})(Rq_{\ell}^{\prime}+q_{\ell-1}^{\prime})}.\]
Observe that \(R\geq a_{\ell+1}\), and recall the identity \(|q_{\ell}p_{\ell-1}-q_{\ell-1}p_{\ell}|=1\). If \(j=\ell\), we thus have
\[\left|q_{\ell}\|q_{\ell}\alpha\|-q_{\ell}^{\prime}\|q_{\ell}^{\prime}\alpha^{\prime}\|\right|=a_{1}\frac{1}{(Rq_{\ell}+q_{\ell-1})(Rq_{\ell}^{\prime}+q_{\ell-1}^{\prime})}\leq\frac{a_{1}}{q_{\ell+1}q_{\ell+1}^{\prime}},\]
as claimed. If \(j=\ell-1\), then
\[\left|q_{\ell-1}\|q_{\ell}\alpha\|-q_{\ell-1}^{\prime}\|q_{\ell}^{\prime}\alpha^{\prime}\|\right|=a_{1}\frac{R}{(Rq_{\ell}+q_{\ell-1})(Rq_{\ell}^{\prime}+q_{\ell-1}^{\prime})}\leq\frac{a_{1}}{q_{\ell}q_{\ell+1}^{\prime}},\]
as claimed. If \(j\leq\ell-2\), we can use \(|p_{j}/q_{j}-p_{\ell}/q_{\ell}|\leq 2|\alpha-p_{j}/q_{j}|\) and \(|p_{j}/q_{j}-p_{\ell-1}/q_{\ell-1}|\leq 2|\alpha-p_{j}/q_{j}|\) to deduce
\[\left|q_{j}\|q_{\ell}\alpha\|-q_{j}^{\prime}\|q_{\ell}^{\prime}\alpha^{\prime}\|\right|\leq a_{1}\frac{(Rq_{j}q_{\ell}+q_{j}q_{\ell-1})2\left|\alpha-\frac{p_{j}}{q_{j}}\right|}{(Rq_{\ell}+q_{\ell-1})(Rq_{\ell}^{\prime}+q_{\ell-1}^{\prime})}=\frac{2a_{1}\|q_{j}\alpha\|}{Rq_{\ell}^{\prime}+q_{\ell-1}^{\prime}}\leq\frac{2a_{1}}{q_{j+1}q_{\ell+1}^{\prime}},\]
as claimed. This finishes the proof of (15).
We now prove the lemma. Since \(b_{\ell}^{\prime}(N^{\prime})=b_{\ell}(N)\) for all \(2\leq\ell<L\), Ostrowski's explicit formula in Lemma 9 gives
\[S_{N^{\prime}}(\alpha^{\prime})=\sum_{\ell=2}^{L-1}(-1)^{\ell+1}b_{\ell}(N)\left(\frac{1-b_{\ell}(N)q_{\ell}^{\prime}\|q_{\ell}^{\prime}\alpha^{\prime}\|}{2}-\|q_{\ell}^{\prime}\alpha^{\prime}\|\sum_{j=2}^{\ell-1}b_{j}(N)q_{j}^{\prime}-\frac{\|q_{\ell}^{\prime}\alpha^{\prime}\|}{2}\right).\]
Consequently,
\[S_{N}(\alpha)-S_{N^{\prime}}(\alpha^{\prime})= \sum_{\ell=0}^{1}(-1)^{\ell+1}b_{\ell}(N)\left(\frac{1-b_{\ell}(N)q _{\ell}\|q_{\ell}\alpha\|}{2}-\|q_{\ell}\alpha\|\sum_{j=0}^{\ell-1}b_{j}(N)q_{j }-\frac{\|q_{\ell}\alpha\|}{2}\right)\] \[+\sum_{\ell=2}^{L-1}(-1)^{\ell+1}b_{\ell}(N)\Bigg{(}\frac{b_{\ell }(N)(q_{\ell}^{\prime}\|q_{\ell}^{\prime}\alpha^{\prime}\|-q_{\ell}\|q_{\ell} \alpha\|)}{2}-\|q_{\ell}\alpha\|\sum_{j=0}^{1}b_{j}(N)q_{j}\] \[+\sum_{j=2}^{\ell-1}b_{j}(N)\left(q_{j}^{\prime}\|q_{\ell}^{ \prime}\alpha^{\prime}\|-q_{j}\|q_{\ell}\alpha\|\right)+\frac{\|q_{\ell}^{ \prime}\alpha^{\prime}\|-\|q_{\ell}\alpha\|}{2}\Bigg{)}.\]
By the estimate (15) and the fact that \(q_{\ell+1}\geq q_{2}q_{\ell+1}^{\prime}\) (which can be seen e.g. by induction), the absolute value of the sum over \(2\leq\ell<L\) in the previous formula is at most
\[\sum_{\ell=2}^{L-1}a_{\ell+1}\left(\frac{a_{\ell+1}a_{1}}{q_{\ell+1}q_{\ell+1 }^{\prime}}+\frac{q_{2}}{q_{\ell+1}}+\sum_{j=2}^{\ell-1}a_{j+1}\frac{2a_{1}}{ q_{j+1}q_{\ell+1}^{\prime}}+\frac{1}{q_{\ell+1}^{\prime}}+\frac{1}{q_{\ell+1}} \right)\ll\sum_{\ell=2}^{L-1}\frac{1}{q_{\ell}^{\prime}}\ll 1.\]
This finishes the proof of the first claim.
If \(b_{0}(N)=\dots=b_{k-1}(N)=0\) with some \(k\geq 2\), then the terms \(\ell\leq k-1\) are all zero, and the contribution of the terms \(k+1\leq\ell<L\) is similarly seen to be \(\sum_{\ell=k+1}^{L-1}1/q_{\ell}^{\prime}\ll 1/q_{k+1}^{\prime}\). Finally, the \(\ell=k\) term is
\[(-1)^{k+1}b_{k}(N)\left(\frac{b_{k}(N)(q_{k}^{\prime}\|q_{k}^{\prime}\alpha^{ \prime}\|-q_{k}\|q_{k}\alpha\|)}{2}+\frac{\|q_{k}^{\prime}\alpha^{\prime}\|- \|q_{k}\alpha\|}{2}\right).\]
As we have seen, with \(R=[a_{k+1};a_{k+2},\dots,a_{L}]\) resp. \(R=[a_{k+1};a_{k+2},\dots]\) here
\[q_{k}^{\prime}\|q_{k}^{\prime}\alpha^{\prime}\|-q_{k}\|q_{k}\alpha\|=\frac{(-1 )^{k}a_{1}}{(Rq_{k}+q_{k-1})(Rq_{k}^{\prime}+q_{k-1}^{\prime})}=\frac{(-1)^{k }a_{1}}{a_{k+1}^{2}q_{k}q_{k}^{\prime}}+O\left(\frac{a_{1}}{a_{k+1}^{3}q_{k}q_{ k}^{\prime}}\right)\]
and using (14),
\[\|q_{k}^{\prime}\alpha^{\prime}\|-\|q_{k}\alpha\|=\frac{1}{Rq_{k}^{\prime}+q_ {k-1}^{\prime}}-\frac{1}{Rq_{k}+q_{k-1}}=\frac{a_{1}p_{k}}{a_{k+1}q_{k}q_{k}^{ \prime}}+O\left(\frac{a_{1}p_{k}}{a_{k+1}^{2}q_{k}q_{k}^{\prime}}\right),\]
and the second claim follows.
### 4.4 Asymptotics of \(h_{p}\)
We now prove Theorem 2 on the asymptotics of \(h_{p}\) after a preparatory lemma.
**Lemma 15**.: _For any \(0<p<\infty\) and any integer \(a\geq 1\),_
\[\sum_{b=0}^{a-1}e^{\frac{pa}{2}\cdot\frac{b}{a}\left(1-\frac{b}{a}\right)}= \exp\left(\frac{pa}{8}+\frac{1}{2}\log a+O\left(\max\left\{p,\log\frac{1}{p} \right\}\right)\right) \tag{16}\]
_and_
\[\sum_{b=0}^{a-1}e^{-\frac{pa}{2}\cdot\frac{b}{a}\left(1-\frac{b}{a}\right)}= \exp\left(O\left(\max\left\{p,\log\frac{1}{p}\right\}\right)\right) \tag{17}\]
_with universal implied constants._
**Proof.** We start with (16). Each term in the sum is at most \(e^{pa/8}\), thus comparing the sum to the corresponding integral leads to the upper bound
\[\sum_{b=0}^{a-1}e^{\frac{pa}{2}\cdot\frac{b}{a}\left(1-\frac{b}{a}\right)}\leq a \int_{0}^{1}e^{\frac{pa}{2}x(1-x)}\,\mathrm{d}x+e^{\frac{pa}{8}}\leq ae^{\frac{ pa}{8}}\int_{-\infty}^{\infty}e^{-\frac{pa}{2}(x-1/2)^{2}}\,\mathrm{d}x+e^{\frac{pa}{8 }}=\left(\sqrt{\frac{2\pi a}{p}}+1\right)e^{\frac{pa}{8}}.\]
Here
\[\log\left(\sqrt{\frac{2\pi a}{p}}+1\right)\leq\frac{1}{2}\log a+O\left(\max \left\{p,\log\frac{1}{p}\right\}\right),\]
and the \(\leq\) part of (16) follows. Since \(e^{\frac{pa}{2}x(1-x)}\) is increasing on \([0,1/2]\), comparing the sum to the corresponding integral leads to the lower bound
\[\sum_{b=1}^{\lfloor a/2\rfloor}e^{\frac{pa}{2}\cdot\frac{b}{a}\left(1-\frac{b}{a}\right)}\geq a\int_{0}^{\frac{\lfloor a/2\rfloor}{a}}e^{\frac{pa}{2}x(1-x)}\,\mathrm{d}x\geq ae^{\frac{pa}{8}}\int_{0}^{\frac{a-1}{2a}}e^{-\frac{pa}{2}\left(x-1/2\right)^{2}}\,\mathrm{d}x=\sqrt{\frac{a}{p}}e^{\frac{pa}{8}}\int_{-\frac{\sqrt{pa}}{2}}^{-\frac{\sqrt{p}}{2\sqrt{a}}}e^{-x^{2}/2}\,\mathrm{d}x.\]
If \(pa\geq 100\) and \(p\leq 64a\), then \(-\sqrt{pa}/2\leq-5\) and \(-\sqrt{p}/(2\sqrt{a})\geq-4\), thus the previous formula yields
\[\sum_{b=0}^{a-1}e^{\frac{pa}{2}\cdot\frac{b}{a}\left(1-\frac{b}{a}\right)} \gg\sqrt{\frac{a}{p}}e^{\frac{pa}{8}},\]
which suffices for the \(\geq\) part of (16). If \(pa<100\), then simply using the fact that each term is at least \(1\) yields
\[\sum_{b=0}^{a-1}e^{\frac{pa}{2}\cdot\frac{b}{a}\left(1-\frac{b}{a}\right)} \geq a\geq\exp\left(\frac{pa}{8}+\frac{1}{2}\log a-\frac{100}{8}\right),\]
which again suffices for the \(\geq\) part of (16). If \(p>64a\), then it is enough to keep the \(b=\lfloor a/2\rfloor\) term in the sum, yielding
\[e^{\frac{pa}{2}\cdot\frac{\lfloor a/2\rfloor}{a}\left(1-\frac{\lfloor a/2 \rfloor}{a}\right)}\geq e^{\frac{pa}{2}\cdot\frac{a-1}{2a}\left(1-\frac{a-1}{ 2a}\right)}=e^{\frac{pa}{8}-\frac{p}{8a}}\geq\exp\left(\frac{pa}{8}+\frac{1}{2 }\log a-\frac{1}{2}\log\frac{p}{64}-\frac{p}{8}\right),\]
which also suffices for the \(\geq\) part of (16). This finishes the proof of (16).
We now prove (17). Keeping only the term \(b=0\) gives the trivial lower bound \(1\). Since each term is at most \(1\), comparing the sum to the corresponding integral leads to the upper bound
\[1\leq\sum_{b=0}^{a-1}e^{-\frac{pa}{2}\cdot\frac{b}{a}\left(1- \frac{b}{a}\right)}\leq a\int_{0}^{1}e^{-\frac{pa}{2}x(1-x)}\,\mathrm{d}x+1 =ae^{-\frac{pa}{8}}\int_{-1/2}^{1/2}e^{\frac{pa}{2}x^{2}}\, \mathrm{d}x+1\] \[=\frac{8}{p}\sqrt{\frac{pa}{8}}e^{-\frac{pa}{8}}\int_{0}^{\sqrt{ \frac{pa}{8}}}e^{x^{2}}\,\mathrm{d}x+1\] \[\ll\frac{1}{p}+1.\]
In the last step we used the fact that \(\sup_{y\geq 0}ye^{-y^{2}}\int_{0}^{y}e^{x^{2}}\,\mathrm{d}x<\infty\). This establishes (17).
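A quick numerical illustration of (16) (a sketch with arbitrary parameter choices, not part of the proof): the difference \(\log\sum_{b=0}^{a-1}e^{\frac{pa}{2}\cdot\frac{b}{a}(1-\frac{b}{a})}-\frac{pa}{8}-\frac{1}{2}\log a\) remains bounded as \(a\) grows, with a bound depending on \(p\) as in the lemma.

```python
# Numerical illustration of (16): the centered log-sum stays bounded in a.
import math

def log_sum(p, a):
    exps = [(p * a / 2) * (b / a) * (1 - b / a) for b in range(a)]
    m = max(exps)  # log-sum-exp to avoid overflow when p*a is large
    return m + math.log(sum(math.exp(e - m) for e in exps))

for p in (0.1, 1.0, 5.0):
    for a in (10, 100, 1000, 10000):
        err = log_sum(p, a) - p * a / 8 - 0.5 * math.log(a)
        print(f"p = {p:<4} a = {a:<6} error term = {err:+.3f}")
```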
**Proof of Theorem 2.** It will be enough to prove the theorem for finite \(p\). The claim for \(p=\pm\infty\) then follows from taking the limit as \(p\to\pm\infty\).
Let \(r=[0;a_{1},a_{2},\ldots,a_{L}]\) be rational with denominator \(q\) and convergents \(p_{\ell}/q_{\ell}=[0;a_{1},a_{2},\ldots,a_{\ell}]\). Let \(r^{\prime}=T^{2}r=[0;a_{3},a_{4},\ldots,a_{L}]\) with denominator \(q^{\prime}\) and convergents \(p_{\ell}^{\prime}/q_{\ell}^{\prime}=[0;a_{3},a_{4},\ldots,a_{\ell}]\), \(3\leq\ell\leq L\), and \(p_{2}^{\prime}=0\), \(q_{2}^{\prime}=1\).
Fix integers \(0\leq b_{0}<a_{1}\) and \(0\leq b_{1}\leq a_{2}\) such that \(b_{1}=a_{2}\) implies \(b_{0}=0\). Observe that the map \(N\mapsto N-q_{2}\) is an injection from
\[\left\{0\leq N<q\,:\,b_{0}(N)=b_{0},\ b_{1}(N)=0,\ b_{2}(N)=a_{3}\right\}\]
to
\[\left\{0\leq N<q\,:\,b_{0}(N)=b_{0},\ b_{1}(N)=0,\ b_{2}(N)=a_{3}-1\right\}.\]
Two applications of Lemma 10 (to \(N\) and \(N-q_{2}\), with \(k=1\)) show that \(S_{N}(r)=S_{N-q_{2}}(r)+O(1)\), therefore
\[\sum_{\begin{subarray}{c}0\leq N<q\\ b_{0}(N)=b_{0},\ b_{1}(N)=0,\ b_{2}(N)=a_{3}\end{subarray}}e^{pS_{N}(r)}\leq \exp(O(|p|))\sum_{\begin{subarray}{c}0\leq N<q\\ b_{0}(N)=b_{0},\ b_{1}(N)=0,\ b_{2}(N)=a_{3}-1\end{subarray}}e^{pS_{N}(r)}.\]
In particular,
\[\sum_{\begin{subarray}{c}0\leq N<q\\ b_{0}(N)=b_{0},\ b_{1}(N)=b_{1}\end{subarray}}e^{pS_{N}(r)}=\exp\left(O\left( \max\{|p|,1\}\right)\right)\sum_{\begin{subarray}{c}0\leq N<q\\ b_{0}(N)=b_{0},\ b_{1}(N)=b_{1},\ b_{2}(N)<a_{3}\end{subarray}}e^{pS_{N}(r)},\]
the formula being trivial for \(b_{1}\neq 0\), as in that case the two sums are identical.
The "matching" map \(N\to N^{\prime}\) introduced in Section 4.3 is a bijection
\[\left\{0\leq N<q\,:\,b_{0}(N)=b_{0},\ b_{1}(N)=b_{1},\ b_{2}(N)<a_{3}\right\} \rightarrow[0,q^{\prime}),\]
and by Lemma 14,
\[S_{N}(r)-S_{N^{\prime}}(r^{\prime})= -b_{0}\left(\frac{1-b_{0}q_{0}\|q_{0}r\|}{2}-\frac{\|q_{0}r\|}{2} \right)+b_{1}\left(\frac{1-b_{1}q_{1}\|q_{1}r\|}{2}-\|q_{1}r\|b_{0}q_{0}-\frac {\|q_{1}r\|}{2}\right)+O(1)\] \[= -b_{0}\frac{1-b_{0}/a_{1}}{2}+b_{1}\frac{1-b_{1}/a_{2}}{2}+O(1).\]
Consequently,
\[\sum_{\begin{subarray}{c}0\leq N<q\\ b_{0}(N)=b_{0},\ b_{1}(N)=b_{1}\end{subarray}}e^{pS_{N}(r)}=\exp\left(-pb_{0} \frac{1-b_{0}/a_{1}}{2}+pb_{1}\frac{1-b_{1}/a_{2}}{2}+O(\max\{|p|,1\})\right) \sum_{0\leq N<q^{\prime}}e^{pS_{N}(r^{\prime})}.\]
We now sum over all possible values of \(b_{0},b_{1}\), and apply Lemma 15 to deduce
\[\sum_{0\leq N<q}e^{pS_{N}(r)} =\left(1+\sum_{b_{0}=0}^{a_{1}-1}e^{-pb_{0}\frac{1-b_{0}/a_{1}}{2 }}\sum_{b_{1}=0}^{a_{2}-1}e^{pb_{1}\frac{1-b_{1}/a_{2}}{2}}\right)\exp(O(\max \{|p|,1\}))\sum_{0\leq N<q^{\prime}}e^{pS_{N}(r^{\prime})}\] \[=\exp\left(\frac{|p|a_{\varepsilon_{p}}}{8}+\frac{1}{2}\log a_{ \varepsilon_{p}}+O\left(\max\left\{|p|,\log\frac{1}{|p|}\right\}\right) \right)\sum_{0\leq N<q^{\prime}}e^{pS_{N}(r^{\prime})}.\]
By the definition of \(h_{p}\), this means that
\[h_{p}(r)=\operatorname{sgn}(p)\frac{a_{\varepsilon_{p}}}{8}+\frac{1}{2p}\log a _{\varepsilon_{p}}+O\left(\max\left\{1,\frac{1}{|p|}\log\frac{1}{|p|}\right\} \right),\]
which is an equivalent form of the claim.
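A numerical sketch of Theorem 2 (with illustrative parameters; it uses the formulas \(J_{p}(r)=(\sum_{0\leq N<q}e^{pS_{N}(r)})^{1/p}\) and \(h_{p}(r)=\log(J_{p}(r)/J_{p}(T^{2}r))\) appearing in this section, together with the assumed definition \(S_{N}(r)=\sum_{n=1}^{N}(\{nr\}-\frac{1}{2})\)): for \(p=1\) we have \(\varepsilon_{p}=2\), so \(h_{p}(r)\) should equal \(a_{2}/8+\frac{1}{2}\log a_{2}\) up to a bounded error.

```python
# Numerical sketch of Theorem 2 at p = 1 for a fraction with a large a_2.
from fractions import Fraction
import math

def cf_to_frac(a):
    x = Fraction(0)
    for ai in reversed(a):
        x = 1 / (ai + x)
    return x

def log_J(p, r):
    # log J_p(r) = (1/p) log sum_{0 <= N < q} e^{p S_N(r)}; S_N built incrementally
    S, exps = Fraction(0), [0.0]              # N = 0 contributes e^{p * 0}
    for n in range(1, r.denominator):
        S += n * r - int(n * r) - Fraction(1, 2)
        exps.append(p * float(S))
    m = max(exps)
    return (m + math.log(sum(math.exp(e - m) for e in exps))) / p

p, a = 1.0, [2, 40, 3, 2, 2]                  # r = [0; 2, 40, 3, 2, 2], a_2 = 40
r, r2 = cf_to_frac(a), cf_to_frac(a[2:])      # r2 = T^2 r = [0; 3, 2, 2]
h = log_J(p, r) - log_J(p, r2)
print(h, a[1] / 8 + 0.5 * math.log(a[1]))     # agree up to the O(1) error of Theorem 2
```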
### 4.5 Continuity of \(h_{p}\) at irrationals
We now prove Theorem 3 in a quantitative form, establishing an estimate for the modulus of continuity as well. Fix an irrational \(\alpha\in(0,1)\) with continued fraction expansion \(\alpha=[0;a_{1},a_{2},\ldots]\) and convergents \(p_{k}/q_{k}=[0;a_{1},a_{2},\ldots,a_{k}]\). Let
\[I_{k+1}=\{[0;c_{1},c_{2},\ldots]\,:\,c_{j}=a_{j}\text{ for all }1\leq j\leq k+1\}\]
denote the set of real numbers in \((0,1)\) whose first \(k+1\) partial quotients are identical to those of \(\alpha\). Recall that \(I_{k+1}\subset(0,1)\) is an interval with rational endpoints; in particular, \(\alpha\in\operatorname{int}I_{k+1}\).
**Theorem 16**.: _Let \(-\infty\leq p\leq\infty\), \(p\neq 0\), and let \(k\geq 2\) be an integer such that \(a_{k+1}\geq A\max\{1,\frac{1}{|p|}\log\frac{1}{|p|}\}\) with a large universal constant \(A>1\), and \(k+1\equiv\varepsilon_{p}\pmod{2}\). Then_
\[\sup_{r\in I_{k+1}\cap\mathbb{Q}}h_{p}(r)-\inf_{r\in I_{k+1}\cap\mathbb{Q}}h_{ p}(r)\ll\frac{a_{1}a_{2}}{q_{k}}+\sqrt{\frac{\log a_{k+1}}{\min\{1,|p|\}a_{k+1}}}\]
_with a universal implied constant._
In particular, if \(\sup_{k\in\mathbb{N}}a_{2k+\varepsilon_{p}}=\infty\), then
\[\liminf_{\begin{subarray}{c}k\to\infty\\ k+1\equiv\varepsilon_{p}\pmod{2}\end{subarray}}\left(\frac{a_{1}a_{2}}{q_{k} }+\sqrt{\frac{\log a_{k+1}}{\min\{1,|p|\}a_{k+1}}}\right)=0,\]
and consequently \(\lim_{r\to\alpha}h_{p}(r)\) exists and is finite by the Cauchy criterion. This proves Theorem 3.
**Proof of Theorem 16.** We only give a detailed proof for finite \(p\), as the proof for \(p=\pm\infty\) is entirely analogous. Let \(\alpha^{\prime}=T^{2}\alpha=[0;a_{3},a_{4},\ldots]\), and let \(p^{\prime}_{\ell}/q^{\prime}_{\ell}=[0;a_{3},a_{4},\ldots,a_{\ell}]\), \(\ell\geq 3\) and \(p^{\prime}_{2}=0\), \(q^{\prime}_{2}=1\) denote its convergents.
Let \(r\in I_{k+1}\cap\mathbb{Q}\) be arbitrary with denominator \(q\), continued fraction expansion \(r=[0;c_{1},c_{2},\ldots,c_{L}]\) and convergents \(\bar{p}_{\ell}/\bar{q}_{\ell}=[0;c_{1},c_{2},\ldots,c_{\ell}]\). Let \(r^{\prime}=T^{2}r=[0;c_{3},c_{4},\ldots,c_{L}]\) with denominator \(q^{\prime}\), and convergents \(\bar{p}^{\prime}_{\ell}/\bar{q}^{\prime}_{\ell}=[0;c_{3},c_{4},\ldots,c_{\ell}]\), \(3\leq\ell\leq L\) and \(\bar{p}^{\prime}_{2}=0\), \(\bar{q}^{\prime}_{2}=1\). By construction, we have \(\bar{p}_{\ell}/\bar{q}_{\ell}=p_{\ell}/q_{\ell}\) for all \(0\leq\ell\leq k+1\), and \(\bar{p}^{\prime}_{\ell}/\bar{q}^{\prime}_{\ell}=p^{\prime}_{\ell}/q^{\prime}_{\ell}\) for all \(2\leq\ell\leq k+1\).
An application of Lemma 12 to \(r\) resp. \(r^{\prime}\) with \(K=L\) yields
\[\left(\sum_{0\leq N<q}e^{pS_{N}(r)}\right)^{1/p}= \left(1+O\left(\sqrt{\frac{\log a_{k+1}}{\min\{1,|p|\}a_{k+1}}}\right)\right)\left(\sum_{0\leq N<q_{k}}e^{p(S_{N}(p_{k}/q_{k})+(-1)^{k}N/(2q_{k}))}\right)^{1/p}\] \[\times\left(\sum_{\begin{subarray}{c}0\leq N<q\\ b_{0}(N)=\cdots=b_{k-1}(N)=0\\ |b_{k}(N)-a_{k+1}/2|\leq\max\{10,10/\sqrt{|p|}\}\sqrt{a_{k+1}\log a_{k+1}}\end{subarray}}e^{pS_{N}(r)}\right)^{1/p}\]
resp.
\[\left(\sum_{0\leq N<q^{\prime}}e^{pS_{N}(r^{\prime})}\right)^{1/p}= \left(1+O\left(\sqrt{\frac{\log a_{k+1}}{\min\{1,|p|\}a_{k+1}}} \right)\right)\Bigg{(}\sum_{0\leq N<q^{\prime}_{k}}e^{p(S_{N}(p^{\prime}_{k}/q ^{\prime}_{k})+(-1)^{k}N/(2q^{\prime}_{k}))}\Bigg{)}^{1/p}\] \[\times\Bigg{(}\sum_{\begin{subarray}{c}0\leq N<q^{\prime}\\ b^{\prime}_{2}(N)=\ldots=b^{\prime}_{k-1}(N)=0\\ |b^{\prime}_{k}(N)-a_{k+1}/2|\leq\max\{10,10/\sqrt{|p|}\}\sqrt{a_{k+1}\log a_{k+ 1}}\end{subarray}}e^{pS_{N}(r^{\prime})}\Bigg{)}^{1/p}.\]
Here \(b_{\ell}(N)\) resp. \(b^{\prime}_{\ell}(N)\) denote the digits in the Ostrowski expansion with respect to \(r\) resp. \(r^{\prime}\). Consequently,
\[h_{p}(r)=\log\frac{J_{p}(r)}{J_{p}(r^{\prime})}=Z_{p,k}(\alpha)+\frac{1}{p}\log\frac{\sum_{\begin{subarray}{c}0\leq N<q\\ b_{0}(N)=\cdots=b_{k-1}(N)=0\\ |b_{k}(N)-a_{k+1}/2|\leq\max\{10,10/\sqrt{|p|}\}\sqrt{a_{k+1}\log a_{k+1}}\end{subarray}}e^{pS_{N}(r)}}{\sum_{\begin{subarray}{c}0\leq N<q^{\prime}\\ b_{2}^{\prime}(N)=\cdots=b_{k-1}^{\prime}(N)=0\\ |b_{k}^{\prime}(N)-a_{k+1}/2|\leq\max\{10,10/\sqrt{|p|}\}\sqrt{a_{k+1}\log a_{k+1}}\end{subarray}}e^{pS_{N}(r^{\prime})}}+O\left(\sqrt{\frac{\log a_{k+1}}{\min\{1,|p|\}a_{k+1}}}\right), \tag{18}\]
with the crucial observation that
\[Z_{p,k}(\alpha):=\frac{1}{p}\log\frac{\sum_{0\leq N<q_{k}}e^{p(S_{N}(p_{k}/q_ {k})+(-1)^{k}N/(2q_{k}))}}{\sum_{0\leq N<q^{\prime}_{k}}e^{p(S_{N}(p^{\prime}_ {k}/q^{\prime}_{k})+(-1)^{k}N/(2q^{\prime}_{k}))}}\]
depends only on \(\alpha\), but not on \(r\).
The "matching" map \(N\mapsto N^{\prime}\) introduced in Section 4.3 is a bijection from the set
\[\left\{0\leq N<q\,:\begin{array}{c}b_{0}(N)=\cdots=b_{k-1}(N)=0,\\ |b_{k}(N)-a_{k+1}/2|\leq\max\{10,10/\sqrt{|p|}\}\sqrt{a_{k+1}\log a_{k+1}}\end{array}\right\}\]
to the set
\[\left\{0\leq N<q^{\prime}\,:\begin{array}{c}b^{\prime}_{2}(N)=\cdots=b^{ \prime}_{k-1}(N)=0,\\ |b^{\prime}_{k}(N)-a_{k+1}/2|\leq\max\{10,10/\sqrt{|p|}\}\sqrt{a_{k+1}\log a_{ k+1}}\end{array}\right\},\]
and by Lemma 14, \(|S_{N}(r)-S_{N^{\prime}}(r^{\prime})|\ll 1/q^{\prime}_{k}\ll a_{1}a_{2}/q_{k}\). Hence
\[\frac{\sum_{\begin{subarray}{c}0\leq N<q\\ b_{0}(N)=\cdots=b_{k-1}(N)=0\\ |b_{k}(N)-a_{k+1}/2|\leq\max\{10,10/\sqrt{|p|}\}\sqrt{a_{k+1}\log a_{k+1}}\end{subarray}}e^{pS_{N}(r)}}{\sum_{\begin{subarray}{c}0\leq N<q^{\prime}\\ b_{2}^{\prime}(N)=\cdots=b_{k-1}^{\prime}(N)=0\\ |b_{k}^{\prime}(N)-a_{k+1}/2|\leq\max\{10,10/\sqrt{|p|}\}\sqrt{a_{k+1}\log a_{k+1}}\end{subarray}}e^{pS_{N}(r^{\prime})}}=\exp\left(O\left(|p|\frac{a_{1}a_{2}}{q_{k}}\right)\right),\]
and (18) leads to
\[h_{p}(r)=Z_{p,k}(\alpha)+O\left(\frac{a_{1}a_{2}}{q_{k}}+\sqrt{\frac{\log a_{ k+1}}{\min\{1,|p|\}a_{k+1}}}\right)\qquad\text{uniformly in }r\in I_{k+1}\cap\mathbb{Q}.\]
This establishes the desired upper bound to the oscillation of \(h_{p}\) on the set \(I_{k+1}\cap\mathbb{Q}\).
### 4.6 One-sided limit of \(h_{p}\) at rationals
**Proof of Theorem 4.** We only give a detailed proof for finite \(p\), as the proof for \(p=\pm\infty\) is entirely analogous.
Fix a reduced rational \(a/q\in(0,1)\). It has exactly two continued fraction expansions, one of even length and one of odd length. Consider thus the expansion \(a/q=[0;a_{1},a_{2},\ldots,a_{s}]\) with odd \(s\geq 3\) if \(p>0\), and even \(s\geq 2\) if \(p<0\), and let \(p_{k}/q_{k}=[0;a_{1},a_{2},\ldots,a_{k}]\) denote its convergents. In particular, \(s+1\equiv\varepsilon_{p}\pmod{2}\). Let \(I(n)\) be the set of all reals of the form \([0;a_{1},a_{2},\ldots,a_{s},m,\ldots]\) with \(m\geq n\). Note that \(I(n)\) is an interval with endpoints \((p_{s}n+p_{s-1})/(q_{s}n+q_{s-1})\) and \(p_{s}/q_{s}=a/q\). The choice of the parity of \(s\) implies that \(I(n)=[a/q-\kappa_{n},a/q)\) is a left-hand neighborhood if \(p>0\), whereas \(I(n)=(a/q,a/q+\kappa_{n}]\) is a right-hand neighborhood if \(p<0\), of length \(\kappa_{n}=1/(q_{s}^{2}n+q_{s-1}q_{s})\). It will thus be enough to prove that \(\sup_{r\in I(n)\cap\mathbb{Q}}|h_{p}(r)-W_{p}(a/q)|\to 0\) as \(n\to\infty\).
Now let \(n>A\max\{1,\frac{1}{|p|}\log\frac{1}{|p|}\}\) with a large universal constant \(A>1\), and let \(r\in I(n)\cap\mathbb{Q}\) be arbitrary. The continued fraction of \(r\) is thus of the form \(r=[0;a_{1},a_{2},\ldots,a_{L}]\) with \(L\geq s+1\geq 3\) and \(a_{s+1}\geq n\). In particular, the convergents \(p_{k}/q_{k}\), \(0\leq k\leq L\) to \(r\) coincide with those to \(a/q\) for \(0\leq k\leq s\). Let \(r^{\prime}=T^{2}r=[0;a_{3},\ldots,a_{L}]\) with convergents \(p^{\prime}_{k}/q^{\prime}_{k}=[0;a_{3},\ldots,a_{k}]\), \(3\leq k\leq L\) and \(p^{\prime}_{2}=0\), \(q^{\prime}_{2}=1\). Then \(a^{\prime}/q^{\prime}=T^{2}(a/q)=[0;a_{3},\ldots,a_{s}]\) has the same convergents for \(2\leq k\leq s\).
Following the steps in the proof of Theorem 16 leading up to (18) (with \(k=s\)), we deduce
\[h_{p}(r)=\frac{1}{p}\log\frac{\sum_{0\leq N<q}e^{p(S_{N}(a/q)-\operatorname{sgn}(p)N/(2q))}}{\sum_{0\leq N<q^{\prime}}e^{p(S_{N}(a^{\prime}/q^{\prime})-\operatorname{sgn}(p)N/(2q^{\prime}))}}+\frac{1}{p}\log\frac{\sum_{\begin{subarray}{c}0\leq N<q_{L}\\ b_{0}(N)=\cdots=b_{s-1}(N)=0\\ |b_{s}(N)-a_{s+1}/2|\leq\max\{10,10/\sqrt{|p|}\}\sqrt{a_{s+1}\log a_{s+1}}\end{subarray}}e^{pS_{N}(r)}}{\sum_{\begin{subarray}{c}0\leq N<q_{L}^{\prime}\\ b_{2}^{\prime}(N)=\cdots=b_{s-1}^{\prime}(N)=0\\ |b_{s}^{\prime}(N)-a_{s+1}/2|\leq\max\{10,10/\sqrt{|p|}\}\sqrt{a_{s+1}\log a_{s+1}}\end{subarray}}e^{pS_{N}(r^{\prime})}}+O\left(\sqrt{\frac{\log n}{\min\{1,|p|\}n}}\right).\]
Here \(b_{\ell}(N)\) resp. \(b^{\prime}_{\ell}(N)\) denote the digits in the Ostrowski expansion with respect to \(r\) resp. \(r^{\prime}\). The first term in the previous formula depends only on \(a/q\) but not on \(r\).
It remains to estimate the second term. The "matching" map \(N\mapsto N^{\prime}\) introduced in Section 4.3 is a bijection from the set
\[\left\{0\leq N<q_{L}\,:\begin{array}{c}b_{0}(N)=\cdots=b_{s-1}(N)=0,\\ |b_{s}(N)-a_{s+1}/2|\leq\max\{10,10/\sqrt{|p|}\}\sqrt{a_{s+1}\log a_{s+1}}\end{array}\right\}\]
to the set
\[\left\{0\leq N<q^{\prime}_{L}\,:\begin{array}{c}b^{\prime}_{2}(N)=\cdots=b^{ \prime}_{s-1}(N)=0,\\ |b^{\prime}_{s}(N)-a_{s+1}/2|\leq\max\{10,10/\sqrt{|p|}\}\sqrt{a_{s+1}\log a_{ s+1}}\end{array}\right\}.\]
By Lemma 14, for all such \(N\),
\[S_{N}(r)-S_{N^{\prime}}(r^{\prime}) =a_{1}\frac{\operatorname{sgn}(p)p_{s}b_{s}(N)/a_{s+1}-(b_{s}(N)/a _{s+1})^{2}}{2q_{s}q^{\prime}_{s}}+O\left(\frac{1}{q^{\prime}_{s+1}}\right)\] \[=\lfloor q/a\rfloor\frac{\operatorname{sgn}(p)a/2-1/4}{2qq^{ \prime}}+O\left(\sqrt{\frac{\log n}{\min\{1,|p|\}n}}\right),\]
consequently
\[\frac{1}{p}\log\frac{\sum_{\begin{subarray}{c}0\leq N<q_{L}\\ b_{0}(N)=\cdots=b_{s-1}(N)=0\\ |b_{s}(N)-a_{s+1}/2|\leq\max\{10,10/\sqrt{|p|}\}\sqrt{a_{s+1}\log a_{s+1}}\end{subarray}}e^{pS_{N}(r)}}{\sum_{\begin{subarray}{c}0\leq N<q_{L}^{\prime}\\ b_{2}^{\prime}(N)=\cdots=b_{s-1}^{\prime}(N)=0\\ |b_{s}^{\prime}(N)-a_{s+1}/2|\leq\max\{10,10/\sqrt{|p|}\}\sqrt{a_{s+1}\log a_{s+1}}\end{subarray}}e^{pS_{N}(r^{\prime})}}=\lfloor q/a\rfloor\frac{\operatorname{sgn}(p)a/2-1/4}{2qq^{\prime}}+O\left(\sqrt{\frac{\log n}{\min\{1,|p|\}n}}\right).\]
Hence
\[h_{p}(r)=W_{p}(a/q)+O\left(\sqrt{\frac{\log n}{\min\{1,|p|\}n}}\right)\qquad \text{uniformly in }r\in I(n),\]
and the desired limit relation follows.
## 5 Quadratic irrationals
Fix a quadratic irrational \(\alpha\) and a parameter \(-\infty\leq p\leq\infty\), \(p\neq 0\). Throughout this section, constants and implied constants may depend on \(\alpha\).
Let us write the continued fraction expansion in the form \(\alpha=[a_{0};a_{1},\ldots,a_{s},\overline{a_{s+1},\ldots,a_{s+m}}]\), where the overline denotes the period. We can always choose the period length \(m\) to be even, although it might not be the shortest possible period. This choice is convenient because \(S_{N}(\alpha)\) is odd in the variable \(\alpha\), cf. the alternating factor \((-1)^{\ell+1}\) in Ostrowski's explicit formula in Lemma 9. Solving the recursions with periodic coefficients gives that for any \(k\geq 0\) and \(1\leq r\leq m\),
\[q_{s+km+r}=E_{r}\eta^{k}+F_{r}\eta^{-k}\qquad\text{and}\qquad\|q_{s+km+r} \alpha\|=G_{r}\eta^{-k} \tag{19}\]
with some explicitly computable constants \(\eta>1\), \(E_{r},G_{r}>0\) and \(F_{r}\in\mathbb{R}\), \(1\leq r\leq m\)[2, Eq. (28)].
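For a concrete instance of (19), take \(\alpha=\sqrt{2}-1=[0;\overline{2,2}]\) with the even period \(m=2\): the denominators \(q_{\ell}\) are the Pell numbers, \(q_{\ell+m}/q_{\ell}\to\eta=(1+\sqrt{2})^{2}\), and \(q_{\ell}\|q_{\ell}\alpha\|\) stays bounded away from \(0\) and \(\infty\). A minimal numerical sketch (truncation depth arbitrary):

```python
# Illustration of (19) for alpha = sqrt(2) - 1 = [0; 2, 2, 2, ...], period m = 2.
import math

alpha = math.sqrt(2) - 1
q = [1, 2]                         # q_0 = 1, q_1 = a_1 = 2
for _ in range(20):
    q.append(2 * q[-1] + q[-2])    # q_{l+1} = 2 q_l + q_{l-1} (Pell numbers)

eta = (1 + math.sqrt(2)) ** 2      # growth factor per period
for k in (5, 10, 15):
    frac = q[k] * alpha % 1
    print(q[k + 2] / q[k], eta, q[k] * min(frac, 1 - frac))
```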
The following lemma states that shifting the digits in the Ostrowski expansion by full periods has a negligible effect.
**Lemma 17**.: _Let \(0\leq N<q_{s+km}\) be an integer with Ostrowski expansion \(N=\sum_{\ell=s}^{s+km-1}b_{\ell}(N)q_{\ell}\). Let \(i\geq 1\) be an integer, and set \(N^{(i)}=\sum_{\ell=s+im}^{s+(i+k)m-1}b_{\ell-im}(N)q_{\ell}\). Then \(|S_{N}(\alpha)-S_{N^{(i)}}(\alpha)|\ll 1\)._
**Proof.** Note that the shift results in a legitimate Ostrowski expansion for \(N^{(i)}\), that is, \(b_{\ell}(N^{(i)})=b_{\ell-im}(N)\) for all \(s+im\leq\ell\leq s+(i+k)m-1\). Applying Ostrowski's explicit formula in Lemma 9 to \(N\) and \(N^{(i)}\) thus yields
\[S_{N}(\alpha)-S_{N^{(i)}}(\alpha)=\sum_{\ell=s}^{s+km-1}(-1)^{ \ell+1}b_{\ell}(N)\bigg{(} \frac{b_{\ell}(N)(q_{\ell+im}\|q_{\ell+im}\alpha\|-q_{\ell}\|q_{ \ell}\alpha\|)}{2}\] \[+\sum_{j=s}^{\ell-1}b_{j}(N)(q_{j+im}\|q_{\ell+im}\alpha\|-q_{j} \|q_{\ell}\alpha\|)\] \[+\frac{\|q_{\ell+im}\alpha\|-\|q_{\ell}\alpha\|}{2}\bigg{)}.\]
Formula (19) shows that here \(q_{j+im}\|q_{\ell+im}\alpha\|-q_{j}\|q_{\ell}\alpha\|=O(\eta^{-(j+\ell)/m})\) for all \(s\leq j\leq\ell\), and the claim follows.
We now show that \(\log J_{p,M}(\alpha)\) with \(M=q_{s+km}\) is approximately additive in \(k\).
**Lemma 18**.: _For any integers \(i,k\geq 1\),_
\[\log J_{p,q_{s+(i+k)m}}(\alpha)=\log J_{p,q_{s+im}}(\alpha)+\log J_{p,q_{s+km}}( \alpha)+O(\max\{1,1/|p|\}).\]
**Proof.** It will be enough to prove the lemma for finite \(p\). The claim for \(p=\pm\infty\) then follows from taking the limit as \(p\to\pm\infty\).
Note that each individual term in Ostrowski's explicit formula in Lemma 9 is \(O(1)\). In particular, \(S_{N}(\alpha)=O(1)\) whenever \(N\) has \(O(1)\) nonzero digits in its Ostrowski expansion. More generally, changing a single Ostrowski digit of \(N\) changes the value of \(S_{N}(\alpha)\) by \(O(1)\).
Let \(c_{k}=\sum_{0\leq N<q_{s+km}}e^{pS_{N}(\alpha)}\), \(k\geq 1\). Observe that the map \([0,q_{s+(k+1)m})\to[0,q_{s+km})\), \(N=\sum_{\ell=0}^{s+(k+1)m-1}b_{\ell}(N)q_{\ell}\to N^{-}=\sum_{\ell=0}^{s+km-1} b_{\ell}(N)q_{\ell}\) has the property that each value is attained \(O(1)\) times. Since \(N^{-}\) is obtained from \(N\) by deleting a single Ostrowski digit, we have \(S_{N^{-}}(\alpha)=S_{N}(\alpha)+O(1)\). Hence for all \(k\geq 1\),
\[c_{k+1}\leq e^{O(|p|)}\sum_{0\leq N<q_{s+(k+1)m}}e^{pS_{N^{-}}(\alpha)}\leq e^ {O(\max\{|p|,1\})}c_{k}. \tag{20}\]
Now fix \(i,k\geq 1\). Let \(0\leq N^{\prime}<q_{s+im}\) and \(0\leq N^{\prime\prime}<q_{s+km}\) be integers with Ostrowski expansions \(N^{\prime}=\sum_{\ell=0}^{s+im-1}b_{\ell}(N^{\prime})q_{\ell}\) and \(N^{\prime\prime}=\sum_{\ell=0}^{s+km-1}b_{\ell}(N^{\prime\prime})q_{\ell}\). Define \(0\leq N<q_{s+(i+k+1)m}\), \(N=\sum_{\ell=0}^{s+(i+k+1)m-1}b_{\ell}(N)q_{\ell}\) as
\[b_{\ell}(N)=\left\{\begin{array}{ll}b_{\ell}(N^{\prime})&\mbox{if $0\leq\ell\leq s +im-1$,}\\ 0&\mbox{if $s+im\leq\ell\leq s+(i+1)m-1$,}\\ b_{\ell-(i+1)m}(N^{\prime\prime})&\mbox{if $s+(i+1)m\leq\ell\leq s+(i+k+1)m-1$.} \end{array}\right.\]
Note that the block of zeroes in the middle ensures that the extra rule of Ostrowski expansions \((b_{\ell+1}(N)=a_{\ell+2}\) implies \(b_{\ell}(N)=0)\) is satisfied. The map \([0,q_{s+im})\times[0,q_{s+km})\to[0,q_{s+(i+k+1)m})\), \((N^{\prime},N^{\prime\prime})\mapsto N\) is injective. Deleting the first \(s\) Ostrowski digits of \(N^{\prime\prime}\), and then applying Lemmas 17 and 13 shows that \(S_{N}(\alpha)=S_{N^{\prime}}(\alpha)+S_{N^{\prime\prime}}(\alpha)+O(1)\). Using (20) as well thus leads to
\[c_{i}c_{k}=\sum_{\begin{subarray}{c}0\leq N^{\prime}<q_{s+im}\\ 0\leq N^{\prime\prime}<q_{s+km}\end{subarray}}e^{p(S_{N^{\prime}}(\alpha)+S_{N ^{\prime\prime}}(\alpha))}\leq e^{O(|p|)}c_{i+k+1}\leq e^{O(\max\{|p|,1\})}c_{ i+k}. \tag{21}\]
Next, for any integer \(0\leq N<q_{s+(i+k)m}\) with Ostrowski expansion \(N=\sum_{\ell=0}^{s+(i+k)m-1}b_{\ell}(N)q_{\ell}\) define \(N_{1}=\sum_{\ell=0}^{s+im-1}b_{\ell}(N)q_{\ell}\) and \(N_{2}=\sum_{\ell=s}^{s+km-1}b_{\ell+im}(N)q_{\ell}\). Note that, with the notation of Lemma 17, \(N=N_{1}+N_{2}^{(i)}\), hence Lemmas 13 and 17 give \(S_{N}(\alpha)=S_{N_{1}}(\alpha)+S_{N_{2}}(\alpha)+O(1)\). Observe that the map \([0,q_{s+(i+k)m})\to[0,q_{s+im})\times[0,q_{s+km})\), \(N\mapsto(N_{1},N_{2})\) is injective, thus
\[c_{i+k}\leq e^{O(|p|)}\sum_{\begin{subarray}{c}0\leq N_{1}<q_{s+im}\\ 0\leq N_{2}<q_{s+km}\end{subarray}}e^{p(S_{N_{1}}(\alpha)+S_{N_{2}}(\alpha))}= e^{O(|p|)}c_{i}c_{k}.\]
The previous formula together with (21) show that \(c_{i+k}=e^{O(\max\{|p|,1\})}c_{i}c_{k}\), and the claim follows.
**Proof of Theorem 5.** By Lemma 18, there exists a constant \(K=O(\max\{1,1/|p|\})\) such that the sequence \(\log J_{p,q_{s+km}}(\alpha)+K\) resp. \(\log J_{p,q_{s+km}}(\alpha)-K\) is subadditive resp. superadditive in \(k\). An application of Fekete's subadditive lemma then shows that the sequence \(k^{-1}\log J_{p,q_{s+km}}(\alpha)\) is convergent, and denoting its limit by \(C_{p}^{\prime}(\alpha)\),
\[C_{p}^{\prime}(\alpha)=\inf_{k\geq 1}\frac{\log J_{p,q_{s+km}}(\alpha)+K}{k}= \sup_{k\geq 1}\frac{\log J_{p,q_{s+km}}(\alpha)-K}{k}.\]
In particular, \(\log J_{p,q_{s+km}}(\alpha)=C^{\prime}_{p}(\alpha)k+O(\max\{1,1/|p|\})\).
Given an arbitrary integer \(q_{s+km}\leq M<q_{s+(k+1)m}\), we have
\[\log J_{p,q_{s+km}}(\alpha)\leq\log J_{p,M}(\alpha)\leq\log J_{p,q_{s+(k+1)m}}(\alpha)\]
if \(p>0\), and the reverse inequalities hold if \(p<0\). Formula (19) shows that \(\log q_{s+km}=(\log\eta)k+O(1)\), hence
\[\log J_{p,M}(\alpha)=C^{\prime}_{p}(\alpha)k+O(\max\{1,1/|p|\})=\frac{C^{ \prime}_{p}(\alpha)}{\log\eta}\log M+O(\max\{1,1/|p|\}).\]
Thus \(C_{p}(\alpha)=C^{\prime}_{p}(\alpha)/\log\eta\) satisfies the claim of the theorem.
## 6 Proof of the limit laws
For any \(r\in(0,1)\cap\mathbb{Q}\), define
\[g_{p}(r)=h_{p}(r)-\left\{\begin{array}{ll}\mathds{1}_{\{Tr\neq 0\}}\frac{1}{8} \lfloor\frac{1}{Tr}\rfloor&\text{if }p>0,\\ -\frac{1}{8}\lfloor\frac{1}{r}\rfloor&\text{if }p<0.\end{array}\right. \tag{22}\]
By Theorem 3, \(g_{p}\) can be extended to an a.e. continuous function on \([0,1]\), which we simply denote by \(g_{p}\) as well. By Theorem 2, we have \(|g_{p}(x)|\leq c(1+\log(1/Tx))\) if \(p>0\), and \(|g_{p}(x)|\leq c(1+\log(1/x))\) if \(p<0\) with a large constant \(c>0\) depending only on \(p\).
**Lemma 19**.: _For any \(\varepsilon>0\), there exist a constant \(\delta_{p}>0\) and functions \(g_{p}^{\pm}\) on \([0,1]\) with the following properties._
1. \(g_{p}^{-}\leq g_{p}\leq g_{p}^{+}\) _on_ \([0,1]\)_, and_ \(\int_{0}^{1}(g_{p}^{+}(x)-g_{p}^{-}(x))\,\mathrm{d}x<\varepsilon\)_._
2. _If_ \(p>0\)_, then for all_ \(n\in\mathbb{N}\)_, the functions_ \(g_{p}^{\pm}\) _are smooth on_ \((\frac{1}{n+1},\frac{1}{n})\)_, and_ \(g_{p}^{\pm}(x)=\pm 2c\log(1/Tx)\) _for all_ \(x\in(\frac{1}{n+1},\frac{1}{n})\cap(\frac{1}{n}-\delta_{p},\frac{1}{n})\)_._
3. _If_ \(p<0\)_, then the functions_ \(g_{p}^{\pm}\) _are smooth on_ \((0,1)\)_, and_ \(g_{p}^{\pm}(x)=\pm 2c\log(1/x)\) _for all_ \(x\in(0,\delta_{p})\)_._
**Proof.** Fix \(\varepsilon>0\). Assume first that \(p>0\), and let \(\delta_{p}>0\) be a small constant to be chosen. If \(n\) is large enough so that \(\frac{1}{n}-\delta_{p}\leq\frac{1}{n+1}\), then we are forced to define \(g_{p}^{\pm}(x)=\pm 2c\log(1/Tx)\) for \(x\in(\frac{1}{n+1},\frac{1}{n})\). Now let \(n\) be such that \(\frac{1}{n}-\delta_{p}>\frac{1}{n+1}\). Since \(g_{p}\) is bounded and a.e. continuous, and consequently Riemann integrable on \([\frac{1}{n+1},\frac{1}{n}-\delta_{p}]\), we can approximate \(g_{p}\) pointwise from above and from below by step functions, and extend them to \((\frac{1}{n}-\delta_{p},\frac{1}{n})\) as \(\pm 2c\log(1/Tx)\). By choosing \(\delta_{p}\) small enough, we can ensure that these piecewise defined upper and lower approximating functions are \(\varepsilon\)-close to each other in \(L^{1}\). Next, we approximate the piecewise defined functions from above and from below by smooth functions which are still \(\varepsilon\)-close to each other in \(L^{1}\).
The construction for \(p<0\) is similar. We first approximate \(g_{p}\) from above and from below by step functions on \([\delta_{p},1]\), and extend them as \(\pm 2c\log(1/x)\) on \((0,\delta_{p})\). Then we approximate these piecewise defined functions from above and from below by smooth functions.
The following lemma will play a role in the proof of the limit laws for both random rationals and random reals.
**Lemma 20**.: _For any \(t_{1},t_{2}\in(-1/2,1/2)\),_
\[\int_{0}^{1}\frac{e^{i(t_{1}\lfloor 1/Tx\rfloor+t_{2}\lfloor 1/x \rfloor)}-1}{1+x}\,\mathrm{d}x= -\frac{\pi}{2}|t_{1}|-i\gamma t_{1}-it_{1}\log|t_{1}|-\frac{\pi} {2}|t_{2}|-i\gamma t_{2}-it_{2}\log|t_{2}|\] \[+O\left(t_{1}^{2}\log\frac{1}{|t_{1}|}+t_{2}^{2}\log\frac{1}{|t_ {2}|}+|t_{1}t_{2}|\log\frac{1}{|t_{1}|}\log\frac{1}{|t_{2}|}\right)\]
_with a universal implied constant._
**Proof.** Let \(I(t_{1},t_{2})\) denote the integral in the claim. Applying the substitution \(x\mapsto 1/x\) twice leads to
\[I(t_{1},t_{2}) =\int_{1}^{\infty}\frac{e^{i(t_{1}\lfloor 1/\{x\}\rfloor+t_{2} \lfloor x\rfloor)}-1}{x(x+1)}\,\mathrm{d}x=\sum_{n=1}^{\infty}\int_{0}^{1} \frac{e^{i(t_{1}\lfloor 1/x\rfloor+t_{2}n)}-1}{(x+n)(x+n+1)}\,\mathrm{d}x\] \[=\sum_{n=1}^{\infty}\int_{1}^{\infty}\frac{e^{i(t_{1}\lfloor x \rfloor+t_{2}n)}-1}{(nx+1)((n+1)x+1)}\,\mathrm{d}x=\sum_{n,m=1}^{\infty}\int_{ 0}^{1}\frac{e^{i(t_{1}m+t_{2}n)}-1}{(n(x+m)+1)((n+1)(x+m)+1)}\,\mathrm{d}x\] \[=\sum_{n,m=1}^{\infty}\left(e^{i(t_{1}m+t_{2}n)}-1\right)\log \frac{((n+1)(m+1)+1)(nm+1)}{((n+1)m+1)((m+1)n+1)}.\]
Here
\[\log\frac{((n+1)(m+1)+1)(nm+1)}{((n+1)m+1)((m+1)n+1)} =\log\left(1+\frac{1}{n^{2}m^{2}+n^{2}m+nm^{2}+3nm+n+m+1}\right)\] \[=\frac{1}{n^{2}m^{2}+n^{2}m+nm^{2}+3nm+n+m+1}+O\left(\frac{1}{n^{4 }m^{4}}\right)\] \[=\frac{1}{n(n+1)m(m+1)}+O\left(\frac{1}{n^{3}m^{3}}\right).\]
Letting
\[R_{n,m}=\log\frac{((n+1)(m+1)+1)(nm+1)}{((n+1)m+1)((m+1)n+1)}-\frac{1}{n(n+1) m(m+1)},\]
we thus have \(R_{n,m}=O(n^{-3}m^{-3})\), and we can write
\[I(t_{1},t_{2})=\sum_{n,m=1}^{\infty}\frac{e^{i(t_{1}m+t_{2}n)}-1}{n(n+1)m(m+1) }+\sum_{n,m=1}^{\infty}\left(e^{i(t_{1}m+t_{2}n)}-1\right)R_{n,m}. \tag{23}\]
The second term is estimated as
\[\sum_{n,m=1}^{\infty}\left(e^{i(t_{1}m+t_{2}n)}-1\right)R_{n,m} =\sum_{m=1}^{1/|t_{1}|}\sum_{n=1}^{1/|t_{2}|}\left(it_{1}m+it_{2} n+O\left(|t_{1}m+t_{2}n|^{2}\right)\right)R_{n,m}+O\left(t_{1}^{2}+t_{2}^{2}\right)\] \[=it_{1}\sum_{n,m=1}^{\infty}mR_{n,m}+it_{2}\sum_{n,m=1}^{\infty} nR_{n,m}+O\left(t_{1}^{2}\log\frac{1}{|t_{1}|}+t_{2}^{2}\log\frac{1}{|t_{2}|} \right).\]
The infinite series is easily computed using telescoping sums:
\[\sum_{n,m=1}^{\infty}nR_{n,m} =\sum_{n=1}^{\infty}\left(n\log\frac{(n+1)^{2}}{n(n+2)}-\frac{1}{ n+1}\right)\] \[=\lim_{N\to\infty}\left(\log(N+1)+N\log\frac{N+1}{N+2}-\sum_{n=1} ^{N}\frac{1}{n+1}\right)=-\gamma.\]
By symmetry, we also have \(\sum_{n,m=1}^{\infty}mR_{n,m}=-\gamma\), thus the second term in (23) is
\[\sum_{n,m=1}^{\infty}\left(e^{i(t_{1}m+t_{2}n)}-1\right)R_{n,m}=-i\gamma t_{1 }-i\gamma t_{2}+O\left(t_{1}^{2}\log\frac{1}{|t_{1}|}+t_{2}^{2}\log\frac{1}{|t _{2}|}\right). \tag{24}\]
We can rewrite the first term in (23) as
\[\sum_{n,m=1}^{\infty}\frac{e^{i(t_{1}m+t_{2}n)}-1}{n(n+1)m(m+1)}=\left(\sum_{ m=1}^{\infty}\frac{e^{it_{1}m}}{m(m+1)}\right)\left(\sum_{n=1}^{\infty}\frac{e^{ it_{2}n}}{n(n+1)}\right)-1.\]
Observe that
\[\sum_{n=1}^{\infty}\frac{z^{n}}{n(n+1)}=1+\frac{1-z}{z}\log(1-z),\qquad|z|\leq 1\]
with the principal branch of the logarithm; this follows from the partial fraction decomposition \(\frac{1}{n(n+1)}=\frac{1}{n}-\frac{1}{n+1}\) together with the Taylor series \(-\log(1-z)=\sum_{n\geq 1}z^{n}/n\). For \(j=1,2\),
\[\log(1-e^{it_{j}})=\log|2\sin(t_{j}/2)|+i\left(\frac{t_{j}}{2}-\mathrm{sgn}(t_{ j})\frac{\pi}{2}\right)=\log|t_{j}|-i\mathrm{sgn}(t_{j})\frac{\pi}{2}+O\left(|t_{j} |\right),\]
hence
\[\sum_{n=1}^{\infty}\frac{e^{it_{j}n}}{n(n+1)}=1+(e^{-it_{j}}-1)\log(1-e^{it_{j} })=1-it_{j}\log|t_{j}|-\frac{\pi}{2}|t_{j}|+O\left(t_{j}^{2}\log\frac{1}{|t_{j }|}\right).\]
Therefore the first term in (23) is
\[\sum_{n,m=1}^{\infty}\frac{e^{i(t_{1}m+t_{2}n)}-1}{n(n+1)m(m+1)}= -it_{1}\log|t_{1}|-\frac{\pi}{2}|t_{1}|-it_{2}\log|t_{2}|-\frac{ \pi}{2}|t_{2}|\] \[+O\left(t_{1}^{2}\log\frac{1}{|t_{1}|}+t_{2}^{2}\log\frac{1}{|t_ {2}|}+|t_{1}t_{2}|\log\frac{1}{|t_{1}|}\log\frac{1}{|t_{2}|}\right).\]
The previous formula together with (23) and (24) lead to the claim of the lemma.
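Two ingredients of the proof above admit quick numerical cross-checks (sanity sketches with arbitrary truncation depths, not part of the argument): the telescoping evaluation \(\sum_{n,m\geq 1}nR_{n,m}=-\gamma\) in its single-sum form, and the closed form of \(\sum_{n\geq 1}z^{n}/(n(n+1))\) on the unit circle.

```python
# Sanity checks for two identities used in the proof of Lemma 20.
import cmath
import math

# (i) sum_{n >= 1} (n log((n+1)^2 / (n(n+2))) - 1/(n+1)) = -gamma;
#     note (n+1)^2 / (n(n+2)) = 1 + 1/(n(n+2)), so log1p keeps full precision.
s = sum(n * math.log1p(1 / (n * (n + 2))) - 1 / (n + 1) for n in range(1, 10**6))
print(s, -0.5772156649015329)      # agree to about 1e-6 (tail is O(1/N))

# (ii) sum_{n >= 1} z^n / (n(n+1)) = 1 + ((1-z)/z) log(1-z), principal branch
z = cmath.exp(0.3j)                # a point on the unit circle, z = e^{it}
series = sum(z ** n / (n * (n + 1)) for n in range(1, 200000))
print(abs(series - (1 + (1 - z) / z * cmath.log(1 - z))))  # about 5e-6
```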
### 6.1 Random rationals
**Proof of Theorem 6.** Let \(a/q\sim\mathrm{Unif}(F_{Q})\), and consider its continued fraction expansion \(a/q=[0;a_{1},a_{2},\ldots,a_{L}]\). Then \(T^{2j}(a/q)=[0;a_{2j+1},a_{2j+2},\ldots,a_{L}]\). Given \(0<p\leq\infty\) and \(-\infty\leq p^{\prime}<0\), by the definition (22) of \(g_{p}\) we can write
\[\begin{split}\left(\log J_{p}(a/q),\log J_{p^{\prime}}(a/q)\right) =&\sum_{j\geq 0}\left(h_{p}(T^{2j}(a/q)),h_{p^{\prime}}(T^{2j}(a/q) )\right)\\ =&\sum_{j\geq 0}\left(\frac{a_{2j+2}}{8},-\frac{a_{2j +1}}{8}\right)+\sum_{j\geq 0}\left(g_{p}(T^{2j}(a/q)),g_{p^{\prime}}(T^{2j}(a/q) )\right).\end{split} \tag{25}\]
The main term in (25) is the first sum. We find its limit distribution by applying [5, Theorem 3.1] with, in the notation of that paper, \(m=2\) and the \(\mathbb{R}^{2}\)-valued functions \(\phi_{1}(x)=(0,-\frac{1}{8}\lfloor 1/x\rfloor)\) and \(\phi_{2}(x)=(\frac{1}{8}\lfloor 1/x\rfloor,0)\) to obtain an estimate for the characteristic function of
\[\sum_{j\geq 1}\phi_{j\bmod 2}(T^{j-1}(a/q))=\sum_{j\geq 0}\left(\frac{a_{2j+2}}{8 },-\frac{a_{2j+1}}{8}\right).\]
In particular, the theorem states that for any \(\varepsilon>0\) there exist small constants \(\tau=\tau(\varepsilon)>0\) and \(\delta=\delta(\varepsilon)>0\) such that for all \(t=(t_{1},t_{2})\) with \(|t|<\tau\),
\[\mathbb{E}\exp\left(i\left(t_{1}\sum_{j\geq 0}\frac{a_{2j+2}}{8}-t_{2}\sum_{j \geq 0}\frac{a_{2j+1}}{8}\right)\right)=\exp\left(U(t_{1},t_{2})\log Q+O\left(|t |^{2-\varepsilon}\log Q+|t|^{1-\varepsilon}+Q^{-\delta}\right)\right)\]
with
\[U(t_{1},t_{2})=\frac{6}{\pi^{2}}\int_{0}^{1}\frac{e^{i\left(\frac{t_{1}}{8} \lfloor 1/Tx\rfloor-\frac{t_{2}}{8}\lfloor 1/x\rfloor\right)}-1}{1+x}\,\mathrm{d}x\]
and an implied constant depending only on \(\varepsilon\). Fix constants \(x_{1},x_{2}\in\mathbb{R}\), and choose \(t_{1}=x_{1}/(\frac{3}{8\pi}\log Q)\) and \(t_{2}=x_{2}/(\frac{3}{8\pi}\log Q)\). Lemma 20 shows that
\[U\left(\frac{x_{1}}{\frac{3}{8\pi}\log Q},\frac{x_{2}}{\frac{3}{8 \pi}\log Q}\right)\log Q= -|x_{1}|-i\frac{2\gamma}{\pi}x_{1}-i\frac{2}{\pi}x_{1}\log\frac{ \pi|x_{1}|}{3\log Q}\] \[-|x_{2}|+i\frac{2\gamma}{\pi}x_{2}+i\frac{2}{\pi}x_{2}\log\frac{ \pi|x_{2}|}{3\log Q}+O\left(\frac{(\log\log Q)^{2}}{\log Q}\right).\]
After subtracting the appropriate centering term, we thus obtain that the characteristic function
\[\mathbb{E}\exp\left(i\left(x_{1}\frac{\sum_{j\geq 0}\frac{a_{2j+2}}{8}-B_{Q}}{ \frac{3}{8\pi}\log Q}+x_{2}\frac{-\sum_{j\geq 0}\frac{a_{2j+1}}{8}+B_{Q}}{ \frac{3}{8\pi}\log Q}\right)\right)\]
with
\[B_{Q}=\frac{3}{4\pi^{2}}\log Q\log\log Q-\frac{3}{4\pi^{2}}\left(\gamma+\log \frac{\pi}{3}\right)\log Q\]
converges pointwise to \(\exp(-|x_{1}|(1+i\frac{2}{\pi}\mathrm{sgn}(x_{1})\log|x_{1}|))\exp(-|x_{2}|(1-i\frac{2}{\pi}\mathrm{sgn}(x_{2})\log|x_{2}|))\), which is the characteristic function of \(\mathrm{Stab}(1,1)\otimes\mathrm{Stab}(1,-1)\). In particular, the first sum in (25) satisfies
\[\left(\frac{\sum_{j\geq 0}\frac{a_{2j+2}}{8}-B_{Q}}{\frac{3}{8\pi}\log Q}, \frac{-\sum_{j\geq 0}\frac{a_{2j+1}}{8}+B_{Q}}{\frac{3}{8\pi}\log Q} \right)\stackrel{{ d}}{{\rightarrow}}\mathrm{Stab}(1,1)\otimes \mathrm{Stab}(1,-1)\qquad\text{as }Q\rightarrow\infty. \tag{26}\]
Consider the second sum in (25). Instead of Lemma 20, we can now use the fact that for any \(f\in L^{1}([0,1])\),
\[\int_{0}^{1}\frac{e^{itf(x)}-1}{1+x}\,\mathrm{d}x=it\int_{0}^{1}\frac{f(x)}{1+x}\,\mathrm{d}x+o(|t|)\qquad\text{as }t\to 0.\]
Fix \(\varepsilon>0\), and let \(g_{p}^{\pm}\) be as in Lemma 19. By another application of [5, Theorem 3.1] with \(m=2\), \(\phi_{1}(x)=g_{p}^{\pm}(x)\mp 2c\log(1/Tx)\) and \(\phi_{2}(x)=\pm 2c\log(1/x)\), we deduce
\[\frac{\sum_{j\geq 0}g_{p}^{\pm}(T^{2j}(a/q))}{\log Q}\stackrel{{ d}}{{ \rightarrow}}\frac{6}{\pi^{2}}\int_{0}^{1}\frac{g_{p}^{\pm}(x)}{1+x}\,\mathrm{ d}x\qquad\text{as }Q\rightarrow\infty,\]
and letting \(\varepsilon\to 0\) leads to
\[\frac{\sum_{j\geq 0}g_{p}(T^{2j}(a/q))}{\log Q}\stackrel{{ d}}{{\rightarrow}}\frac{6}{\pi^{2}}\int_{0}^{1}\frac{h_{p}(x)-\frac{1}{8}\lfloor 1/Tx\rfloor}{1+x}\,\mathrm{d}x\qquad\text{as }Q\rightarrow\infty.\]
From [5, Theorem 3.1] with \(m=2\), \(\phi_{1}(x)=g_{p^{\prime}}^{\pm}(x)\) and \(\phi_{2}(x)=0\), we similarly deduce
\[\frac{\sum_{j\geq 0}g_{p^{\prime}}(T^{2j}(a/q))}{\log Q}\stackrel{{ d}}{{ \rightarrow}}\frac{6}{\pi^{2}}\int_{0}^{1}\frac{h_{p^{\prime}}(x)+\frac{1}{8} \lfloor 1/x\rfloor}{1+x}\,\mathrm{d}x\qquad\text{as }Q\rightarrow\infty.\]
These formulas combined with (25) and (26) immediately yield the joint limit law
\[\left(\frac{\log J_{p}(a/q)-E_{p,Q}}{\sigma_{Q}},\frac{\log J_{p^{\prime}}(a/q )-E_{p^{\prime},Q}}{\sigma_{Q}}\right)\stackrel{{ d}}{{ \rightarrow}}\mathrm{Stab}(1,1)\otimes\mathrm{Stab}(1,-1)\qquad\text{as }Q \rightarrow\infty.\]
Since \(\log q/\log Q\stackrel{{ d}}{{\rightarrow}}1\), we can replace \(E_{p,Q}\) by \(E_{p,q}\) and \(\sigma_{Q}\) by \(\sigma_{q}\).
### Random reals
Throughout, \(\alpha\in[0,1]\) is an irrational number with continued fraction expansion \(\alpha=[0;a_{1},a_{2},\ldots]\) and convergents \(p_{k}/q_{k}=[0;a_{1},a_{2},\ldots,a_{k}]\). Let \(\nu(B)=\frac{1}{\log 2}\int_{B}\frac{1}{1+x}\,\mathrm{d}x\) (\(B\subseteq[0,1]\) Borel) denote the Gauss measure on \([0,1]\). The following lemma relies on the classical fact of metric number theory that if \(\alpha\sim\nu\), then the sequence of random variables \(a_{1},a_{2},\ldots\) is strictly stationary and \(\psi\)-mixing with exponential rate. We refer to the monograph [16] for more context.
**Lemma 21**.: _Let \(\alpha\sim\nu\). For any \(0<p\leq\infty\) and \(-\infty\leq p^{\prime}<0\),_
\[\left(\frac{\log J_{p}(p_{k}/q_{k})-A_{p,k}}{\frac{3}{8\pi}\cdot\frac{\pi^{2}}{12\log 2}k},\frac{\log J_{p^{\prime}}(p_{k}/q_{k})-A_{p^{\prime},k}}{\frac{3}{8\pi}\cdot\frac{\pi^{2}}{12\log 2}k}\right)\stackrel{{ d}}{{\rightarrow}}\operatorname{Stab}(1,1)\otimes\operatorname{Stab}(1,-1)\qquad\text{as $k\rightarrow\infty$},\]
_where, for all \(p\neq 0\), \(A_{p,k}=\operatorname{sgn}(p)\frac{3}{4\pi^{2}}\cdot\frac{\pi^{2}}{12\log 2}k \log\left(\frac{\pi^{2}}{12\log 2}k\right)+D_{p}\frac{\pi^{2}}{12\log 2}k\), with \(D_{p}\) defined in (10)._
**Proof.** For the sake of simplicity, we assume that \(k\) is even, in which case
\[\left(\log J_{p}(p_{k}/q_{k}),\log J_{p^{\prime}}(p_{k}/q_{k})\right)=\sum_{0 \leq j<k/2}\left(\frac{a_{2j+2}}{8},-\frac{a_{2j+1}}{8}\right)+\sum_{0\leq j<k/ 2}\left(g_{p}(T^{2j}(p_{k}/q_{k})),g_{p^{\prime}}(T^{2j}(p_{k}/q_{k}))\right). \tag{27}\]
A similar formula holds for odd \(k\), the only difference being that the last term in the first sum is \((0,-a_{k}/8)\), which is negligible in measure.
The main term in (27) is the first sum, whose limit distribution is easily found using the theory of \(\psi\)-mixing random variables. Fix real constants \(x_{1},x_{2}\) such that \((x_{1},x_{2})\neq(0,0)\); in what follows, implied constants are allowed to depend on \(x_{1},x_{2}\). The random variables
\[X_{j}:=x_{1}\frac{a_{2j+2}/8}{\frac{3}{8\pi}\cdot\frac{\pi^{2}}{12\log 2}k}+x_{2}\frac{-a_{2j+1}/8}{\frac{3}{8\pi}\cdot\frac{\pi^{2}}{12\log 2}k},\qquad 0\leq j<k/2\]
are identically distributed and \(\psi\)-mixing with exponential rate. Using the facts that \(|e^{iX_{j}}-1|\leq\min\{|X_{j}|,2\}\) and
\[1-\cos X_{j}=2\sin^{2}(X_{j}/2)\geq\frac{2}{\pi^{2}}X_{j}^{2}\mathds{1}_{\{|X _{j}|\leq\pi\}},\]
one readily checks that \(\mathbb{E}|e^{iX_{j}}-1|\ll(\log k)/k\) and \(\mathbb{E}(1-\cos X_{j})\gg 1/k\). Applying [15, Lemma 1] with, in the notation of that paper, \(P\approx\sqrt{k/\log k}\) and \(m\approx\sqrt{k/\log k}\) yields
\[\mathbb{E}\exp\left(i\sum_{0\leq j<k/2}X_{j}\right)=\exp\left(\sum_{0\leq j<k/ 2}\mathbb{E}\left(e^{iX_{j}}-1\right)\right)+O\left(\frac{(\log k)^{2}}{k} \right).\]
Lemma 20 with \(t_{1}=x_{1}\frac{4\log 2}{\pi k}\) and \(t_{2}=-x_{2}\frac{4\log 2}{\pi k}\) gives that here
\[\sum_{0\leq j<k/2}\mathbb{E}\left(e^{iX_{j}}-1\right)= \frac{k}{2\log 2}\int_{0}^{1}\frac{e^{i(t_{1}\lfloor 1/Tx\rfloor+t_{2} \lfloor 1/x\rfloor)}-1}{1+x}\,\mathrm{d}x\] \[= -|x_{1}|-i\frac{2\gamma}{\pi}x_{1}-i\frac{2}{\pi}x_{1}\log\frac{4 (\log 2)|x_{1}|}{\pi k}\] \[-|x_{2}|+i\frac{2\gamma}{\pi}x_{2}+i\frac{2}{\pi}x_{2}\log\frac{4 (\log 2)|x_{2}|}{\pi k}+O\left(\frac{(\log k)^{2}}{k}\right).\]
After subtracting the appropriate centering term, we thus obtain that the characteristic function
\[\mathbb{E}\exp\left(i\left(x_{1}\frac{\sum_{0\leq j<k/2}a_{2j+2}/8-B_{k}}{ \frac{3}{8\pi}\cdot\frac{\pi^{2}}{12\log 2}k}+x_{2}\frac{-\sum_{0\leq j<k/2}a_{2j+1}/8+B_{k}}{ \frac{3}{8\pi}\cdot\frac{\pi^{2}}{12\log 2}k}\right)\right)\]
with
\[B_{k}=\frac{3}{4\pi^{2}}\cdot\frac{\pi^{2}}{12\log 2}k\log\left(\frac{\pi^{2}}{12 \log 2}k\right)-\frac{3}{4\pi^{2}}\left(\gamma+\log\frac{\pi}{3}\right)\frac{ \pi^{2}}{12\log 2}k\]
converges pointwise to \(\exp(-|x_{1}|(1+i\frac{2}{\pi}\mathrm{sgn}(x_{1})\log|x_{1}|))\exp(-|x_{2}|(1-i\frac{2}{\pi}\mathrm{sgn}(x_{2})\log|x_{2}|))\), which is the characteristic function of \(\mathrm{Stab}(1,1)\otimes\mathrm{Stab}(1,-1)\). In particular, the first sum in (27) satisfies
\[\left(\frac{\sum_{0\leq j<k/2}a_{2j+2}/8-B_{k}}{\frac{3}{8\pi}\cdot\frac{\pi^{ 2}}{12\log 2}k},\frac{-\sum_{0\leq j<k/2}a_{2j+1}/8+B_{k}}{\frac{3}{8\pi}\cdot\frac {\pi^{2}}{12\log 2}k}\right)\stackrel{{ d}}{{\rightarrow}}\mathrm{ Stab}(1,1)\otimes\mathrm{Stab}(1,-1)\qquad\text{as $k\rightarrow\infty$}. \tag{28}\]
Consider now the second sum in (27). Recall that the Gauss map \(T\) is mixing in the sense of ergodic theory, therefore \(T^{2}\) is ergodic. Fix \(\varepsilon>0\), and let \(g_{p}^{\pm}\) be as in Lemma 19. Since \(T^{2j}(p_{k}/q_{k})=[0;a_{2j+1},a_{2j+2},\ldots,a_{k}]\) and \(T^{2j}\alpha=[0;a_{2j+1},a_{2j+2},\ldots]\), by construction we have
\[|g_{p}^{\pm}(T^{2j}(p_{k}/q_{k}))-g_{p}^{\pm}(T^{2j}\alpha)|\ll|\log[0;a_{2j+2 },a_{2j+3},\ldots,a_{k}]-\log[0;a_{2j+2},a_{2j+3},\ldots]|.\]
This decays exponentially fast in \(k-2j\), hence \(\sum_{0\leq j<k/2}g_{p}^{\pm}(T^{2j}(p_{k}/q_{k}))=\sum_{0\leq j<k/2}g_{p}^{ \pm}(T^{2j}\alpha)+O(1)\). Applying Birkhoff's pointwise ergodic theorem to \(T^{2}\) thus yields
\[\frac{1}{k/2}\sum_{0\leq j<k/2}g_{p}^{\pm}(T^{2j}(p_{k}/q_{k}))\to \frac{1}{\log 2}\int_{0}^{1}\frac{g_{p}^{\pm}(x)}{1+x}\,\mathrm{d}x\qquad\text{for a.e. $\alpha$},\]
and after letting \(\varepsilon\to 0\),
\[\frac{\sum_{0\leq j<k/2}g_{p}(T^{2j}(p_{k}/q_{k}))}{\frac{3}{8\pi}\cdot\frac{ \pi^{2}}{12\log 2}k}\to\frac{1}{\frac{3}{8\pi}}\cdot\frac{6}{\pi^{2}}\int_{0}^{1} \frac{h_{p}(x)-\frac{1}{8}\lfloor 1/Tx\rfloor}{1+x}\,\mathrm{d}x\qquad\text{for a.e. $\alpha$}. \tag{29}\]
We similarly obtain
\[\frac{\sum_{0\leq j<k/2}g_{p^{\prime}}(T^{2j}(p_{k}/q_{k}))}{\frac{3}{8\pi}\cdot\frac{\pi^{2}}{12\log 2}k}\to\frac{1}{\frac{3}{8\pi}}\cdot\frac{6}{\pi^{2}}\int_{0}^{1}\frac{h_{p^{\prime}}(x)+\frac{1}{8}\lfloor 1/x\rfloor}{1+x}\,\mathrm{d}x\qquad\text{for a.e. $\alpha$}.\]
The previous two relations imply convergence in distribution, and the desired limit law follows from (27) and (28).
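As a quick numerical sanity check of the limit law (28), one can sample partial quotients directly and inspect the normalized even-indexed sums. The sketch below is purely illustrative: the centering \(B_{k}\) and the normalization are copied from the lemma, drawing a random 256-bit rational is a hypothetical stand-in for \(\alpha\sim\mu\) (exact Fraction arithmetic avoids floating-point loss in the Gauss map), and the printed quantiles are only meant to exhibit the asymmetric heavy right tail of \(\mathrm{Stab}(1,1)\).

```
import math, random
from fractions import Fraction

EULER = 0.5772156649015329            # Euler-Mascheroni constant
L = math.pi**2 / (12 * math.log(2))   # Levy constant: log q_k ~ L*k
K, N = 40, 5000                        # partial quotients used (even), sample size

def partial_quotients(x, k):
    """First k continued fraction digits of a rational x in (0,1), exactly."""
    a = []
    while len(a) < k and x > 0:
        inv = 1 / x                    # exact Fraction arithmetic
        a.append(int(inv))             # a_j = floor(1/x)
        x = inv - int(inv)             # Gauss map T(x)
    return a

B = (3 / (4 * math.pi**2)) * L * K * math.log(L * K) \
    - (3 / (4 * math.pi**2)) * (EULER + math.log(math.pi / 3)) * L * K
scale = (3 / (8 * math.pi)) * L * K

samples = []
while len(samples) < N:
    x = Fraction(random.getrandbits(256) | 1, 1 << 256)  # surrogate for alpha ~ mu
    a = partial_quotients(x, K)
    if len(a) < K:
        continue                        # extremely rare for 256-bit rationals
    s = sum(a[2 * j + 1] for j in range(K // 2)) / 8     # sum of a_{2j+2}/8
    samples.append((s - B) / scale)

samples.sort()
print("median:", samples[N // 2])
print("1% / 99% quantiles:", samples[N // 100], samples[-N // 100])
# the right tail is far heavier than the left, as expected for Stab(1, 1)
```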
**Proof of Theorem 8.** First, let \(\alpha\in[0,1]\) be fixed. Recall from (13) that \(|S_{N}(\alpha)-S_{N}(p_{k}/q_{k})|\ll 1\) for all \(0\leq N<q_{k}\) with a universal implied constant. Therefore if \(q_{k}\leq M\leq q_{K}\), then \(J_{p,q_{k}}(\alpha)\leq J_{p,M}(\alpha)\leq J_{p,q_{K}}(\alpha)\), and consequently
\[\log J_{p}(p_{k}/q_{k})-O(1)\leq\log J_{p,M}(\alpha)\leq\log J_{p}(p_{K}/q_{K })+O(1) \tag{30}\]
with universal implied constants. The reverse inequalities hold with \(p^{\prime}\) instead of \(p\).
Now let \(\alpha\sim\mu\) with a Borel probability measure \(\mu\) on \([0,1]\) which is absolutely continuous with respect to the Lebesgue measure. Let \(k_{M}^{*}=k_{M}^{*}(\alpha)\) be the positive integer for which \(q_{k_{M}^{*}}\leq M<q_{k_{M}^{*}+1}\). The convergent denominators of Lebesgue-a.e. \(\alpha\) (and consequently, \(\mu\)-a.e. \(\alpha\)) satisfy the law of the iterated logarithm
\[\limsup_{k\rightarrow\infty}\frac{\left|\log q_{k}-\frac{\pi^{2}}{12\log 2}k \right|}{\sqrt{k\log\log k}}=c\]
with a universal constant \(c>0\); in fact, the central limit theorem also holds for \(\alpha\sim\mu\)[16, Section 3.2.3]. Therefore
\[\log q_{k_{M}^{*}}-O\left(\sqrt{\log M\log\log\log M}\right)\leq\log q_{\lfloor \frac{12\log 2}{\pi^{2}}\log M\rfloor}\leq\log q_{k_{M}^{*}+1}+O\left(\sqrt{\log M \log\log\log M}\right),\]
and by the general fact \(q_{k+2}/q_{k}\geq 2\) for all \(k\geq 1\),
\[k_{M}^{*}=\frac{12\log 2}{\pi^{2}}\log M+O\left(\sqrt{\log M\log\log\log M} \right)\qquad\text{for $\mu$-a.e. $\alpha$.}\]
Letting \(k_{M}\) be the even integer closest to, say, \(\frac{12\log 2}{\pi^{2}}\log M-(\log M)^{3/4}\) and \(K_{M}\) be the even integer closest to, say, \(\frac{12\log 2}{\pi^{2}}\log M+(\log M)^{3/4}\), we thus have \(\mu(\{\alpha\in[0,1]\,:\,k_{M}\leq k_{M}^{*}\leq K_{M}\})=1-o(1)\) as \(M\to\infty\). By (30), we can write \(\log J_{p,M}(\alpha)=\log J_{p}(p_{k_{M}}/q_{k_{M}})+\xi_{p,M}(\alpha)\), with an error term \(\xi_{p,M}(\alpha)\) which outside a set of \(\mu\)-measure \(o(1)\) satisfies
\[|\xi_{p,M}(\alpha)|\ll 1+|\log J_{p}(p_{K_{M}}/q_{K_{M}})-\log J_{p}(p_{k_{M}}/ q_{k_{M}})|\]
with a universal implied constant. The same holds with \(p^{\prime}\) instead of \(p\).
Recall the decomposition formula (27) for \((\log J_{p}(p_{k_{M}}/q_{k_{M}}),\log J_{p^{\prime}}(p_{k_{M}}/q_{k_{M}}))\). According to Lemma 21, if \(\alpha\sim\nu\), then
\[\left(\frac{\log J_{p}(p_{k_{M}}/q_{k_{M}})-E_{p,M}}{\sigma_{M}},\frac{\log J_ {p^{\prime}}(p_{k_{M}}/q_{k_{M}})-E_{p^{\prime},M}}{\sigma_{M}}\right) \stackrel{{ d}}{{\to}}\operatorname{Stab}(1,1)\otimes \operatorname{Stab}(1,-1)\qquad\text{as $M\to\infty$.}\]
In fact, the same holds if \(\alpha\sim\mu\). Indeed, this easily follows from a mixing property of the Gauss map [16, p. 166]
\[\lim_{n\to\infty}\sup_{A\in\mathcal{F}_{n}^{\infty}}|\mu(A)-\nu(A)|=0,\]
where \(\mathcal{F}_{n}^{\infty}\) denotes the \(\sigma\)-algebra generated by the partial quotients \(a_{m}\), \(m\geq n\). Note that the terms \(j\geq n/2\) in (27) are \(\mathcal{F}_{n}^{\infty}\)-measurable.
It remains to show that \(\xi_{p,M}(\alpha)=o(\log M)\) and \(\xi_{p^{\prime},M}(\alpha)=o(\log M)\) in \(\mu\)-measure. By the decomposition formula (27),
\[|\log J_{p}(p_{K_{M}}/q_{K_{M}})-\log J_{p}(p_{k_{M}}/q_{k_{M}})|\leq\\ \sum_{k_{M}/2\leq j<K_{M}/2}\frac{a_{2j+2}}{8}+\left|\sum_{0\leq j <K_{M}/2}g_{p}(T^{2j}p_{K_{M}}/q_{K_{M}})-\sum_{0\leq j<k_{M}/2}g_{p}(T^{2j} p_{k_{M}}/q_{k_{M}})\right|.\]
Recall that for any \(j\geq 1\) and any real \(t\geq 1\),
\[\nu\left(\{\alpha\in[0,1]\,:\,a_{j}\geq t\}\right)=\frac{1}{\log 2}\sum_{n \geq t}\log\left(1+\frac{1}{n(n+2)}\right)\ll\frac{1}{t}.\]
Since \(K_{M}-k_{M}\ll(\log M)^{3/4}\), the union bound thus yields
\[\nu\bigg{(}\bigg{\{}\alpha\in[0,1]\,:\,\sum_{k_{M}/2\leq j<K_{M}/2}a_{2j+2} \geq\varepsilon\log M\bigg{\}}\bigg{)}\ll\frac{1}{\varepsilon(\log M)^{1/4}}.\]
In particular, \(\sum_{k_{M}/2\leq j<K_{M}/2}a_{2j+2}=o(\log M)\) in \(\nu\)-measure, and consequently also in \(\mu\)-measure. Formula (29) shows that
\[\sum_{0\leq j<K_{M}/2}g_{p}(T^{2j}p_{K_{M}}/q_{K_{M}})-\sum_{0\leq j<k_{M}/2}g _{p}(T^{2j}p_{k_{M}}/q_{k_{M}})=o(\log M)\]
holds for Lebesgue-a.e. \(\alpha\), and consequently also in \(\mu\)-measure. This finishes the proof of \(\xi_{p,M}(\alpha)=o(\log M)\) in \(\mu\)-measure, and the same arguments show that this holds with \(p^{\prime}\) instead of \(p\) as well.
**Proof of Theorem 7.** This is entirely analogous to the proof of Theorem 8. The only difference is that instead of \(|S_{N}(\alpha)-S_{N}(p_{k}/q_{k})|\ll 1\), we use \(|\tilde{S}_{N}(\alpha)-\tilde{S}_{N}(p_{k}/q_{k})|\ll\max_{1\leq j\leq k}\log(a_{j}+1)\), see [2, Proposition 3]. In particular, \(|\tilde{S}_{N}(\alpha)-\tilde{S}_{N}(p_{k}/q_{k})|\ll\log(k+1)\) for Lebesgue-a.e. \(\alpha\), which suffices for our purposes.
## Acknowledgments
The author is supported by the Austrian Science Fund (FWF) project M 3260-N.
|
2301.00858 | Robust Average-Reward Markov Decision Processes | In robust Markov decision processes (MDPs), the uncertainty in the transition
kernel is addressed by finding a policy that optimizes the worst-case
performance over an uncertainty set of MDPs. While much of the literature has
focused on discounted MDPs, robust average-reward MDPs remain largely
unexplored. In this paper, we focus on robust average-reward MDPs, where the
goal is to find a policy that optimizes the worst-case average reward over an
uncertainty set. We first take an approach that approximates average-reward
MDPs using discounted MDPs. We prove that the robust discounted value function
converges to the robust average-reward as the discount factor $\gamma$ goes to
$1$, and moreover, when $\gamma$ is large, any optimal policy of the robust
discounted MDP is also an optimal policy of the robust average-reward. We
further design a robust dynamic programming approach, and theoretically
characterize its convergence to the optimum. Then, we investigate robust
average-reward MDPs directly without using discounted MDPs as an intermediate
step. We derive the robust Bellman equation for robust average-reward MDPs,
prove that the optimal policy can be derived from its solution, and further
design a robust relative value iteration algorithm that provably finds its
solution, or equivalently, the optimal robust policy. | Yue Wang, Alvaro Velasquez, George Atia, Ashley Prater-Bennette, Shaofeng Zou | 2023-01-02T19:51:55Z | http://arxiv.org/abs/2301.00858v2 | # Robust Average-Reward Markov Decision Processes
###### Abstract
In robust Markov decision processes (MDPs), the uncertainty in the transition kernel is addressed by finding a policy that optimizes the worst-case performance over an uncertainty set of MDPs. While much of the literature has focused on discounted MDPs, robust average-reward MDPs remain largely unexplored. In this paper, we focus on robust average-reward MDPs, where the goal is to find a policy that optimizes the worst-case average reward over an uncertainty set. We first take an approach that approximates average-reward MDPs using discounted MDPs. We prove that the robust discounted value function converges to the robust average-reward as the discount factor \(\gamma\) goes to 1, and moreover, when \(\gamma\) is large, any optimal policy of the robust discounted MDP is also an optimal policy of the robust average-reward. We further design a robust dynamic programming approach, and theoretically characterize its convergence to the optimum. Then, we investigate robust average-reward MDPs directly without using discounted MDPs as an intermediate step. We derive the robust Bellman equation for robust average-reward MDPs, prove that the optimal policy can be derived from its solution, and further design a robust relative value iteration algorithm that provably finds its solution, or equivalently, the optimal robust policy.
1 University at Buffalo, The State University of New York
2 University of Colorado Boulder
3 University of Central Florida
4 Air Force Research Laboratory
[email protected], [email protected], [email protected], [email protected], [email protected]
## 1 Introduction
A Markov decision process (MDP) is an effective mathematical tool for sequential decision-making in stochastic environments [1, 13]. Solving an MDP problem entails finding an optimal policy that maximizes a cumulative reward according to a given criterion. However, in practice there could exist a mismatch between the assumed MDP model and the underlying environment due to various factors, such as non-stationarity of the environment, modeling error, exogenous perturbation, partial observability, and adversarial attacks. The ensuing model mismatch could result in solution policies with poor performance.
This challenge spurred noteworthy efforts on developing and analyzing a framework of robust MDPs, e.g., [1, 12, 13]. Rather than adopting a fixed MDP model, in the robust MDP setting, one seeks to optimize the worst-case performance over an uncertainty set of possible MDP models. The solution to the robust MDP problem provides a performance guarantee for all uncertain MDP models, and is thus robust to the model mismatch.
Robust MDP problems falling under different reward optimality criteria are fundamentally different. In robust discounted MDPs, the goal is to find a policy that maximizes the discounted cumulative reward in the worst case. In this setting, as the agent interacts with the environment, the reward received diminishes exponentially over time. Much of the prior work in the robust setting has focused on the discounted reward formulation. The model-based method, e.g., [13, 14, 15, 16, 17, 18, 19, 20], where information about the uncertainty set is assumed to be known to the learner, unveiled several fundamental characterizations of robust discounted MDPs. This was further extended to the more practical model-free setting in which only samples from a simulator (the centroid of the uncertainty set) are available to the learner. For example, the value-based method [13, 14, 15, 16, 17, 18, 19, 20] optimizes the worst-case performance using the robust value function as an intermediate step; on the other hand, the model-free policy-based method [13, 14, 15, 16, 17, 18, 19, 20] directly optimizes the policy and is thus scalable to large/continuous state and action spaces.
Although discounted MDPs induce an elegant Bellman operator that is a contraction, and have been studied extensively, the policy obtained usually has poor long-term performance when a system operates for an extended period of time. When the discount factor is very close to 1, the agent may prefer to compare policies on the basis of their average expected reward instead of their expected total discounted reward, e.g., in queueing control, inventory management in supply chains, scheduling automatic guided vehicles, and applications in communication networks [15]. Therefore, it is also important to optimize the long-term average performance of a system.
However, robust MDPs under the average-reward criterion are largely understudied. Compared to the discounted setting, the average-reward setting depends on the limiting behavior of the underlying stochastic process, and hence is markedly more intricate. A recognized instance of such intricacy concerns the one-to-one correspondence between the stationary policies and the limit points of state-action frequencies, which, while true for discounted MDPs, breaks down under the average-reward criterion even in the non-robust setting except in some very special cases [21, 22]. This is largely due to the dependence of the necessary conditions for establishing a contraction in average-reward settings on the graph structure of the MDP, versus the discounted-reward setting where it simply suffices to have a discount factor that is strictly less than one. Heretofore, only a handful of studies have considered average-reward MDPs in the robust setting. The first work by [20] considers robust average-reward MDPs under a specific finite interval uncertainty set, but their method is not easily applicable to other uncertainty sets. More recently, [15] proposed an algorithm for robust average-reward MDPs under the \(\ell_{1}\) uncertainty set. However, obtaining fundamental characterizations of the problem and convergence guarantees remains elusive.
### Challenges and Contributions
In this paper, we derive characterizations of robust average-reward MDPs with general uncertainty sets, and develop model-based approaches with provable theoretical guarantees. Our approach is fundamentally different from previous work on robust discounted MDPs, robust and non-robust average-reward MDPs. In particular, the key challenges and the main contributions are summarized below.
* **We characterize the limiting behavior of the robust discounted value function as the discount factor \(\gamma\to 1\).** For the standard _non-robust_ setting and for a specific transition kernel, the discounted non-robust value function converges to the average-reward non-robust value function as \(\gamma\to 1\) [21]. However, in the robust setting, we need to consider the worst-case limiting behavior under all possible transition kernels in the uncertainty set. Hence, the previous point-wise convergence result [21] cannot be directly applied. In [20], a finite interval uncertainty set is studied, where due to its special structure, the number of possible worst-case transition kernels of robust discounted MDPs is finite, and hence the order of \(\min\) (over the transition kernel) and \(\lim_{\gamma\to 1}\) can be exchanged; therefore, the robust discounted value function converges to the robust average-reward value function. This result, however, does not hold for the general uncertainty sets investigated in this paper. We first prove the _uniform_ convergence of the discounted non-robust value function to the average-reward w.r.t. the transition kernels and policies. Based on this uniform convergence, we show the convergence of the robust discounted value function to the robust average-reward. This uniform convergence result is the first in the literature and is of key importance to motivate our algorithm design and to guarantee convergence to the optimal robust policy in the average-reward setting.
* **We design algorithms for robust policy evaluation and optimal control based on the limit method.** Based on the uniform convergence, we then use robust discounted MDPs to approximate robust average-reward MDPs. We show that when \(\gamma\) is large, any optimal policy of the robust discounted MDP is also an optimal policy of the robust average-reward, and hence solves the robust optimal control problem in the average-reward setting. This result is similar to the Blackwell optimality [17, 1] for the non-robust setting; however, our proof is fundamentally different. Technically, the proof in [17, 1] is based on the fact that the difference between the discounted value functions of two policies is a rational function of the discount factor, which has a finite number of zeros. However, in the robust setting with a general uncertainty set, the difference is no longer a rational function due to the min over the transition kernel. We construct a novel proof based on the limiting behavior of robust discounted MDPs, and show that the (optimal) robust discounted value function converges to the (optimal) robust average-reward as \(\gamma\to 1\). Motivated by these insights, we then design our algorithms by applying a sequence of robust discounted Bellman operators while increasing the discount factor at a certain rate. We prove that our method can (i) evaluate the robust average-reward for a given policy, and (ii) find the optimal robust value function and, in turn, the optimal robust policy for general uncertainty sets.
* **We design a robust relative value iteration method without using the discounted MDPs as an intermediate step.** We further pursue a direct approach that solves the robust average-reward MDPs without using the limit method, i.e., without using discounted MDPs as an intermediate step. We derive a robust Bellman equation for robust average-reward MDPs, and show that the pair of the robust relative value function and the robust average-reward is a solution to the robust Bellman equation under the average-reward setting. We further prove that if we can find any solution to the robust Bellman equation, then the optimal policy can be derived by a greedy approach. The problem hence can be equivalently solved by solving the robust Bellman equation. We then design a robust relative value iteration method which provably converges to a solution of the robust Bellman equation, i.e., finds the optimal policy for the robust average-reward MDP problem.
### Related Work
**Robust discounted MDPs.** Model-based methods for robust discounted MDPs were studied in [20, 21, 22, 23, 24, 25], where the uncertainty set is assumed to be known, and the problem can be solved using robust dynamic programming. Later, the studies were generalized to the model-free setting where stochastic samples from the centroid MDP of the uncertainty set are available in an online fashion [16, 17, 18, 19, 20] and an offline fashion [21, 22, 23, 24, 25]. For robust discounted MDPs, the robust Bellman operator is a contraction, based on which robust dynamic programming and value-based methods can be designed. In this paper, we focus on robust average-reward MDPs. However, the robust Bellman operator for average-reward MDPs is not a contraction, and its fixed point may not be unique. Moreover, the average-reward setting depends on the limiting behavior of the underlying stochastic process, which is thus more intricate.
**Robust average-reward MDPs.** Studies on robust average-reward MDPs are quite limited in the literature. Robust average-reward MDPs under a specific finite interval uncertainty set were studied in [23], where the authors showed the existence of a Blackwell optimal policy, i.e., there exists some \(\delta\in[0,1)\), such that the optimal robust policy exists and remains unchanged for any discount factor \(\gamma\in[\delta,1)\). However, this result depends on the structure of the uncertainty set. For general uncertainty sets, the existence of a Blackwell optimal policy may not be guaranteed. More recently, [12] designed a model-free algorithm for a specific \(\ell_{1}\)-norm uncertainty set and characterized its regret bound. However, their method also relies on the structure of the \(\ell_{1}\)-norm uncertainty set, and may not be generalizable to other types of uncertainty sets. In this paper, our results can be applied to various types of uncertainty sets, and are thus more general.
## 2 Preliminaries and Problem Model
In this section, we introduce some preliminaries on discounted MDPs, average-reward MDPs, and robust MDPs.
**Discounted MDPs.** A discounted MDP \((\mathcal{S},\mathcal{A},\mathsf{P},r,\gamma)\) is specified by a state space \(\mathcal{S}\), an action space \(\mathcal{A}\), a transition kernel \(\mathsf{P}=\{p_{s}^{a}\in\Delta(\mathcal{S}),a\in\mathcal{A},s\in\mathcal{S}\}\)1, where \(p_{s}^{a}\) is the distribution of the next state over \(\mathcal{S}\) upon taking action \(a\) in state \(s\) (with \(p_{s,s^{\prime}}^{a}\) denoting the probability of transitioning to \(s^{\prime}\)), a reward function \(r:\mathcal{S}\times\mathcal{A}\rightarrow[0,1]\), and a discount factor \(\gamma\in[0,1)\). At each time step \(t\), the agent at state \(s_{t}\) takes an action \(a_{t}\), the environment then transitions to the next state \(s_{t+1}\) according to \(p_{s_{t}}^{a_{t}}\), and produces a reward signal \(r(s_{t},a_{t})\in[0,1]\) to the agent. In this paper, we also write \(r_{t}=r(s_{t},a_{t})\) for convenience.
Footnote 1: \(\Delta(\mathcal{S})\): the \((|\mathcal{S}|-1)\)-dimensional probability simplex on \(\mathcal{S}\).
A stationary policy \(\pi:\mathscr{S}\rightarrow\varDelta(\mathscr{A})\) is a distribution over \(\mathscr{A}\) for any given state \(s\), and the agent takes action \(a\) at state \(s\) with probability \(\pi(a|s)\). The discounted value function of a stationary policy \(\pi\) starting from \(s\in\mathscr{S}\) is defined as the expected discounted cumulative reward by following policy \(\pi\): \(V_{\mathsf{P},\gamma}^{\pi}(s)\triangleq\mathbb{E}_{\pi,\mathsf{P}}\left[ \sum_{t=0}^{\infty}\gamma^{t}r_{t}|S_{0}=s\right]\).
**Average-Reward MDPs.** Different from discounted MDPs, average-reward MDPs do not discount the reward over time, and consider the behavior of the underlying Markov process under the steady-state distribution. More specifically, under a specific transition kernel \(\mathsf{P}\), the average-reward of a policy \(\pi\) starting from \(s\in\mathscr{S}\) is defined as
\[g_{\mathsf{P}}^{\pi}(s)\triangleq\lim_{n\rightarrow\infty}\mathbb{E}_{\pi, \mathsf{P}}\bigg{[}\frac{1}{n}\sum_{t=0}^{n-1}r_{t}|S_{0}=s\bigg{]}, \tag{1}\]
which we also refer to in this paper as the average-reward value function for convenience.
The average-reward value function can also be equivalently written as follows: \(g_{\mathsf{P}}^{\pi}=\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{t=0}^{n-1}(\mathsf{P}^{\pi})^{t}r_{\pi}\triangleq\mathsf{P}_{\ast}^{\pi}r_{\pi},\) where \((\mathsf{P}^{\pi})_{s,s^{\prime}}\triangleq\sum_{a}\pi(a|s)p_{s,s^{\prime}}^{a}\) and \(r_{\pi}(s)\triangleq\sum_{a}\pi(a|s)r(s,a)\) are the transition matrix and reward function induced by \(\pi\), and \(\mathsf{P}_{\ast}^{\pi}\triangleq\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{t=0}^{n-1}(\mathsf{P}^{\pi})^{t}\) is the limit matrix of \(\mathsf{P}^{\pi}\).
In the average-reward setting, we also define the following relative value function
\[V_{\mathsf{P}}^{\pi}(s)\triangleq\mathbb{E}_{\pi,\mathsf{P}}\bigg{[}\sum_{t=0 }^{\infty}(r_{t}-g_{\mathsf{P}}^{\pi})|S_{0}=s\bigg{]}, \tag{2}\]
which is the cumulative difference over time between the reward and the average value \(g_{\mathsf{P}}^{\pi}\). It has been shown that [23]: \(V_{\mathsf{P}}^{\pi}=H_{\mathsf{P}}^{\pi}r_{\pi}\), where \(H_{\mathsf{P}}^{\pi}\triangleq(I-\mathsf{P}^{\pi}+\mathsf{P}_{\ast}^{\pi})^{-1 }(I-\mathsf{P}_{\ast}^{\pi})\) is defined as the deviation matrix of \(\mathsf{P}^{\pi}\).
The relationship between the average-reward and the relative value functions can be characterized by the following Bellman equation [23]:
\[V_{\mathsf{P}}^{\pi}(s)=\mathbb{E}_{\pi}\bigg{[}r(s,A)-g_{\mathsf{P}}^{\pi}(s)+ \sum_{s^{\prime}\in\mathscr{S}}p_{s,s^{\prime}}^{A}V_{\mathsf{P}}^{\pi}(s^{ \prime})\bigg{]}. \tag{3}\]
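For a small finite chain, all of the objects above are computable by elementary linear algebra, which makes the Bellman equation (3) easy to verify numerically. The following sketch uses a made-up kernel and reward (hypothetical numbers, not from the paper): it forms \(\mathsf{P}_{\ast}^{\pi}\) from the stationary distribution (valid in the unichain case), builds the deviation matrix \(H_{\mathsf{P}}^{\pi}\), and checks eq. (3).

```
import numpy as np

rng = np.random.default_rng(0)
n = 4                                          # toy state space (hypothetical)
P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)   # induced kernel P^pi
r = rng.random(n)                                            # induced reward r_pi

# limit matrix P_* = 1 mu^T, with mu the stationary distribution (unichain case)
evals, evecs = np.linalg.eig(P.T)
mu = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
mu /= mu.sum()
Pstar = np.outer(np.ones(n), mu)

# deviation matrix H = (I - P + P_*)^{-1} (I - P_*)
I = np.eye(n)
H = np.linalg.inv(I - P + Pstar) @ (I - Pstar)

g = Pstar @ r          # average reward, eq. (1); all entries coincide
V = H @ r              # relative value function, eq. (2)
print(np.max(np.abs(V - (r - g + P @ V))))   # Bellman eq. (3) residual: ~ 1e-15
```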
**Robust discounted and average-reward MDPs.** For robust MDPs, the transition kernel is not fixed but belongs to some uncertainty set \(\mathcal{P}\). After the agent takes an action, the environment transits to the next state according to an arbitrary transition kernel \(\mathsf{P}\in\mathcal{P}\). In this paper, we focus on the \((s,a)\)-rectangular uncertainty set [12, 23], i.e., \(\mathcal{P}=\bigotimes_{s,a}\mathcal{P}_{s}^{a}\), where \(\mathcal{P}_{s}^{a}\subseteq\Delta(\mathcal{S})\). We note that there are also studies on relaxing the \((s,a)\)-rectangular uncertainty set to the \(s\)-rectangular uncertainty set, which is not the focus of this paper.
Under the robust setting, we consider the worst-case performance over the uncertainty set of MDPs. More specifically, the robust discounted value function of a policy \(\pi\) for a discounted MDP is defined as
\[V_{\mathcal{P},\gamma}^{\pi}(s)\triangleq\min_{\kappa\in\bigotimes_{t\geq 0}\mathcal{P}}\mathbb{E}_{\pi,\kappa}\left[\sum_{t=0}^{\infty}\gamma^{t}r_{t}|S_{0}=s\right], \tag{4}\]
where \(\kappa=(\mathsf{P}_{0},\mathsf{P}_{1},\ldots)\in\bigotimes_{t\geq 0}\mathcal{P}\).
In this paper, we focus on the following worst-case average-reward for a policy \(\pi\):
\[g^{\pi}_{\mathcal{P}}(s)\triangleq\min_{\kappa\in\bigotimes_{t\geq 0}\mathcal{P}}\lim_{n\rightarrow\infty}\mathbb{E}_{\pi,\kappa}\left[\frac{1}{n}\sum_{t=0}^{n-1}r_{t}|S_{0}=s\right], \tag{5}\]
to which, for convenience, we refer as the robust average-reward value function.
For robust discounted MDPs, it has been shown that the robust discounted value function is the unique fixed-point of the robust discounted Bellman operator [12, 13, 14]:
\[\mathbf{T}_{\pi}V(s)\triangleq\sum_{a\in\mathcal{A}}\pi(a|s)\left(r(s,a)+\gamma\sigma_{\mathcal{P}^{a}_{s}}(V)\right), \tag{6}\]
where \(\sigma_{\mathcal{P}^{a}_{s}}(V)\triangleq\min_{p\in\mathcal{P}^{a}_{s}}p^{\top}V\) is the support function of \(V\) on \(\mathcal{P}^{a}_{s}\). Based on the contraction of \(\mathbf{T}_{\pi}\), robust dynamic programming approaches, e.g., robust value iteration, can be designed [12, 13] (see Appendix for a review of these methods). However, there is no such contraction result for robust average-reward MDPs. In this paper, our goal is to find a policy that optimizes the robust average-reward value function:
\[\max_{\pi\in\varPi}g^{\pi}_{\mathcal{P}}(s),\text{ for any }s\in\mathcal{S}, \tag{7}\]
where \(\varPi\) is the set of all stationary policies, and we denote by \(g^{*}_{\mathcal{P}}(s)\triangleq\max_{\pi}g^{\pi}_{\mathcal{P}}(s)\) the optimal robust average-reward.
## 3 Limit Approach for Robust Average-Reward MDPs
We first take a limit approach to solve the problem of robust average-reward MDPs in eq. (7). It is known that under the non-robust setting, for any fixed \(\pi\) and \(\mathsf{P}\), the discounted value function converges to the average-reward value function as the discount factor \(\gamma\) approaches \(1\) [14], i.e.,
\[\lim_{\gamma\to 1}(1-\gamma)V^{\pi}_{\mathsf{P},\gamma}=g^{\pi}_{\mathsf{P}}. \tag{8}\]
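In matrix form \(V_{\mathsf{P},\gamma}^{\pi}=(I-\gamma\mathsf{P}^{\pi})^{-1}r_{\pi}\), so eq. (8) is easy to check numerically on a toy chain; the sketch below (a random, hypothetical instance) shows \((1-\gamma)V\) approaching the constant average-reward vector as \(\gamma\to 1\).

```
import numpy as np

rng = np.random.default_rng(1)
n = 4
P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)
r = rng.random(n)

evals, evecs = np.linalg.eig(P.T)                # stationary distribution
mu = np.real(evecs[:, np.argmin(np.abs(evals - 1))]); mu /= mu.sum()
g = mu @ r                                       # average reward (unichain)

for gamma in (0.9, 0.99, 0.999, 0.9999):
    V = np.linalg.solve(np.eye(n) - gamma * P, r)     # discounted value function
    print(gamma, np.max(np.abs((1 - gamma) * V - g))) # -> 0 as gamma -> 1
```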
We adopt a similar idea, and show that the same result holds in the robust case: \(\lim_{\gamma\to 1}(1-\gamma)V^{\pi}_{\mathcal{P},\gamma}=g^{\pi}_{\mathcal{P}}\) under a mild assumption. Based on this result, we further design algorithms (Algorithms 1 and 2) that apply a sequence of robust discounted Bellman operators while increasing the discount factor at a certain rate. We then theoretically prove that our algorithms converge to the optimal solutions.
In the following, we first show that the convergence \(\lim_{\gamma\to 1}(1-\gamma)V^{\pi}_{\mathsf{P},\gamma}=g^{\pi}_{\mathsf{P}}\) is uniform on the set \(\varPi\times\mathcal{P}\). In studies of average-reward MDPs, it is usually the case that a certain class of MDPs are considered, e.g., unichain and communicating [15, 16, 17, 18]. In this paper, we focus on the unichain setting to highlight the major technical novelty to achieve robustness.
**Assumption 1**.: _For any \(s\in\mathcal{S},a\in\mathcal{A}\), the uncertainty set \(\mathcal{P}^{a}_{s}\) is a compact subset of \(\Delta(\mathcal{S})\). And for any \(\pi\in\varPi,\mathsf{P}\in\mathcal{P}\), the induced MDP is a unichain._
The first part of Assumption 1 amounts to assuming that the uncertainty set is closed. We remark that many standard uncertainty sets satisfy this assumption, e.g., those defined by \(\epsilon\)-contamination [12], finite interval [14], total-variation [12], and KL-divergence [13]. The unichain assumption is also widely used in studies of average-reward MDPs, e.g., [14, 15, 16, 17, 18, 19]. Also it is worth noting that under the unichain assumption, the robust average-reward is identical for every starting state, i.e., \(g^{\pi}_{\mathcal{P}}(s_{1})=g^{\pi}_{\mathcal{P}}(s_{2}),\forall s_{1},s_{2} \in\mathcal{S}\)[14].
**Remark 1**.: _The results in this section actually only require the uniform boundedness of \(\|H^{\pi}_{\mathcal{P}}\|,\forall\pi\in\varPi,\mathsf{P}\in\mathcal{P}\) (Lemma 2 in Appendix). Assumption 1 is one sufficient condition._
In [14], the convergence \(\lim_{\gamma\to 1}(1-\gamma)V^{\pi}_{\mathsf{P},\gamma}=g^{\pi}_{\mathsf{P}}\) for a fixed policy \(\pi\) and a fixed transition kernel \(\mathsf{P}\) (non-robust setting) is point-wise. However, such point-wise convergence does not provide any convergence guarantee on the robust discounted value function, as the robust value function measures the worst-case performance over the uncertainty set and the order of \(\lim\) and \(\min\) may not be exchanged in general. In the following theorem, we prove the uniform convergence of the discounted value function under the foregoing assumption.
**Theorem 1** (Uniform convergence).: _Under Assumption 1, the discounted value function converges uniformly to the average-reward value function on \(\varPi\times\mathcal{P}\) as \(\gamma\to 1\), i.e.,_
\[\lim_{\gamma\to 1}(1-\gamma)V^{\pi}_{\mathsf{P},\gamma}=g^{\pi}_{\mathsf{P}},\text{ uniformly.} \tag{9}\]
With uniform convergence in Theorem 1, the order of the limit \(\gamma\to 1\) and \(\min_{\mathsf{P}}\) can be interchanged, then the following convergence of the robust discounted value function can be established.
**Theorem 2**.: _The robust discounted value function in eq. (4) converges to the robust average-reward uniformly on \(\varPi\):_
\[\lim_{\gamma\to 1}(1-\gamma)V^{\pi}_{\mathcal{P},\gamma}=g^{\pi}_{\mathcal{P}}\text{ uniformly.} \tag{10}\]
We note that a similar convergence result is shown in [14], but only for a special uncertainty set of finite interval. Our Theorem 2 holds for general compact uncertainty sets. Moreover, it is worth highlighting that our proof technique is fundamentally different from the one in [14]. Specifically, under the finite interval uncertainty set, the worst-case transition kernels are from a finite set, i.e., \(V^{\pi}_{\mathsf{P},\gamma}=\min_{\mathsf{P}\in\mathcal{M}}V^{\pi}_{\mathsf{P},\gamma}\) for a finite set \(\mathcal{M}\subseteq\mathcal{P}\). This hence implies the interchangeability of \(\lim\) and \(\min\). However, for general uncertainty sets, the number of worst-case transition kernels may not be finite. We demonstrate the interchangeability via our uniform convergence result in Theorem 1.
The previous two convergence results play a fundamental role in limit method for robust average-reward MDPs, and are of key importance to motivate the design of the following
two algorithms, the basic idea of which is to apply a sequence of robust discounted Bellman operators on an arbitrary initialization while increasing the discount factor at a certain rate.
We first consider the robust policy evaluation problem, which aims to estimate the robust average-reward \(g_{\mathcal{P}}^{\pi}\) for a fixed policy \(\pi\). This problem for robust discounted MDPs is well studied in the literature; however, results for robust average-reward MDPs are quite limited except for the one in [20] for a specific finite interval uncertainty set. We present a robust value iteration (robust VI) algorithm for evaluating the robust average-reward with general uncertainty sets in Algorithm 1.
```
1:\(\pi,V_{0}(s)=0,\forall s,T\)
2:for\(t=0,1,...,T-1\)do
3:\(\gamma_{t}\leftarrow\frac{t+1}{t+2}\)
4:for all \(s\in\mathcal{S}\)do
5:\(V_{t+1}(s)\leftarrow\mathbb{E}_{\pi}[(1-\gamma_{t})r(s,A)+\gamma_{t}\sigma_{ \mathcal{P}_{s}^{A}}(V_{t})]\)
6:endfor
7:endfor
8:return\(V_{T}\)
```
**Algorithm 1** Robust VI: Policy Evaluation
At each time step \(t\), the discount factor \(\gamma_{t}\) is set to \(\frac{t+1}{t+2}\), which converges to \(1\) as \(t\rightarrow\infty\). Subsequently, a robust Bellman operator w.r.t. discount factor \(\gamma_{t}\) is applied to the current estimate \(V_{t}\) of the robust discounted value function \((1-\gamma_{t})V_{\mathcal{P},\gamma_{t}}^{\pi}\). As the discount factor approaches \(1\), the estimated robust discounted value function converges to the robust average-reward \(g_{\mathcal{P}}^{\pi}\) by Theorem 2. The following result shows that the output of Algorithm 1 converges to the robust average-reward.
**Theorem 3**.: _Algorithm 1 converges to robust average reward, i.e., \(\lim_{T\rightarrow\infty}V_{T}=g_{\mathcal{P}}^{\pi}\)._
Besides the robust policy evaluation problem, it is also of great practical importance to find an optimal policy that maximizes the worst-case average-reward, i.e., to solve eq.7. Based on a similar idea as the one of Algorithm 1, we extend our limit approach to solve the robust optimal control problem in Algorithm 2.
```
1:\(V_{0}(s)=0,\forall s,T\)
2:for\(t=0,1,...,T-1\)do
3:\(\gamma_{t}\leftarrow\frac{t+1}{t+2}\)
4:for all \(s\in\mathcal{S}\)do
5:\(V_{t+1}(s)\leftarrow\max\limits_{a\in\mathcal{A}}\left\{(1-\gamma_{t})r(s,a)+ \gamma_{t}\sigma_{\mathcal{P}_{s}^{\pi}}(V_{t})\right\}\)
6:endfor
7:endfor
8:for\(s\in\mathcal{S}\)do
9:\(\pi_{T}(s)\leftarrow\arg\max_{a\in\mathcal{A}}\left\{(1-\gamma_{t})r(s,a)+ \gamma_{t}\sigma_{\mathcal{P}_{s}^{\pi}}(V_{T})\right\}\)
10:endfor
11:return\(V_{T},\pi_{T}\)
```
**Algorithm 2** Robust VI: Optimal Control
Similar to Algorithm 1, at each time step, the discount factor \(\gamma_{t}\) is set to be closer to \(1\), and a one-step robust discounted Bellman operator (for optimal control) w.r.t. \(\gamma_{t}\) is applied to the current estimate \(V_{t}\). The following theorem establishes that \(V_{T}\) in Algorithm 2 converges to the optimal robust value function, hence can find the optimal robust policy.
**Theorem 4**.: _The output \(V_{T}\) in Algorithm 2 converges to the optimal robust average-reward \(g_{\mathcal{P}}^{*}\): \(V_{T}\to g_{\mathcal{P}}^{*}\) as \(T\rightarrow\infty\)._
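The optimal-control variant only replaces the policy average with a maximum over actions, plus a final greedy step; a sketch under the same contamination-set assumption as above (again illustrative, with hypothetical instances):

```
import numpy as np

def robust_q(P, r, V, gamma, R):
    """Robust (contamination) one-step values for every state-action pair."""
    S, A, _ = P.shape
    return np.array([[(1 - gamma) * r[s, a]
                      + gamma * ((1 - R) * (P[s, a] @ V) + R * V.min())
                      for a in range(A)] for s in range(S)])

def robust_vi_control(P, r, R, T=5000):
    """Algorithm 2 sketch: V_T -> optimal robust average-reward g*_P (Theorem 4)."""
    V = np.zeros(P.shape[0])
    for t in range(T):
        gamma = (t + 1) / (t + 2)
        V = robust_q(P, r, V, gamma, R).max(axis=1)
    pi = robust_q(P, r, V, gamma, R).argmax(axis=1)   # greedy (deterministic) policy
    return V, pi
```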
As discussed in [1, 1], the average-reward criterion is insensitive and underselective since it is only interested in the performance under the steady-state distribution. For example, two policies providing rewards \(100+0+0+\cdots\) and \(0+0+0+\cdots\) are equally good/bad. Towards this issue, for the non-robust setting, a more sensitive notion of optimality was introduced by Blackwell [1]. More specifically, a policy is said to be Blackwell optimal if it optimizes the discounted value function for all discount factors \(\gamma\in(\delta,1)\) for some \(\delta\in(0,1)\). Together with eq. (8), the optimal policy obtained by taking \(\gamma\to 1\) is optimal not only for the average-reward criterion, but also for the discounted criterion with large \(\gamma\). Intuitively, it is optimal under the average-reward setting, and is sensitive to early rewards.
Following a similar idea, we justify that the obtained policy from Algorithm 2 is not only optimal in the robust average-reward setting, but also sensitive to early rewards.
Denote by \(\Pi_{D}^{*}\) the set of all the deterministic optimal policies for the robust average-reward (proved to exist in Lemma 7), i.e., \(\Pi_{D}^{*}=\left\{\pi\in\Pi_{D}:g_{\mathcal{P}}^{\pi}=g_{\mathcal{P}}^{*}\right\}\).
**Theorem 5** (Blackwell optimality).: _There exists \(0<\delta<1\), such that for any \(\gamma>\delta\), the deterministic optimal robust policy for the robust discounted value function \(V_{\mathcal{P},\gamma}^{\ast}\) belongs to \(\Pi_{D}^{*}\). Moreover, when \(\Pi_{D}^{*}\) is a singleton, there exists a unique Blackwell optimal policy._
This result implies that using the limit method in this section to find the optimal robust policy for average-reward MDPs has an additional advantage that the policy it finds not only optimizes the average reward in steady state, but also is sensitive to early rewards.
It is worth highlighting the distinction of our results from the technique used in the proof of Blackwell optimality [1]. In the non-robust setting, the existence of a stationary Blackwell optimal policy is proved via contradiction, where a difference function of two policies \(\pi\) and \(\nu\): \(f_{\pi,\nu}(\gamma)\triangleq V_{\mathsf{P},\gamma}^{\pi}-V_{\mathsf{P},\gamma}^{\nu}\) is used in the proof. It was shown by contradiction that \(f\) has infinitely many zeros, which however contradicts with the fact that \(f\) is a rational function of \(\gamma\) with a finite number of zeros. A similar technique was also used in [20] for the finite interval uncertainty set. Specifically, in [20], it was shown that the worst-case transition kernels for any \(\pi,\gamma\) are from a finite set \(\mathcal{M}\), hence \(f_{\pi,\nu}(\gamma)\triangleq\min_{\mathsf{P}\in\mathcal{M}}V_{\mathsf{P},\gamma}^{\pi}-\min_{\mathsf{P}\in\mathcal{M}}V_{\mathsf{P},\gamma}^{\nu}\) can also be shown to be a rational function with a finite number of zeros. For a general uncertainty set \(\mathcal{P}\), the difference function \(f_{\pi,\nu}(\gamma)\), however, may not be rational. This makes the method in [1, 20] inapplicable to our problem.
## 4 Direct Approach for Robust Average-Reward MDPs
The limit approach in Section 3 is based on the uniform convergence of the discounted value function, and uses discounted MDPs to approximate average-reward MDPs. In this section, we develop a direct approach to solving the robust average-reward MDPs that does not adopt discounted MDPs as intermediate steps.
For average-reward MDPs, the relative value iteration (RVI) approach [20] is commonly used since it is numerically stable and has a convergence guarantee. In the following, we generalize the RVI algorithm to the robust setting, and design the robust RVI algorithm in Algorithm 3.
We first generalize the relative value function in eq. (2) to the robust relative value function. The robust relative value function measures the difference between the worst-case cumulative reward and the worst-case average-reward for a policy \(\pi\).
**Definition 1**.: _The robust relative value function is defined as_
\[V_{\mathcal{P}}^{\pi}(s)\triangleq\min_{\kappa\in\bigotimes_{t\geq 0}\mathcal{P}}\mathbb{E}_{\kappa,\pi}\bigg[\sum_{t=0}^{\infty}(r_{t}-g_{\mathcal{P}}^{\pi})|S_{0}=s\bigg], \tag{11}\]
_where \(g_{\mathcal{P}}^{\pi}\) is the worst-case average-reward defined in eq. (5)._
The following theorem presents a robust Bellman equation for robust average-reward MDPs.
**Theorem 6**.: _For any \(s\) and \(\pi\), \((V_{\mathcal{P}}^{\pi},g_{\mathcal{P}}^{\pi})\) is a solution to the following robust Bellman equation:_
\[V(s)+g=\sum_{a}\pi(a|s)\left(r(s,a)+\sigma_{\mathcal{P}_{s}^{a}}(V)\right). \tag{12}\]
It can be seen that the robust Bellman equation for average-reward MDPs has a similar structure to the one for discounted MDPs in eq. (6), except for the discount factor. This actually reveals a fundamental difference between the robust Bellman operators of discounted MDPs and average-reward ones. For a discounted MDP, its robust Bellman operator is a contraction with constant \(\gamma\) [12, 20], and hence the fixed point is unique. Based on this, the robust value function can be found by recursively applying the robust Bellman operator (see the Appendix). In sharp contrast, in the average-reward setting, the robust Bellman operator is not necessarily a contraction, and the fixed point may not be unique. Therefore, repeatedly applying the robust Bellman operator in the average-reward setting may not even converge, which underscores that the two problem settings are fundamentally different.
We first derive the following equivalent optimality condition for robust average-reward MDPs.
**Theorem 7**.: _For any \((g,V)\) that is a solution to_
\[\max_{a}\left\{r(s,a)-g+\sigma_{\mathcal{P}_{s}^{a}}(V)-V(s)\right\}=0,\quad\forall s, \tag{13}\]
\(g=g_{\mathcal{P}}^{*}\)_. If we further set_
\[\pi^{*}(s)=\arg\max_{a}\left\{r(s,a)+\sigma_{\mathcal{P}_{s}^{a}}(V)\right\} \tag{14}\]
_for any \(s\in\mathcal{S}\), then \(\pi^{*}\) is an optimal robust policy._
Theorem 7 suggests that as long as we find a solution \((g,V)\) to eq. (13), which though may not be unique, then \(g\) is the optimal robust average-reward \(g_{\mathcal{P}}^{*}\), and the greedy policy \(\pi^{*}\) is the optimal policy to our robust average-reward MDP problem in eq. (7).
In the following, we generalize the RVI approach to the robust setting, and design a robust RVI algorithm in Algorithm 3. We further show that the output of this algorithm converges to a solution to eq. (13), so that the optimal policy can be obtained via eq. (14).
```
1:\(V_{0}\), \(\epsilon\) and arbitrary \(s^{*}\in\mathcal{S}\)
2:\(w_{0}\gets V_{0}-V_{0}(s^{*})\mathbb{1}\)
3:while\(sp(w_{t}-w_{t+1})\geq\epsilon\)do
4:for all \(s\in\mathcal{S}\)do
5:\(V_{t+1}(s)\leftarrow\max_{a}(r(s,a)+\sigma_{\mathcal{P}_{s}^{a}}(w_{t}))\)
6:\(w_{t+1}(s)\gets V_{t+1}(s)-V_{t+1}(s^{*})\)
7:endfor
8:endwhile
9:return\(w_{t},V_{t}\)
```
**Algorithm 3** Robust RVI
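A compact sketch of Algorithm 3 for the contamination set is given below (illustrative, not the paper's code). At termination, \(w\) approximates a solution \(V\) of eq. (13) normalized so that \(w(s^{*})=0\), and \(V_{t+1}(s^{*})\) serves as the estimate of the optimal robust average-reward; the greedy policy of eq. (14) can then be read off from \(w\).

```
import numpy as np

def robust_rvi(P, r, R, eps=1e-10, s_star=0, max_iter=100000):
    """Algorithm 3 sketch, contamination set; sp(x) = max(x) - min(x)."""
    sp = lambda x: x.max() - x.min()          # span seminorm
    S, A, _ = P.shape
    w = np.zeros(S)
    for _ in range(max_iter):
        V = np.array([max(r[s, a] + (1 - R) * (P[s, a] @ w) + R * w.min()
                          for a in range(A)) for s in range(S)])
        w_new = V - V[s_star]
        if sp(w_new - w) < eps:
            # at the fixed point, V(s*) plays the role of g in eq. (13)
            return w_new, V[s_star]
        w = w_new
    raise RuntimeError("span criterion not met within max_iter")
```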
## 5 Examples and Numerical Results
In this section, we study several commonly used uncertainty set models, including the contamination model and models defined via Kullback-Leibler (KL) divergence and total variation.
As can be observed from Algorithms 1 to 3, for different uncertainty sets, the only difference lies in how the support function \(\sigma_{\mathcal{P}^{a}_{s}}(V)\) is calculated. In the sequel, we discuss how to efficiently calculate the support function for various uncertainty sets.
We numerically compare our robust (relative) value iteration methods versus the non-robust (relative) value iteration method on different uncertainty sets. Our experiments are based on the Garnet problem \(\mathcal{G}(20,40)\) (Archibald, McKinnon, and Thomas 1995). More specifically, there are \(20\) states and \(30\) actions; the nominal transition kernel \(\mathsf{P}=\{p^{a}_{s}\in\Delta(\mathcal{S})\}\) is randomly generated according to the uniform distribution, and the reward functions \(r(s,a)\sim\mathcal{N}(0,\sigma_{s,a})\), where \(\sigma_{s,a}\sim\text{Uniform}[0,1]\). In our experiments, the uncertainty sets are designed to be centered at the nominal transition kernel. We run different algorithms, i.e., (robust) value iteration and (robust) relative value iteration, and obtain the greedy policies at each time step. Then, we use robust average-reward policy evaluation (Algorithm 1) to evaluate the robust average-reward of these policies. We plot the robust average-reward against the number of iterations.
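A Garnet-style nominal instance as described above can be generated in a few lines; this is a sketch, and the exact sampling details used in the experiments (e.g., branching factors) may differ.

```
import numpy as np

def garnet(S=20, A=30, seed=0):
    """Garnet-style nominal instance: uniform-random kernel,
    rewards r(s,a) ~ N(0, sigma_{s,a}) with sigma_{s,a} ~ Uniform[0,1]."""
    rng = np.random.default_rng(seed)
    P = rng.random((S, A, S))
    P /= P.sum(axis=-1, keepdims=True)       # nominal transition kernel
    sigma = rng.uniform(0.0, 1.0, (S, A))    # per-pair reward scales
    r = rng.normal(0.0, sigma)               # r(s,a) ~ N(0, sigma_{s,a})
    return P, r
```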
**Contamination model.** For any \((s,a)\), the uncertainty set \(\mathcal{P}^{a}_{s}\) is defined as \(\mathcal{P}^{a}_{s}=\{q:q=(1-R)p^{a}_{s}+Rp^{\prime},p^{\prime}\in\Delta(\mathcal{S})\}\), where \(p^{a}_{s}\) is the nominal transition kernel. It can be viewed as an adversarial model, where at each time-step, the environment transits according to the nominal transition kernel \(p\) with probability \(1-R\), and according to an arbitrary kernel \(p^{\prime}\) with probability \(R\). Note that \(\sigma_{\mathcal{P}^{a}_{s}}(V)=(1-R)(p^{a}_{s})^{\top}V+R\min_{s}V(s)\). Our experimental results under the contamination model are shown in Figure 1.
**Total variation.** The total variation distance is another commonly used distance metric to measure the difference between two distributions. For two distributions \(p\) and \(q\), it is defined as \(D_{TV}(p,q)=\frac{1}{2}\|p-q\|_{1}\). Consider an uncertainty set defined via total variation: \(\mathcal{P}^{a}_{s}=\{q:D_{TV}(q\|p^{a}_{s})\leq R\}\). Then, its support function can be efficiently solved as follows (Iyengar 2005): \(\sigma_{\mathcal{P}^{a}_{s}}(V)=p^{\top}V-R\min_{\mu\geq 0}\left\{\max_{s}(V(s)- \mu(s))-\min_{s}(V(s)-\mu(s))\right\}.\)
Our experimental results under the total variation model are shown in Figure 2.
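Besides the dual above, for a finite state space the worst-case kernel in the total-variation ball can also be computed directly by a greedy mass transport: move as much probability as the radius allows onto the lowest-value state, taking it from the highest-value states first. This direct route is a standard construction used here for illustration, not the paper's implementation.

```
import numpy as np

def sigma_tv(p, V, R):
    """min q^T V over {q in simplex : ||q - p||_1 / 2 <= R}, by greedy transport."""
    q = p.astype(float).copy()
    dst = int(np.argmin(V))                   # cheapest state receives mass
    budget = min(R, 1.0 - q[dst])             # cannot exceed total probability 1
    for s in np.argsort(V)[::-1]:             # take from expensive states first
        if s == dst or budget <= 0:
            continue
        m = min(q[s], budget)
        q[s] -= m
        q[dst] += m
        budget -= m
    return q @ V
```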
**Kullback-Leibler (KL) divergence.** The Kullback-Leibler divergence is widely used to measure the distance between two probability distributions. For distributions \(p,q\), it is defined as \(D_{KL}(q\|p)=\sum_{s}q(s)\log\frac{q(s)}{p(s)}\). Consider an uncertainty set defined via KL divergence: \(\mathcal{P}^{a}_{s}=\{q:D_{KL}(q\|p^{a}_{s})\leq R\}\). Then, its support function can be efficiently solved using the duality result in (Hu and Hong 2013): \(\sigma_{\mathcal{P}^{a}_{s}}(V)=-\min_{\alpha\geq 0}\left\{R\alpha+\alpha\log\left(p^{\top}e^{\frac{-V}{\alpha}}\right)\right\}.\) Our experimental results under the KL-divergence model are shown in Figure 3.
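The dual above is a one-dimensional problem in \(\alpha\), so any scalar optimizer applies; a sketch using scipy, with \(\alpha\) log-parameterized to keep it positive and a log-sum-exp for numerical stability (implementation choices of this sketch, not prescribed by the paper):

```
import numpy as np
from scipy.optimize import minimize_scalar

def sigma_kl(p, V, R):
    """sigma(V) = -min_{alpha >= 0} { R*alpha + alpha*log(p . exp(-V/alpha)) }."""
    def dual(log_a):
        a = np.exp(log_a)                 # alpha > 0 via log-parameterization
        z = -V / a
        m = z.max()                       # log-sum-exp trick
        return R * a + a * (m + np.log(p @ np.exp(z - m)))
    res = minimize_scalar(dual, bounds=(-10.0, 10.0), method="bounded")
    return -res.fun
```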
It can be seen that our robust methods can obtain policies that achieve higher worst-case reward. Also, both our limit-based robust value iteration and our direct method of robust relative value iteration converge to the optimal robust policies, which validates our theoretical results.
## 6 Conclusion
In this paper, we investigated the problem of robust MDPs under the average-reward setting. We established _uniform_ convergence of the discounted value function to the average-reward, which further implies the uniform convergence of the _robust_ discounted value function to the _robust_ average-reward. Based on this insight, we designed a robust dynamic programming approach using robust discounted MDPs as an approximation (the limit method). We theoretically proved its convergence and optimality, and established a robust version of Blackwell optimality (Blackwell 1962). We then designed a direct approach for robust average-reward MDPs, where we derived the robust Bellman equation for robust average-reward MDPs. We further designed a robust RVI method, which was proven to converge to the optimal robust solution. Technically, our proof techniques are fundamentally different from existing studies on average-reward robust MDPs, e.g., those in (Blackwell 1962; Tewari and Bartlett 2007).
## Acknowledgment
This work was supported by the National Science Foundation under Grants CCF-2106560, CCF-2007783, CCF-2106339 and CCF-1552497.
Figure 1: Comparison on contamination model with \(R=0.4\).
Figure 2: Comparison on total variation model with \(R=0.6\).
Figure 3: Comparison on KL-divergence model with \(R=0.8\). |
2306.02030 | Almost Sure Averaging for Evolution Equations driven by fractional Brownian motions | We apply the averaging method to a coupled system consisting of two evolution equations which has a slow component driven by fractional Brownian motion (FBM) with the Hurst parameter $H_1>\frac12$ and a fast component driven by additive FBM with the Hurst parameter $H_2\in(1-H_1,1)$. The main purpose is to show that the slow component of such a coupled system can be described by a stochastic evolution equation with averaged coefficients. Our first result provides a pathwise mild solution for the system of mixed stochastic evolution equations. Our main result deals with an averaging procedure which proves that the slow component converges almost surely to the solution of the corresponding averaged equation using the approach of time discretization. To do this we generate a stationary solution by an exponentially attracting random fixed point of the random dynamical system generated by the fast component. | Bin Pei, Bjoern Schmalfuss, Yong Xu | 2023-06-03T07:17:47Z | http://arxiv.org/abs/2306.02030v1

# Almost sure averaging for evolution equations driven by fractional Brownian motions
###### Abstract
We apply the averaging method to a coupled system consisting of two evolution equations which has a slow component driven by fractional Brownian motion (FBM) with the Hurst parameter \(H_{1}>\frac{1}{2}\) and a fast component driven by additive FBM with the Hurst parameter \(H_{2}\in(1-H_{1},1)\). The main purpose is to show that the slow component of such a coupled system can be described by a stochastic evolution equation with averaged coefficients. Our first result provides a pathwise mild solution for the system of mixed stochastic evolution equations. Our main result deals with an averaging procedure which proves that the slow component converges almost surely to the solution of the corresponding averaged equation using the approach of time discretization. To do this we generate a stationary solution by an exponentially attracting random fixed point of the random dynamical system generated by the fast component.
Keywords: Almost sure averaging, Random fixed points, Fractional Brownian motion, Slow-fast systems
MSC: 60G22, 60H05, 60H15, 34C29.
## 1 Introduction
The aim of this article is to address the almost sure averaging for a slow-fast system of stochastic evolution equations in a separable Hilbert space \(V\):
\[dX^{\epsilon}(t)=AX^{\epsilon}(t)\,dt+f(X^{\epsilon}(t),Y^{\epsilon}(t))\,dt+h(X^{\epsilon}(t))\,dB^{H_{1}}(t),\quad X^{\epsilon}(0)=X_{0}\in V, \tag{1.1}\]
\[dY^{\epsilon}(t)=\frac{1}{\epsilon}BY^{\epsilon}(t)\,dt+\frac{1}{\epsilon}g(X^{\epsilon}(t),Y^{\epsilon}(t))\,dt+dB^{H_{2}}(t/\epsilon),\quad Y^{\epsilon}(0)=Y_{0}\in V, \tag{1.2}\]
where \(0<\epsilon\ll 1\) is a small parameter, \(X^{\epsilon}(t)\in V\) and \(Y^{\epsilon}(t)\in V\) are the state variables, \(B^{H_{1}},B^{H_{2}}\) are the trace class \(V\)-valued fractional Brownian motions (FBMs) with \(H_{1}\in(1/2,1)\), \(H_{2}\in(1-H_{1},1)\), \(f,h,g\) are sufficiently regular.
Very often a complex system can be viewed as a combination of slow and fast motions [12, 22], see (1.1)-(1.2) for instance, which leads to equations with two widely separated time scales that are extremely difficult to analyze directly. It is highly desirable to obtain a simplified equation capturing the dynamics of the system at the slow time scale. Averaging plays an important role in extracting effective macroscopic dynamics which can approximately describe the slow motion (see [3, 37] for the deterministic case and [9, 11, 12, 20] for the stochastic case). It is worth mentioning that this idea was exploited in atmospheric science, for instance in the study of climate variability by Hasselmann [19], who received the Nobel Prize in Physics 2021, by applying
the stochastic averaging framework considering climate and weather as slow and fast motions, respectively. Later, Arnold [2] recast Hasselmann's program of reducing complex deterministic multiscale climate models to simpler stochastic models for the slow variables. Kifer [21] gave a short survey of stochastic averaging methods for random dynamical systems (RDS) with application to climate models. Eichinger et al. [10] studied sample path estimates for stochastic fast-slow systems driven by FBM and also illustrated their results in an example arising in climate modeling.
The theory of stochastic averaging has a long history which can be traced back to the work of Khasminskii [20]. We here mention only a few relevant references. Freidlin and Wentzell [12] provided a mathematically rigorous overview of fundamental stochastic averaging procedures. Pavliotis and Stuart [30] covered stochastic averaging and homogenization results obtained by perturbation analysis, see, e.g. [16, 26, 38, 39, 40, 35] (and the references therein) for further generalizations. Most of the known results in the literature considered the case of perturbation by Brownian motion (BM). Slow-fast systems with FBM, however, have seen a tremendous spike of interest in recent years [18, 25, 24, 32, 33, 29]. We mention only the most relevant results for our case here. Hairer and Li [18] considered slow-fast systems where the slow system is driven by FBM and proved that the convergence to the averaged solution takes place in probability. Pei, Inahama and Xu [32, 34] answered affirmatively that an averaging principle still holds in the mean square sense for fast-slow mixed stochastic differential equations (SDEs) if the disturbances involve both BM and FBM with \(H\in(1/3,1)\). The result of [32] was extended in [33] to establish an averaging principle in the mean square sense for stochastic partial differential equations (SPDEs) driven by FBM with an additional fast-varying diffusion process.
One naturally wonders what happens to the averaging principle of this type when the driving noise of the fast motion does not have the (semi)martingale property. A first attempt to study the long time behaviour of SDEs driven by FBM was made by Hairer [17]. That approach is closer to the usual Markovian semigroup approach to invariant measures than to the theory of RDS. Note that FBM does not define a Markov process as in the case of usual BM. Therefore, it is not possible to apply standard methods to show existence of stationary solutions. This research direction is fairly new and there are not many papers at the moment. Li and Sieber [25] established a quantitative quenched ergodic theorem on the conditional evolution of the process of the fast dynamics with frozen slow input and proved a fractional averaging principle in probability for interacting slow-fast systems including a non-Markovian fast dynamics. In contrast, we will show that the fast motion defines an RDS which has a unique, exponentially attracting random fixed point, and our intention in this article is to study the almost sure averaging replacing the invariant measures by the random fixed points of the fast motion, which are pathwise exponentially attracting. Note that the almost sure averaging for evolution equations driven by two FBMs is still an open problem in both finite dimensional and infinite dimensional state spaces. The key idea to solve the problem is to replace the stationary solution of a Markov semigroup by the attracting random fixed point of an RDS, which can exist in the case of non-white noise.
To investigate the almost sure averaging of system (1.1)-(1.2), it is essential to establish the existence of a unique solution. Another way to understand (1.1)-(1.2) is to rewrite
this system in the following form
\[\begin{pmatrix}dX^{\epsilon}(t)\\ dY^{\epsilon}(t)\end{pmatrix}=\begin{pmatrix}A&O\\ O&\frac{1}{\epsilon}B\end{pmatrix}\begin{pmatrix}X^{\epsilon}(t)\\ Y^{\epsilon}(t)\end{pmatrix}dt+\begin{pmatrix}f(X^{\epsilon}(t),Y^{\epsilon}(t))\\ \frac{1}{\epsilon}g(X^{\epsilon}(t),Y^{\epsilon}(t))\end{pmatrix}dt+\begin{pmatrix}h(X^{\epsilon}(t))&O\\ O&\mathrm{id}\end{pmatrix}\begin{pmatrix}dB^{H_{1}}(t)\\ dB^{H_{2}}(t/\epsilon)\end{pmatrix}.\]
Existence and uniqueness of solutions to this kind of equation have been established by, for instance, Maslowski and Nualart [28], Garrido-Atienza et al. [13], Chen et al. [4] and Pei et al. [33]. In this last paper, the authors were able to overcome the lack of regularity by relying on a pathwise approach, a stopping time technique and an approximation for the fractional noise. However, our technique differs qualitatively from the methods mentioned above.
The paper is organized as follows. In Section 2 we formulate the basic properties of the driving FBMs and of the pathwise stochastic integrals that are used in the paper. Section 3 contains the existence and uniqueness of a pathwise mild solution to the nonlinear infinite-dimensional evolution equations. In Section 4, we prove the almost sure averaging for the evolution equations (1.1)-(1.2). The random fixed points for the RDS generated by (1.2) are constructed in Section 4.2.
## 2 Preliminaries on the random perturbations and pathwise stochastic integrals
In this section we review some basic concepts of pathwise stochastic integrals that will be used later.
### Random perturbations
Let \((V,(\cdot,\cdot))\) be a separable Hilbert space and its norm is denoted by \(\|\cdot\|\). Let \(C([T_{1},T_{2}];V)\) be the space of continuous functions on \([T_{1},T_{2}]\) with values in \(V\) equipped with the usual norm
\[\|u\|_{\infty}=\|u\|_{\infty,T_{1},T_{2}}=\sup_{s\in[T_{1},T_{2}]}\|u(s)\|.\]
We consider a \(V\)-valued continuous trace class FBM on some interval \([0,T]\) denoted \(B^{H_{1}}\) with Hurst parameter \(H_{1}\in(1/2,1)\). The distribution \(\mathbb{P}_{H_{1}}\) of this process is determined by the covariance
\[\mathbb{E}[B^{H_{1}}(t)\otimes B^{H_{1}}(s)]=Q_{1}(|t|^{2H_{1}}+|s|^{2H_{1}}-|t-s|^{2H_{1}}),\quad\mathbb{E}[B^{H_{1}}(t)]=0 \tag{2.1}\]
where \(\mathbb{P}_{H_{1}}\) is defined on \(\mathcal{B}(C_{0}([0,T];V))\), the Borel-\(\sigma\)-algebra of the space of continuous functions \(C_{0}([0,T];V)\) which are zero at zero. Here \(Q_{1}\) is a symmetric positive operator of finite trace. In addition, for \(H_{2}\) in \((1-H_{1},1)\) let \(B^{H_{2}}\) be a continuous trace class FBM on \(\mathbb{R}\) with distribution \(\mathbb{P}_{H_{2}}\) defined on \(\mathcal{B}(C_{0}(\mathbb{R};V))\), where \(C_{0}(\mathbb{R};V)\) is the set of continuous paths on \(\mathbb{R}\) with values in \(V\) which are zero at zero, equipped with the compact open topology. The covariance of this stochastic process can be defined by (2.1) with a trace class operator \(Q_{2}\) and with \(H_{1}\) replaced by \(H_{2}\). We assume that \(B^{H_{1}}\) and \(B^{H_{2}}\) are independent.
We define \(C^{\beta}([T_{1},T_{2}];V)\) to be the Banach space of Holder continuous functions on \([T_{1},T_{2}]\) with exponent \(0<\beta<1\) having values in \(V\). A norm of this space is given by
\[\|u\|_{\beta}=\|u\|_{\beta,T_{1},T_{2}}=\|u\|_{\infty,T_{1},T_{2}}+\sup_{T_{1}\leq s<t\leq T_{2}}\frac{\|u(t)-u(s)\|}{|t-s|^{\beta}}.\]
By Kolmogorov's theorem we know that \(B^{H_{1}}\) has a version \(C^{\beta}([0,T];V)\) where we assume that \(\beta\in(1/2,H_{1})\). \(B^{H_{2}}\) has a version so that on any interval \([T_{1},T_{2}]\), \(T_{1}<T_{2}\) we have \(B^{H_{2}}|_{[T_{1},T_{2}]}\in C^{\gamma}\left([T_{1},T_{2}];V\right)\) where \(\gamma^{\prime}<H_{2}\) and
\[H_{1}>1/2,\quad H_{2}\in(1-H_{1},1). \tag{2.2}\]
Let us consider canonical versions of \(B^{H_{1}}\) and \(B^{H_{2}}\):
\[B^{H_{1}}(\omega_{1})=\omega_{1},\quad\omega_{1}\in C_{0}([0,T];V),\quad B^{H_ {2}}(\omega_{2})=\omega_{2},\quad\omega_{2}\in C_{0}(\mathbb{R};V).\]
Taking into account that we have Holder continuous versions for \(B^{H_{1}}\) and \(B^{H_{2}}\) we describe the canonical versions as follows: Let
\[\Omega_{1}=C_{0}([0,T];V)\cap C^{\beta}([0,T];V),\quad\Omega_{2}=C_{0}( \mathbb{R};V)\cap C^{\gamma^{\prime}}(\mathbb{R};V)\]
Then the canonical processes are given by
\[(\Omega_{1},\mathcal{B}(C_{0}([0,T];V))\cap\Omega_{1},\mathbb{P}^{\prime}_{H_ {1}}),\quad(\Omega_{2},\mathcal{B}(C_{0}(\mathbb{R};V))\cap\Omega_{2}, \mathbb{P}^{\prime}_{H_{2}})\]
where \(\mathbb{P}^{\prime}_{H_{1}}\) now stands for \(\mathbb{P}_{H_{1}}(\cdot\cap\Omega_{1})\) and similar for \(\mathbb{P}^{\prime}_{H_{2}}\).
We consider now the metric dynamical system (MDS)
\[(C(\mathbb{R},V),\mathcal{B}(C(\mathbb{R},V)),\mathbb{P}_{H_{2}},\theta)\]
with the measurable flow \(\theta=(\theta_{t})_{t\in\mathbb{R}}\) given by the shift operators \(\theta_{t}\omega_{2}(\cdot)=\omega_{2}(\cdot+t)-\omega_{2}(t)\), see Arnold [1, p. 546]. The measure \(\mathbb{P}_{H_{2}}\) is ergodic with respect to \((\theta_{t})_{t\in\mathbb{R}}\). For details we refer for instance to [15]. Since \(\Omega_{2}\) is \((\theta_{t})_{t\in\mathbb{R}}\)-invariant and has full measure that we can conclude \((\Omega_{2},\mathcal{B}(C_{0}(\mathbb{R},V))\cap\Omega_{2},\mathbb{P}^{\prime }_{H_{2}},\theta)\) is also ergodic.
Introduce the product measure \(\mathbb{P}:=\mathbb{P}_{H_{1}}\times\mathbb{P}_{H_{2}}\) on
\[(C_{0}([0,T];V) \times C_{0}(\mathbb{R};V),\mathcal{B}(C_{0}([0,T];V))\otimes \mathcal{B}(C_{0}(\mathbb{R};V)),\mathbb{P})\] \[=(C_{0}([0,T];V)\times C_{0}(\mathbb{R};V),\mathcal{B}(C_{0}([0,T] ;V)\times C_{0}(\mathbb{R};V)),\mathbb{P}).\]
We set \(\Omega:=\Omega_{1}\times\Omega_{2}\) and
\[\mathscr{F}=\mathcal{B}(C_{0}([0,T];V)\times C_{0}(\mathbb{R};V))\cap\Omega\]
being the trace \(\sigma\)-algebra w.r.t. \(\Omega\) equipped with the probability measure \(\mathbb{P}^{\prime}(\cdot)=\mathbb{P}(\cdot\cap\Omega)=\mathbb{P}^{\prime}_{H _{1}}\times\mathbb{P}^{\prime}_{H_{2}}\). Let us denote for the following the measures \(\mathbb{P}^{\prime}\), \(\mathbb{P}^{\prime}_{H_{1}}\), \(\mathbb{P}^{\prime}_{H_{2}}\) by \(\mathbb{P}\), \(\mathbb{P}_{H_{1}}\), \(\mathbb{P}_{H_{2}}\). Then we can describe the Holder continuous and canonical version of \((B^{H_{1}},B^{H_{2}})\) by the probability space \((\Omega,\mathscr{F},\mathbb{P})\) with paths \(\omega=(\omega_{1},\omega_{2})\in\Omega\).
### Pathwise stochastic integrals
Although this construction has already been done in the recent paper [4] (see also Maslowski et al. [28]), we present it here for the sake of completeness. We begin this subsection by introducing some function spaces. Let \(C^{\beta,\sim}([T_{1},T_{2}];V)\subset C([T_{1},T_{2}];V)\) be the set of functions with the finite norm
\[\|u\|_{\beta,\sim}=\|u\|_{\beta,\sim,T_{1},T_{2}}=\|u\|_{\infty,T_{1},T_{2}}+ \sup_{T_{1}<s<t\leq T_{2}}(s-T_{1})^{\beta}\frac{\|u(t)-u(s)\|}{|t-s|^{\beta}}.\]
For \(\rho>0\) we can consider the equivalent norm
\[\|u\|_{\beta,\rho,\sim}=\|u\|_{\beta,\rho,\sim,T_{1},T_{2}}=\sup_{s\in[T_{1},T _{2}]}e^{-\rho(s-T_{1})}\|u(s)\|\]
\[+\sup_{T_{1}<s<t\leq T_{2}}(s-T_{1})^{\beta}e^{-\rho(t-T_{1})}\frac{\|u(t)-u(s)\|}{ |t-s|^{\beta}}.\]
It is known that \(C^{\beta,\sim}([T_{1},T_{2}];V)\) is a Banach space, see [4], Lunardi [27, p.123].
Now we want to define the stochastic integral with \(\omega_{1}\in\Omega_{1}\) as integrator. The definition that we use throughout this article is given by Zahle [41], generalized to the infinite dimensional case in Chen et al. [4]. Consider the separable Hilbert space \(L_{2}(V)\) of Hilbert-Schmidt operators from \(V\) into \(V\) with the usual norm \(\|\cdot\|_{L_{2}(V)}\) and inner product \((\cdot,\cdot)_{L_{2}(V)}\). A base \((E_{ji})_{j,i\in\mathbb{N}}\) in this space is given by
\[E_{ji}e_{k}=\begin{cases}0:i\neq k,\\ e_{j},i=k\end{cases}\]
where \((e_{k})_{k\in\mathbb{N}}\) is a complete orthonormal system in \(V\). Consider the mapping \(\Psi:[0,T]\to L_{2}(V)\) and suppose that \(\psi_{ji}:=(\Psi(\cdot),E_{ji})_{L_{2}(V)}\in I_{T_{1}+}^{\alpha}(L^{p}((T_{1},T_{2});\mathbb{R}))\) and \(\psi_{ji}(T_{1}+)\) (the right-side limit of \(\psi_{ji}\) at \(T_{1}\)) exists and \(\alpha p<1\). Moreover, assume that \(\zeta_{iT_{2}-}:=(\omega_{1,T_{2}-}(t),e_{i})_{V}\in I_{T_{2}-}^{-\alpha}(L^{ p^{\prime}}((T_{1},T_{2});\mathbb{R}))\) such that \(\frac{1}{p}+\frac{1}{p^{\prime}}\leq 1\), and the mapping
\[[T_{1},T_{2}]\ni r\mapsto\|D_{T_{1}+}^{\alpha}\Psi[r]\|_{L_{2}(V)}\|D_{T_{2}- }^{1-\alpha}\omega_{1,T_{2}-}[r]\|\in L^{1}((T_{1},T_{2});\mathbb{R}))\]
where
\[D_{T_{1}+}^{\alpha}\Psi[r] =\frac{1}{\Gamma(1-\alpha)}\left(\frac{\Psi(r)}{\left(r-T_{1}\right)^{\alpha}}+\alpha\int_{T_{1}}^{r}\frac{\Psi(r)-\Psi(q)}{(r-q)^{1+\alpha}}dq\right), \tag{2.3}\] \[D_{T_{2}-}^{1-\alpha}\omega_{1,T_{2}-}[r] =\frac{(-1)^{1-\alpha}}{\Gamma(\alpha)}\left(\frac{\omega_{1}(r)-\omega_{1}\left(T_{2}-\right)}{\left(T_{2}-r\right)^{1-\alpha}}+(1-\alpha)\int_{r}^{T_{2}}\frac{\omega_{1}(r)-\omega_{1}(q)}{(q-r)^{2-\alpha}}dq\right)\]
are Weyl fractional derivatives, being \(\omega_{1,T_{2}-}(r)=\omega_{1}(r)-\omega_{1}(T_{2}-)\), with \(\omega_{1}(T_{2}-)\) the left side limit of \(\omega_{1}\) at \(T_{2}\). For the definition of the space \(I_{T_{1}+}^{\alpha}(L^{p}((T_{1},T_{2});\mathbb{R}))\) and \(I_{T_{2}-}^{\alpha}(L^{p^{\prime}}((T_{1},T_{2});\mathbb{R}))\) we refer to Samko et al. [36] and Zahle [41, p. 337].
We then introduce
\[\int_{T_{1}}^{T_{2}}\Psi d\omega_{1}:=(-1)^{\alpha}\int_{T_{1}}^{T_{2}}D_{T_{1}+}^{\alpha}\Psi[r]D_{T_{2}-}^{1-\alpha}\omega_{1,T_{2}-}[r]dr. \tag{2.4}\]
Due to Pettis' theorem and the separability of \(V\), the integrand is weakly measurable and, hence, measurable, and
\[\left\|\int_{T_{1}}^{T_{2}}\Psi(r)\,d\omega_{1}(r)\right\| =\bigg{(}\sum_{j=1}^{\infty}\bigg{\|}\sum_{i=1}^{\infty}\int_{T_{ 1}}^{T_{2}}D_{T_{1}+}^{\alpha}\psi_{ji}[r]D_{T_{2}-}^{1-\alpha}\zeta_{iT_{2}- }[r]dr\bigg{\|}^{2}\bigg{)}^{\frac{1}{2}}\] \[\leq\int_{T_{1}}^{T_{2}}\|D_{T_{1}+}^{\alpha}\Psi[r]\|_{L_{2}(V)} \|D_{T_{2}-}^{1-\alpha}\omega_{1,T_{2}-}[r]\|dr.\]
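As a quick consistency check, one can verify that (2.4) is compatible with the classical Riemann-Stieltjes integral (cf. Zahle [41]). For a constant integrand \(\Psi(r)\equiv\Psi_{0}\) the integral term in (2.3) vanishes, so
\[D_{T_{1}+}^{\alpha}\Psi[r]=\frac{\Psi_{0}}{\Gamma(1-\alpha)}(r-T_{1})^{-\alpha},\qquad\int_{T_{1}}^{T_{2}}\Psi\,d\omega_{1}=\Psi_{0}\big{(}\omega_{1}(T_{2})-\omega_{1}(T_{1})\big{)},\]
where the second identity follows from the fractional integration by parts underlying (2.4).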
**Lemma 2.1**: _Suppose that \(\Psi\in C^{\gamma}([T_{1},T_{2}];L_{2}(V))\) and \(\omega_{1}\in\Omega_{1}\) such that \(1-\beta<\alpha<\gamma,1/2<\beta.\) Then (2.4) is well defined and there exists a positive constant \(c\) such that_
\[\bigg{\|}\int_{T_{1}}^{T_{2}}\Psi(r)\,d\omega_{1}(r)\bigg{\|}\leq c\,\|\Psi\|_{\gamma,T_{1},T_{2}}\,\|\omega_{1}\|_{\beta,T_{1},T_{2}}\big{(}(T_{2}-T_{1})^{\beta}+(T_{2}-T_{1})^{\beta+\gamma}\big{)}.\]
## 3 Pathwise mild solution of stochastic evolution equations driven by two FBMs
### Ornstein-Uhlenbeck processes driven by the FBM
Let \(V\) be a separable Hilbert space and let \(-B\) be a symmetric positive operator with compact inverse. Then \(B\) generates the strongly continuous analytic semigroup \(S_{B}\) and \(-B\) is closed. The eigenelements of \(-B\) generate a complete orthonormal system \((e_{B,i})_{i\in\mathbb{N}}\) with spectrum \(0<\lambda_{B,1}\leq\lambda_{B,2}\leq\cdots\) where these eigenvalues have finite multiplicity and tend to infinity.
Consider \(\omega_{2}\in\Omega_{2}\). For simplicity we assume that this random process can be presented by the orthonormal system \((e_{B,i})_{i\in\mathbb{N}}\) generated by the linear operator \(B\):
\[\omega_{2}(t)=\sum_{i=1}^{\infty}(q_{ii}^{2})^{\frac{1}{2}}e_{B,i}\omega_{2}^{ i}(t),\quad\sum_{i=1}^{\infty}q_{ii}^{2}<\infty,\quad q_{ij}^{2}=0\quad\text{for $i\neq j$.}\]
Then the \(\omega_{2}^{i}\) are iid two-sided one-dimensional standard FBMs, where \(q_{ij}^{2}\) are the entries of \(Q_{2}\) w.r.t. the base \((e_{B,i})_{i\in\mathbb{N}}\).
Let \(\Omega_{2}^{*}\) be the set of \(\omega_{2}\in\Omega_{2}\) which are subexponentially growing:
\[\Omega_{2}^{*}=\bigcap_{m\in\mathbb{N}}\Omega_{2,m}^{*},\quad\omega_{2}\in \Omega_{2,m}^{*}\text{ iff }\lim_{t\to\pm\infty}\|\omega_{2}(t)\|e^{-\frac{1}{m}|t|}=0.\]
This set is straightforwardly \((\theta_{t})_{t\in\mathbb{R}}\)-invariant.
**Lemma 3.1**: \(\Omega_{2}^{*}\in\mathscr{F}_{2}\) _has measure one._
Note that \(\lim_{t\to+\infty}\|\omega_{2}(t)\|e^{-\frac{1}{m}|t|}=0\) if and only if
\[\lim_{n\to\infty}\sup_{s\in[n,n+1]}\|\omega_{2}(s)\|e^{-\frac{1}{m}|s|}=0 \tag{3.1}\]
and similarly for \(t\to-\infty\). Indeed, if (3.1) does not hold, then there exists a subsequence \((n_{i})_{i\in\mathbb{N}}\) and an \(\epsilon>0\) so that
\[\sup_{s\in[n_{i},n_{i}+1]}\|\omega_{2}(s)\|e^{-\frac{1}{m}|s|}>\epsilon\quad \text{for all $i\in\mathbb{N}$.}\]
Hence there exists a sequence \((n_{i})_{i\in\mathbb{N}}:[n_{i},n_{i}+1]\ni t_{i}\to\infty\) so that
\[\limsup_{i\to\infty}\|\omega_{2}(t_{i})\|e^{-\frac{1}{m}|t_{i}|}\geq\epsilon.\]
The mappings
\[\Omega_{2}\ni\omega_{2}\mapsto\sup_{s\in[n,n+1]}\|\omega_{2}(s)\|,\sup_{s\in[- n-1,-n]}\|\omega_{2}(s)\|\]
are \((\mathscr{F},\mathcal{B}(\mathbb{R}^{+}))\)-measurable. Hence \(\Omega_{2,m}^{*}\) and thus \(\Omega_{2}^{*}\) is measurable.
For \(t\to+\infty\) we have asymptotically linearly bounded growth:
\[\|\omega_{2}(t)\|\leq\sum_{i=0}^{\lfloor t\rfloor}\sup_{s\in[0,1]}\|\theta_{i}\omega_{2}(s)\|\sim\mathbb{E}\Big{[}\sup_{s\in[0,1]}\|\omega_{2}(s)\|\Big{]}\lfloor t\rfloor\]
with probability one by the ergodic theorem where the right hand side is finite, see Kunita [23, Theorem 1.4.1]. Similarly we can argue for \(t\to-\infty\). For \(\omega_{2}\) from this set we have
\[\limsup_{t\to\pm\infty}\frac{\|\omega_{2}(t)\|}{|t|}<\infty.\]
Hence \(\Omega_{2}^{*}\) contains a subset of measure one so that \(\mathbb{P}_{H_{2}}(\Omega_{2}^{*})=1\).
We consider the equation
\[dZ(t)=BZ(t)\,dt+d\omega_{2}(t),\quad Z(0)=Z_{0}\in V,\quad t\geq 0 \tag{3.2}\]
interpreted in mild form.
Let \(\pi_{p}\) be the orthonormal projection with respect to \((e_{B,i})_{i=1,\cdots,p}\). Then by Cheridito et al. [5, Proposition A.1] for some \(Z_{0}\in V\) we have a \(p\)-dimensional Ornstein Uhlenbeck process (O-U process) generated by the finite dimensional FBM \(\pi_{p}\omega_{2}\):
\[Z_{p}(t,\omega)=S_{B}(t)\pi_{p}Z_{0}+\pi_{p}\omega_{2}(t)+B\int_{0}^{t}S_{B}( t-r)\pi_{p}\omega_{2}(r)dr\]
Note that \(\pi_{p}\) commutes with \(B\) and \(S_{B}(t)\). \(\pi_{p}\omega_{2}(r)\) converges pointwise to \(\omega_{2}(r)\) on any interval \([0,t]\), and since \(\pi_{p}\omega_{2}\) has a uniformly bounded \(\gamma^{\prime}\)-Holder norm on \([0,t]\) this convergence is uniform. On the other hand, for \(\gamma<\gamma^{\prime}\), applying Maslowski and Nualart [28, (4.27)] we have the convergence of \((\pi_{p}\omega_{2})_{p\in\mathbb{N}}\) to \(\omega_{2}\) with respect to the \(\gamma\)-Holder norm. Then by the proof of Pazy [31, Lemma 4.3.4, Theorem 4.3.5 (iii)] or Lunardi [27, Theorem 4.3.1 (III)] we obtain that
\[S_{B}(t)\pi_{p}Z_{0}+\pi_{p}\omega_{2}(t)+B\int_{0}^{t}S_{B}(t-r)\pi_{p}\omega_{2}(r)dr\mathop{\longrightarrow}_{p\to\infty}S_{B}(t)Z_{0}+\omega_{2}(t)+B\int_{0}^{t}S_{B}(t-r)\omega_{2}(r)dr\]
which we consider to be the solution of (3.2) formally written as
\[S_{B}(t)Z_{0}+\int_{0}^{t}S_{B}(t-r)d\omega_{2}(r).\]
This holds for every \(t>0\).
We replace now \(\omega_{2}\) by \(\theta_{-t}\omega_{2}\):
\[\theta_{-t}\omega_{2}(t)+ B\int_{0}^{t}S_{B}(t-r)\theta_{-t}\omega_{2}(r)dr \tag{3.3}\] \[=B\int_{0}^{t}S_{B}(t-r)\omega_{2}(r-t)dr-S_{B}(t)\omega_{2}(-t)\] \[=B\int_{-t}^{0}S_{B}(-r)\omega_{2}(r)dr-S_{B}(t)\omega_{2}(-t)\]
by
\[B\int_{0}^{t}S_{B}(t-r)\omega_{2}(-t)dr=S_{B}(t)\omega_{2}(-t)-\omega_{2}(-t).\]
We show that the right hand side of (3.3) is uniformly bounded for \(t>1\). We have
\[\begin{split}\left\|B\int_{-t}^{0}S_{B}(-r)\omega_{2}(r)dr\right\|\leq&\bigg{\|}B\int_{-1}^{0}S_{B}(-r)\omega_{2}(r)dr\bigg{\|}\\ &+\left\|B\int_{-t}^{-1}S_{B}(-r)\omega_{2}(r)dr\right\|.\end{split} \tag{3.4}\]
In particular we can estimate the first term
\[\begin{split}\left\|B\int_{-1}^{0}S_{B}(-r)\omega_{2}(r)dr\right\| \leq&\bigg{\|}B\int_{0}^{1}S_{B}(1-r)\theta_{-1}\omega_{2}(r)dr \bigg{\|}\\ &+\|S_{B}(1)\omega_{2}(-1)\|+\|\omega_{2}(-1)\|<\infty.\end{split}\]
The right hand side is finite by Pazy [31, Theorem 4.3.5]. For the second norm we have that for \(\omega_{2}\in\Omega_{2}^{*}\) for any \(\lambda_{B,1}>\lambda_{B}>2\zeta>0\) there exists a \(C(\zeta,\omega_{2})\) so that
\[\|\omega_{2}(t)\|\leq C(\zeta,\omega_{2})e^{\zeta|t|}\quad\text{for }t\in \mathbb{R}.\]
Note that there is a constant \(C_{\zeta}\) so that
\[\|BS_{B}(t)\|\leq C_{\zeta}\frac{1}{t}e^{-(\lambda_{B}-\zeta)t},\quad\text{ for }t>0.\]
Thus the last norm in (3.4) is bounded for any \(t>1\) by
\[\frac{C_{\zeta}C(\zeta,\omega_{2})}{\lambda_{B}-2\zeta}.\]
Hence the random variable
\[\begin{split} Z(\omega_{2})=& B\int_{-\infty}^{0}S_ {B}(-q)\omega_{2}(q)\,dq\\ &=\lim_{t\to\infty}\Big{(}B\int_{-t}^{0}S_{B}(-q)\omega_{2}(q)\, dq-S_{B}(t)\omega_{2}(-t)+S_{B}(t)Z_{0}\Big{)}\end{split}\]
is well defined.
Consider the mild form of (3.2) with initial time \(r\in\mathbb{R}\) and \(t>r\):
\[\begin{split} Z(t,\omega_{2})=& S_{B}(t-r)Z_{0}+\int_{r}^{t}S_{B}(t-q)\,d\omega_{2}(q)\\ =& S_{B}(t-r)Z_{0}+\int_{0}^{t-r}S_{B}(t-r-q)\,d\theta_{r}\omega_{2}(q)\\ =& S_{B}(t-r)Z_{0}+B\int_{0}^{t-r}S_{B}(t-r-q)\theta_{r}\omega_{2}(q)\,dq+\theta_{r}\omega_{2}(t-r).\end{split} \tag{3.5}\]
In particular for \(Z_{0}=Z(\theta_{r}\omega_{2})\) we have
\[Z(\theta_{t}\omega_{2})=Z(t,\omega_{2})\]
is a stationary solution to (3.2). In addition, by Pazy [31, Theorem 4.3.5 (iii)] the second and the third terms on the right hand side of (3.5) have a finite \(\gamma\)-Holder norm with respect to \(t\in[r,T]\). Then the right hand side has a finite \(\gamma\)-Holder norm with respect to \(t\in[r+\delta,T],0<\delta<T-r\), so that \(Z(\theta_{\cdot}\omega_{2})\) has a finite \(\gamma\)-Holder norm on any compact interval.
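For intuition, the stationary O-U process constructed above can be simulated. The following is a minimal one-dimensional sketch (a single eigenmode of \(B\) with eigenvalue \(-\lambda\), not the infinite-dimensional construction): an exact fBM path is drawn by Cholesky factorization of the covariance (2.1) with \(Q_{1}\) replaced by \(1/2\), and (3.2) is discretized by an Euler scheme; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def fbm_path(n, T, H):
    """Exact fBM sample on a uniform grid via Cholesky of the covariance."""
    t = np.linspace(T / n, T, n)
    u, s = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (u**(2 * H) + s**(2 * H) - np.abs(u - s)**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
    return np.concatenate([[0.0], L @ rng.standard_normal(n)])

H2, lam, T, n = 0.7, 2.0, 10.0, 2000
dt = T / n
B_H = fbm_path(n, T, H2)            # one path of the driving fBM on [0, T]
Z = np.zeros(n + 1)                 # Euler scheme for dZ = -lam * Z dt + dB^H
for k in range(n):
    Z[k + 1] = Z[k] - lam * Z[k] * dt + (B_H[k + 1] - B_H[k])
print("Z(T) =", Z[-1])              # after a transient, Z is close to stationary
```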
Let \(\omega_{2,\epsilon}(\cdot)\) be the scaled function \(\omega_{2}(\frac{1}{\epsilon}\cdot)\). Over the probability space \((\Omega_{2},\mathscr{F}_{2},\mathbb{P}_{H_{2}})\) this is an FBM which has the same distribution as \(\frac{1}{\epsilon^{H_{2}}}\omega_{2}(\cdot)\). We consider the stationary mild solution of
\[dZ^{\epsilon}(t)=\frac{1}{\epsilon}BZ^{\epsilon}(t)\,dt+d\omega_{2,\epsilon}(t). \tag{3.6}\]
Similarly to the above, this solution process is given by
\[Z^{\epsilon}(\theta_{r}\omega_{2})=\frac{1}{\epsilon}B\int_{-\infty}^{0}S_{ \frac{B}{\epsilon}}(-q)\theta_{r}\omega_{2,\epsilon}(q)\,dq,\quad r\in\mathbb{R}\]
which is a continuous random process solving (3.6). We note that for any \(\epsilon\in(0,1]\), \(\omega_{2,\epsilon}\in\Omega_{2}^{*}\) if and only if \(\omega_{2}\in\Omega_{2}^{*}\).
The following lemma describes the relation between \(Z\) and \(Z^{\epsilon}\).
**Lemma 3.2**: _Let \(\omega_{2,\epsilon}(\cdot)=\omega_{2}(\frac{1}{\epsilon}\cdot)\). Then we have_
\[Z(\theta_{\frac{r}{\epsilon}}\omega_{2})=Z^{\epsilon}(\theta_{r}\omega_{2}) \quad r\in\mathbb{R}.\]
We have
\[Z(\theta_{\frac{r}{\epsilon}}\omega_{2}) = B\int_{-\infty}^{0}S_{B}(-q)\theta_{\frac{r}{\epsilon}}\omega_{ 2}(q)dq\] \[= B\int_{-\infty}^{0}S_{B}(-q)(\omega_{2,\epsilon}(r+\epsilon q)- \omega_{2,\epsilon}(r))dq\] \[= \frac{1}{\epsilon}B\int_{-\infty}^{0}S_{\frac{B}{\epsilon}}(-q) (\omega_{2,\epsilon}(r+q)-\omega_{2,\epsilon}(r))dq\] \[= \frac{1}{\epsilon}B\int_{-\infty}^{0}S_{\frac{B}{\epsilon}}(-q) \theta_{r}\omega_{2,\epsilon}(q)dq=Z^{\epsilon}(\theta_{r}\omega_{2}).\]
We note that \(S_{\frac{B}{\epsilon}}(t)=S_{B}(\frac{t}{\epsilon})\) for \(t\geq 0\).
**Lemma 3.3**: _We have:_
1. \(\mathbb{E}[\sup_{s\in[0,T]}\|Z(\theta_{s}\omega_{2})\|]<\infty\)_._
2. _Let_ \(T>0\)_. For_ \(\epsilon\to 0\)_, on a_ \((\theta_{t})_{t\in\mathbb{R}}\)_-invariant set of full measure,_ \[\sup_{s\in[0,T]}\|Z^{\epsilon}(\theta_{s}\omega_{2})\|=o(\epsilon^{-1}).\]
Proof: (1) To see that \(\mathbb{E}[\sup_{t\in[0,T]}\|Z(\theta_{t}\omega_{2})\|]<\infty\) we have
\[\sup_{t\in[0,T]}\|Z(\theta_{t}\omega_{2})\| \leq \sup_{t\in[0,T]}\|Z(\theta_{t}\omega_{2})-Z(\omega_{2})\|+\|Z(\omega_{2})\|\] \[\leq \sup_{s<t\in[0,T]}\|Z(\theta_{t}\omega_{2})-Z(\theta_{s}\omega_{2})\|+\|Z(\omega_{2})\|.\]
Then we have by Lunardi [27, Theorem 4.3.1 (III)]
\[\mathbb{E}[\sup_{s<t\in[0,T]}\|Z(\theta_{t}\omega_{2})-Z(\theta_{s}\omega_{2 })\|]\leq C\mathbb{E}[\|\omega_{2}\|_{\gamma}]T^{\gamma}.\]
By Kunita [23, Theorem 1.4.1] we have a random variable \(K(\omega_{2})\) such that
\[\mathbb{E}[\|\omega_{2}\|_{\gamma}]\leq\mathbb{E}[K(\omega_{2})]\leq(\mathbb{E}[ K(\omega_{2})^{2n}])^{1/(2n)}<\infty\]
when \(2nH_{2}>1\). The finiteness of \(\mathbb{E}[\|Z(\omega_{2})\|]\) follows by taking the expectation of the right hand side of (3.3), having in mind \(\mathbb{E}[\|\omega_{2}(t)\|^{2}]\leq\big{(}\sum_{i}q_{ii}^{2}\big{)}|t|^{2H_{2}}\).
(2) By (1) we can apply Arnold [1, Proposition 4.1.3] and know that \(\|Z(\theta_{q}\omega_{2})\|\) is sublinear growing i.e.
\[\lim_{q\to\pm\infty}\|Z(\theta_{q}\omega_{2})\|\cdot|q|^{-1}=0\]
on a \((\theta_{t})_{t\in\mathbb{R}}\) invariant set of full measure. Suppose the assertion does not hold for \(\omega_{2}\) from the invariant set mentioned above. Then there exists a \(\delta>0\) and a sequence \((\epsilon_{j})_{j\in\mathbb{N}}\), \(\epsilon_{j}>0\) tending to zero so that by Lemma 3.2
\[\sup_{s\in[0,T]}\|Z^{\epsilon_{j}}(\theta_{s}\omega_{2})\|\epsilon_{j}=\sup_{s\in[0,T]}\|Z(\theta_{\frac{s}{\epsilon_{j}}}\omega_{2})\|\epsilon_{j}>\delta \tag{3.7}\]
for every \(j\). Let \(s_{\epsilon_{j}}\) be the largest element in \([0,T/\epsilon_{j}]\) so that
\[\sup_{q\in[0,T/\epsilon_{j}]}\|Z(\theta_{q}\omega_{2})\|=\|Z(\theta_{s_{ \epsilon_{j}}}\omega_{2})\|.\]
For \(j\to\infty\) the sequence \((s_{\epsilon_{j}})\) tends to \(\infty\). Hence
\[0=\lim_{j\to\infty}\|Z(\theta_{s_{\epsilon_{j}}}\omega_{2})\|\cdot\frac{1}{s _{\epsilon_{j}}}\geq\lim_{j\to\infty}\|Z(\theta_{s_{\epsilon_{j}}}\omega_{2}) \|\cdot\frac{\epsilon_{j}}{T}=0 \tag{3.8}\]
which is a contradiction to (3.7).
### Description of the problem of stochastic evolution equations
Let \(u(t)=(u_{1}(t),u_{2}(t))\in V\times V\) be the solution of
\[du(t)=Ju(t)\,dt+F(u(t))\,dt+G(u(t))\,(d\omega_{1}(t),d\omega_{2}(t)) \tag{3.9}\]
where \(u(0)=u_{0}=(u_{01},u_{02})\in V\times V,\)\(\omega_{1}\) is a path of the canonical FBM with Hurst exponent \(H_{1}\), and \(\omega_{2}\) is a path of the canonical FBM with Hurst exponent \(H_{2}\) so that (2.2) holds. In contrast to the equation considered in Maslowski et al. [28] and Chen et al. [4], (3.9) contains the term \(\omega_{2}\) which does not have the Holder regularity of \(\omega_{1}\).
We describe the assumptions regarding the operator \(J\) and nonlinear terms \(F\) and \(G\) as follows
1. Let \(-J\) be a closed positive symmetric operator with compact inverse. Then, \(J\) generates an exponential analytic semigroup \(S_{J}\) on \(V\times V\), such that \(\|S_{J}(t)\|\leq e^{-\lambda_{J}t},\lambda_{J}>0\), for \(t\geq 0\).
2. The operator \(F:V\times V\to V\times V\) is Lipschitz continuous with Lipschitz constant \(c_{DF}\).
3. The operator is given by \[G=\left(\begin{array}{cc}h(u_{1})&0\\ 0&\mathrm{id}\end{array}\right)\] where \(h:V\to L_{2}(V)\). The latter space is the space of Hilbert Schmidt operators on \(V\). \(h\) has bounded first and second derivatives with bounds \(c_{Dh}\) and \(c_{D^{2}h}\).
Let \(-J\) be as in (H1). We can assume that \(-J\) has a positive spectrum of finite multiplicity \((\lambda_{J,i})_{i\in\mathbb{N}}\) so that \(\lim_{i\to\infty}\lambda_{J,i}=\infty.\) The associated eigenelements \((e_{J,i})_{i\in\mathbb{N}}\) are chosen so that they form a complete orthonormal system.
For \(\alpha\geq 0\) define the Banach spaces \(\tilde{V}_{\alpha}=D((-J)^{\alpha})\) where the norm of this space is given in the following definition
\[\tilde{V}_{\alpha}=\bigg{\{}u\in\tilde{V}:\|u\|_{\alpha}^{2}=\sum_{i=1}^{\infty}\tilde{\lambda}_{i}^{2\alpha}|\hat{u}_{i}|^{2}<\infty,\quad u=\sum_{i=1}^{\infty}\hat{u}_{i}e_{J,i}\bigg{\}}\ \ \text{with }\tilde{V}=\tilde{V}_{0}.\]
Here \(\tilde{V}\) stands for \(V\times V\), which has the orthonormal basis \((e_{J,i})_{i\in\mathbb{N}}\subset\tilde{V}\) coming from the eigenvectors of \(-J\), with eigenvalues \(\tilde{\lambda}_{i}=\lambda_{J,i}\). Then \(\tilde{V}_{1}=D(-J)\) is the domain of \(J\). However, when considering the operators \(-A,\,-B\) later, we set \(\tilde{V}=V\), and the eigenvalues and eigenvectors \(\lambda_{A,i},e_{A,i}\) and \(\lambda_{B,i},e_{B,i}\) are given by those of \(-A,\,-B\). The properties of \(B\) have been used at the beginning of Section 3.1. The operators \(A,\,B\) generate exponential semigroups \(S_{A},\,S_{B}\) with properties similar to those of \(S_{J}\), but defined on \(\tilde{V}=V\). Let \(L(\tilde{V}_{v},\tilde{V}_{\zeta})\) denote the space of continuous linear operators from \(\tilde{V}_{v}\) into \(\tilde{V}_{\zeta}\). There exists a constant \(c>0\), such that for \(0\leq s<t\leq T\), we have
\[\left\|S_{J}(t)\right\|_{L(\tilde{V},\tilde{V}_{\sigma})} \leq ct^{-\sigma}e^{-\lambda_{J}t},\ \ \text{for}\ \,\sigma>0 \tag{3.10}\] \[\left\|S_{J}(t-s)-\operatorname{id}\right\|_{L(\tilde{V}_{\sigma},\tilde{V})} \leq c(t-s)^{\sigma},\ \ \text{for}\ \,\sigma\in[0,1]. \tag{3.11}\]
In (3.10) notice that \(\lambda_{J}\) is a positive constant. We also note that, for \(0<\sigma\leq 1\), there exists \(c>0\) such that for \(0\leq q\leq r\leq s\leq t\), we derive
\[\left\|S_{J}(t-r)-S_{J}(t-q)\right\|_{L(\tilde{V})} \leq c(r-q)^{\sigma}(t-r)^{-\sigma} \tag{3.12}\]
and for \(\varrho,\,\nu\in(0,1]\)
\[\left\|S_{J}(t-r)-S_{J}(s-r)- S_{J}(t-q)+S_{J}(s-q)\right\|_{L(\tilde{V})}\] \[\leq c(t-s)^{\varrho}(r-q)^{\nu}(s-r)^{-(\varrho+\nu)}.\]
_Remark 3.4_.: By (H3), it is easy to obtain the estimate (see for instance Maslowski et al. [28])
\[\left\|h(u)\right\|_{L_{2}(V)} \leq c_{h}+c_{Dh}\|u\|,\] \[\left\|h(u)-h(v)\right\|_{L_{2}(V)} \leq c_{Dh}\|u-v\|,\] \[\left\|h(u_{1})-h(u_{2})-h(v_{1})+h(v_{2})\right\|_{L_{2}(V)} \leq c_{Dh}\|u_{1}-v_{1}-(u_{2}-v_{2})\|\] \[+c_{D^{2}h}\|u_{1}-u_{2}\|(\|u_{1}-v_{1}\|+\|u_{2}-v_{2}\|)\]
for all \(u,\,v,\,u_{i},\,v_{i}\in V,i=1,2\).
### Pathwise mild solution
Let \(t\to Z(\theta_{t}\omega_{2})\) be the stationary O-U process defined in Section 3.1.
In the different formulas, \(c\) will denote a generic constant that may differ from line to line. Sometimes we will write \(c_{T}\) when we want to stress the dependence on \(T\).
Let now
\[J=\left(\begin{array}{cc}A&0\\ 0&B\end{array}\right).\]
We interpret the equation (3.9) in mild form:
\[u(t) = S_{J}(t)u_{0}+\int_{0}^{t}S_{J}(t-r)F(u(r))\,dr+\int_{0}^{t}S_{J}(t-r)G(u(r))\,d\omega(r)\] \[= \left(\begin{array}{c}S_{A}(t)u_{01}\\ S_{B}(t)(u_{02}-Z(\omega_{2}))\end{array}\right)+\left(\begin{array}{c}0\\ Z(\theta_{t}\omega_{2})\end{array}\right)+\int_{0}^{t}S_{J}(t-r)F(u(r))dr\] \[+\left(\begin{array}{c}\int_{0}^{t}S_{A}(t-r)h(u_{1}(r))d\omega_{1}(r)\\ 0\end{array}\right). \tag{3.14}\]
To estimate the stochastic integral in the last expression of (3.14), we refer to [4, 28, 41] where a similar estimate is derived for \(u\in C^{\gamma,\sim}([0,T],V\times V)\). In addition, from (2.2) it follows that we can find \(\alpha,\,\beta,\,\gamma\) such that \(\beta>1/2\), \(0<\alpha<\gamma<\beta,\,\beta+\alpha>1\). The condition \(\alpha<\gamma\) allows to define \(D^{\alpha}_{0+}h(u_{1}(\cdot))[r]\) and \(\alpha+\beta>1\) allows to define \(D^{1-\alpha}_{T-}\omega_{1,T}(\cdot)[r]\), see (2.3). Then we have
\[\int_{0}^{t}S_{A}(t-r)h(u_{1}(r))\,d\omega_{1}(r)=(-1)^{\alpha}\int_{0}^{t}D^ {\alpha}_{0+}S_{A}(t-\cdot)h(u_{1}(\cdot))[r]D^{1-\alpha}_{T-}\omega_{1,T}( \cdot)[r]\,dr.\]
We first introduce the operator \(u\mapsto\mathcal{T}(u,\omega_{1},\omega_{2},u_{0})\) for fixed \(u_{0}\) with domain \(C^{\gamma,\sim}([0,T];V\times V)\) and \(\omega_{1}\in C^{\beta,\sim}([0,T];V)\), \(\omega_{2}\in C^{\gamma,\sim}([0,T];V)\). This operator is defined by
\[\mathcal{T}(u,\omega_{1},\omega_{2},u_{0})[t] := \left(\begin{array}{c}S_{A}(t)u_{01}\\ S_{B}(t)(u_{02}-Z(\omega_{2}))\end{array}\right)+\left(\begin{array}{c}0\\ Z(\theta_{t}\omega_{2})\end{array}\right)\] \[+\int_{0}^{t}S_{J}(t-r)F(u(r))dr+\left(\begin{array}{c}\int_{0 }^{t}S_{A}(t-r)h(u_{1}(r))d\omega_{1}(r)\\ 0\end{array}\right).\]
**Lemma 3.5**: _Let (H1)-(H3) and (2.2) hold, so that the above conditions on \(\alpha,\,\beta,\,\gamma\) with \(\gamma<\gamma^{\prime}<H_{2}\) can be ensured. Then for any \(T>0\) there exists a \(c_{T}>0\) such that for \(\rho>0,\omega_{1}\in C^{\beta}([0,T],V)\), \(\omega_{2}\in C^{\gamma^{\prime}}([0,T],V)\) and \(u\in C^{\gamma,\sim}([0,T],V\times V)\), we have_
\[\|\mathcal{T}(u,\omega_{1},\omega_{2},u_{0})\|_{\gamma,\rho,\sim}\leq c_{T}(\|u_{01}\|+\|u_{02}\|+\|Z(\theta_{\cdot}\omega_{2})\|_{\gamma})+C(\rho,\omega_{1},T)(1+\|u\|_{\gamma,\rho,\sim}) \tag{3.16}\]
_where \(C(\rho,\omega_{1},T)>0\) such that \(\lim_{\rho\to\infty}C(\rho,\omega_{1},T)=0\)._
A quite similar result was proved in [4, Lemma 10], but without the drift and the \(\omega_{2}\)-term; for the sake of completeness we give the proof in Appendix 4.4.
**Lemma 3.6**: _Let (H1)-(H3) and (2.2) hold. Then for any \(T>0\) there exists a \(c_{T}>0\) such that for \(\rho>0,\omega_{1}\in C^{\beta}([0,T],V)\), \(\omega_{2}\in C^{\gamma}([0,T],V)\) and \(u^{1},u^{2}\in C^{\gamma,\sim}([0,T];V\times V)\),_
\[\|\mathcal{T}(u^{1},\omega_{1},\omega_{2},u_{0}) -\mathcal{T}(u^{2},\omega_{1},\omega_{2},u_{0})\|_{\gamma,\rho,\sim}\] \[\leq c_{T}(1+\|u^{1}\|_{\gamma,\rho,\sim}+\|u^{2}\|_{\gamma,\rho, \sim})\tilde{K}(\rho)\|u^{1}-u^{2}\|_{\gamma,\rho,\sim}\]
_where \(\lim_{\rho\to\infty}\tilde{K}(\rho)=0\)._
The proof follows by using the same techniques as in [4, Lemma 11], because the \(Z\)-term is cancelled.
**Lemma 3.7**: _Let (H1)-(H3) and (2.2) hold and \(u_{0}\in V\times V\). Then, for every \(T>0\), (3.9) has a unique solution \(u\) in \(C^{\gamma,\sim}([0,T];V\times V)\)._
According to (3.16), for sufficiently large \(\rho\) there is a centered closed ball in \((C^{\gamma,\sim}([0,T];V\times V),\|\cdot\|_{\gamma,\rho,\sim})\) which is mapped by \(\mathcal{T}(\cdot,\omega_{1},\omega_{2},u_{0})\) into itself. Then, by Lemma 3.6 and Theorem 12 in [4], (3.9) has a unique solution \(u\) in \(C^{\gamma,\sim}([0,T];V\times V)\).
## 4 Almost Sure Averaging for Fast-Slow Evolution Equations
### Description of the averaging problem
In this subsection, our intention is to convert the original system (1.1)-(1.2) into a reduced system without the fast component. Thus, we are interested in solving the following stochastic system
\[dX^{\epsilon}(t) = AX^{\epsilon}(t)\,dt+f(X^{\epsilon}(t),Y^{\epsilon}(t))\,dt+h(X ^{\epsilon}(t))\,d\omega_{1}(t),X^{\epsilon}(0)=X_{0}\in V, \tag{4.1}\] \[dY^{\epsilon}(t) = \frac{1}{\epsilon}BY^{\epsilon}(t)\,dt+\frac{1}{\epsilon}g(X^{ \epsilon}(t),Y^{\epsilon}(t))\,dt+d\omega_{2,\epsilon}(t),Y^{\epsilon}(0)=Y_{ 0}\in V \tag{4.2}\]
where \(\omega_{1}\) is a path of the canonical FBM with Hurst exponent \(H_{1}\), \(\omega_{2}\) is a path of the canonical FBM with Hurst exponent \(H_{2}\), and \(H_{1}\in(\frac{1}{2},1),H_{2}\in(1-H_{1},1)\), which have been given in Section 2. Then \(\omega_{2,\epsilon}(\cdot)=\omega_{2}(\frac{\cdot}{\epsilon})\). By the solution of (4.1)-(4.2) on \([0,T]\), we mean a process \((X^{\epsilon},Y^{\epsilon})\) which satisfies
\[X^{\epsilon}(t) = S_{A}(t)X_{0}+\int_{0}^{t}S_{A}(t-r)f(X^{\epsilon}(r),Y^{ \epsilon}(r))\,dr \tag{4.3}\] \[+\int_{0}^{t}S_{A}(t-r)h(X^{\epsilon}(r))\,d\omega_{1}(r)\]
and
\[Y^{\epsilon}(t) = S_{\frac{B}{\epsilon}}(t)Y_{0}+\frac{1}{\epsilon}\int_{0}^{t}S_ {\frac{B}{\epsilon}}(t-r)g(X^{\epsilon}(r),Y^{\epsilon}(r))\,dr \tag{4.4}\] \[+\int_{0}^{t}S_{\frac{B}{\epsilon}}(t-r)\,d\omega_{2,\epsilon}(r).\]
We assume that the following conditions for the coefficients of the system are fulfilled.
1. We assume for the operators \(A,B\) the conditions of (H1).
2. The coefficient \(F(x,y)\) from (H2) is now specialized by \((f(x,y),1/\epsilon\,g(x,y))\). The coefficients \(f(x,y):V\times V\to V\) of (1.1) and \(g(x,y):V\times V\to V\) of (1.2) are globally Lipschitz continuous in \(x,y\), i.e., there exists a positive constant \(C_{1}\) such that, with \(C_{2}=\|g(0,0)\|\), \[\|f(x_{1},y_{1})-f(x_{2},y_{2})\|+\|g(x_{1},y_{1})-g(x_{2},y_{2})\| \leq C_{1}(\|x_{1}-x_{2}\|+\|y_{1}-y_{2}\|)\] \[\|g(x,y)\| \leq C_{1}(\|x\|+\|y\|)+C_{2}\] for all \(x_{1},y_{1},x_{2},y_{2},x,y\in V\).
3. \(h\) satisfies the conditions of (H3).
4. \(f\) is bounded.
**Lemma 4.1**: _Let (A1)-(A3) and (2.2) hold. For any \(X_{0}\in V,Y_{0}\in V\) and \(T>0\), there is a unique solution \((X^{\epsilon},Y^{\epsilon})\) to (4.3)-(4.4) in \(C^{\gamma,\sim}([0,T];V\times V)\)._
This is just a special case of Lemma 3.7; we omit the proof here.
We present now the main result of this article.
**Theorem 4.2**: _Let (A1)-(A4) and (2.2) hold and assume further that \(\lambda_{B}>C_{1}\). For any \(X_{0}\in V\), as \(\epsilon\to 0\) the solution of (4.1) converges to \(\bar{X}\), which solves the averaged equation (4.5) below. That is, we have almost surely_
\[\lim_{\epsilon\to 0}\|X^{\epsilon}-\bar{X}\|_{\gamma,\sim}=0\]
_where \(\bar{X}\) is the mild solution to the averaged equation_
\[\bar{X}(t)=S_{A}(t)X_{0}+\int_{0}^{t}S_{A}(t-r)\bar{f}(\bar{X}(r))\,dr+\int_{0} ^{t}S_{A}(t-r)h(\bar{X}(r))\,d\omega_{1}(r) \tag{4.5}\]
_where the Lipschitz continuous function \(\bar{f}\) will be given in (4.12) later._
To prove Theorem 4.2, we first obtain the random fixed point for the RDS generated by (4.2) in Section 4.2. Then, we give the proof of Theorem 4.2 in Section 4.3.
### Random fixed points
To describe the behavior of the fast variable we have to introduce an RDS. Let \(E\) be a separable Banach space, and \((\hat{\Omega},\hat{\mathscr{F}},\hat{\mathbb{P}},\theta)\) be an ergodic MDS. A measurable mapping
\[\varphi:\mathbb{R}^{+}\times\hat{\Omega}\times E\to E\]
is called an RDS if the cocycle property holds:
\[\varphi(t+\tau,\omega,b)=\varphi(t,\theta_{\tau}\omega,\varphi(\tau,\omega,b))\quad\text{for $t$, $\tau\in\mathbb{R}^{+}$, $\omega\in\hat{\Omega}$, $b\in E$,}\]
and \(\varphi(0,\omega,\cdot)=\mathrm{id}_{E}\). For details we refer to Arnold [1].
Consider the parameterized equation
\[dY^{\epsilon,x}(t)=\frac{1}{\epsilon}BY^{\epsilon,x}(t)\,dt+\frac{1}{\epsilon }g(x,Y^{\epsilon,x}(t))\,dt+d\omega_{2,\epsilon}(t) \tag{4.6}\]
where \(x\in V\) is a fixed but arbitrary element. Straightforwardly, this equation interpreted in the mild sense generates an RDS for every \(x\in V\). We would like to show that under particular conditions on \(g\) the RDS generated by this equation has a random fixed point.
**Definition 4.3**: _Let \(\varphi\) be an RDS over an ergodic metric dynamical system \((\hat{\Omega},\hat{\mathscr{F}},\hat{\mathbb{P}},\theta)\) with values in the separable Banach space \(E\). A random variable \(Y:\hat{\Omega}\to E\) is called a random fixed point for \(\varphi\) if_
\[\varphi(t,\omega,Y(\omega))=Y(\theta_{t}\omega)\]
_for all \(t\geq 0\) on a \((\theta_{t})_{t\in\mathbb{R}}\)-invariant set of full measure._
In particular when \(Y\) is a random fixed point for an RDS generated by a differential equation then the function \(t\to Y(\theta_{t}\omega)\) is a stationary solution of the equation (4.6).
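For instance, the construction of Section 3.1 provides such a random fixed point for the linear equation (3.2): the cocycle generated by the mild solution,
\[\varphi(t,\omega_{2},z)=S_{B}(t)z+\int_{0}^{t}S_{B}(t-q)\,d\omega_{2}(q),\]
admits the random fixed point \(Y(\omega_{2})=Z(\omega_{2})\), since by (3.5) we have \(\varphi(t,\omega_{2},Z(\omega_{2}))=Z(\theta_{t}\omega_{2})\) for all \(t\geq 0\).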
We formulate conditions for the existence of a random fixed point. Recall that a random variable \(X(\omega)\geq 0\) is called tempered on a \((\theta_{t})_{t\in\mathbb{R}}\)-invariant set of full measure if
\[\lim_{t\to\pm\infty}\frac{\log^{+}X(\theta_{t}\omega)}{|t|}=0.\]
A family of nonempty closed sets \(\left(C(\omega)\right)_{\omega\in\hat{\Omega}}\) is called a random set if \(\mathrm{distance}_{E}(y,C(\omega))\) is measurable for all \(y\in E\); it is called tempered if
\[X(\omega)=\sup_{x\in C(\omega)}\|x\|_{E}\]
is tempered. We note that for every random set there exists a sequence of random variables \((x_{n})_{n\in\mathbb{N}}\) so that
\[C(\omega)=\overline{\bigcup_{n\in\mathbb{N}}\left\{x_{n}(\omega)\right\}}.\]
Hence the above supremum defines a random variable. Random variables \(y(\omega)\in C(\omega)\) for \(\omega\in\hat{\Omega}\) are called selectors of \(C\).
Let us present here an existence theorem for random fixed points.
**Lemma 4.4**: _Suppose that the RDS \(\varphi\) has a closed forward invariant random set \(C\) which is tempered:_
\[\varphi(t,\omega,C(\omega))\subset C(\theta_{t}\omega)\quad\text{for }t\geq 0, \omega\in\hat{\Omega}.\]
_Let_
\[k(\omega)=\sup_{x\neq y\in C(\omega)}\log\left(\frac{\|\varphi(1,\omega,x)- \varphi(1,\omega,y)\|}{\|x-y\|}\right)\]
_so that \(\mathbb{E}k<0\). The random variable_
\[\omega\mapsto\sup_{t\in[0,1]}\|\varphi(t,\theta_{-t}\omega,y(\theta_{-t}\omega))\| \tag{4.7}\]
_is assumed to be tempered for any selector \(y\) from \(C\). Then the RDS \(\varphi\) has a random fixed point \(Y(\omega)\in C(\omega)\) which is unique. In addition \(\|Y(\omega)\|\) is tempered. This random fixed point is pullback and forward attracting:_
\[\lim_{t\to\infty}\|\varphi(t,\theta_{-t}\omega,y(\theta_{-t}\omega))-Y(\omega)\|=0,\lim_{t\to\infty}\|\varphi(t,\omega,y(\omega))-Y(\theta_{t}\omega)\|=0 \tag{4.8}\]
_with exponential speed for every measurable selector \(y(\omega)\in C(\omega)\)._
For the definition and proof we refer to Chueshov and Schmalfuss [7, Definition 3.1.21, Theorem 4.2.1]. Let us check if the assumptions for our problem are satisfied.
**Theorem 4.5**: _Consider (4.6) interpreted in the mild sense. Then for every \(x\in V\) and \(\epsilon>0\) the RDS generated by (4.6) has a tempered pullback and forward exponentially attracting random fixed point \(Y_{F}^{\epsilon}(\omega_{2},x)\). The exponential rate of forward and pullback convergence to the random fixed point is given by \(\frac{\lambda_{B}-C_{1}}{\epsilon}\)._
We set
\[\tilde{g}_{\epsilon}(x,y,\omega_{2})=g(x,y+Z^{\epsilon}(\omega_{2}))=g\Big{(}x,y+\frac{1}{\epsilon}\int_{-\infty}^{0}BS_{\frac{B}{\epsilon}}(-q)\omega_{2,\epsilon}(q)dq\Big{)}. \tag{4.9}\]
Then the equation
\[d\tilde{Y}^{\epsilon,x}(t)=\frac{1}{\epsilon}B\tilde{Y}^{\epsilon,x}(t)\,dt+ \frac{1}{\epsilon}\tilde{g}_{\epsilon}(x,\tilde{Y}^{\epsilon,x}(t),\theta_{t} \omega_{2})\,dt\]
has a unique mild solution forming the RDS \(\tilde{\varphi}^{\epsilon,x}\). This RDS generates by conjugation an RDS for the mild version of (4.6). In particular we have
\[Y^{\epsilon,x}(t,\omega_{2})-Z^{\epsilon}(\theta_{t}\omega_{2})=\tilde{Y}^{ \epsilon,x}(t,\omega_{2}).\]
We obtain by (A2)
\[\|\tilde{g}_{\epsilon}(x,y,\omega_{2})\|\leq C_{1}\|y\|+\|\tilde{g}_{\epsilon}(x,0,\omega_{2})\|\]
where
\[\|\tilde{g}_{\epsilon}(x,0,\omega_{2})\|\leq C_{1}(\|Z^{\epsilon}(\omega_{2})\| +\|x\|)+C_{2}.\]
Let \(\tilde{y}^{\epsilon,x}\) be the solution of the one dimensional equation
\[\tilde{y}^{\epsilon,x}(t)=e^{-\frac{\lambda_{B}t}{\epsilon}}y_{0}+\int_{0}^{t}e^{-\frac{\lambda_{B}}{\epsilon}(t-r)}\frac{1}{\epsilon}\Big{(}C_{1}\tilde{y}^{\epsilon,x}(r)+C_{1}(\|Z^{\epsilon}(\theta_{r}\omega_{2})\|+\|x\|)+C_{2}\Big{)}dr\]
which generates a one-dimensional RDS with a tempered, forward and pullback attracting random fixed point \(\tilde{y}^{\epsilon,x}_{F}(\theta_{t}\omega_{2})\), see Chueshov and Schmalfuss [7, Theorem 3.1.23]. In addition, by a comparison argument
\[\|\tilde{Y}^{\epsilon,x}(t)\|\leq\tilde{y}^{\epsilon,x}(t)\quad\text{if }t\geq 0,\,y_{0}\geq\|\tilde{Y}(0)\|.\]
Hence the ball with radius
\[\tilde{R}^{\epsilon,x}(\omega_{2})=2\int_{-\infty}^{0}e^{\frac{(\lambda_{B}-C _{1})r}{\epsilon}}\frac{1}{\epsilon}(C_{1}(\|Z^{\epsilon}(\theta_{r}\omega_{2 })\|+\|x\|)+C_{2})dr \tag{4.10}\]
and center \(0\) defines a tempered forward and pullback absorbing set \(\tilde{C}^{\epsilon,x}(\omega_{2})\). The temperedness follows by Lemma 3.3.
Consider
\[\|\tilde{\varphi}^{\epsilon,x}(t,\omega_{2},\tilde{Y}_{01})-\tilde{\varphi}^{\epsilon,x}(t,\omega_{2},\tilde{Y}_{02})\|\leq e^{-\frac{\lambda_{B}}{\epsilon}t}\|\tilde{Y}_{01}-\tilde{Y}_{02}\|+\int_{0}^{t}e^{-\frac{\lambda_{B}(t-r)}{\epsilon}}\frac{C_{1}}{\epsilon}\|\tilde{\varphi}^{\epsilon,x}(r,\omega_{2},\tilde{Y}_{01})-\tilde{\varphi}^{\epsilon,x}(r,\omega_{2},\tilde{Y}_{02})\|dr.\]
Then a Gronwall Lemma argument gives
\[\|\tilde{\varphi}^{\epsilon,x}(1,\omega_{2},\tilde{Y}_{01})-\tilde{\varphi}^{ \epsilon,x}(1,\omega_{2},\tilde{Y}_{02})\|\leq e^{-\frac{\lambda_{B}-C_{1}}{ \epsilon}}\|\tilde{Y}_{01}-\tilde{Y}_{02}\|\]
so that the logarithm of the contraction constant is given by \(k=\frac{1}{\epsilon}(-\lambda_{B}+C_{1})\). Note that this constant \(k\) is nonrandom, less than \(0\) and the estimate is true for any pair \(\tilde{Y}_{0i}\in V,\,i=1,\,2\). We have for some measurable selector \(y\) with \(y(\omega_{2})\in B(0,\tilde{R}^{\epsilon,x}(\omega_{2}))\)
\[\|\tilde{\varphi}^{\epsilon,x}(q,\theta_{-t}\omega_{2},y(\theta_ {-t}\omega_{2}))\|\leq e^{-\frac{\lambda_{B}}{\epsilon}q}\|y(\theta_{-t}\omega_{2})\|\] \[+\int_{0}^{q}e^{-\frac{\lambda_{B}}{\epsilon}(q-r)}\bigg{(}\frac{ C_{1}}{\epsilon}\|\tilde{\varphi}^{\epsilon,x}(r,\theta_{-t}\omega_{2},y( \theta_{-t}\omega_{2}))\|\] \[+\frac{1}{\epsilon}(C_{1}(\|Z^{\epsilon}(\theta_{r-t}\omega_{2}) \|+\|x\|)+C_{2})\bigg{)}dr.\]
Then by a Gronwall Lemma argument for \(t\in[0,1]\)
\[\|\tilde{\varphi}^{\epsilon,x}(t,\theta_{-t}\omega_{2},y(\theta_{-t}\omega_{2}))\|\] \[\leq\|y(\theta_{-t}\omega_{2})\|+\int_{0}^{t}e^{-\frac{(\lambda_{B}-C_{1})r}{\epsilon}}\frac{1}{\epsilon}(C_{1}(\|Z^{\epsilon}(\theta_{r}\omega_{2})\|+\|x\|)+C_{2})dr\] \[\leq\|y(\theta_{-t}\omega_{2})\|+\int_{-t}^{0}e^{\frac{(\lambda_{B}-C_{1})r}{\epsilon}}\frac{1}{\epsilon}(C_{1}(\|Z^{\epsilon}(\theta_{r}\omega_{2})\|+\|x\|)+C_{2})dr\] \[\leq\tilde{R}^{\epsilon,x}(\theta_{-t}\omega_{2})+\int_{-1}^{0}\frac{1}{\epsilon}(C_{1}(\|Z^{\epsilon}(\theta_{r}\omega_{2})\|+\|x\|)+C_{2})dr.\]
However, integrals and suprema over compact time intervals of tempered random variables are tempered, see Chueshov and Schmalfuss [7, Remark 3.1.8, p. 186].
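The exponential attraction rate \((\lambda_{B}-C_{1})/\epsilon\) can be observed numerically. The following is a toy scalar sketch (the operator \(B\) is replaced by \(-\lambda_{B}\), and \(g(x,y)=C_{1}\sin(y)+x\) is a hypothetical nonlinearity that is Lipschitz in \(y\) with constant \(C_{1}<\lambda_{B}\)): two solutions of the frozen fast equation (4.6) driven by the same fBM path are evolved from different initial values, and their gap contracts roughly like \(e^{-(\lambda_{B}-C_{1})t/\epsilon}\); all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def fbm_increments(n, T, H):
    """Increments of an exact fBM sample on a uniform grid (Cholesky method)."""
    t = np.linspace(T / n, T, n)
    u, s = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (u**(2 * H) + s**(2 * H) - np.abs(u - s)**(2 * H))
    path = np.linalg.cholesky(cov + 1e-12 * np.eye(n)) @ rng.standard_normal(n)
    return np.diff(np.concatenate([[0.0], path]))

lam_B, C1, eps, x, H2 = 2.0, 0.5, 0.05, 1.0, 0.6
T, n = 1.0, 2000
dt = T / n
dB = fbm_increments(n, T / eps, H2)   # increments of omega_2(t / eps) on the t-grid
Y1, Y2 = 5.0, -5.0                    # two initial values, same noise path
for k in range(n):
    Y1 += (-lam_B * Y1 + C1 * np.sin(Y1) + x) * dt / eps + dB[k]
    Y2 += (-lam_B * Y2 + C1 * np.sin(Y2) + x) * dt / eps + dB[k]
print("gap at time T:", abs(Y1 - Y2))
print("upper bound e^{-(lam_B - C1) T / eps} * gap(0):",
      10.0 * np.exp(-(lam_B - C1) * T / eps))
```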
**Theorem 4.6**: _For fixed \(\omega_{2},\,\epsilon>0\) this fixed point depends Lipschitz continuously on \(x\) with Lipschitz constant \(\frac{C_{1}}{\lambda_{B}-C_{1}}\)._
We deal with the Lipschitz continuity of the fixed points \(\tilde{Y}^{\epsilon}_{F}(\omega_{2},x)\). We have for any \(t\geq 0\)
\[\tilde{Y}^{\epsilon}_{F}(\omega_{2},x_{1}) -\tilde{Y}^{\epsilon}_{F}(\omega_{2},x_{2})\] \[=\tilde{\varphi}^{\epsilon,x_{1}}(t,\theta_{-t}\omega_{2},\tilde {Y}^{\epsilon}_{F}(\theta_{-t}\omega_{2},x_{1}))-\tilde{\varphi}^{\epsilon,x _{2}}(t,\theta_{-t}\omega_{2},\tilde{Y}^{\epsilon}_{F}(\theta_{-t}\omega_{2}, x_{2}))\] \[=\tilde{\varphi}^{\epsilon,x_{1}}(t,\theta_{-t}\omega_{2},\tilde {Y}^{\epsilon}_{F}(\theta_{-t}\omega_{2},x_{1}))-\tilde{\varphi}^{\epsilon, x_{1}}(t,\theta_{-t}\omega_{2},\tilde{Y}^{\epsilon}_{F}(\theta_{-t}\omega_{2}, x_{2}))\] \[\quad+\tilde{\varphi}^{\epsilon,x_{1}}(t,\theta_{-t}\omega_{2}, \tilde{Y}^{\epsilon}_{F}(\theta_{-t}\omega_{2},x_{2}))-\tilde{\varphi}^{ \epsilon,x_{2}}(t,\theta_{-t}\omega_{2},\tilde{Y}^{\epsilon}_{F}(\theta_{-t} \omega_{2},x_{2}))\] \[=:K_{1}+K_{2}.\]
We set \(t=n+t^{\prime}\), \(t^{\prime}=t^{\prime}(t)\in[0,1)\). When \(\|x_{2}\|\leq\|x_{1}\|\) then \(\tilde{Y}^{\epsilon}_{F}(\omega_{2},x_{2})\in\tilde{C}^{\epsilon,x_{1}}( \omega_{2})\). Then
\[\|K_{1}\| =\|\tilde{\varphi}^{\epsilon,x_{1}}(t,\theta_{-t}\omega_{2},\tilde{Y}^{\epsilon}_{F}(\theta_{-t}\omega_{2},x_{1}))-\tilde{\varphi}^{\epsilon,x_{1}}(t,\theta_{-t}\omega_{2},\tilde{Y}^{\epsilon}_{F}(\theta_{-t}\omega_{2},x_{2}))\|\] \[\leq\sup_{y_{1}\neq y_{2}\in\tilde{C}^{\epsilon,x_{1}}(\theta_{-t}\omega_{2})}\frac{\|\tilde{\varphi}^{\epsilon,x_{1}}(1,\theta_{-t}\omega_{2},y_{1})-\tilde{\varphi}^{\epsilon,x_{1}}(1,\theta_{-t}\omega_{2},y_{2})\|}{\|y_{1}-y_{2}\|}\times\cdots\times\] \[\quad\times\sup_{y_{1}\neq y_{2}\in\tilde{C}^{\epsilon,x_{1}}(\theta_{-n}\omega_{2})}\frac{\|\tilde{\varphi}^{\epsilon,x_{1}}(1,\theta_{-n}\omega_{2},y_{1})-\tilde{\varphi}^{\epsilon,x_{1}}(1,\theta_{-n}\omega_{2},y_{2})\|}{\|y_{1}-y_{2}\|}\] \[\quad\times\|\tilde{Y}^{\epsilon}_{F}(\theta_{-n-t^{\prime}}\omega_{2},x_{1})-\tilde{Y}^{\epsilon}_{F}(\theta_{-n-t^{\prime}}\omega_{2},x_{2})\|\] \[\leq e^{-n(\lambda_{B}-C_{1})/\epsilon}\sup_{t^{\prime}\in[0,1]}\|\tilde{Y}^{\epsilon}_{F}(\theta_{-n-t^{\prime}}\omega_{2},x_{1})-\tilde{Y}^{\epsilon}_{F}(\theta_{-n-t^{\prime}}\omega_{2},x_{2})\|.\]
Here, \(\tilde{Y}^{\epsilon}_{F}(\omega_{2},x_{1})\), \(\tilde{Y}^{\epsilon}_{F}(\omega_{2},x_{2})\in\tilde{C}^{\epsilon,x_{1}}(\omega_{2})\) are tempered, see (4.7), hence
\[\sup_{s\in[0,1]}\|\tilde{Y}^{\epsilon}_{F}(\theta_{-s}\omega_{2},x_{1})\|,\,\sup_{s\in[0,1]}\|\tilde{Y}^{\epsilon}_{F}(\theta_{-s}\omega_{2},x_{2})\|\]
are tempered, see Chueshov and Schmalfuss [7, Remark 3.1.8, p. 186]. Thus the right hand side can be made arbitrarily small when \(n\) is sufficiently large, so that \(K_{1}\) tends to zero as \(t\to\infty\). We deal with \(K_{2}\).
\[\|K_{2}\| =\|\tilde{\varphi}^{\epsilon,x_{1}}(t,\theta_{-t}\omega_{2}, \tilde{Y}^{\epsilon}_{F}(\theta_{-t}\omega_{2},x_{2}))-\tilde{\varphi}^{ \epsilon,x_{2}}(t,\theta_{-t}\omega_{2},\tilde{Y}^{\epsilon}_{F}(\theta_{-t} \omega_{2},x_{2}))\|\] \[\leq\int_{0}^{t}e^{-\frac{\lambda_{B}(t-r)}{\epsilon}}\frac{C_{1} }{\epsilon}(\|\tilde{\varphi}^{\epsilon,x_{1}}(r,\theta_{-t}\omega_{2}, \tilde{Y}^{\epsilon}_{F}(\theta_{-t}\omega_{2},x_{2}))\] \[\quad-\tilde{\varphi}^{\epsilon,x_{2}}(r,\theta_{-t}\omega_{2}, \tilde{Y}^{\epsilon}_{F}(\theta_{-t}\omega_{2},x_{2}))\|+\|x_{1}-x_{2}\|)dr.\]
Then by the Gronwall lemma
\[\|\tilde{\varphi}^{\epsilon,x_{1}}(t,\theta_{-t}\omega_{2}, \tilde{Y}^{\epsilon}_{F}(\theta_{-t}\omega_{2},x_{2})) -\tilde{\varphi}^{\epsilon,x_{2}}(t,\theta_{-t}\omega_{2},\tilde{Y}^ {\epsilon}_{F}(\theta_{-t}\omega_{2},x_{2}))\|\] \[\leq\int_{0}^{t}e^{-\frac{(\lambda_{B}-C_{1})(t-r)}{\epsilon}} \frac{C_{1}}{\epsilon}\|x_{1}-x_{2}\|dr\] \[\leq\frac{C_{1}}{\lambda_{B}-C_{1}}\|x_{1}-x_{2}\|\]
so that the Lipschitz constant is independent of \(\epsilon\) and \(\omega_{2}\). Finally we have
\[\|\tilde{Y}^{\epsilon}_{F}(\omega_{2},x_{1})-\tilde{Y}^{\epsilon}_{F}(\omega_{2}, x_{2})\|\leq\frac{C_{1}}{\lambda_{B}-C_{1}}\|x_{1}-x_{2}\|.\]
**Lemma 4.7**: _We have_
\[Y_{F}^{\epsilon}(\theta_{r}\omega_{2},x)=Y_{F}^{1}(\theta_{\frac{r}{\epsilon}} \omega_{2},x)\]
We prove the equality of \(\tilde{Y}_{F}^{\epsilon}(\theta_{r}\omega_{2},x)\) and \(\tilde{Y}_{F}^{1}(\theta_{\frac{r}{\epsilon}}\omega_{2},x)\) for any \(\epsilon>0\). By Lemma 3.2, \(Z(\theta_{\cdot/\epsilon}\omega_{2})\) and \(Z^{\epsilon}(\theta_{\cdot}\omega_{2})\) are equal. This yields, with \(r^{\prime}=\frac{r}{\epsilon}\),
\[\tilde{Y}_{F}^{\epsilon}(\omega_{2},x) = \int_{-\infty}^{0}S_{\frac{B}{\epsilon}}(-r)\frac{1}{\epsilon}\tilde{g}_{\epsilon}(x,\tilde{Y}_{F}^{\epsilon}(\theta_{r}\omega_{2},x),\theta_{r}\omega_{2})dr\] \[= \int_{-\infty}^{0}S_{B}(-\tfrac{r}{\epsilon})\frac{1}{\epsilon}g(x,\tilde{Y}_{F}^{\epsilon}(\theta_{r}\omega_{2},x)+Z^{\epsilon}(\theta_{r}\omega_{2}))dr\] \[= \int_{-\infty}^{0}S_{B}(-\tfrac{r}{\epsilon})\frac{1}{\epsilon}g(x,\tilde{Y}_{F}^{\epsilon}(\theta_{r}\omega_{2},x)+Z(\theta_{\frac{r}{\epsilon}}\omega_{2}))dr\] \[= \int_{-\infty}^{0}S_{B}(-r^{\prime})g(x,\tilde{Y}_{F}^{\epsilon}(\theta_{\epsilon r^{\prime}}\omega_{2},x)+Z(\theta_{r^{\prime}}\omega_{2}))dr^{\prime}\] \[= \int_{-\infty}^{0}S_{B}(-r^{\prime})\tilde{g}_{1}(x,\tilde{Y}_{F}^{\epsilon}(\theta_{\epsilon r^{\prime}}\omega_{2},x),\theta_{r^{\prime}}\omega_{2})dr^{\prime}.\]
On the other hand we have by the uniqueness of the fixed point
\[\tilde{Y}_{F}^{1}(\omega_{2},x)=\int_{-\infty}^{0}S_{B}(-r^{\prime})\tilde{g} _{1}(x,\tilde{Y}_{F}^{1}(\theta_{r^{\prime}}\omega_{2},x),\theta_{r^{\prime}} \omega_{2})dr^{\prime}.\]
Note that a random fixed point can be represented by such an integral over an infinite domain. This follows from
\[\tilde{Y}_{F}^{1}(\omega_{2},x)=S_{B}(t)\tilde{Y}_{F}^{1}(\theta_{-t}\omega_{ 2},x)+\int_{-t}^{0}S_{B}(-r^{\prime})\tilde{g}_{1}(x,\tilde{Y}_{F}^{1}(\theta _{r^{\prime}}\omega_{2},x),\theta_{r^{\prime}}\omega_{2})dr^{\prime}.\]
The temperedness of \(\|\tilde{Y}_{F}^{1}(\omega_{2},x)\|\) and the exponential decay of \(S_{B}\) ensure that the right hand side converges to the integral over the infinite domain.
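For instance, the first term on the right hand side satisfies (a short sketch)

\[\|S_{B}(t)\tilde{Y}_{F}^{1}(\theta_{-t}\omega_{2},x)\|\leq e^{-\lambda_{B}t}\|\tilde{Y}_{F}^{1}(\theta_{-t}\omega_{2},x)\|\to 0\quad\text{for }t\to\infty,\]

since temperedness means precisely that \(e^{-\beta t}\|\tilde{Y}_{F}^{1}(\theta_{-t}\omega_{2},x)\|\to 0\) for every \(\beta>0\).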
We have
\[\tilde{g}_{\epsilon}(x,y,\theta_{r}\omega_{2})=g(x,y+Z^{\epsilon}(\theta_{r} \omega_{2}))\]
and hence
\[\varphi^{\epsilon,x}(t,\omega_{2},y)=Z^{\epsilon}(\theta_{t}\omega_{2})+ \tilde{\varphi}^{\epsilon,x}(t,\omega_{2},y-Z^{\epsilon}(\omega_{2})).\]
\(\varphi^{\epsilon,x}(t,\omega_{2},\cdot)\) is the RDS version of the solution to (4.6). There exists a random fixed point
\[Y_{F}^{\epsilon}(\omega_{2},x)=\tilde{Y}_{F}^{\epsilon}(\omega_{2},x)+Z^{ \epsilon}(\omega_{2}). \tag{4.11}\]
The tempered set \(\tilde{C}^{\epsilon,x}(\omega_{2})\) is changed to
\[C^{\epsilon,x}(\omega_{2})=\tilde{C}^{\epsilon,x}(\omega_{2})+Z^{\epsilon}( \omega_{2})\]
which is a ball with center \(Z^{\epsilon}(\omega_{2})\) and radius \(\tilde{R}^{\epsilon,x}(\omega_{2})\).
**Lemma 4.8**: _The mapping \(r\to Y_{F}^{\epsilon}(\theta_{r}\omega_{2},x)\) is \(\gamma\)-Hölder continuous for \(\gamma<H_{2}\) and \(x\in V\)._
Proof.: Let \(r>r^{\prime}\in\mathbb{R}\). We consider
\[\|\tilde{Y}_{F}^{\epsilon}(\theta_{r}\omega_{2},x)-\tilde{Y}_{F}^{\epsilon}(\theta_{r^{\prime}}\omega_{2},x)\|\leq \|(S_{\frac{B}{\epsilon}}(r-r^{\prime})-\mathrm{id})\tilde{Y}_{F}^{\epsilon}(\theta_{r^{\prime}}\omega_{2},x)\|\] \[+\bigg{\|}\int_{r^{\prime}}^{r}S_{\frac{B}{\epsilon}}(r-s)\frac{1}{\epsilon}\tilde{g}_{\epsilon}(x,\tilde{Y}_{F}^{\epsilon}(\theta_{s}\omega_{2},x),\theta_{s}\omega_{2})ds\bigg{\|}.\]
Since \(s\mapsto\tilde{Y}_{F}^{\epsilon}(\theta_{s}\omega_{2},x)\) and \(s\mapsto Z^{\epsilon}(\theta_{s}\omega_{2})\) are continuous, the Lebesgue integral can be estimated by \(C_{\epsilon,T}|r-r^{\prime}|\), and by (3.10), (3.11) the first term on the right hand side of the last inequality yields Hölder continuity on any interval \(r\in[r^{\prime}+\delta,r^{\prime}+T]\), \(0<\delta<T\) and \(x\in V\).
Then by the transformation (4.11) and by Lemma 3.3 again, \(r\mapsto Y_{F}^{\epsilon}(\theta_{r}\omega_{2},x)\) is \(\gamma\)-Hölder continuous.
_Remark 4.9_.: For the random variable \(\tilde{R}^{\epsilon,x}\) introduced in (4.10) we can prove that \(\sup_{r\in[0,T]}\tilde{R}^{\epsilon,x}(\theta_{r}\omega_{2})\) is defined on a \((\theta_{t})_{t\in\mathbb{R}}\)-invariant set of full measure independent of \(x\in V\) and \(\epsilon>0\). This random variable is tempered.
### An ergodic theorem for separable Hilbert spaces
In this subsection we formulate an ergodic theorem in separable Hilbert spaces. We assume that \(f\) is bounded. Define
\[\bar{f}(x)=\mathbb{E}[f(x,Y_{F}^{1}(\omega_{2},x))] \tag{4.12}\]
where this Hilbert-space valued expectation is determined by \(\mathbb{E}[\langle f(x,Y_{F}^{1}(\omega_{2},x)),y\rangle]\) for all \(y\) in a dense countable subset of \(V\).
**Lemma 4.10**: \(\bar{f}\) _is Lipschitz continuous._
Proof.: Because \(Y_{F}^{\epsilon}\) depends Lipschitz continuously on \(x\) with Lipschitz constant \(\frac{C_{1}}{\lambda_{B}-C_{1}}\), for all \(x_{1},x_{2}\in V\), we have
\[\|\bar{f}(x_{1})-\bar{f}(x_{2})\| \leq\mathbb{E}[\|f(x_{1},Y_{F}^{1}(\omega_{2},x_{1}))-f(x_{2},Y_{F}^{1}(\omega_{2},x_{2}))\|]\] \[\leq C_{1}\mathbb{E}[(\|x_{1}-x_{2}\|+\|Y_{F}^{1}(\omega_{2},x_{1})-Y_{F}^{1}(\omega_{2},x_{2})\|)]\] \[\leq(C_{1}+\frac{C_{1}^{2}}{\lambda_{B}-C_{1}})\|x_{1}-x_{2}\|=:C^{\prime}\|x_{1}-x_{2}\|.\]
Thus, the desired result is obtained.
**Lemma 4.11**: _Let \(\nu>0\). There exists a \((\theta_{t})_{t\in\mathbb{R}}\)-invariant set in \(\Omega_{2}\) of full measure such that for \(\omega_{2}\) from this set and for \(x\in V\)_
\[\lim_{T\to\pm\infty}\frac{1}{|T|}\bigg{\|}\int_{0}^{T}(-A)^{-\nu}(f(x,Y_{F}^{1 }(\theta_{r}\omega_{2},x))-\bar{f}(x))\,dr\bigg{\|}=0.\]
Proof.: Consider the operator \((-A)^{-\nu}\) which is a compact operator on \(V\). Let \(\Omega_{2,x}\) be the \((\theta_{t})_{t\in\mathbb{R}}\)-invariant set so that for \(\omega_{2}\in\Omega_{2,x}\)
\[\lim_{T\to\pm\infty}\frac{1}{|T|}\bigg{\|}\int_{0}^{T}(-A)^{-\nu}(f(x,Y_{F}^{1 }(\theta_{r}\omega_{2},x))-\bar{f}(x))\,dr\bigg{\|}=0\]
which follows for every \(x\in V\) by Chueshov et al. [6, Section 2]; for the invariance assertion see Arnold [1, Appendix 1]. For a dense and countable set \(D\subset V\) the set
\[\bigcap_{x\in D}\Omega_{2,x}=:\tilde{\Omega}_{2}\]
is \((\theta_{t})_{t\in\mathbb{R}}\) invariant and has full measure. Let \(x\not\in D\) and \((x_{n})_{n\in\mathbb{N}}\) be a sequence in \(D\) so that
\[\lim_{n\to\infty}\|x-x_{n}\|=0.\]
By the Lipschitz continuity and the uniform Lipschitz constant of \(x\mapsto Y^{1}_{F}(\omega_{2},x)\) with respect to \(\omega_{2}\) and \(\epsilon>0\), we obtain by Lemma 4.10 and Theorem 4.5, for any \(\zeta>0\), an \(\tilde{n}\in\mathbb{N}\) so that
\[\|(-A)^{-\nu} ((f(x_{\tilde{n}},Y^{1}_{F}(\theta_{r}\omega_{2},x_{\tilde{n}}))- \bar{f}(x_{\tilde{n}}))-(f(x_{\tilde{n}},Y^{1}_{F}(\theta_{r}\omega_{2},x))- \bar{f}(x)))\|\] \[\leq 2C^{\prime}\|(-A)^{-\nu}\|\|x-x_{\tilde{n}}\|\leq\frac{\zeta}{2}.\]
On the other hand we choose a \(T_{0}=T_{0}(\omega_{2},\zeta)>0\) so that for all \(|T|>T_{0}\) we have
\[\frac{1}{|T|}\bigg{\|}\int_{0}^{T}(-A)^{-\nu}(f(x_{\tilde{n}},Y^{1}_{F}(\theta_{r}\omega_{2},x_{\tilde{n}}))-\bar{f}(x_{\tilde{n}}))\,dr\bigg{\|}\leq\frac{\zeta}{2}\]
on \(\tilde{\Omega}_{2}\)[6]. Hence
\[\frac{1}{|T|}\bigg{\|}\int_{0}^{T}(-A)^{-\nu}(f(x,Y^{1}_{F}(\theta_{r}\omega_{2},x))-\bar{f}(x))\,dr\bigg{\|}\] \[\leq \frac{1}{|T|}\bigg{\|}\int_{0}^{T}(-A)^{-\nu}(f(x_{\tilde{n}},Y^{1}_{F}(\theta_{r}\omega_{2},x_{\tilde{n}}))-\bar{f}(x_{\tilde{n}}))\,dr\bigg{\|}\] \[\quad+2\|(-A)^{-\nu}\|C^{\prime}\|x-x_{\tilde{n}}\|\leq\zeta\]
for \(|T|>T_{0}\) and for \(\omega_{2}\in\tilde{\Omega}_{2}\).
### Proof of Theorem 4.2
Following the discretization techniques inspired by Khasminskii in [20], we divide \([0,T]\) into intervals of size \(\delta\), where \(\delta\in(0,1)\) is a fixed number. Then, we construct an auxiliary process \(\hat{Y}^{\epsilon}\) with initial value \(\hat{Y}^{\epsilon}(0)=Y^{\epsilon}(0)=Y_{0}\), and for \(t\in[k\delta,\min\{(k+1)\delta,T\})\),
\[\hat{Y}^{\epsilon}(t) = S_{\frac{B}{\epsilon}}(t-k\delta)\hat{Y}^{\epsilon}(k\delta)+ \frac{1}{\epsilon}\int_{k\delta}^{t}S_{\frac{B}{\epsilon}}(t-s)g(X^{\epsilon }(k\delta),\hat{Y}^{\epsilon}(s))\,ds \tag{4.13}\] \[+\int_{k\delta}^{t}S_{\frac{B}{\epsilon}}(t-s)\,d\omega_{2, \epsilon}(s)\]
i.e.
\[\hat{Y}^{\epsilon}(t) = S_{\frac{B}{\epsilon}}(t)Y_{0}+\frac{1}{\epsilon}\int_{0}^{t}S _{\frac{B}{\epsilon}}(t-s)g(X^{\epsilon}(s_{\delta}),\hat{Y}^{\epsilon}(s))\,ds \tag{4.14}\] \[+\int_{0}^{t}S_{\frac{B}{\epsilon}}(t-s)\,d\omega_{2,\epsilon}(s)\]
where \(s_{\delta}=\lfloor s/\delta\rfloor\delta\) is the nearest breakpoint preceding \(s\). Also, we define the process \(\hat{X}^{\epsilon}\) by
\[\hat{X}^{\epsilon}(t) = S_{A}(t)X_{0}+\int_{0}^{t}S_{A}(t-r)f(X^{\epsilon}(r_{\delta}),\hat{Y}^{\epsilon}(r))\,dr\] \[+\int_{0}^{t}S_{A}(t-r)h(X^{\epsilon}(r))\,d\omega_{1}(r). \tag{4.15}\]
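As a small worked example of the discretization: with \(\delta=0.1\) and \(s=0.37\) one has \(s_{\delta}=\lfloor 3.7\rfloor\cdot 0.1=0.3\), so on each interval \([k\delta,(k+1)\delta)\) the slow argument of \(f\) and \(g\) is frozen at the left endpoint value \(X^{\epsilon}(k\delta)\).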
**Lemma 4.12**: _Assume_ (A1)-(A4)_. Then, for all \(T>0\) and \(\rho\) sufficiently large, we have_
\[\|X^{\epsilon}\|_{\gamma,\rho,\sim}\leq C(\|X_{0}\|+1)\]
_where \(C>0\) is a constant which is independent of \(\epsilon\)._
The proof of this result can be done by a slight generalization of [4, Lemma 9]. It is easy to see
\[\|X^{\epsilon}\|_{\gamma,\rho,\sim}\leq c_{T}\|X_{0}\|+c_{T}K(\rho)\left| \!\left|\omega_{1}\right|\!\right|_{\beta}(1+\|\;X^{\epsilon}\|_{\gamma,\rho, \sim})\]
then, taking \(\rho\) big enough such that \(c_{T}K(\rho)\left|\!\left|\omega_{1}\right|\!\right|_{\beta}<\frac{1}{2}\), we have
\[\|X^{\epsilon}\|_{\gamma,\rho,\sim}\leq 2c_{T}\|X_{0}\|+1.\]
Here \(K(\rho)\) is a positive function tending to zero for \(\rho\to\infty\).
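Explicitly, the absorption step is the elementary implication (writing \(x=\|X^{\epsilon}\|_{\gamma,\rho,\sim}\) and \(a=c_{T}\|X_{0}\|\)):

\[x\leq a+\frac{1}{2}(1+x)\;\Longrightarrow\;\frac{x}{2}\leq a+\frac{1}{2}\;\Longrightarrow\;x\leq 2a+1.\]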
Due to the boundedness of \(f\), \(Y^{\epsilon}\) does not have any effect on the estimate for \(\|X^{\epsilon}\|_{\gamma,\rho,\sim}\).
**Lemma 4.13**: _Assume_ (A1)-(A4)_. Then, for all \(T>0\), we have_

\[\|\hat{X}^{\epsilon}\|_{\gamma,\rho,\sim}+\|\bar{X}\|_{\gamma,\rho,\sim}\leq C(\|X_{0}\|+1)\]

_where \(C>0\) is a constant which is independent of \(\epsilon\)._ We obtain by the same method a similar estimate for \(\|\hat{X}^{\epsilon}\|_{\gamma,\rho,\sim}\).
**Lemma 4.15**: _For any solution \(Y^{\epsilon}\) of (4.2) and any solution \(\hat{Y}^{\epsilon}\) of (4.14), \(t\in[0,T]\), we have_
\[\|Y^{\epsilon}\|_{\infty}+\|\hat{Y}^{\epsilon}\|_{\infty}\leq C\big{(}1+\|X_{0 }\|+\|Y_{0}\|+o(\epsilon^{-1})\big{)}\]
_where \(C\) is a constant which is independent of \(\epsilon\)._
For \(t\in[0,T]\), from (4.2), one has
\[Y^{\epsilon}(t)= \,S_{\frac{B}{\epsilon}}(t)Y_{0}+\frac{1}{\epsilon}\int_{0}^{t}S _{\frac{B}{\epsilon}}(t-r)g(X^{\epsilon}(r),Y^{\epsilon}(r))\,dr+\int_{0}^{t} S_{\frac{B}{\epsilon}}(t-r)\,d\omega_{2,\epsilon}(r)\] \[= \,S_{\frac{B}{\epsilon}}(t)(Y_{0}-Z^{\epsilon}(\omega_{2}))+Z^{ \epsilon}(\theta_{t}\omega_{2})+\frac{1}{\epsilon}\int_{0}^{t}S_{\frac{B}{ \epsilon}}(t-r)g(X^{\epsilon}(r),Y^{\epsilon}(r))\,dr.\]
Then, we have
\[\|Y^{\epsilon}(t)\| \leq\|S_{\frac{B}{\epsilon}}(t)\|\|Y_{0}-Z^{\epsilon}(\omega_{2})\|+\|Z^{\epsilon}(\theta_{t}\omega_{2})\|\] \[\quad+\Big{\|}\frac{1}{\epsilon}\int_{0}^{t}S_{\frac{B}{\epsilon}}(t-r)g(X^{\epsilon}(r),Y^{\epsilon}(r))\,dr\Big{\|}\] \[\leq e^{-\frac{\lambda_{B}}{\epsilon}t}\|Y_{0}-Z^{\epsilon}(\omega_{2})\|+\|Z^{\epsilon}(\theta_{t}\omega_{2})\|\] \[\quad+\frac{1}{\epsilon}\int_{0}^{t}e^{-\frac{\lambda_{B}}{\epsilon}(t-r)}\big{(}C_{2}+C_{1}(\|X^{\epsilon}(r)\|+\|Y^{\epsilon}(r)\|)\big{)}\,dr.\]
By Lemma 4.12 and (A2), it is easy to see that
\[\sup_{t\in[0,T]}\|Y^{\epsilon}(t)\| \leq\|Y_{0}\|+2\sup_{t\in[0,T]}\|Z^{\epsilon}(\theta_{t}\omega_{2})\|\] \[\quad+\sup_{t\in[0,T]}\frac{1}{\epsilon}\int_{0}^{t}e^{-\frac{\lambda_{B}}{\epsilon}(t-r)}(C_{2}+C_{1}\|X^{\epsilon}(r)\|)\,dr\]
\[\leq C\big{(}1+\|X_{0}\|+\|Y_{0}\|+o(\epsilon^{-1})\big{)}.\]
Indeed we have for \(\epsilon\to 0\)
\[\sup_{t\in[0,T]}\|Z^{\epsilon}(\theta_{t}\omega_{2})\|=\sup_{t\in[0,T]}\|Z( \theta_{\frac{t}{\epsilon}}\omega_{2})\|=o(\epsilon^{-1})\]
by Lemma 3.3. The estimate for \(\|\hat{Y}^{\epsilon}\|_{\infty}\) can be obtained in a similar way.
**Lemma 4.16**: _For the stationary solution \(Y_{F}^{\epsilon}\) and any solution \(\hat{Y}^{\epsilon}\) of (4.13), for \(t\in[k\delta,\min\{(k+1)\delta,T\})\), we have_
\[\int_{k\delta}^{(k+1)\delta}\|\hat{Y}^{\epsilon}(t)-Y_{F}^{\epsilon}(\theta_{ t}\omega_{2},X^{\epsilon}(k\delta))\|\,dt\leq C\big{(}1+\|X_{0}\|+\|Y_{0}\|+o( \epsilon^{-1})\big{)}\epsilon\]
_where \(C\) is a constant independent of \(\epsilon\) and \(\delta\)._
By the Gronwall lemma argument, we have
\[\|\hat{Y}^{\epsilon}(t)-Y_{F}^{\epsilon}(\theta_{t}\omega_{2},X^{\epsilon}(k \delta))\|\leq e^{-\frac{\lambda_{B}-C_{1}}{\epsilon}(t-k\delta)}\|\hat{Y}^{ \epsilon}(k\delta)-Y_{F}^{\epsilon}(\theta_{k\delta}\omega_{2},X^{\epsilon}(k \delta))\|\]
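The integration in the next step uses only the elementary identity (a quick check, with \(\lambda:=\lambda_{B}-C_{1}>0\)):

\[\int_{k\delta}^{(k+1)\delta}e^{-\frac{\lambda}{\epsilon}(t-k\delta)}\,dt=\frac{\epsilon}{\lambda}\Big{(}1-e^{-\frac{\lambda\delta}{\epsilon}}\Big{)}\leq\frac{\epsilon}{\lambda}.\]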
Integrating the above inequality from \(k\delta\) to \((k+1)\delta\) and using \(\lambda_{B}>C_{1}\), we have
\[\int_{k\delta}^{(k+1)\delta}\|\hat{Y}^{\epsilon}(t)-Y_{F}^{\epsilon }(\theta_{t}\omega_{2},X^{\epsilon}(k\delta))\|dt\] \[\leq C\|\hat{Y}^{\epsilon}(k\delta)-Y_{F}^{\epsilon}(\theta_{k \delta}\omega_{2},X^{\epsilon}(k\delta))\|\frac{\epsilon}{\lambda_{B}-C_{1}}( 1-e^{\frac{-(\lambda_{B}-C_{1})\delta}{\epsilon}})\] \[\leq C(\|\hat{Y}^{\epsilon}(k\delta)\|+\frac{C_{1}}{\lambda_{B}-C_{1} }\|X^{\epsilon}(k\delta)\|+\|Y_{F}^{\epsilon}(\theta_{k\delta}\omega_{2},0)\| )\frac{\epsilon}{\lambda_{B}-C_{1}}\] \[\leq C\sup_{r\in[0,T]}(\|X^{\epsilon}(r)\|+\|Y^{\epsilon}(r)\|+R^{ \epsilon,0}(\theta_{r}\omega_{2})+\|Z^{\epsilon}(\theta_{r}\omega_{2})\|) \frac{\epsilon}{\lambda_{B}-C_{1}}.\]
Thus, by Lemmas 3.3, 4.12 and 4.15, the desired result is obtained.
**Lemma 4.17**: _For the solution \(\hat{Y}^{\epsilon}\) of (4.13) and the solution \(Y^{\epsilon}\) of (4.2), \(s\in[k\delta,\min\{(k+1)\delta,T\}),s<t\leq T,\rho>1,k\geq 1,\)\(\epsilon\) small enough, we have_
\[e^{-\rho t}\int_{k\delta}^{(k+1)\delta}\|Y^{\epsilon}(s)-\hat{Y}^{\epsilon}(s )\|\,ds\leq C\delta^{1+\gamma}(1+(k\delta)^{-\gamma})\]
_where \(C\) is a constant which is independent of \(\epsilon\) and \(\delta\)._
For \(s\in[k\delta,\min\{(k+1)\delta,T\}),\) by Lemma 4.15, one has
\[\|Y^{\epsilon}(s)-\hat{Y}^{\epsilon}(s)\|\leq e^{-\frac{\lambda_{B}}{\epsilon} (s-k\delta)}\|Y^{\epsilon}(k\delta)-\hat{Y}^{\epsilon}(k\delta)\|\]
\[\begin{split}&+\left\|\frac{1}{\epsilon}\int_{k\delta}^{s}S_{\frac{B}{\epsilon}}(s-r)(g(X^{\epsilon}(r),Y^{\epsilon}(r))-g(X^{\epsilon}(r_{\delta}),\hat{Y}^{\epsilon}(r)))\,dr\right\|\\ \leq& Ce^{-\frac{\lambda_{B}}{\epsilon}(s-k\delta)}(\|Y^{\epsilon}\|_{\infty}+\|\hat{Y}^{\epsilon}\|_{\infty})\\ &+\frac{C_{1}}{\epsilon}\int_{k\delta}^{s}e^{-\frac{\lambda_{B}}{\epsilon}(s-r)}\|X^{\epsilon}(r)-X^{\epsilon}(r_{\delta})\|\,dr\\ &+\frac{C_{1}}{\epsilon}\int_{k\delta}^{s}e^{-\frac{\lambda_{B}}{\epsilon}(s-r)}\|Y^{\epsilon}(r)-\hat{Y}^{\epsilon}(r)\|\,dr.\end{split}\]
Then, multiplying both sides of the above equation by \(e^{\frac{\lambda_{B}}{\epsilon}s}\), we have
\[\begin{split} e^{\frac{\lambda_{B}}{\epsilon}s}\|Y^{\epsilon}(s) -\hat{Y}^{\epsilon}(s)\|&\leq Ce^{\frac{\lambda_{B}}{\epsilon}k \delta}\big{(}1+\|X_{0}\|+\|Y_{0}\|+o(\epsilon^{-1})\big{)}\\ &+\frac{C_{1}}{\epsilon}\int_{k\delta}^{s}e^{\frac{\lambda_{B}} {\epsilon}r}\|X^{\epsilon}(r)-X^{\epsilon}(r_{\delta})\|\,dr\\ &+\frac{C_{1}}{\epsilon}\int_{k\delta}^{s}e^{\frac{\lambda_{B}} {\epsilon}r}\|Y^{\epsilon}(r)-\hat{Y}^{\epsilon}(r)\|\,dr.\end{split}\]
By the Gronwall inequality [8, p.37] and [9, p.13], we have
\[\begin{split}\|Y^{\epsilon}(s)-\hat{Y}^{\epsilon}(s)\|& \leq Ce^{\frac{-\lambda_{B}}{\epsilon}(s-k\delta)}\big{(}1+\|X_{0}\|+\|Y_{0} \|+o(\epsilon^{-1})\big{)}e^{\frac{C_{1}}{\epsilon}(s-k\delta)}\\ &+\frac{C_{1}}{\epsilon}\int_{k\delta}^{s}e^{\frac{-(\lambda_{B} -C_{1})}{\epsilon}(s-r)}\|X^{\epsilon}(r)-X^{\epsilon}(r_{\delta})\|\,dr.\end{split}\]
Next, multiplying both sides of the above equation by \(e^{-\rho t}\) with \(t>s,\rho>1\), we have
\[\begin{split} e^{-\rho t}&\|Y^{\epsilon}(s)-\hat{Y}^{\epsilon}(s)\|\\ \leq& Ce^{-\rho t}e^{\frac{-(\lambda_{B}-C_{1})}{\epsilon}(s-k\delta)}\big{(}1+\|X_{0}\|+\|Y_{0}\|+o(\epsilon^{-1})\big{)}\\ &+\frac{C_{1}}{\epsilon}\int_{k\delta}^{s}e^{\frac{-(\lambda_{B}-C_{1})}{\epsilon}(s-r)}(r-r_{\delta})^{\gamma}r_{\delta}^{-\gamma}e^{-\rho(t-r)}\frac{r_{\delta}^{\gamma}e^{-\rho r}\|X^{\epsilon}(r)-X^{\epsilon}(r_{\delta})\|}{(r-r_{\delta})^{\gamma}}\,dr\\ \leq& Ce^{\frac{-(\lambda_{B}-C_{1})}{\epsilon}(s-k\delta)}\big{(}1+\|X_{0}\|+\|Y_{0}\|+o(\epsilon^{-1})\big{)}\\ &+\delta^{\gamma}\|X^{\epsilon}\|_{\gamma,\rho,\sim}\frac{C_{1}}{\epsilon}\int_{k\delta}^{s}e^{\frac{-(\lambda_{B}-C_{1})}{\epsilon}(s-r)}r_{\delta}^{-\gamma}\,dr.\end{split}\]
Integrating the above inequality from \(k\delta\) to \((k+1)\delta\) and using Lemma 4.12, we have
\[\begin{split} e^{-\rho t}&\int_{k\delta}^{(k+1) \delta}\|Y^{\epsilon}(s)-\hat{Y}^{\epsilon}(s)\|\,ds\\ \leq& C\int_{k\delta}^{(k+1)\delta}e^{\frac{-( \lambda_{B}-C_{1})}{\epsilon}(s-k\delta)}\big{(}1+\|X_{0}\|+\|Y_{0}\|+o( \epsilon^{-1})\big{)}\,ds\\ &+\int_{k\delta}^{(k+1)\delta}\delta^{\gamma}\|X^{\epsilon}\|_{ \gamma,\rho,\sim}\frac{C_{1}}{\epsilon}\int_{k\delta}^{s}e^{\frac{-(\lambda_{B }-C_{1})}{\epsilon}(s-r)}r_{\delta}^{-\gamma}\,dr\,ds\\ \leq& C\epsilon\big{(}1+\|X_{0}\|+\|Y_{0}\|+o( \epsilon^{-1})\big{)}+C\delta^{\gamma}\|X^{\epsilon}\|_{\gamma,\rho,\sim}\int_{ k\delta}^{(k+1)\delta}s_{\delta}^{-\gamma}ds\\ \leq& C\epsilon\big{(}1+\|X_{0}\|+\|Y_{0}\|+o( \epsilon^{-1})\big{)}+C(1+\|X_{0}\|)\delta^{1+\gamma}(k\delta)^{-\gamma}\\ \leq& C\delta^{1+\gamma}(1+(k\delta)^{-\gamma})\end{split}\]
where we take \(\epsilon\big{(}1+\|X_{0}\|+\|Y_{0}\|+o(\epsilon^{-1})\big{)}\leq\delta^{1+\gamma}\) for \(\epsilon\) small enough.
**Lemma 4.18**: _Let (A1)-(A4) and (2.2) hold. For any \(X_{0}\in V\), as \(\epsilon\to 0\) the solution of (4.15) converges to \(\bar{X}\) which solves (4.5)_
\[\lim_{\epsilon\to 0}\|\hat{X}^{\epsilon}-\bar{X}\|_{\gamma,\sim}=0\]
_where this norm is considered with respect to a fixed interval \([0,T]\)._
For the following we fix \(\gamma<\sigma<1-\sigma^{\prime\prime},\sigma^{\prime}<1-\gamma\) and define \(\tilde{\sigma}=\min\{\sigma^{\prime},\sigma^{\prime\prime},\gamma\}\). We will show that for almost every \((\omega_{1},\omega_{2})\) and every \(\mu>0\) there exists an \(\epsilon_{0}>0\) so that for \(\epsilon<\epsilon_{0}\), \(\rho>\rho_{0}\) we have
\[\|\hat{X}^{\epsilon}-\bar{X}\|_{\gamma,\rho,\sim}\leq\mu. \tag{4.16}\]
Note that the norm here is equivalent to the norm in the conclusion. In the following proof a constant \(C\) appears. This constant can change from inequality to inequality. \(C\) may depend on \(T,\,\omega_{1},\,\omega_{2},\,\sigma^{\prime},\,\sigma^{\prime\prime},\,\gamma\) and other parameters like the Lipschitz constant of \(f\) and of \(x\mapsto Y_{F}^{\epsilon}(\omega_{2},x)\). But \(C\) does not depend on \(\mu\), \(\epsilon\), \(\rho\), \(\delta\). Here \(\delta\in(0,1)\) is a parameter depending on \(\mu\). To estimate all the terms in the following inequality we have to consider 3 cases. For the first case the right hand side will be absorbed by the left hand side of the inequality when \(\rho\) is sufficiently large. The second case includes terms providing estimates like \(C\delta^{\tilde{\sigma}},\tilde{\sigma}>0\) where \(C\) is a priori determined by \(T\), \(\omega_{1}\), \(\omega_{2}\), \(\sigma^{\prime}\), \(\sigma^{\prime\prime}\), \(\gamma\) but independent of \(\mu\), \(\epsilon\), \(\rho\), \(\delta\); then we choose _fixed_ \(\delta\) so that \(C\delta^{\tilde{\sigma}}<\lambda\mu\), \(\lambda>0\) sufficiently small. The third case contains terms providing an estimate \(C\delta^{-\tilde{\sigma}},\tilde{\sigma}>0\) which can be made arbitrarily small when \(\epsilon\) is sufficiently small, taking into account that \(\delta\) is fixed.
By applying the triangle inequality to \(\|\hat{X}^{\epsilon}-\bar{X}\|_{\gamma,\rho,\sim}\), we obtain
\[\begin{array}{l}\|\hat{X}^{\epsilon}-\bar{X}\|_{\gamma,\rho,\sim}\\ \leq\left\|\int_{0}^{\cdot}S_{A}(\cdot-r)(f(X^{\epsilon}(r_{\delta}),\hat{Y}^{\epsilon}(r))-f(X^{\epsilon}(r_{\delta}),Y_{F}^{\epsilon}(\theta_{r}\omega_{2},X^{\epsilon}(r_{\delta}))))\,dr\right\|_{\gamma,\rho,\sim}\\ \quad+\left\|\int_{0}^{\cdot}S_{A}(\cdot-r)\Delta_{f}(X^{\epsilon}(r_{\delta});X^{\epsilon}(r))\,dr\right\|_{\gamma,\rho,\sim}\\ \quad+\left\|\int_{0}^{\cdot}S_{A}(\cdot-r)\Delta_{f}(X^{\epsilon}(r);\hat{X}^{\epsilon}(r))\,dr\right\|_{\gamma,\rho,\sim}\\ \quad+\left\|\int_{0}^{\cdot}S_{A}(\cdot-r)\Delta_{f}(\hat{X}^{\epsilon}(r);\bar{X}(r))\,dr\right\|_{\gamma,\rho,\sim}\\ \quad+\left\|\int_{0}^{\cdot}S_{A}(\cdot-r)\Delta_{f}(\bar{X}(r);\bar{X}(r_{\delta}))\,dr\right\|_{\gamma,\rho,\sim}\\ \quad+\left\|\int_{0}^{\cdot}S_{A}(\cdot-r)(f(\bar{X}(r_{\delta}),Y_{F}^{\epsilon}(\theta_{r}\omega_{2},\bar{X}(r_{\delta})))-\bar{f}(\bar{X}(r_{\delta})))\,dr\right\|_{\gamma,\rho,\sim}\\ \quad+\left\|\int_{0}^{\cdot}S_{A}(\cdot-r)(\bar{f}(\bar{X}(r_{\delta}))-\bar{f}(\bar{X}(r)))\,dr\right\|_{\gamma,\rho,\sim}\\ \quad+\left\|\int_{0}^{\cdot}S_{A}(\cdot-r)(h(X^{\epsilon}(r))-h(\hat{X}^{\epsilon}(r)))\,d\omega_{1}(r)\right\|_{\gamma,\rho,\sim}\\ \quad+\left\|\int_{0}^{\cdot}S_{A}(\cdot-r)(h(\hat{X}^{\epsilon}(r))-h(\bar{X}(r)))\,d\omega_{1}(r)\right\|_{\gamma,\rho,\sim}=:\sum_{i=1}^{9}I_{i}\end{array}\]
where \(r_{\delta}=\lfloor r/\delta\rfloor\delta\) is the nearest breakpoint preceding \(r\) and for \(U(r),\hat{U}(r)\in V\)
\[\Delta_{f}(U(r);\hat{U}(r)):=f(U(r),Y_{F}^{\epsilon}(\theta_{r}\omega_{2},U(r)) )-f(\hat{U}(r),Y_{F}^{\epsilon}(\theta_{r}\omega_{2},\hat{U}(r))).\]
To proceed, we adapt the approach used in the proof of Lemma 3.5 to estimate \(I_{2}\).
\[I_{2} \leq \sup_{t\in[0,T]}e^{-\rho t}\bigg{\|}\int_{0}^{t}S_{A}(t-r)\Delta_{f }(X^{\epsilon}(r);X^{\epsilon}(r_{\delta}))\,dr\bigg{\|}\] \[+\sup_{0<s<t\leq T}s^{\gamma}e^{-\rho t}(t-s)^{-\gamma}\bigg{\|} \int_{s}^{t}S_{A}(t-r)\Delta_{f}(X^{\epsilon}(r);X^{\epsilon}(r_{\delta}))\,dr \bigg{\|}\] \[+\sup_{0<s<t\leq T}s^{\gamma}e^{-\rho t}(t-s)^{-\gamma}\bigg{\|} \int_{0}^{s}(S_{A}(t-r)-S_{A}(s-r))\Delta_{f}(X^{\epsilon}(r);X^{\epsilon}(r_{ \delta}))\,dr\bigg{\|}\] \[=: I_{21}+I_{22}+I_{23}.\]
For \(I_{21}\): if \(0\leq t<\delta\) it is easy to see that \(I_{21}\leq C\delta\); we therefore consider \(\delta\leq t\). By the Lipschitz continuity of \(f\) and \(Y^{\epsilon}_{F}\), Lemma 4.12, and the boundedness of \(f\),
\[I_{21} \leq C\bigg{(}\sup_{t\in[0,\delta]}e^{-\rho t}\int_{0}^{t}\|\Delta_{f}(X^{\epsilon}(r);X^{\epsilon}(r_{\delta}))\|\,dr+\sup_{t\in[\delta,T]}e^{-\rho t}\int_{0}^{t}\|\Delta_{f}(X^{\epsilon}(r);X^{\epsilon}(r_{\delta}))\|\,dr\bigg{)}\] \[\leq C\delta+C\sup_{t\in[\delta,T]}e^{-\rho t}\bigg{(}\int_{0}^{\delta}\|\Delta_{f}(X^{\epsilon}(r);X^{\epsilon}(r_{\delta}))\|\,dr+\int_{\delta}^{t}\|\Delta_{f}(X^{\epsilon}(r);X^{\epsilon}(r_{\delta}))\|\,dr\bigg{)}\] \[\leq C\delta+C\delta+C\sup_{t\in[\delta,T]}\int_{\delta}^{t}e^{-\rho(t-r)}(r-r_{\delta})^{\gamma}r_{\delta}^{-\gamma}\frac{r_{\delta}^{\gamma}e^{-\rho r}\|X^{\epsilon}(r)-X^{\epsilon}(r_{\delta})\|}{(r-r_{\delta})^{\gamma}}\,dr\] \[\leq C\delta+C\delta^{\gamma}\sup_{t\in[\delta,T]}\bigg{(}\int_{\delta}^{t}r_{\delta}^{-\gamma}\,dr\bigg{)}\,\|X^{\epsilon}\|_{\gamma,\rho,\sim}\leq C\delta^{\gamma}.\]
In the region \(0<s\leq 2\delta\) the boundedness of \(f\) gives, for instance for \(I_{23}\),

\[I_{23}\leq C\sup_{0<s\leq 2\delta,\,s<t\leq T}s^{\gamma}e^{-\rho t}(t-s)^{-\gamma}\int_{0}^{s}(t-s)^{\gamma}(s-r)^{-\gamma}\,dr\leq C\delta,\]

and similarly for \(I_{22}\).
We continue with the area \(0<s<t\leq T\) and \(2\delta\leq s\) for \(I_{22}\) :
\[\sup_{2\delta\leq s<t\leq T}s^{\gamma}e^{-\rho t}(t-s)^{-\gamma}\bigg{\|}\int_{s}^{t}S_{A}(t-r)\Delta_{f}(X^{\epsilon}(r);X^{\epsilon}(r_{\delta}))\,dr\bigg{\|}\] \[\leq \sup_{2\delta\leq s<t\leq T}\int_{s}^{t}\frac{\|S_{A}(t-r)\|s^{\gamma}r_{\delta}^{-\gamma}e^{-\rho(t-r)}r_{\delta}^{\gamma}e^{-\rho r}\|X^{\epsilon}(r)-X^{\epsilon}(r_{\delta})\|}{(t-s)^{\gamma}(r-r_{\delta})^{-\gamma}(r-r_{\delta})^{\gamma}}\,dr\] \[\leq C\sup_{2\delta\leq s<t\leq T}\int_{s}^{t}\frac{s^{\gamma}r_{\delta}^{-\gamma}e^{-\rho(t-r)}}{(t-s)^{\gamma}(r-r_{\delta})^{-\gamma}}\,dr\,\|X^{\epsilon}\|_{\gamma,\rho,\sim}\leq C\delta^{\gamma}.\]
where \(s^{\gamma}r_{\delta}^{-\gamma}\leq c\) independently of \(\delta>0\) and by the Lipschitz continuity of \(f\), \(Y^{\epsilon}_{F}\) and Lemma 4.12. Next, for \(I_{23}\) we have
\[\sup_{2\delta\leq s<t\leq T}s^{\gamma}e^{-\rho t}\frac{\|\int_{0}^{2\delta}(S_{A}(t-r)-S_{A}(s-r))\Delta_{f}(X^{\epsilon}(r);X^{\epsilon}(r_{\delta}))\,dr\|}{(t-s)^{\gamma}}\] \[+\sup_{2\delta\leq s<t\leq T}s^{\gamma}e^{-\rho t}\frac{\|\int_{2\delta}^{s}(S_{A}(t-r)-S_{A}(s-r))\Delta_{f}(X^{\epsilon}(r);X^{\epsilon}(r_{\delta}))\,dr\|}{(t-s)^{\gamma}}\] \[\leq \sup_{2\delta\leq s<t\leq T}s^{\gamma}e^{-\rho t}\frac{\int_{0}^{2\delta}\|(S_{A}(t-s)-\mathrm{id})S_{A}(s-r)\|\|\Delta_{f}(X^{\epsilon}(r);X^{\epsilon}(r_{\delta}))\|\,dr}{(t-s)^{\gamma}}\] \[+\sup_{2\delta\leq s<t\leq T}\frac{\int_{2\delta}^{s}s^{\gamma}\|(S_{A}(t-s)-\mathrm{id})S_{A}(s-r)\|e^{-\rho(t-r)}e^{-\rho r}\|X^{\epsilon}(r)-X^{\epsilon}(r_{\delta})\|\,dr}{(t-s)^{\gamma}}\] \[\leq C\sup_{2\delta\leq s<t\leq T}s^{\gamma}e^{-\rho t}(t-s)^{-\gamma}\int_{0}^{2\delta}(t-s)^{\gamma}(s-r)^{-\gamma}\,dr\] \[+C\sup_{2\delta\leq s<t\leq T}\int_{2\delta}^{s}\frac{s^{\gamma}r_{\delta}^{-\gamma}(t-s)^{\gamma}(s-r)^{-\gamma}e^{-\rho(t-r)}r_{\delta}^{\gamma}e^{-\rho r}\|X^{\epsilon}(r)-X^{\epsilon}(r_{\delta})\|}{(t-s)^{\gamma}(r-r_{\delta})^{-\gamma}(r-r_{\delta})^{\gamma}}\,dr\] \[\leq C\sup_{2\delta\leq s<t\leq T}s^{\gamma}e^{-\rho t}\int_{0}^{2\delta}(2\delta-r)^{-\gamma}\,dr+C\sup_{2\delta\leq s<t\leq T}s^{\gamma}\delta^{\gamma}\int_{2\delta}^{s}\big{(}\frac{r}{r_{\delta}}\big{)}^{\gamma}r^{-\gamma}(s-r)^{-\gamma}\,dr\] \[\leq C\delta^{1-\gamma}+C\sup_{2\delta\leq s<t\leq T}s^{\gamma}\delta^{\gamma}\int_{0}^{s}r^{-\gamma}(s-r)^{-\gamma}\,dr\leq C\delta^{\tilde{\sigma}}\]
where we use the Lipschitz continuity of \(f\), \(Y^{\epsilon}_{F}\) and Lemma 4.12, and also the boundedness of \(f\). Thus, putting the above estimates together, we have
\[I_{2}\leq C\delta^{\tilde{\sigma}}.\]
Based on the Lipschitz continuity of \(f,\bar{f}\), \(Y^{\epsilon}_{F}\), the boundedness of \(f,\bar{f}\), and Remark 4.14, we can apply the estimates for \(I_{2}\) to estimate \(I_{5}\) and \(I_{7}\). We have
\[I_{5}+I_{7}\leq C\delta^{\gamma}.\]
Then, by the Lipschitz continuity of \(f\), \(Y^{\epsilon}_{F}\) and Remark 4.14 again, we have
\[I_{4} \leq \sup_{t\in[0,T]}e^{-\rho t}\bigg{\|}\int_{0}^{t}S_{A}(t-r)\Delta_{f}(\hat{X}^{\epsilon}(r);\bar{X}(r))\,dr\bigg{\|}\] \[+\sup_{0<s<t\leq T}s^{\gamma}e^{-\rho t}(t-s)^{-\gamma}\bigg{\|}\int_{s}^{t}S_{A}(t-r)\Delta_{f}(\hat{X}^{\epsilon}(r);\bar{X}(r))\,dr\bigg{\|}\] \[+\sup_{0<s<t\leq T}s^{\gamma}e^{-\rho t}(t-s)^{-\gamma}\bigg{\|}\int_{0}^{s}(S_{A}(t-r)-S_{A}(s-r))\Delta_{f}(\hat{X}^{\epsilon}(r);\bar{X}(r))\,dr\bigg{\|}.\]

By the Lipschitz continuity of \(f\) and of \(x\mapsto Y_{F}^{\epsilon}(\omega_{2},x)\), these three terms can be estimated by \(CK(\rho)\|\hat{X}^{\epsilon}-\bar{X}\|_{\gamma,\rho,\sim}\) with \(\lim_{\rho\to\infty}K(\rho)=0\), so that \(I_{4}\) is absorbed by the left hand side once \(\rho\) is sufficiently large; this is the first case described above. Next we split \(I_{6}\). Abbreviating

\[\Delta_{f,\bar{f}}^{\epsilon}(\theta_{r}\omega_{2},x):=f(x,Y_{F}^{\epsilon}(\theta_{r}\omega_{2},x))-\bar{f}(x),\]

we have

\[I_{6} \leq \sup_{t\in[0,T]}e^{-\rho t}\bigg{\|}\int_{0}^{t}S_{A}(t-r)\Delta_{f,\bar{f}}^{\epsilon}(\theta_{r}\omega_{2},\bar{X}(r_{\delta}))\,dr\bigg{\|}\] \[+\sup_{0<s<t\leq T}s^{\gamma}e^{-\rho t}(t-s)^{-\gamma}\bigg{\|}\int_{s}^{t}S_{A}(t-r)\Delta_{f,\bar{f}}^{\epsilon}(\theta_{r}\omega_{2},\bar{X}(r_{\delta}))\,dr\bigg{\|}\] \[+\sup_{0<s<t\leq T}s^{\gamma}e^{-\rho t}(t-s)^{-\gamma}\bigg{\|}\int_{0}^{s}(S_{A}(t-r)-S_{A}(s-r))\Delta_{f,\bar{f}}^{\epsilon}(\theta_{r}\omega_{2},\bar{X}(r_{\delta}))\,dr\bigg{\|}\] \[=: I_{61}+I_{62}+I_{63}.\]
The first term above can be written
\[I_{61} \leq \sup_{t\in[0,T]}\left\|\int_{0}^{t}(S_{A}(t-r)-S_{A}(t-r_{\delta}))\Delta_{f,\bar{f}}^{\epsilon}(\theta_{r}\omega_{2},\bar{X}(r_{\delta}))\,dr\right\|\]
\[+\sup_{t\in[0,T]}\left\|\int_{0}^{t}S_{A}(t-r_{\delta})\Delta_{f,f}^{ \epsilon}(\theta_{r}\omega_{2},\bar{X}(r_{\delta}))\,dr\right\|\] \[=:I_{611}+I_{612}.\]
By the boundedness property of \(f,\bar{f}\) and Lemmas 3.10 and 3.11, we have
\[I_{611} \leq\sup_{t\in[0,T]}\int_{0}^{t}\|(S_{A}(t-r)-S_{A}(t-r_{\delta})) \|\|\Delta_{f,\bar{f}}^{\epsilon}(\theta_{r}\omega_{2},\bar{X}(r_{\delta}))\|\,dr\] \[\leq C\delta^{\sigma}\sup_{t\in[0,T]}\int_{0}^{t}\|(-A)^{\sigma}S_ {A}(t-r)\|\,dr\leq C\delta^{\sigma}.\]
Then, one has
\[I_{612} \leq\sup_{t\in[0,T]}\left\|\int_{t_{\delta}}^{t}S_{A}(t-r_{\delta})\Delta_{f,\bar{f}}^{\epsilon}(\theta_{r}\omega_{2},\bar{X}(r_{\delta}))\,dr\right\|\] \[\quad+\sup_{t\in[0,T]}\left\|\sum_{k=0}^{\lfloor t/\delta\rfloor-1}\int_{k\delta}^{(k+1)\delta}S_{A}(t-k\delta)\Delta_{f,\bar{f}}^{\epsilon}(\theta_{r}\omega_{2},\bar{X}(r_{\delta}))\,dr\right\|\] \[\leq\sup_{t\in[0,T]}\int_{t_{\delta}}^{t}\|S_{A}(t-r_{\delta})\|\|\Delta_{f,\bar{f}}^{\epsilon}(\theta_{r}\omega_{2},\bar{X}(r_{\delta}))\|\,dr\] \[\quad+\sup_{t\in[0,T]}\sum_{k=0}^{\lfloor t/\delta\rfloor-1}\|(-A)^{\sigma}S_{A}(t-k\delta)\|\left\|\int_{k\delta}^{(k+1)\delta}(-A)^{-\sigma}\Delta_{f,\bar{f}}^{\epsilon}(\theta_{r}\omega_{2},\bar{X}(k\delta))\,dr\right\|\] \[\leq C\delta+C\delta^{-1}\max_{0\leq k\leq\lfloor T/\delta\rfloor-1}\left\|\int_{k\delta}^{(k+1)\delta}(-A)^{-\sigma}\Delta_{f,\bar{f}}^{\epsilon}(\theta_{r}\omega_{2},\bar{X}(k\delta))\,dr\right\|\] \[= C\delta+C\delta^{-1}\max_{0\leq k\leq\lfloor T/\delta\rfloor-1}\epsilon\left\|\int_{\frac{k\delta}{\epsilon}}^{\frac{(k+1)\delta}{\epsilon}}(-A)^{-\sigma}\Delta_{f,\bar{f}}^{1}(\theta_{r}\omega_{2},\bar{X}(k\delta))\,dr\right\|\] \[\leq C\delta+C\delta^{-1}\max_{0\leq k\leq\lfloor T/\delta\rfloor-1}\frac{T\epsilon}{\delta(k+1)}\left\|\int_{0}^{\frac{(k+1)\delta}{\epsilon}}(-A)^{-\sigma}\Delta_{f,\bar{f}}^{1}(\theta_{r}\omega_{2},\bar{X}(k\delta))\,dr\right\|\] \[\quad+C\delta^{-1}\max_{1\leq k\leq\lfloor T/\delta\rfloor-1}\frac{T\epsilon}{\delta k}\left\|\int_{0}^{\frac{k\delta}{\epsilon}}(-A)^{-\sigma}\Delta_{f,\bar{f}}^{1}(\theta_{r}\omega_{2},\bar{X}(k\delta))\,dr\right\|\]
where we use the fact that
\[\sup_{t\in[0,T]}\sum_{k=0}^{\lfloor t/\delta\rfloor-1}\|(-A)^{\sigma}S_{A}(t- k\delta)\|\leq C\delta^{-1},\,\sigma\in(0,1), \tag{4.18}\]
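For completeness, here is a quick sketch of (4.18), using only \(\|(-A)^{\sigma}S_{A}(u)\|\leq Cu^{-\sigma}\) and a sum-integral comparison:

\[\sum_{k=0}^{\lfloor t/\delta\rfloor-1}\|(-A)^{\sigma}S_{A}(t-k\delta)\|\leq C\sum_{j=1}^{\lfloor t/\delta\rfloor}(j\delta)^{-\sigma}\leq\frac{C}{\delta}\int_{0}^{T}u^{-\sigma}\,du=\frac{CT^{1-\sigma}}{(1-\sigma)\delta}.\]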
For a full proof of (4.18) see page 28 in Pei et al. [33]. We have, for \(\epsilon\to 0\), \(\frac{(k+1)\delta}{\epsilon}\to+\infty\) for any \(k\), \(1\leq k\leq\lfloor T/\delta\rfloor-1\). In addition we take the maximum over finitely many elements determined by the given fixed number \(\delta\) and \(T\). Following Lemma 4.11, we have for every element under the maximum
\[\frac{\epsilon}{\delta(k+1)}\left\|\int_{0}^{\frac{(k+1)\delta}{\epsilon}}(-A) ^{-\sigma}\Delta_{f,\bar{f}}^{1}(\theta_{r}\omega_{2},\bar{X}(k\delta))\,dr \right\|\to 0,\quad\text{as}\quad\epsilon\to 0 \tag{4.19}\]
almost surely. We note that by Lemma 4.11 we can consider as an argument the random variable \(\bar{X}(k\delta)\) inside the integrand of the last integral because the exceptional
set for the convergence is independent of \(x\). Thus, we have for \(\epsilon\) sufficiently small depending on \((\omega_{1},\omega_{2})\) almost surely and the \(\delta\) given
\[I_{61}\leq C\delta^{\sigma}. \tag{4.20}\]
Next, we turn to estimate \(I_{62}\):
\[I_{62} \leq C\sup_{0<s<t\leq T}\frac{\left\|\int_{s}^{t}(S_{A}(t-r)-S_{A} (t-r_{\delta}))\Delta_{f,\bar{f}}^{\epsilon}(\theta_{r}\omega_{2},\bar{X}(r_{ \delta}))\,dr\right\|}{(t-s)^{\gamma}}\] \[+C\sup_{0<s<t\leq T}\frac{\left\|\int_{s}^{t}S_{A}(t-r_{\delta}) \Delta_{f,\bar{f}}^{\epsilon}(\theta_{r}\omega_{2},\bar{X}(r_{\delta}))\,dr \right\|}{(t-s)^{\gamma}}=:I_{621}+I_{622}.\]
For the above estimates, let us begin with \(I_{621}\). Taking \(\sigma^{\prime}<1-\gamma\) into account, by the boundedness property of \(f,\bar{f}\), we have
\[I_{621} \leq C\sup_{0<s<t\leq T}\left\{(t-s)^{-\gamma}\int_{s}^{t}\|S_{A}(t-r)-S_{A}(t-r_{\delta})\|\|\Delta_{f,\bar{f}}^{\epsilon}(\theta_{r}\omega_{2},\bar{X}(r_{\delta}))\|\,dr\right\}\] \[\leq C\sup_{0<s<t\leq T}\left\{(t-s)^{-\gamma}\delta^{\sigma^{\prime}}\int_{s}^{t}(t-r)^{-\sigma^{\prime}}\,dr\right\}\] \[\leq C\sup_{0<s<t\leq T}\left\{(t-s)^{-\gamma+1-\sigma^{\prime}}\delta^{\sigma^{\prime}}\right\}\leq C\delta^{\sigma^{\prime}}.\]
Now, we deal with \(I_{622}\). Consider \(\ell_{t}:=\{s<t:t<(\lfloor\frac{s}{\delta}\rfloor+2)\delta\}\), \(\ell_{t}^{c}=\{s<t:t\geq(\lfloor\frac{s}{\delta}\rfloor+2)\delta\}\). Note that we have for \(s\in\ell_{t}\) that \(t-s<2\delta\) and for \(s\in\ell_{t}^{c}\) that \(t-s\geq\delta\).
\[I_{622} \leq C\sup_{0<s<t\leq T}\left\{\frac{\|\int_{s}^{t}S_{A}(t-r_{\delta})\Delta_{f,\bar{f}}^{\epsilon}(\theta_{r}\omega_{2},\bar{X}(r_{\delta}))\,dr\|}{(t-s)^{\gamma}}\mathbf{1}_{\ell_{t}}(s)\right\}\] \[+C\sup_{0<s<t\leq T}\left\{\frac{\|\int_{s}^{(\lfloor s\delta^{-1}\rfloor+1)\delta}S_{A}(t-r_{\delta})\Delta_{f,\bar{f}}^{\epsilon}(\theta_{r}\omega_{2},\bar{X}(r_{\delta}))\,dr\|}{(t-s)^{\gamma}}\mathbf{1}_{\ell_{t}^{c}}(s)\right\}\] \[+C\sup_{0<s<t\leq T}\left\{\frac{\|\int_{\lfloor t\delta^{-1}\rfloor\delta}^{t}S_{A}(t-r_{\delta})\Delta_{f,\bar{f}}^{\epsilon}(\theta_{r}\omega_{2},\bar{X}(r_{\delta}))\,dr\|}{(t-s)^{\gamma}}\mathbf{1}_{\ell_{t}^{c}}(s)\right\}\] \[+C\sup_{0<s<t\leq T}\left\{\frac{\|\int_{(\lfloor s\delta^{-1}\rfloor+1)\delta}^{\lfloor t\delta^{-1}\rfloor\delta}S_{A}(t-r_{\delta})\Delta_{f,\bar{f}}^{\epsilon}(\theta_{r}\omega_{2},\bar{X}(r_{\delta}))\,dr\|}{(t-s)^{\gamma}}\mathbf{1}_{\ell_{t}^{c}}(s)\right\}.\]
The first three expressions on the right hand side of the last inequality can be estimated by \(C\delta^{1-\gamma}\). Thus, we have
\[I_{622} \leq C\delta^{1-\gamma}\] \[+C\sup_{0<s<t\leq T}\left\{\frac{\|\sum_{k=\lfloor s\delta^{-1}\rfloor+1}^{\lfloor t\delta^{-1}\rfloor-1}S_{A}(t-k\delta)\int_{k\delta}^{(k+1)\delta}\Delta_{f,\bar{f}}^{\epsilon}(\theta_{r}\omega_{2},\bar{X}(r_{\delta}))\,dr\|}{(t-s)^{\gamma}}\mathbf{1}_{\ell_{t}^{c}}(s)\right\}\] \[\leq C\delta^{1-\gamma}+C\delta^{-1}\max_{0\leq k\leq\lfloor T/\delta\rfloor-1}\bigg{\|}\int_{k\delta}^{(k+1)\delta}(-A)^{-\sigma}\Delta_{f,\bar{f}}^{\epsilon}(\theta_{r}\omega_{2},\bar{X}(k\delta))\,dr\bigg{\|}\]
where we apply (4.18). Using the ergodic theorem again, the remaining term on the right hand side can be estimated similarly to \(I_{612}\), see (4.19). We have
\[I_{62}\leq C\delta^{\tilde{\sigma}}\]
for sufficiently small \(\epsilon>0\).
The next term is
\[I_{63} \leq C\sup_{0<s<t\leq T}\frac{\int_{0}^{s}\|(S_{A}(t-s)-\mathrm{id})( S_{A}(s-r)-S_{A}(s-r_{\delta}))\|\|\Delta^{\epsilon}_{f,\bar{f}}(\theta_{r}\omega_{2}, \bar{X}(r_{\delta}))\|\,dr}{(t-s)^{\gamma}}\] \[+C\sup_{0<s<t\leq T}\frac{\left\|\int_{0}^{s}(S_{A}(t-s)-\mathrm{ id})S_{A}(s-r_{\delta})\Delta^{\epsilon}_{f,\bar{f}}(\theta_{r}\omega_{2},\bar{X}(r_{ \delta}))\,dr\right\|}{(t-s)^{\gamma}}=:I_{631}+I_{632}.\]
For \(I_{631}\), taking \(\gamma<\sigma<1-\sigma^{\prime\prime}\), by the boundedness property of \(f,\bar{f}\) and \(r-r_{\delta}\leq\delta\), we have
\[I_{631} \leq C\sup_{0<s<t\leq T}\int_{0}^{s}(t-s)^{\sigma-\gamma}\|(-A)^{ \sigma}(S_{A}(s-r)-S_{A}(s-r_{\delta}))\|\,dr\] \[\leq C\delta^{\sigma^{\prime\prime}}\sup_{0<s<t\leq T}\int_{0}^{ s}(t-s)^{\sigma-\gamma}\|(-A)^{\sigma+\sigma^{\prime\prime}}S_{A}(s-r)\|\,dr\] \[\leq C\delta^{\sigma^{\prime\prime}}\sup_{0<s<t\leq T}\int_{0}^{ s}(t-s)^{\sigma-\gamma}(s-r)^{-\sigma-\sigma^{\prime\prime}}\,dr\leq C \delta^{\sigma^{\prime\prime}}\]
and for \(I_{632}\) and \(\gamma<\sigma<1-\sigma^{\prime\prime}\),
\[I_{632} \leq C\sup_{0<s<t\leq T}\frac{\left\|\int_{\lfloor\frac{s}{\delta}\rfloor\delta}^{s}(-A)^{\sigma}S_{A}(s-\lfloor\frac{r}{\delta}\rfloor\delta)\Delta^{\epsilon}_{f,\bar{f}}(\theta_{r}\omega_{2},\bar{X}(r_{\delta}))\,dr\right\|}{(t-s)^{\gamma-\sigma}}\] \[\quad+C\sup_{0<s<t\leq T}\frac{\left\|\sum_{k=0}^{\lfloor\frac{s}{\delta}\rfloor-1}\int_{k\delta}^{(k+1)\delta}(-A)^{\sigma}S_{A}(s-k\delta)\Delta^{\epsilon}_{f,\bar{f}}(\theta_{r}\omega_{2},\bar{X}(k\delta))\,dr\right\|}{(t-s)^{\gamma-\sigma}}\] \[\leq C\sup_{0<s<t\leq T}\left\{(t-s)^{-\gamma+\sigma}\int_{\lfloor\frac{s}{\delta}\rfloor\delta}^{s}(s-\lfloor\frac{r}{\delta}\rfloor\delta)^{-\sigma}dr\right\}\] \[\quad+C\sup_{0<s<t\leq T}\frac{\left\|\sum_{k=0}^{\lfloor\frac{s}{\delta}\rfloor-1}(-A)^{\sigma+\sigma^{\prime\prime}}S_{A}(s-k\delta)\int_{k\delta}^{(k+1)\delta}(-A)^{-\sigma^{\prime\prime}}\Delta^{\epsilon}_{f,\bar{f}}(\theta_{r}\omega_{2},\bar{X}(k\delta))\,dr\right\|}{(t-s)^{\gamma-\sigma}}\] \[\leq C\delta^{1-\sigma}+C\delta^{-1}\max_{0\leq k\leq\lfloor T/\delta\rfloor-1}\left\|\int_{k\delta}^{(k+1)\delta}(-A)^{-\sigma^{\prime\prime}}\Delta^{\epsilon}_{f,\bar{f}}(\theta_{r}\omega_{2},\bar{X}(k\delta))\,dr\right\|\]
where we apply (4.18) again. Using ergodic theorem and the estimate similar to (4.19) again and taking \(\epsilon\) small enough, we have
\[I_{63}\leq C\delta^{\tilde{\sigma}}.\]
To deal with \(I_{1}\), by replacing \(\Delta^{\epsilon}_{f,\bar{f}}(\theta_{r}\omega_{2},\bar{X}(r_{\delta}))\) in \(I_{6}\) by
\[(f(X^{\epsilon}(r_{\delta}),\hat{Y}^{\epsilon}(r))-f(X^{\epsilon}(r_{\delta}),Y^{\epsilon}_{F}(\theta_{r}\omega_{2},X^{\epsilon}(r_{\delta}))))\]
we can apply the techniques used to estimate \(I_{6}\). But instead of the ergodic theory argument we apply Lemma 4.16, so that
\[I_{1}\leq C\delta^{\tilde{\sigma}}+C\delta^{-1}\big{(}1+\|X_{0}\|+\|Y_{0}\|+o(\epsilon^{-1})\big{)}\epsilon\leq C\delta^{\tilde{\sigma}}\]
since for \(\epsilon<\epsilon_{0}\) and the given \(\delta\) we have that \(\delta^{-1}o(\epsilon^{-1})\epsilon<C\delta^{\gamma}\).
The estimates for \(I_{8}\) and \(I_{9}\) follow by using the same techniques as in the Appendix and [4, Lemma 9]. Thus, we have
\[I_{8}+I_{9}\leq C\left\|\omega_{1}\right\|_{\beta}(1+\|X^{\epsilon}\|_{\gamma, \rho,\sim}+\|\hat{X}^{\epsilon}\|_{\gamma,\rho,\sim})K(\rho)\|X^{\epsilon}- \hat{X}^{\epsilon}\|_{\gamma,\rho,\sim}\]
\[+C\left\|\omega_{1}\right\|_{\beta}(1+\|\bar{X}\|_{\gamma,\rho,\sim}+\|\hat{X}^{ \epsilon}\|_{\gamma,\rho,\sim})K(\rho)\|\hat{X}^{\epsilon}-\bar{X}\|_{\gamma, \rho,\sim} \tag{4.21}\]
where \(\lim\limits_{\rho\to\infty}K(\rho)=0\).
For \(\|X^{\epsilon}-\hat{X}^{\epsilon}\|_{\gamma,\rho,\sim}\), by (4.3) and (4.15), it is easy to see
\[\|X^{\epsilon}-\hat{X}^{\epsilon}\|_{\gamma,\rho,\sim} \leq \bigg{\|}\int_{0}^{\cdot}S_{A}(\cdot-r)(f(X^{\epsilon}(r),Y^{ \epsilon}(r))-f(X^{\epsilon}(r_{\delta}),Y^{\epsilon}(r)))\,dr\bigg{\|}_{\gamma,\rho,\sim} \tag{4.22}\] \[+\bigg{\|}\int_{0}^{\cdot}S_{A}(\cdot-r)(f(X^{\epsilon}(r_{ \delta}),Y^{\epsilon}(r))-f(X^{\epsilon}(r_{\delta}),\hat{Y}^{\epsilon}(r)))\, dr\bigg{\|}_{\gamma,\rho,\sim}\] \[=:J_{1}+J_{2}.\]
By the same techniques as for \(I_{2}\), it is easy to see that \(J_{1}\) is less than \(C\delta^{\gamma}\). For the second term on the right side of (4.22), we can apply the similar techniques used in the estimate of \(I_{6}\), replacing \(\Delta_{f,\bar{f}}^{\epsilon}(\theta_{r}\omega_{2},\bar{X}(r_{\delta}))\) by
\[\Delta_{f}^{Y^{\epsilon},\hat{Y}^{\epsilon}}(X^{\epsilon}(r_{\delta})):=f(X^{ \epsilon}(r_{\delta}),Y^{\epsilon}(r))-f(X^{\epsilon}(r_{\delta}),\hat{Y}^{ \epsilon}(r)).\]
Thus, we have
\[J_{2} \leq \sup_{t\in[0,T]}e^{-\rho t}\int_{0}^{t}\|(S_{A}(t-r)-S_{A}(t-r_{\delta}))\|\|\Delta_{f}^{Y^{\epsilon},\hat{Y}^{\epsilon}}(X^{\epsilon}(r_{\delta}))\|\,dr\] \[+\sup_{t\in[0,T]}e^{-\rho t}\bigg{\|}\int_{0}^{t}S_{A}(t-r_{\delta})\Delta_{f}^{Y^{\epsilon},\hat{Y}^{\epsilon}}(X^{\epsilon}(r_{\delta}))\,dr\bigg{\|}\] \[+\sup_{0<s<t\leq T}s^{\gamma}e^{-\rho t}\frac{\int_{s}^{t}\|(S_{A}(t-r)-S_{A}(t-r_{\delta}))\|\|\Delta_{f}^{Y^{\epsilon},\hat{Y}^{\epsilon}}(X^{\epsilon}(r_{\delta}))\|\,dr}{(t-s)^{\gamma}}\] \[+\sup_{0<s<t\leq T}s^{\gamma}e^{-\rho t}\frac{\|\int_{s}^{t}S_{A}(t-r_{\delta})\Delta_{f}^{Y^{\epsilon},\hat{Y}^{\epsilon}}(X^{\epsilon}(r_{\delta}))\,dr\|}{(t-s)^{\gamma}}\] \[+\sup_{0\leq s<t\leq T}s^{\gamma}e^{-\rho t}\frac{\|\int_{s}^{t}S_{A}(t-r_{\delta})\Delta_{f}^{Y^{\epsilon},\hat{Y}^{\epsilon}}(X^{\epsilon}(r_{\delta}))\,dr\|}{(t-s)^{\gamma}}\] \[+\sup_{0<s<t\leq T}s^{\gamma}e^{-\rho t}\frac{\int_{0}^{s}\|(S_{A}(t-s)-\mathrm{id})(S_{A}(s-r)-S_{A}(s-r_{\delta}))\|\|\Delta_{f}^{Y^{\epsilon},\hat{Y}^{\epsilon}}(X^{\epsilon}(r_{\delta}))\|\,dr}{(t-s)^{\gamma}}\] \[+\sup_{0\leq s<t\leq T}s^{\gamma}e^{-\rho t}\frac{\|\int_{0}^{s}(S_{A}(t-s)-\mathrm{id})S_{A}(s-r_{\delta})\Delta_{f}^{Y^{\epsilon},\hat{Y}^{\epsilon}}(X^{\epsilon}(r_{\delta}))\,dr\|}{(t-s)^{\gamma}}\] \[+\sup_{2\delta\leq s<t\leq T}s^{\gamma}e^{-\rho t}\frac{\|\int_{2\delta}^{s}(S_{A}(t-s)-\mathrm{id})S_{A}(s-r_{\delta})\Delta_{f}^{Y^{\epsilon},\hat{Y}^{\epsilon}}(X^{\epsilon}(r_{\delta}))\,dr\|}{(t-s)^{\gamma}}=:\sum_{i=1}^{9}J_{2i}.\]
For the terms \(J_{21},J_{23},J_{24},J_{26},J_{27},J_{28}\), it is easy to know
\[J_{21}+J_{23}+J_{24}+J_{26}+J_{27}+J_{28}\leq C\delta^{\tilde{\sigma}}.\]
Then, using the same method as in the estimates of \(I_{612}\), \(I_{622}\) and \(I_{632}\), we apply Lemma 4.17 instead of the ergodic theory argument. By the Lipschitz continuity and boundedness of \(f\) and Lemma 4.17, using the same method as in the estimate of \(I_{21}\), we have
\[J_{22} \leq\sup_{t\in[0,T]}e^{-\rho t}\int_{0}^{\delta}\|S_{A}(t-r_{\delta})\Delta_{f}^{Y^{\epsilon},\hat{Y}^{\epsilon}}(X^{\epsilon}(r_{\delta}))\|\,dr\] \[\quad+\sup_{t\in[0,T]}e^{-\rho t}\int_{t_{\delta}}^{t}\|S_{A}(t-r_{\delta})\Delta_{f}^{Y^{\epsilon},\hat{Y}^{\epsilon}}(X^{\epsilon}(r_{\delta}))\|\,dr\] \[\quad+\sup_{t\in[0,T]}e^{-\rho t}\bigg{\|}\sum_{k=1}^{\lfloor t/\delta\rfloor-1}\int_{k\delta}^{(k+1)\delta}S_{A}(t-k\delta)\Delta_{f}^{Y^{\epsilon},\hat{Y}^{\epsilon}}(X^{\epsilon}(r_{\delta}))\,dr\bigg{\|}\] \[\leq C\delta+C\sup_{t\in[0,T]}\sum_{k=1}^{\lfloor t/\delta\rfloor-1}e^{-\rho t}\int_{k\delta}^{(k+1)\delta}\|Y^{\epsilon}(r)-\hat{Y}^{\epsilon}(r)\|\,dr\] \[\leq C\delta+C\sup_{t\in[0,T]}\sum_{k=1}^{\lfloor t/\delta\rfloor-1}\delta^{1+\gamma}(1+(k\delta)^{-\gamma})\] \[\leq C\delta^{\gamma}+C\sup_{t\in[0,T]}\sum_{k=1}^{\lfloor t/\delta\rfloor-1}\delta\int_{k-1}^{k}v^{-\gamma}dv\leq C\delta^{\gamma}.\]
To proceed, for \(J_{25}\) and \(J_{29}\), by the Lipschitz continuity and boundedness property of \(f\) and Lemma 4.17, one has
\[J_{25} \leq\sup_{2\delta\leq s<t\leq T}\bigg{\{}s^{\gamma}e^{-\rho t}\frac{\int_{s}^{t}\|S_{A}(t-r_{\delta})\|\|\Delta_{f}^{Y^{\epsilon},\hat{Y}^{\epsilon}}(X^{\epsilon}(r_{\delta}))\|\,dr}{(t-s)^{\gamma}}\mathbf{1}_{\ell_{t}}(s)\bigg{\}}\] \[\quad+\sup_{2\delta\leq s<t\leq T}\bigg{\{}s^{\gamma}e^{-\rho t}\frac{\int_{s}^{s_{\delta}+\delta}\|S_{A}(t-r_{\delta})\|\|\Delta_{f}^{Y^{\epsilon},\hat{Y}^{\epsilon}}(X^{\epsilon}(r_{\delta}))\|\,dr}{(t-s)^{\gamma}}\mathbf{1}_{\ell_{t}^{c}}(s)\bigg{\}}\] \[\quad+\sup_{2\delta\leq s<t\leq T}\bigg{\{}s^{\gamma}e^{-\rho t}\frac{\int_{t_{\delta}}^{t}\|S_{A}(t-r_{\delta})\|\|\Delta_{f}^{Y^{\epsilon},\hat{Y}^{\epsilon}}(X^{\epsilon}(r_{\delta}))\|\,dr}{(t-s)^{\gamma}}\mathbf{1}_{\ell_{t}^{c}}(s)\bigg{\}}\] \[\quad+C\sup_{2\delta\leq s<t\leq T}\bigg{\{}s^{\gamma}e^{-\rho t}\frac{\sum_{k=\lfloor s\delta^{-1}\rfloor+1}^{\lfloor t\delta^{-1}\rfloor-1}\int_{k\delta}^{(k+1)\delta}\|Y^{\epsilon}(r)-\hat{Y}^{\epsilon}(r)\|\,dr}{(t-s)^{\gamma}}\mathbf{1}_{\ell_{t}^{c}}(s)\bigg{\}}\] \[\leq C\delta^{1-\gamma}+C\sup_{2\delta\leq s<t\leq T}\bigg{\{}\frac{\sum_{k=\lfloor s\delta^{-1}\rfloor+1}^{\lfloor t\delta^{-1}\rfloor-1}s^{\gamma}\delta^{1+\gamma}(1+(k\delta)^{-\gamma})}{(t-s)^{\gamma}}\mathbf{1}_{\ell_{t}^{c}}(s)\bigg{\}}\leq C\delta^{\tilde{\sigma}}\]
where the first three terms are less than \(C\delta^{1-\gamma}\) by the boundedness property of \(f\) and
\[\sum_{k=\lfloor s\delta^{-1}\rfloor+1}^{\lfloor t\delta^{-1}\rfloor-1}s^{ \gamma}(1+(k\delta)^{-\gamma}) \leq\sum_{k=\lfloor s\delta^{-1}\rfloor+1}^{\lfloor t\delta^{-1} \rfloor-1}(s^{\gamma}+\big{(}\frac{s}{k\delta}\big{)}^{\gamma})\leq\sum_{k= \lfloor s\delta^{-1}\rfloor+1}^{\lfloor t\delta^{-1}\rfloor-1}(T^{\gamma}+1)\] \[\leq C\big{(}\lfloor t\delta^{-1}\rfloor-1-\big{(}\lfloor s\delta^ {-1}\rfloor+1\big{)}+1\big{)}\leq C\delta^{-1}(t-s)\]
has been used in the last term. Then, we have
\[J_{29} \leq\sup_{2\delta\leq s<t\leq T}s^{\gamma}e^{-\rho t}\frac{\big{\|}\int_{s_{\delta}}^{s}(S_{A}(t-s)-\mathrm{id})S_{A}(s-r_{\delta})\Delta_{f}^{Y^{\epsilon},\hat{Y}^{\epsilon}}(X^{\epsilon}(r_{\delta}))\,dr\big{\|}}{(t-s)^{\gamma}}\] \[\quad+\sup_{2\delta\leq s<t\leq T}s^{\gamma}e^{-\rho t}\frac{\big{\|}\int_{2\delta}^{s_{\delta}}(S_{A}(t-s)-\mathrm{id})S_{A}(s-r_{\delta})\Delta_{f}^{Y^{\epsilon},\hat{Y}^{\epsilon}}(X^{\epsilon}(r_{\delta}))\,dr\big{\|}}{(t-s)^{\gamma}}\]
\[\leq C\delta^{1-\gamma}+C\sup_{2\delta\leq s<t\leq T}\sum_{k=2}^{\lfloor s\delta^{-1}\rfloor-1}s^{\gamma}(s-k\delta)^{-\gamma}e^{-\rho t}\int_{k\delta}^{(k+1)\delta}\|Y^{\epsilon}(r)-\hat{Y}^{\epsilon}(r)\|\,dr\] \[\leq C\delta^{1-\gamma}+C\sup_{2\delta\leq s<t\leq T}\sum_{k=2}^{\lfloor s\delta^{-1}\rfloor-1}s^{\gamma}(s-k\delta)^{-\gamma}\delta^{1+\gamma}(1+(k\delta)^{-\gamma})\] \[\leq C\delta^{1-\gamma}+C\delta^{\gamma}+C\sup_{2\delta\leq s<t\leq T}\sum_{k=2}^{\lfloor s\delta^{-1}\rfloor-1}\int_{k\delta}^{(k+1)\delta}s^{\gamma}\delta^{\gamma}(s-k\delta)^{-\gamma}(k\delta)^{-\gamma}\,dr\] \[\leq C\delta^{1-\gamma}+C\delta^{\gamma}+C\delta^{\gamma}\sup_{2\delta\leq s<t\leq T}\int_{2\delta}^{s_{\delta}}s^{\gamma}(s-r_{\delta})^{-\gamma}r_{\delta}^{-\gamma}\,dr\] \[\leq C\delta^{1-\gamma}+C\delta^{\gamma}+C\delta^{\gamma}\sup_{2\delta\leq s<t\leq T}\int_{2\delta}^{s}s^{\gamma}(s-r)^{-\gamma}r^{-\gamma}\,dr\] \[\leq C\delta^{1-\gamma}+C\delta^{\gamma}+C\delta^{\gamma}\sup_{2\delta\leq s<t\leq T}\int_{0}^{s}s^{\gamma}(s-r)^{-\gamma}r^{-\gamma}\,dr\leq C\delta^{\tilde{\sigma}}.\]
Thus, we have
\[\|X^{\epsilon}-\hat{X}^{\epsilon}\|_{\gamma,\rho,\sim}\leq C\delta^{\tilde{ \sigma}}. \tag{4.23}\]
Then, by (4.23) and taking \(\rho\) large enough and \(\delta\) small enough, we have
\[I_{8}+I_{9}\leq C\delta^{\tilde{\sigma}}+\frac{1}{3}\|\hat{X}^{\epsilon}-\bar {X}\|_{\gamma,\rho,\sim}. \tag{4.24}\]
To deal with \(I_{3}\), we can apply similar techniques to those used in the estimate of \(I_{4}\). By the Lipschitz continuity of \(f\), (A2), Lemma 4.12, Lemma 4.17 and (4.23), it is easy to see
\[I_{3}\leq C\|X^{\epsilon}-\hat{X}^{\epsilon}\|_{\gamma,\rho,\sim}\leq C\delta ^{\tilde{\sigma}}.\]
Thus, putting the above estimates together, we have for sufficiently small \(\epsilon>0\)
\[\|\hat{X}^{\epsilon}-\bar{X}\|_{\gamma,\rho,\sim}\leq C\delta^{\tilde{\sigma} }+\frac{2}{3}\|\hat{X}^{\epsilon}-\bar{X}\|_{\gamma,\rho,\sim} \tag{4.25}\]
so that (4.16) holds.
**Lemma 4.19**: _Let (A1)-(A4) and (2.2) hold. For any \(X_{0}\in V\), as \(\epsilon\to 0\) the solution of (4.15) converges to \(X^{\epsilon}\) which solves (4.3). That is, we have almost surely_
\[\lim_{\epsilon\to 0}\|X^{\epsilon}-\hat{X}^{\epsilon}\|_{\gamma,\sim}=0\]
_where this norm is considered with respect to a fixed interval \([0,T]\)._
Note that the norm in (4.23) is equivalent to the norm in the conclusion. By (4.23), and arguing as for (4.16), the desired result is obtained.
To close this section, we note that Lemma 4.18 and Lemma 4.19 together yield Theorem 4.2. This completes the proof.
**Appendix: Several auxiliary technical lemmas.** We recall the following technical lemmas from [4, 13].
**Lemma 4.20**: _[_4_, Lemma 8]_ _Let \(a>-1,b>-1\) and \(a+b\geq-1,d>0\) and \(t\in[0,T]\). For \(\rho>0\) we define_
\[K(\rho):=\sup_{t\in[0,T]}t^{d}\int_{0}^{1}e^{-\rho t(1-v)}v^{a}(1-v)^{b}dv,\]
_then we have that \(\lim_{\rho\to\infty}K(\rho)=0\)._
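A short sketch of this limit (our argument, splitting in \(t\)): for any \(0<\eta<1\),

\[K(\rho)\leq\sup_{t\leq\rho^{-\eta}}t^{d}\,B(a+1,b+1)+T^{d}\int_{0}^{1}e^{-\rho^{1-\eta}(1-v)}v^{a}(1-v)^{b}\,dv,\]

where the first term is \(O(\rho^{-\eta d})\) since \(d>0\), and the second tends to zero by dominated convergence.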
**Lemma 4.21**: _[_14_, Lemma 14]_ _For any non-negative a and \(d\) such that \(a+d<1,\) and for any \(\rho\geq 1\), there exists a positive constant \(c\) such that_
\[\int_{0}^{t}e^{-\rho(t-r)}(t-r)^{-a}r^{-d}dr\leq c\rho^{a+d-1}.\]
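A short sketch of this estimate (splitting the integral at \(r=t/2\)): for \(r\leq t/2\) we have \(t-r\geq t/2\), so

\[\int_{0}^{t/2}e^{-\rho(t-r)}(t-r)^{-a}r^{-d}\,dr\leq\frac{e^{-\frac{\rho t}{2}}(t/2)^{-a}(t/2)^{1-d}}{1-d}\leq c\,\rho^{a+d-1},\]

since \(\sup_{x>0}x^{1-a-d}e^{-x/2}<\infty\) (take \(x=\rho t\)); for \(r\geq t/2\),

\[\int_{t/2}^{t}e^{-\rho(t-r)}(t-r)^{-a}r^{-d}\,dr\leq\Big{(}\frac{t}{2}\Big{)}^{-d}\min\Big{\{}\Gamma(1-a)\rho^{a-1},\frac{(t/2)^{1-a}}{1-a}\Big{\}}\leq c\,\rho^{a+d-1},\]

using the first bound in the minimum when \(\rho t\geq 1\) and the second when \(\rho t\leq 1\).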
**Proof of Lemma 3.5** By the definition of the norm and of \(\mathcal{T}\),
\[\|\mathcal{T}(u,\omega_{1},\omega_{2},u_{0})\|_{\gamma,\rho,\sim} \leq \|S_{A}(\cdot)u_{01}\|_{\gamma,\rho,\sim}+\|S_{B}(\cdot)(u_{02}-Z(\omega_{2}))\|_{\gamma,\rho,\sim}\] \[+\|Z(\theta.\omega_{2})\|_{\gamma,\rho,\sim}+\left\|\int_{0}^{\cdot}S_{J}(\cdot-r)F(u(r))\,dr\right\|_{\gamma,\rho,\sim}\] \[+\left\|\int_{0}^{\cdot}S_{A}(\cdot-r)h(u_{1}(r))\,d\omega_{1}(r)\right\|_{\gamma,\rho,\sim}\] \[=: \mathbf{I}_{1}+\mathbf{I}_{2}+\mathbf{I}_{3}+\mathbf{I}_{4}+\mathbf{I}_{5}.\]
By Lemma 4.8, we begin with the estimate for \(\mathbf{I}_{1}+\mathbf{I}_{2}+\mathbf{I}_{3}\):
\[\mathbf{I}_{1}+\mathbf{I}_{2}+\mathbf{I}_{3} = \sup_{t\in[0,T]}e^{-\rho t}\|S_{A}(t)u_{01}\|+\sup_{0<s<t\leq T}s^{\gamma}e^{-\rho t}\frac{\|S_{A}(t)u_{01}-S_{A}(s)u_{01}\|}{(t-s)^{\gamma}}\] \[+\sup_{t\in[0,T]}e^{-\rho t}\|S_{B}(t)(u_{02}-Z(\omega_{2}))\|+\|Z(\theta.\omega_{2})\|_{\gamma,\rho,\sim}\] \[+\sup_{0<s<t\leq T}s^{\gamma}e^{-\rho t}\frac{\|S_{B}(t)(u_{02}-Z(\omega_{2}))-S_{B}(s)(u_{02}-Z(\omega_{2}))\|}{(t-s)^{\gamma}}\] \[\leq c_{T}(\|u_{01}\|+\|u_{02}\|+\|Z(\theta.\omega_{2})\|_{\gamma})\]
where we use \(\|Z(\theta.\omega_{2})\|_{\gamma,\rho,\sim}\leq c_{T}\|Z(\theta.\omega_{2})\|_ {\gamma}\) and Lemma 4.8.
Then, by [15, Lemma 4, (9)], for \(\mathbf{I}_{4}\), we have
\[\mathbf{I}_{4}\leq c_{T}\bar{K}(\rho)(1+\|u\|_{\gamma,\rho,\sim})\]
where \(\bar{K}\) has similar properties to \(K\).
Now, we estimate the \(\|\cdot\|_{\gamma,\rho,\sim}\)-norm of the stochastic integral.
\[\mathbf{I}_{5} = \sup_{0<s<t\leq T}\frac{s^{\gamma}e^{-\rho t}}{(t-s)^{\gamma}} \bigg{\|}\int_{s}^{t}S_{A}(t-r)h(u_{1}(r))\,d\omega_{1}(r)\bigg{\|}\] \[+\sup_{0<s<t\leq T}\frac{s^{\gamma}e^{-\rho t}}{(t-s)^{\gamma}} \bigg{\|}\int_{0}^{s}(S_{A}(t-r)-S_{A}(s-r))h(u_{1}(r))\,d\omega_{1}(r)\bigg{\|}\] \[+\sup_{t\in[0,T]}e^{-\rho t}\bigg{\|}\int_{0}^{t}S_{A}(t-r)h(u_{1} (r))\,d\omega_{1}(r)\bigg{\|}=:\mathbf{I}_{51}+\mathbf{I}_{52}+\mathbf{I}_{53}.\]
Since \(\left\|D_{t-}^{1-\alpha}\omega_{1,t-}[r]\right\|\leq c\left|\!\left|\omega_{1} \right|\!\right|_{\beta}(t-r)^{\alpha+\beta-1}\), by using the inequality of (3.12) and Remark 3.4 we get
\[s^{\gamma}e^{-\rho t}\bigg{\|}\int_{s}^{t}S_{A}(t-r)h(u_{1}(r))\, d\omega_{1}(r)\bigg{\|}\] \[\leq cs^{\gamma}e^{-\rho t}\int_{s}^{t}\left(\frac{\|S_{A}(t-r)\|_{L( V)}\|h(u_{1}(r))\|_{L_{2}(V)}}{(r-s)^{\alpha}}\right.\] \[\left.+\int_{s}^{r}\frac{\|S_{A}(t-r)-S_{A}(t-q)\|_{L(V)}\|h(u_{1} (r))\|_{L_{2}(V)}}{(r-q)^{1+\alpha}}dq\right.\] \[\left.+\int_{s}^{r}\frac{\|S_{A}(t-q)\|_{L(V)}\|h(u_{1}(r))-h(u_{ 1}(q))\|_{L_{2}(V)}}{(r-q)^{1+\alpha}}dq\right)\frac{\left|\!\left|\omega_{1} \right|\!\right|_{\beta}}{(t-r)^{-\alpha-\beta+1}}dr\] \[\leq cT^{\gamma}\left|\!\left|\omega_{1}\right|\!\right|_{\beta} \left(\int_{s}^{t}e^{-\rho(t-r)}\frac{(c_{h}+c_{Dh}|u_{1}(r)|)e^{-\rho r}}{(r -s)^{\alpha}}(t-r)^{\alpha+\beta-1}dr\right.\] \[\left.+\int_{s}^{t}\int_{s}^{r}e^{-\rho(t-r)}\frac{e^{-\rho r}(c_ {h}+c_{Dh}|u_{1}(r)|)(r-q)^{\gamma}}{(t-r)^{\gamma}(r-q)^{1+\alpha}}dq(t-r)^{ \alpha+\beta-1}dr\right.\] \[\left.+\int_{s}^{t}\int_{s}^{r}e^{-\rho(t-r)}\frac{e^{-\rho r}c_ {Dh}|u_{1}(r)-u_{1}(q)|q^{\gamma}(r-q)^{\gamma}}{(r-q)^{1+\alpha}q^{\gamma}(r -q)^{\gamma}}dq(t-r)^{\alpha+\beta-1}dr\right)\] \[\leq cT^{\gamma}\left|\!\left|\omega_{1}\right|\!\right|_{\beta}(t-s)^ {\beta}(1+\|u_{1}\|_{\gamma,\rho,\sim})\int_{s}^{t}e^{-\rho(t-r)}(r-s)^{- \alpha}(t-r)^{\alpha-1}dr\] \[+cT^{\gamma}\left|\!\left|\omega_{1}\right|\!\right|_{\beta}(1+\| u_{1}\|_{\gamma,\rho,\sim})\int_{s}^{t}e^{-\rho(t-r)}(r-s)^{\gamma-\alpha}(t-r)^{ \alpha+\beta-1-\gamma}dr \tag{4.27}\] \[\left.+cT^{\gamma}\left|\!\left|\omega_{1}\right|\!\right|_{\beta }(t-s)^{\beta}\|u_{1}\|_{\gamma,\rho,\sim}\int_{s}^{t}e^{-\rho(t-r)}(r-s)^{- \alpha}(t-r)^{\alpha-1}dr.\right.\]
By a change of variable, \(\gamma<\beta\), it is easy to see that
\[(t-s)^{\beta}\int_{s}^{t}e^{-\rho(t-r)}(r-s)^{-\alpha}(t-r)^{ \alpha-1}dr\] \[= (t-s)^{\gamma}(t-s)^{\beta-\gamma}\int_{0}^{1}e^{-\rho(t-s)(1-v) }v^{-\alpha}(1-v)^{\alpha-1}dv\leq(t-s)^{\gamma}K(\rho),\]
taking in Lemma 4.20\(a=-\alpha,b=\alpha-1,d=\beta-\gamma\) and \(t-s\) as the corresponding \(t\) there. The second integral on the right side may be rewritten in the same way, since
\[\int_{s}^{t}e^{-\rho(t-r)}(r-s)^{\gamma-\alpha}(t-r)^{\alpha+ \beta-1-\gamma}dr\] \[\leq (t-s)^{\gamma}(t-s)^{\beta-\gamma}\int_{s}^{t}e^{-\rho(t-r)}(r-s)^ {-\alpha}(t-r)^{\alpha-1}dr.\]
Thus, we have
\[\mathbf{I}_{51}\leq c_{T}K(\rho)\left|\!\left|\omega_{1}\right|\!\right|_{\beta }(1+\|u\|_{\gamma,\rho,\sim}) \tag{4.28}\]
For \(\mathbf{I}_{52}\), we follow similar steps to those used in obtaining (4.27). Now we need to replace the estimates for \(\|S_{A}(t-r)\|_{L(V)}\) and \(\|S_{A}(t-r)-S_{A}(t-q)\|_{L(V)}\) by estimates for \(\|S_{A}(t-r)-S_{A}(s-r)\|_{L(V)}\) and \(\|S_{A}(t-r)-S_{A}(t-q)-(S_{A}(s-r)-S_{A}(s-q))\|_{L(V)}\) respectively, for which we use (3.12) and (3.13) for appropriate parameters. Then it is not hard to see that for \(\alpha^{\prime}+\gamma<\alpha+\beta\), \(0<\alpha<\alpha^{\prime}<1\):
\[s^{\gamma}e^{-\rho t}\bigg{\|}\int_{0}^{s}(S_{A}(t-r)-S_{A}(s-r))h(u_{1}(r))\, d\omega_{1}(r)\bigg{\|}\]
\[\leq c(t-s)^{\gamma}T^{\gamma}\left|\!\left|\omega_{1}\right|\!\right|_{\beta}(1+\|u_{1}\|_{\gamma,\rho,\sim})\int_{0}^{s}e^{-\rho(t-r)}r^{-\alpha}(s-r)^{\alpha-\gamma+\beta-1}dr.\]
The third integral on the right hand side of the last inequality can be estimated by
\[s^{\beta-\gamma}\int_{0}^{1}e^{-\rho s(1-v)}v^{-\alpha}(1-v)^{\alpha-1}dv\]
and in a similar manner the other integrals. We have
\[\mathbf{I}_{52}\leq c_{T}K(\rho)\left|\!\left|\omega_{1}\right|\!\right|_{\beta}(1+\|u_{1}\|_{\gamma,\rho,\sim})\leq c_{T}K(\rho)\left|\!\left|\omega_{1}\right|\!\right|_{\beta}(1+\|u\|_{\gamma,\rho,\sim}).\]
In a similar manner as before for the first expression in \(\mathbf{I}_{51}\) we obtain
\[\mathbf{I}_{53}\leq c_{T}\left|\!\left|\omega_{1}\right|\!\right|_{\beta}K(\rho)(1+\|u\|_{\gamma,\rho,\sim}).\]
All the previous estimates imply that
\[\mathbf{I}_{5}\leq c_{T}\left|\!\left|\omega_{1}\right|\!\right|_{\beta}K(\rho)(1+\|u\|_{\gamma,\rho,\sim}).\]
Collecting all the above estimates we have a constant \(C(\rho,\omega_{1},T)>0\) such that \(\lim_{\rho\to\infty}C(\rho,\omega_{1},T)=0\) and
\[\|\mathcal{T}(u,\omega_{1},\omega_{2},u_{0})\|_{\gamma,\rho,\sim}\leq c_{T}( \|u_{01}\|+\|u_{02}\|+\|Z(\theta.\omega_{2})\|_{\gamma})+C(\rho,\omega_{1},T)( 1+\|u\|_{\gamma,\rho,\sim}).\]
nonreciprocal wave systems and the underlying phenomena. | Anis Maddi, Yves Auregan, Guillaume Penelet, Vincent Pagneux, Vassos Achilleos | 2023-06-21T12:33:49Z | http://arxiv.org/abs/2306.12223v1 | # Exact analogue of the Hatano-Nelson model in 1D continuous nonreciprocal systems
###### Abstract
We propose a general framework that enables the exact mapping of continuous nonreciprocal 1D periodic systems to the Hatano-Nelson (HN) model. Our approach, based on the two-port transfer matrix, is broadband and is applicable across various physical systems and, as an illustration, we consider the implementation of our model in acoustic waveguides. Through theoretical analysis and experimental demonstrations, we successfully achieve the mapping to the HN model by utilizing active acoustic elements, thereby observing the renowned skin effect. Moreover, our experimental setup enables the exploration of the transition from periodic to open boundary conditions by employing diaphragms of varying radii. Our experimental results, unveil the exponential sensitivity of the system to changes in boundary conditions. By establishing a profound connection between continuous systems and the fundamental discrete HN model, our results significantly broaden the potential application of nonreciprocal wave systems and the underlying phenomena.
## I Introduction
In recent years, the interest in the intriguing features of non-Hermitian Hamiltonians has increased extensively[1; 2]. These Hamiltonians enable the study of non-conservative systems, characterized by complex eigenenergies that reflect the presence of gains/losses. The growing attention to such systems, despite their inherent complexity, builds upon the pioneering work of Bender on PT symmetry [3; 4], who first demonstrated, through his study of parity-time (PT) symmetric operators, that non-Hermitian systems can still exhibit real eigenenergies when gains and losses are perfectly balanced. Later on, many novel features of PT-symmetric systems were uncovered, such as enhanced sensitivity[5; 6; 7; 8], CPA-lasing [9; 10; 11; 12; 13] and unidirectional invisibility [14; 15].
More recently, there has been a surge of interest in the interplay of non-Hermiticity and topological phenomena, given their promise for the unidirectional control of waves [16; 17; 18] and the development of zero sensors [19; 20; 21; 22]. In this perspective, the study of topological phases of matter explores the relationship between the bulk properties of a lattice and its behavior at the boundary, using topological invariants. This has led to the discovery of new non-Hermitian properties, such as the non-Hermitian skin effect[23; 24], which occurs when transitioning from periodic boundary conditions (PBC) to open boundary conditions (OBC) and results in the localization of energy at one boundary. This effect has been extensively studied theoretically, with experimental demonstrations in electrical circuits[25; 26; 27] and acoustics [28].
The Hatano-Nelson [29] (HN) model is one of the most prominent models in the field of non-Hermitian topology. This model describes a one-dimensional lattice where each site is coupled by a pair of asymmetric (nonreciprocal) hoppings. This asymmetry shifts the bulk states towards the boundaries, localizing them at the side of the stronger hopping, resulting in a non-Hermitian skin effect (NHSE). In addition, since the recent work of Ref. [30], the topological properties of the model and of its generalizations have given birth to a field of non-Hermitian topology in discrete lattices [31; 32; 33]. Since the breaking of reciprocity is a prerequisite to observe the NHSE, the experimental realization of such systems is rather challenging, as it typically requires an external energy source/sink and can potentially lead to instabilities.
Taking a step forward, in this study we propose a comprehensive, broadband, and exact mapping of the HN model to _continuous_ 1D nonreciprocal periodic systems. The approach consists of a general theoretical framework that allows the mapping of any 1D linear nonreciprocal system described by its unit-cell transfer matrix. We validate our model using the example of an acoustic waveguide and, furthermore, we perform experiments employing a network of active loudspeakers [34] to observe the NHSE. The inherent losses of the system also allow us to build a stable setup with periodic boundaries. Using diaphragms, we experimentally observe the transition of the eigenfrequencies from periodic boundary conditions (PBCs) to open boundary conditions (OBCs) and exhibit the exponential sensitivity of the system to changes of the boundaries.
## II Model
### Hatano-Nelson model for 1D continuous systems
We consider wave propagation in a continuous medium which is periodic and at the edges of whose unit cell only one mode is propagating (monomode approximation). Such a two-port unit cell can be described using a two-by-two transfer matrix of the general form [see also Fig. 1(a)]
\[\mathbf{M}=\begin{pmatrix}a&b\\ c&d\end{pmatrix},\quad\det(\mathbf{M})=t. \tag{1}\]
The state vector of the system at position \(x\) takes the form \([A(x),B(x)]^{T}\) and, for simplicity, below we use the notation \(A_{n}\equiv A(x_{n})\). Eq. (1) allows us to write the following systems of equations relating the state at three consecutive equidistant points,
\[\begin{pmatrix}A_{n+1}\\ B_{n+1}\end{pmatrix}=\mathbf{M}\begin{pmatrix}A_{n}\\ B_{n}\end{pmatrix},\ \begin{pmatrix}A_{n-1}\\ B_{n-1}\end{pmatrix}=\mathbf{M}^{-1}\begin{pmatrix}A_{n}\\ B_{n}\end{pmatrix}. \tag{2}\]
The first line of each of the above systems of equations is explicitly written as
\[A_{n+1} =aA_{n}+bB_{n}, \tag{3}\] \[tA_{n-1} =dA_{n}-bB_{n}. \tag{4}\]
Therefore, by adding Eqs. (3) and (4), the problem can be simplified to the following discrete equation
\[A_{n+1}+tA_{n-1}=EA_{n}. \tag{5}\]
The last equation provides an exact analogue of the Hatano-Nelson (HN) model and can be readily applied to various continuous physical systems. Note that the same equation also holds for \(B_{n}\). According to our mapping, the transfer matrices of both the continuous and the discrete unit cell have the same non-unitary determinant. This is important since it was recently shown in [35] that the nonzero determinant is a key parameter to re-establish the bulk-boundary correspondence for non-Hermitian systems. The _energy_ \(E\) in Eq. (5) is simply given by
\[E=\mathrm{tr}(\mathbf{M})=a+d, \tag{6}\]
and provides the direct link between the eigenvalues of Eq. (5) and the elements of \(\mathbf{M}\). In practice, for wave systems the only restriction of our model is that the determinant \(t\) of the transfer matrix does not depend on the frequency. When the latter is satisfied, our exact analogue of the HN model is broadband and only requires periodicity, the monomode approximation, and a non-unimodular transfer matrix.
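As a concrete illustration of the mapping, the following minimal Python sketch (the matrix entries and the initial state are arbitrary illustrative values, not those of any physical cell) propagates a state through identical two-port cells and verifies that the sampled field satisfies Eq. (5) with \(t=\det(\mathbf{M})\) and \(E=\mathrm{tr}(\mathbf{M})\):

```python
import numpy as np

# Minimal check of the mapping of Eqs. (1)-(6): sample the field at the
# edges of identical two-port cells and verify the Hatano-Nelson recursion
# A_{n+1} + t A_{n-1} = E A_n with t = det(M) and E = tr(M).
M = np.array([[1.3 + 0.2j, 0.5 + 0.0j],
              [0.0 + 0.1j, 0.8 - 0.1j]])   # arbitrary illustrative entries
t, E = np.linalg.det(M), np.trace(M)

v = np.array([1.0 + 0.0j, 0.3 - 0.2j])     # arbitrary state (A_0, B_0)
A = [v[0]]
for _ in range(10):                        # propagate through 10 cells
    v = M @ v
    A.append(v[0])
A = np.array(A)

residual = A[2:] + t * A[:-2] - E * A[1:-1]
print(np.max(np.abs(residual)))            # ~1e-15: the recursion holds
```

The residual vanishes to machine precision for any invertible \(\mathbf{M}\), since the identity follows directly from adding Eqs. (3) and (4).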
We now briefly summarize the properties of the HN model, starting from the dispersion relation obtained by considering periodic solutions of Eq. (5) in the form \(A_{n}=A\exp(\mathrm{i}qn)\), which yields
\[E_{q}=(1+t)\cos(q)+i(1-t)\sin(q). \tag{7}\]
When \(t\neq 1\), in the presence of non-reciprocity, the energies are complex. Furthermore, \(E(q)\) traces a closed loop in the complex plane for \(q\in[-\pi,\pi]\). The fact that the energy itself is a complex function has motivated researchers to attribute topological properties to such non-Hermitian models. In particular, it is now well established that one can define the following winding number
\[w_{E}=\frac{1}{2\pi}\oint_{\mathcal{C}}\mathrm{d}z\,\frac{\mathrm{d}}{\mathrm{d}z}\arg E(z), \tag{8}\]
where \(z=\exp(\mathrm{i}q)\). This integral along the Brillouin zone gives \(w_{E}=1\) (\(w_{E}=-1\)) for \(|t|>1\) (\(|t|<1\)), signaling a transition at \(|t|=1\). This transition is now known to be related with the appearance of the so-called skin modes [36], i.e., localized modes at one edge of a finite structure. The sign of the winding number indicates (in 1D) the side at which the skin modes are localized. Let us now give a quick reminder of the spectra of the system under PBC and OBC.
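Before turning to the boundary conditions, the winding number of Eq. (8) can be evaluated numerically by accumulating the phase of \(E(q)\) along the Brillouin zone; the short sketch below does this for the dispersion relation of Eq. (7). The overall sign depends on the chosen contour orientation, but the sign flip at \(|t|=1\) that signals the transition is orientation independent.

```python
import numpy as np

# Numerical evaluation of the winding number of Eq. (8): accumulate the
# phase of E(q) from Eq. (7) along the Brillouin zone.
def winding_number(t, nq=4001):
    q = np.linspace(-np.pi, np.pi, nq)
    E = (1 + t) * np.cos(q) + 1j * (1 - t) * np.sin(q)
    dphi = np.angle(E[1:] / E[:-1])   # small phase increments, cell by cell
    return np.sum(dphi) / (2 * np.pi)

for t in (1.2, 0.8):
    # Opposite signs on the two sides of |t| = 1 mark the transition.
    print(t, round(winding_number(t)))
```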
_PBC._ For the case of a lattice of \(N\) sites with periodic boundary conditions the corresponding eigenvalue problem is
\[H_{\mathrm{PBC}}\mathbf{A}=E\mathbf{A},\quad H_{\mathrm{PBC}}=\begin{pmatrix} 0&1&0&\ldots&t\\ t&0&1&&\vdots\\ 0&t&0&\ddots&0\\ \vdots&&\ddots&\ddots&1\\ 1&\ldots&0&t&0\end{pmatrix}, \tag{9}\]
where \(H_{\mathrm{PBC}}\) is a circulant matrix whose eigenvalues are given by Eq. (7) after setting \(q_{n}=2\pi n/N\). The corresponding eigenvectors are known to be of the following form
\[A_{j}=\frac{1}{N}\left(1,\lambda^{j},\lambda^{2j}\ldots\lambda^{(N-1)j}\right) ^{T}, \tag{10}\]
where \(\lambda=e^{\frac{\mathrm{i}2\pi}{N}}\) is the \(N\)-th root of unity. These modes are independent of \(|t|\) and are thus extended. Regarding the spectrum, there is a big difference between the case with \(|t|=1\) and the one with \(|t|\neq 1\): the spectrum of the lattice model changes from a straight line into a closed loop in the complex plane. An example for \(t=1.2\) and \(N=8\) is shown in Fig. 2(a).
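The following sketch reproduces this behavior by diagonalizing the circulant matrix of Eq. (9) and matching its spectrum to the analytic loop of Eq. (7); the values \(t=1.2\) and \(N=8\) mirror Fig. 2(a).

```python
import numpy as np

# Build the N-site periodic Hatano-Nelson matrix of Eq. (9) and compare
# its spectrum with the analytic loop of Eq. (7) at q_n = 2*pi*n/N.
def hn_pbc(N, t):
    H = np.zeros((N, N))
    for n in range(N):
        H[n, (n + 1) % N] = 1.0   # hopping in one direction
        H[n, (n - 1) % N] = t     # asymmetric hopping in the other
    return H

N, t = 8, 1.2
E_num = np.linalg.eigvals(hn_pbc(N, t))
q = 2 * np.pi * np.arange(N) / N
E_ana = (1 + t) * np.cos(q) + 1j * (1 - t) * np.sin(q)

# Match each numerical eigenvalue to its nearest analytic counterpart.
err = max(min(abs(e - E_ana)) for e in E_num)
print(err)   # ~1e-15: the spectrum lies on the closed loop of Eq. (7)
```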
Figure 1: (a) A sketch of a continuous system composed of periodically arranged unit cells of length \(l\). The edges of each unit cell are connected via the transfer matrix \(\mathbf{M}\). (b) By a simple manipulation of the transfer matrix equations we map the continuous system to the HN model, where the nonreciprocal coupling strength \(t\) is equal to the determinant of the transfer matrix.
_OBC._ The next most studied scenario is the case of open boundary conditions, i.e., when \(A_{0}=A_{N+1}=0\). For the continuous system this translates to a Dirichlet boundary condition for \(A(x)\) at the ends. The corresponding eigenvalue problem can be written as
\[H_{\text{OBC}}\mathbf{A}=E\mathbf{A},\quad H_{\text{OBC}}=\begin{pmatrix}0&1&0& \ldots&0\\ t&0&1&&\vdots\\ 0&t&0&\ddots&0\\ \vdots&&\ddots&\ddots&1\\ 0&\ldots&0&t&0\end{pmatrix}. \tag{11}\]
The latter matrix is tridiagonal Toeplitz, with positive off-diagonal elements. Interestingly, although \(H_{\text{OBC}}\) is non-symmetric, it can be transformed into a _symmetric_ matrix under the similarity transformation \(\tilde{H}=D^{-1}H_{\text{OBC}}D\), where \(D=\mathrm{diag}(d_{0},d_{1},\ldots,d_{N-1})\) with elements \(d_{n}=\sqrt{t^{n}},\quad n=0,\ldots,N-1\). The eigenvalues of \(H_{\text{OBC}}\) are real and are given by the following expression
\[E_{n}=2\sqrt{t}\cos(\frac{n\pi}{N+1}),\quad n=1,\ldots,N. \tag{12}\]
In addition the \(j\)-th right eigenvector of \(H_{\text{OBC}}\) has the following form
\[A_{j}^{R}=\left(A_{j,1}^{R},A_{j,2}^{R},\ldots,A_{j,N}^{R}\right)^{T},\ A_{j,n}^{R}=t^{n/2}\sin\frac{jn\pi}{N+1}. \tag{13}\]
It is clear that, due to the prefactor \(t^{n/2}\) (which is independent of the mode index \(j\)), as long as \(t>1\) (\(t<1\)) all states localize on the right (left) hand side. This is exactly the skin effect, the term used to describe the fact that all modes are localized at one edge.
Note that all the results derived above apply to \(B_{n}\) as well: if the analogous boundary conditions (i.e., \(B_{0}=B_{N+1}=0\)) are imposed, we obtain the same results.
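These open-boundary results are also easy to verify numerically; the sketch below checks the similarity transformation, the real spectrum of Eq. (12), and the \(t^{n/2}\) envelope of the eigenvectors.

```python
import numpy as np

# Verify the open-boundary results of Eqs. (11)-(13) for N = 8, t = 1.2:
# H_OBC is similar to a symmetric matrix, its spectrum is real and matches
# Eq. (12), and the eigenvectors carry the t^{n/2} skin envelope.
N, t = 8, 1.2
H = np.diag(np.ones(N - 1), 1) + t * np.diag(np.ones(N - 1), -1)

# Similarity transformation with D = diag(t^{n/2}) symmetrizes H_OBC.
D = np.diag(np.sqrt(t) ** np.arange(N))
Hs = np.linalg.inv(D) @ H @ D
print(np.allclose(Hs, Hs.T))                          # True

E, V = np.linalg.eig(H)
E_ana = 2 * np.sqrt(t) * np.cos(np.arange(1, N + 1) * np.pi / (N + 1))
print(np.allclose(np.sort(E.real), np.sort(E_ana)))   # True (Im E ~ 0)

# Skin effect: |eigenvector| ~ t^{n/2} times a sine factor, so for t > 1
# the weight accumulates towards the right edge (site N).
print(np.abs(V[:, 0]))
```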
### An example: nonreciprocal periodic acoustic waveguides
According to our proposed model, we thus expect that both the closed-loop spectrum and the skin modes of the HN model can be observed in a continuous medium. We now give an example of a physical system that can be mapped to the HN model following the aforementioned procedure. Nonreciprocity in acoustics can be achieved using (among others) active elements, spatio-temporal modulations, or the thermoacoustic effect [34, 37, 38, 39, 40, 41, 42]. Here we will use the idea of an active loudspeaker to break the reciprocity and achieve \(|t|\neq 1\). To find the corresponding acoustic modes, we need the elements of the matrix \(M\) for a particular setup. Here, we use the transfer matrix corresponding to a unit cell of length \(l\) with a loudspeaker, equipped with a feedback loop, mounted in the middle of the cell. In view of the experiments in acoustics, the appropriate transfer matrix is the one connecting the acoustic pressure \(p(x)\) and the acoustic flux \(u(x)\), thus identifying \(A\to u\) and \(B\to p\). For low frequencies, where the monomode approximation is valid, we may then write an analytical expression for the transfer matrix elements, which leads to the following mapping (see Appendix): \(E(k)=(t+1)\cos(kL)-\left(g\frac{k^{2}-k_{0}^{2}}{k}+i\beta(k)\right)\sin(kL)\). Here we have used the notation for the frequency \(\omega=c_{0}k\), with \(k\) the wavenumber and \(c_{0}\) the speed of sound. Furthermore, \(k_{0}\) and \(\beta\) denote the resonance frequency of the loudspeaker and its electro-mechanical losses, respectively. Using the expression for \(E(k)\), each eigenvalue for the PBC [Eq. (7)] or the OBC [Eq. (12)] is mapped to the corresponding acoustic eigenfrequency \(k_{n}\). One such example of eigenfrequencies is plotted in panels (c) and (d) of Fig. 2 for a lattice with \(t=1.2\).
An important observation here is that for the PBC, although the HN model predicts a set of generally unstable modes (with \(\text{Im}(E)>0\)), the losses of the acoustic system, embedded in the mapping, allow for a closed-loop spectrum below the real axis and thus stable acoustic modes. In accordance, the straight-line spectrum of the OBC for the acoustic system becomes an arc lying in the stable part of the complex plane. Another consequence of the mapping is that the smallest (largest) eigenvalues are swapped (see the white and blue points in Fig. 2).
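The last step of the mapping, from HN eigenvalues to acoustic eigenfrequencies, amounts to root finding on \(E(k)\). The sketch below does this with a simple Newton iteration; note that the parameter values (\(L\), \(g\), \(k_{0}\), \(\beta\)) are illustrative placeholders and not the experimentally fitted ones, and the initial guess may need tuning for each mode.

```python
import numpy as np

# Turn each HN eigenvalue E_n into a complex acoustic eigenfrequency by
# solving E(k) = E_n, with
# E(k) = (t+1)cos(kL) - (g(k^2 - k0^2)/k + i*beta)sin(kL).
L, g, k0, beta, t = 0.3, 0.05, 4.0, 0.1, 1.2   # illustrative placeholders

def E_of_k(k):
    return ((t + 1) * np.cos(k * L)
            - (g * (k**2 - k0**2) / k + 1j * beta) * np.sin(k * L))

def solve_k(E_target, k_guess, steps=60, h=1e-7):
    k = complex(k_guess)
    for _ in range(steps):                      # plain Newton iteration
        f = E_of_k(k) - E_target
        df = (E_of_k(k + h) - E_of_k(k - h)) / (2 * h)
        k = k - f / df
    return k

N = 8
for n in range(1, N + 1):                       # OBC eigenvalues, Eq. (12)
    E_n = 2 * np.sqrt(t) * np.cos(n * np.pi / (N + 1))
    print(n, solve_k(E_n, k_guess=5.0 + 0.1j))
```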
Figure 2: (a-b) The energy of the modes in the complex plane for a periodic HN lattice with periodic (PBC) and open (OBC) boundaries. (c-d) The corresponding acoustic complex frequencies of the waveguide. In all panels, the number of sites is set to \(N=8\) and the hopping factor to \(t=1.2\).
## III Experimental realization and the transition from PBC to OBC
### Skin effect and measured spectra
We now show the experimental realization of the acoustic HN model. A sketch of the unit cell used in the experiment is displayed in Fig. 3(a) and consists of a cavity connected to two ducts. A speaker is installed in the center of the cavity and is controlled by a feedback loop consisting of a current amplifier and a microphone mounted in the vicinity of the loudspeaker. The non-reciprocity arises from the electroacoustic feedback loop, in which the electrical current supplied to the loudspeaker is proportional to the feedback gain \(G\) and to the pressure measured by the nearby microphone. This generates an additional oscillating force that acts on the loudspeaker membrane. For frequencies below the cutoff, the acoustic pressure and velocity at the edges of the unit cell are connected through a transfer matrix \(\mathbf{M}\) as in Eq. (1), and the hopping parameter \(t\) is simply adjusted by the amplifier gain \(G\).
To confirm the nonreciprocity of the unit cell, the experimentally measured determinant of the transfer matrix \(\det(\mathbf{M})\) is displayed as a function of the frequency in Fig. 3(b). This measurement was conducted using an impedance sensor [43]; further details can be found in the supplementary information. In the absence of a feedback loop (\(G=0\)), the system is reciprocal (i.e., \(t=\det(\mathbf{M})=1\)) since the loudspeaker behaves as a passive resonator. However, if a gain \(G\neq 0\) is applied, the reciprocity is broken and the hopping term \(t\) becomes nonunitary, thereby favoring propagation in one direction. For the measurements shown in Fig. 3(b) we have tuned the gain such that \(t=3.7\), which would lead to a right-side localization (\(w_{E}=1\)). Furthermore, one can see that the hopping factor is independent of the frequency and thus the mapping is broadband.
For the experimental realization of the HN model with OBC and the observation of skin modes, a periodic system composed of \(N=8\) identical unit cells is constructed. The two ends of the waveguide are then closed with rigid walls, which corresponds to Dirichlet boundary conditions for the acoustic flux. We excite the system from one end (the left) and measure the pressure at equidistant points, denoted \(p(\omega,x_{j})\), as shown at the top of Fig. 4.
The bottom panels of Fig. 4 depict the magnitude of the measured acoustic pressure at different sites as a function of frequency. Starting with the reciprocal case (\(t=1\)), propagation occurs only within the interval \(f\in[120,300]\) Hz, which is the first allowed band of the periodic lattice. Furthermore, the field has a greater amplitude near the source due to the damping (mainly caused by the loudspeakers), since away from the source the wave is rapidly dissipated.
On the other hand, by turning on the feedback gain
Figure 4: **OBC configuration**. The experimentally measured pressure as a function of the index site \(j\) and the frequency \(f\), for a symmetrical (\(t=1\), left) and asymmetrical (\(t=3.7\), right) hopping, where the system is excited from the left side at \(j=0\).
Figure 3: **Experimental unit cell.** (a) Sketch of the unit cell. (b) Determinant of the transfer matrix of the unit cell (equivalently, the hopping factor \(t\)) as a function of the frequency, where black and blue represent, respectively, the passive and the active cell.
and reaching a value of the asymmetric hopping \(t=3.7\), we clearly see the appearance of the non-Hermitian skin effect at the opposite boundary of the system. This means that, despite the high damping, any excitation from the left leads to a strong localization on the right side, with an amplification ratio from the first site \(j=1\) to the last \(j=N\) roughly equal to \(p(x_{N})/p(x_{1})=120\). Note that the accumulation of energy on the right-hand side is persistent for all frequencies in this band, confirming the fact that all modes exhibit the skin effect. This amplification is in quantitative agreement with the OBC solutions of Eq. (13), where the amplification ratio for the modes is \(\sim t^{N/2}\).
We now evaluate in more detail the experimentally obtained complex eigenfrequencies \(f_{n}\) of the corresponding modes of the underlying cavity. These can be estimated using different fitting algorithms in the framework of the so-called experimental modal analysis [44; 45]. Such algorithms have proven to be reliable in the study of complex structures. To operate, they require a collection of frequency response functions (FRFs), which here take the form of the measured acoustic pressure \(p(\omega,x_{j})\). In addition, the proposed acoustic model allows us to perform such measurements also using PBC, and thus we are able to observe the looped spectrum in the complex plane.
Figure 5 shows the obtained eigenfrequencies of the acoustic problem in the complex plane for both the PBC and OBC configurations. For the OBC, the reciprocal (\(t=1\)) and non-reciprocal (\(t=1.35\)) arrangements have seven acoustic modes and, as predicted by the theory, they form an arc lying in the negative imaginary part of the complex plane. For the HN model the OBC spectrum always lies on the real axis for any value of \(t\). However, here we see that increasing the gain (thus \(t\)) pushes the modes towards the real axis. This property, which is embedded in the proposed mapping, reveals the fact that adding gain to the system can better compensate the losses. To further reveal the NHSE we measure the pressure field at different positions of the waveguide for the corresponding eigenfrequencies. An example of the mode shape for \(t=3.7\) is shown in Fig. 5(c), where the energy is clearly localized predominantly at the right boundary (\(j=8\)). Moreover, we plot the experimentally obtained inverse participation ratio (IPR), \(\sum_{k}|p(x_{k})|^{2}/(\sum_{k}|p(x_{k})|)^{2}\), as a function of \(t\) in Fig. 5(b). This ratio quantifies the localization level of the eigenmodes: for instance, a value of \(\text{IPR}=1\) or \(\text{IPR}=0\) indicates, respectively, total localization at one site or full delocalization. As anticipated, the present results indicate that the increase in the hopping factor leads to a stronger localization of the eigenmodes on the right side of the system.
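The IPR trend is straightforward to reproduce on the discrete side of the mapping; the sketch below evaluates the definition quoted above on the eigenmodes of the open HN lattice for increasing hopping asymmetry.

```python
import numpy as np

# Localization measure used in the text: the inverse participation ratio
# IPR = sum |p_k|^2 / (sum |p_k|)^2, evaluated on the eigenmodes of the
# open Hatano-Nelson lattice.
def ipr(p):
    a = np.abs(p)
    return np.sum(a**2) / np.sum(a)**2

N = 8
for t in (1.0, 1.35, 2.0, 3.7):
    H = np.diag(np.ones(N - 1), 1) + t * np.diag(np.ones(N - 1), -1)
    _, V = np.linalg.eig(H)
    print(t, np.mean([ipr(V[:, j]) for j in range(N)]))
# The mean IPR grows with t: stronger nonreciprocity means stronger edge
# localization, consistent with the trend of Fig. 5(b).
```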
In Fig. 5(a) we also show the results for the PBC waveguide loaded with \(N=8\) cells. In the reciprocal case with \(t=1\) it has five distinct modes, three of which are doubly degenerate due to the angular symmetry. On the other hand, in the non-reciprocal case \(t=1.35\), the degenerate modes split in pairs and the spectrum forms a closed loop in the complex plane, indicated by the filled circles. An important aspect of the proposed acoustic system is that the inherent losses themselves allow for the observation of the looped spectrum, since all modes are stable. As expected, there is a maximum value of the gain after which some modes become unstable.
Figure 5: **Experimental results.** (a) Complex eigenfrequencies of the problem in the PBC and OBC configurations, for \(t=1\) (white) and \(t=1.35\) (orange). (b) The inverse participation ratio of each mode shape as a function of the hopping factor \(t\) in the OBC configuration. (c) Mode shape of the OBC configuration for \(t=3.7\).
### PBC to OBC and boundary sensitivity
Another interesting aspect of the proposed system is that it allows us to study experimentally the transition from PBC to OBC. This transition has been the subject of several studies, since it gives insight into the sensitivity of the underlying spectrum under changes of the BCs. Experimentally it has only been observed in discrete lattices [46], but not in a continuous wave system. Here, we achieve the transition in a rather natural way by adding a thin diaphragm of radius \(r_{d}\) inside the looped waveguide of radius \(r_{w}\) and progressively reducing the ratio \(r=r_{d}/r_{w}\). In Fig. 6(a) we plot the experimentally obtained spectrum in the complex plane for various values of \(r\) between PBC (\(r=1\)) and OBC (\(r=0\)). The first row corresponds to a relatively small gain, \(t=1.35\), for which the transition is explicitly demonstrated over the full range of the diaphragm radius. As the radius decreases, the ellipse gradually shrinks until it transforms into an arc for the OBC. This transition is clearly visible for various values of \(t\), as shown in Fig. 6(a). In addition, as expected from the theory, by increasing the hopping factor for a fixed radius ratio (e.g., the column for 0.1), the ellipse expands, which is a characteristic of the HN model.
Interestingly, for higher values of \(t\) the system becomes unstable before completely opening the diaphragm, since the mode with the largest imaginary part approaches the real axis. This is where the sensitivity of the system to boundary conditions is revealed. For the marginal case of \(t=2.4\), opening a small hole (10% of the waveguide radius) in the stable OBC configuration abruptly renders it unstable. In fact, it was recently argued that the sensitivity of the HN model with respect to the system size is exponential [47; 48; 49].
Due to the inherent instabilities and the limited number of cells used here, we cannot quantify this type of sensitivity directly from experiments. However, we further investigate the sensitivity semi-analytically by using the experimentally obtained transfer matrix of the unit cell, \(M_{\rm exp}\). In particular, we use an analytical 1D model taking into account the waveguide, the cavity, and the loudspeaker (see Fig. 3) and fit it to the experimentally obtained elements of the transfer matrix. Then the effect of a thin diaphragm, at low frequencies, can be approximated by assuming continuity of the acoustic flux and discontinuity of the acoustic pressure at the location of the diaphragm \(x_{d}\), in the form [50; 51]
\[[\ p\ ]_{x_{d}}=z(r,\omega)u(x_{d}).\]
The parameter \(z\) includes both the resistive and reactive parts of the diaphragm (see Appendix). The corresponding transfer matrix of the defect can then be written as
\[M_{d}=\begin{pmatrix}1&-z\\ 0&1\end{pmatrix}. \tag{14}\]
For a periodic system with \(N\) cells one can then calculate the solutions of \(\det(M_{d}M_{\rm exp}^{N}-\mathds{1})=0\) to find the corresponding eigenfrequencies. Figure 6(b) exhibits the eigenfrequencies of a system with \(N=8\) and \(t=1.5\) [corresponding to the second row of Fig. 6(a)]. Here we vary the ratio \(r\) in increments of 0.1. We observe that with \(r=0.1\) the eigenfrequencies have slightly shifted, as observed in the experiments. Then, for \(r>0.2\),
Figure 6: **Transition from PBC to OBC.** (a) A schematic of the diaphragm used in the experiments inside the waveguide, and the eigenfrequencies obtained from the experimental results for different values of the nonreciprocal hopping \(t\) and the ratio \(r\). (b) The transition from PBC to OBC obtained using the experimentally fitted transfer matrix. (c) The exponential sensitivity of the absolute value of the eigenfrequency as a function of the system size.
a large ellipse has formed in the complex plane, indicating a strong change in the eigenfrequencies. To further quantify this sensitivity, we have calculated the change in the absolute value of the frequency of the mode in the center of the ellipse for a change of the ratio \(\Delta r=0.1\). The results for three different values of \(t\) are shown in Fig. 6(c) on a logarithmic scale. It is clear that the proposed acoustic system is indeed exponentially sensitive to its size.
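The same exponential size sensitivity can be illustrated with a purely discrete toy version of the experiment, in which the diaphragm is mimicked by scaling the boundary couplings of the HN ring; the scaling factor \(\epsilon\) below is an illustrative stand-in for the physical diaphragm impedance, not a fitted quantity.

```python
import numpy as np

# Discrete sketch of the PBC-to-OBC transition: scale the boundary
# couplings of the HN ring of Eq. (9) by eps (eps = 1: PBC loop spectrum;
# eps = 0: OBC arc). For fixed small eps the spectral shift grows roughly
# like eps * t^{N/2}, i.e., exponentially with the system size.
def hn_ring(N, t, eps):
    H = np.diag(np.ones(N - 1), 1) + t * np.diag(np.ones(N - 1), -1)
    H[0, N - 1] = eps * t     # boundary couplings of Eq. (9),
    H[N - 1, 0] = eps * 1.0   # progressively opened by the "diaphragm"
    return H

t, eps = 2.4, 1e-6            # t = 2.4 mirrors the marginal case above
for N in (6, 10, 14, 18):
    E_obc = np.linalg.eigvals(hn_ring(N, t, 0.0))
    E_pert = np.linalg.eigvals(hn_ring(N, t, eps))
    shift = max(min(abs(e - E_obc)) for e in E_pert)
    print(N, shift)           # grows exponentially with N
```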
## IV Conclusion
In conclusion, an exact mapping of the Hatano-Nelson model to one-dimensional nonreciprocal continuous acoustic systems is presented. The mapping is achieved solely by using a transfer matrix approach, and can be applied to a plethora of systems, provided that non-reciprocity can be implemented by the given device. The experimental results show the emergence of the non-Hermitian skin effect once an asymmetric hopping is achieved and, by analyzing the complex frequencies of the acoustic modes, the theoretical model is validated. Finally, using diaphragms of different hole sizes, the transition from PBC to OBC and the ensuing exponential sensitivity to the system size are exhibited. Using the proposed method, many other variants of the HN model can be constructed in continuous media, including various nonreciprocal topological models or higher-dimensional models which profit from the interplay between topology and non-Hermiticity.
## V Acknowledgments
V.A. acknowledges financial support from the No-HeNA project funded under the program Etoiles Montantes of the Region Pays de la Loire. V.A. is supported by the EU H2020 ERC StG "NASA" Grant Agreement No. 101077954.
## Appendix A Analytical transfer matrix
Assuming that a loudspeaker with mechanical impedance \(Z_{m}\) and force factor \(B\ell\) is supplied with an electrical current \(i\), the equation of motion is given by
\[Z_{m}v=(p_{l}-p_{r})S_{m}+(B\ell)i\]
where the subscripts \(l\) and \(r\) denote the rear and front faces of the membrane of cross section \(S_{m}\).
Now, if the current is provided through an electroacoustic feedback with a static gain \(G\), such that it satisfies \(i=Gp_{l}\), then, together with the conservation of flow, one gets the following transfer matrix:
\[M=\begin{pmatrix}t&-iX-\beta\\ 0&1\end{pmatrix} \tag{10}\]
where \(t=1-\frac{GB\ell}{S_{m}^{2}}\), \(\beta=\text{Re}(Z_{m}/S_{m}^{2})\), and \(X=\text{Im}(Z_{m}/S_{m}^{2})\).
## Appendix B Mapping energy to frequency
With the feedback loop active, the hopping factor of Eq. (10) can be written as
\[t=1-\alpha, \tag{11}\]
where \(\alpha=GB\ell/S_{m}^{2}\).
For the loudspeakers of the same kind as the ones used in Ref.[34], the parameters in Eq.(10) are given by
\[X=g\left(\frac{k^{2}-k_{0}^{2}}{k}\right),\quad\beta=\frac{S_{w}}{c_{0}\rho S_{m}^{2}}\left(R_{m}+\frac{(B\ell)^{2}}{Z_{e}}\right) \tag{12}\]
with \(g=\frac{S_{w}c_{0}M_{m}}{c_{0}\rho S_{m}^{2}}\) and \(k_{0}=(c_{0}^{2}M_{m}C_{m})^{-1/2}\). Note that \(\beta\) quantifies the loudspeaker losses, and the ratio between the loudspeaker membrane section \(S_{m}\) and the waveguide section \(S_{w}\) can be used as a tuning parameter.
|
2303.17585 | The He I 10830 A line: Radiative transfer and differential illumination
effects | We study the formation of the Stokes profiles of the He I multiplet at 10830
A when relaxing two of the approximations that are often considered in the
modeling of this multiplet, namely the lack of self-consistent radiation
transfer and the assumption of equal illumination of the individual multiplet
components. This He I multiplet is among the most important ones for the
diagnostic of the outer solar atmosphere from spectropolarimetric observations,
especially in prominences, filaments, and spicules. However, the goodness of
these approximations is yet to be assessed, especially in situations where the
optical thickness is of the order of or larger than one, and radiation transfer
has a significant impact on the local anisotropy and the ensuing spectral line
polarization. This issue becomes particularly relevant in the ongoing
development of new inversion tools which take into account multi-dimensional
radiation transfer effects. To relax these approximations we generalize the
multi-term equations for the atomic statistical equilibrium to allow for
differential illumination of the multiplet components and implement them in a
one-dimensional radiative transfer code. We find that, even for this simple
geometry and relatively small optical thickness, both radiation transfer and
differential illumination effects have a significant impact on the emergent
polarization profiles. This should be taken into account in order to avoid
potentially significant errors in the inference of the magnetic field vector. | Andres Vicente Arevalo, Jiri Stepan, Tanausu del Pino Aleman, Maria Jesus Martinez Gonzalez | 2023-03-30T17:53:50Z | http://arxiv.org/abs/2303.17585v2 | # The He i 10830 A line: Radiative transfer and differential illumination effects
Andres Vicente Arevalo\({}^{1,2}\), Jiri Stepan\({}^{3}\), Tanausu del Pino Aleman\({}^{1,2}\), and Maria Jesus Martinez Gonzalez\({}^{1,2}\)

\({}^{1}\) Instituto de Astrofisica de Canarias, E-38205 La Laguna, Tenerife, Spain
\({}^{2}\) Departamento de Astrofisica, Universidad de La Laguna, E-38206 La Laguna, Tenerife, Spain
\({}^{3}\) Astronomical Institute of the Academy of Sciences, Ondrejov, Czech Republic
Received XXXX; accepted XXXX
Key words: Atomic processes - Polarization - Radiative transfer - Sun: atmosphere

###### Abstract

We study the formation of the Stokes profiles of the He i multiplet at 10830 A when relaxing two of the approximations that are often considered in the modeling of this multiplet, namely the lack of self-consistent radiation transfer and the assumption of equal illumination of the individual multiplet components. This He i multiplet is among the most important ones for the diagnostic of the outer solar atmosphere from spectropolarimetric observations, especially in prominences, filaments, and spicules. However, the goodness of these approximations is yet to be assessed, especially in situations where the optical thickness is of the order of or larger than one, and radiation transfer has a significant impact on the local anisotropy and the ensuing spectral line polarization. This issue becomes particularly relevant in the ongoing development of new inversion tools which take into account multi-dimensional radiation transfer effects. To relax these approximations we generalize the multi-term equations for the atomic statistical equilibrium to allow for differential illumination of the multiplet components and implement them in a one-dimensional radiative transfer code. We find that, even for this simple geometry and relatively small optical thickness, both radiation transfer and differential illumination effects have a significant impact on the emergent polarization profiles. This should be taken into account in order to avoid potentially significant errors in the inference of the magnetic field vector.
## 1 Introduction
The magnetic field is fundamental to understand commonly observed plasma structures such as prominences, filaments, and spicules, being responsible for their structure, properties, and even their existence (e.g., the reviews of Mackay et al. 2010 and Tsiropoula et al. 2012). Spectropolarimetric observations of the He i multiplets at 10830 A (hereafter, 10830 multiplet) and 5876 A (usually dubbed D\({}_{3}\)) have been extensively acquired and analyzed to diagnose these structures and, in particular, to infer their magnetic fields (see Trujillo Bueno & del Pino Aleman 2022, and references therein).
The formation of the Stokes profiles of these orthohelium spectral lines is somewhat difficult to model due to their sensitivity to the coronal UV/FUV illumination (Judge & Centeno 2008), which ionizes the neutral helium atoms that can then recombine to populate the relatively high excitation energy triplet states. However, this property is also what has allowed the modeling of these lines to be significantly simplified. It turns out that the 10830 multiplet and D\({}_{3}\) cannot effectively form in quiet Sun conditions, being only observable in magnetically active regions and in plasma structures such as prominences, filaments, and spicules. Because they form in more or less localized plasma regions with somewhat small optical thickness, it is generally assumed that it is possible to model these multiplets with a relatively simple slab model, without accounting for radiation transfer (RT) effects and with the most complex processes, such as the coronal illumination, abstracted into the optical depth of the slab. This fact has been exploited in the HAZEL inversion code (Asensio Ramos et al. 2008), which has been widely used for the analysis of spectropolarimetric observations in prominences, filaments, and spicules (see the review by Trujillo Bueno & del Pino Aleman 2022, and references therein).
Apart from this apparent simplicity in their modeling, the 10830 and D\({}_{3}\) multiplets are both well observable with today's instrumentation, and their spectral line polarization is sensitive to magnetic fields with strengths from a fraction of a gauss to a few hundred gauss, values expected to be typical in the structures of the outer atmosphere. Moreover, their polarization is sensitive to the magnetic field via the Hanle and Zeeman effects, and elastic collisions with neutral hydrogen atoms in chromospheric and prominence plasma are unable to destroy the atomic polarization of the He i levels (Casini et al. 2009). All these facts have made these He i multiplets really useful for the inference of the magnetic field vector in the above mentioned regions of the solar atmosphere.
The formation of the Stokes profiles of the 10830 and D\({}_{3}\) multiplets can be described with the quantum theory of atomic line formation (Landi Degl'Innocenti & Landolfi 2004). In particular, the multi-term model atom is the most suitable for this application (see sections 7.5, 7.6, and 13.4 of Landi Degl'Innocenti & Landolfi 2004, for a detailed description of the problem). One important requirement for the applicability of the multi-term model atom is that the exciting radiation field must be spectrally flat1 over the wavelength range spanned by the multiplet components. For the 10830 multiplet this implies that the blue and red components (remember that the red component is a blend of two of the lines of this triplet), which are about 1 A
apart, must be excited by identical radiation fields (and likewise for D\({}_{3}\) and its components).
This assumption is very well satisfied if the optical thickness of the multiplet components is small (\(\tau<1\)) and the exciting illumination is that of the relatively flat continuum of the quiet photosphere. However, observations clearly indicate that the optical thickness of the 10830 multiplet can often exceed one (e.g., Diaz Baso et al. 2019a,b). For optically thick enough plasmas, RT effects within the region of formation of the He i multiplets lead to a spectrally non-flat radiation field. Due to the non-negligible separation in wavelength between the red and blue components of the 10830 multiplet, and the difference of their optical thicknesses, the radiation field becomes indeed non-flat and the multi-term model atom equations are no longer suitable. Even though the potential importance of these RT effects in the 10830 multiplet has been recognized before (see Trujillo Bueno & Asensio Ramos 2007), no detailed investigation of this problem has ever been conducted.
The so-called flat-spectrum condition or approximation is due to the fact that the theory of complete frequency redistribution (CRD, Landi Degl'Innocenti & Landolfi 2004) is based on the first-order perturbative expansion of the matter-radiation interaction. In order to relax this condition, it is necessary to use a higher-order theory, which allows considering coherent scattering processes and partial frequency redistribution (PRD) effects (Stenflo 1994; Bommier 1997a,b; Casini et al. 2014, or Bommier 2017). However, including PRD effects dramatically increases the computing time requirements, making it not the most desirable approach for a multiplet that can be successfully modeled by assuming complete frequency redistribution (Asensio Ramos et al. 2008). Assuming a multi-level model atom (see sections 7.1 and 7.2 of Landi Degl'Innocenti & Landolfi 2004) also naturally relaxes this assumption. However, the quantum interference between the upper levels of the blended red components of the 10830 multiplet needs to be taken into account to correctly model their polarization, which is not possible within this model.
In this paper, we propose a new approach to the 10830 multiplet formation that is more general than the multi-term model atom, at least for the magnetic field strengths relevant for chromospheric and coronal spectropolarimetry. Our formulation allows us to treat separately the illumination of the red and blue components of the 10830 multiplet. The multi-term model atom is the limit case of our method, strictly valid in case of spectrally flat illumination and negligible optical thickness of the medium. In contrast to the multi-term approximation, our approach allows us to solve problems out of the local thermodynamic equilibrium approximation (NLTE) in plasmas of any optical thickness. As we show below, this approach leads to significant modifications of the traditional results in 1D slab models. Moreover, NLTE RT plays an even more important role in the formation of the lines of the outer solar atmosphere if 3D effects are considered (Stepan et al. 2022).
In order to be able to consider NLTE models involving the 10830 line, we need to realize that the quantum interference between the upper level of the blue component and the other two levels in the term is not expected to have a significant impact for the typical magnetic fields found in the solar atmosphere. We thus derive a new set of statistical equilibrium equations (SEE), starting from the multi-term model atom of Landi Degl'Innocenti & Landolfi (2004) and explicitly removing the quantum interference between the upper level of the blue component and the rest of the levels, which allows us to introduce different pumping radiation fields for the blue and red components of the 10830 multiplet. We have implemented this new set of equations into a one-dimensional RT code. In Sect. 2, we describe the new set of SEE and some details about the RT code. In Sect. 3, we carry out a series of numerical experiments to study the impact of RT and differential illumination effects on the emergent Stokes profiles. Finally, we present our conclusions in Sect. 4.
## 2 Formulation of the problem
The theory of atomic line polarization summarized in the monograph by Landi Degl'Innocenti & Landolfi (2004) is formulated in the frame of the so-called complete frequency redistribution (CRD). This limit of atom-photon interactions, which implies a complete lack of correlation between the frequencies of the absorbed and emitted photons in scattering processes, has been immensely useful for inferring magnetic fields of the solar atmosphere during the last decades.
The multi-term model atom, described in Chapter 7 of Landi Degl'Innocenti & Landolfi (2004) is the most suitable to describe the 10830 multiplet as it accounts for quantum interference between states \(|\beta LS\,JM\rangle\) and \(|\beta LS\,J^{\prime}M^{\prime}\rangle\) of different \(J\) and \(J^{\prime}\) levels of the same \(\beta LS\) term. However, in order to ensure physical consistency when accounting for quantum interference between non-degenerate atomic levels, the incident radiation field must have a flat spectrum across a frequency range wider than the separation of those levels. In addition, the CRD theory is strictly valid if the incident radiation field is flat on a frequency interval much larger than the natural width of the atomic states. Due to the small natural width of the 10830 multiplet sublevels, this condition is automatically satisfied if the spectrum is flat across the whole multiplet.
When these conditions are satisfied, the absorption and stimulated emission within a spectral line depend on the frequency independent radiation field tensor
\[\overline{J}^{K}_{Q}=\int J^{K}_{Q}(\nu)\phi(\nu)\;d\nu\,, \tag{1}\]
where \(\phi(\nu)\) is the normalized absorption profile of the line, and \(J^{K}_{Q}(\nu)\) is the radiation field tensor at each frequency \(\nu\). Note that the \(\phi(\nu)\) absorption profile is representative of the absorption in the whole multiplet, and thus it has the shape of the absorptivity and not that of a Voigt profile. Strictly speaking, this approach is only valid if all the multiplet components have comparable absorptivities and are illuminated by similar enough radiation fields, which is not necessarily true for the 10830 multiplet.
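The effect of a non-flat radiation field on the averages of Eq. (1) can be illustrated with a toy numerical example. The sketch below uses schematic Gaussian profiles and a schematic self-absorbed \(J_{\nu}\) (none of which correspond to the actual He i absorptivity or to a real solar illumination) to show that \(\overline{J}\)(red), \(\overline{J}\)(blue), and the full-multiplet average differ once the spectrum is not flat.

```python
import numpy as np

# Toy illustration of Eq. (1): weight the same non-flat J_nu with the
# full-multiplet profile or with separate red/blue profiles.
nu = np.linspace(-4.0, 4.0, 4001)     # offset from multiplet center [arb.]
dnu = nu[1] - nu[0]

def gauss(nu0, w=0.2):
    g = np.exp(-((nu - nu0) / w) ** 2)
    return g / (g.sum() * dnu)        # unit-area profile

phi_blue = gauss(-0.5)                          # schematic blue component
phi_red = 0.5 * (gauss(0.4) + gauss(0.5))       # schematic blended red pair
phi_tot = (phi_blue + 2.0 * phi_red) / 3.0      # schematic multiplet profile

J = 1.0 - 0.3 * np.exp(-nu**2)        # a non-flat radiation field

for name, phi in (("multiplet", phi_tot), ("blue", phi_blue), ("red", phi_red)):
    print(name, (J * phi).sum() * dnu)   # the averaged Jbar values differ
```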
It is then clear that, if we want to introduce different radiation fields for the blue and red components in the SEE, we need to neglect quantum interference between the blue component's upper level and any of the red component's upper levels. Consequently, only the two red component's upper levels can be coherent. In this way we can still fulfill the validity condition of the CRD approximation, namely spectral flatness on a frequency interval much larger than the natural width of each transition, as well as the validity condition of the multi-term atom model, namely spectral flatness in a frequency range wider than the separation between levels that can be coherent.
Generally, the further apart in energy two levels are, the less significant the quantum interference between them is. The separation of the \(J=0\) state, the upper level of the blue component of the 10830 multiplet, from the rest of the states is about 30 GHz or 0.1 meV, i.e., about four orders of magnitude larger than the natural width of the Zeeman states, which is about 1.6 MHz or \(10^{-5}\) meV. This energy separation remains very large even if the magnetic states are modified by a magnetic field with strength up
to \(\sim 5\) kG, when some of the Zeeman components of the upper levels of the red component cross with the blue component's upper level (see Fig. 1). Therefore, for the typical magnetic fields found in the solar atmosphere, we can safely neglect quantum interference between the upper levels of the blue and red components, and thus we can consider different radiation fields for each component while ensuring that the CRD theory remains strictly valid.
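A back-of-the-envelope check of the energy scales involved in this argument (Planck constant and Bohr magneton expressed in meV units) is given below.

```python
# Energy scales quoted above, all in meV.
h = 4.1357e-12      # Planck constant [meV s]
mu_B = 5.788e-2     # Bohr magneton [meV/T]

separation = h * 30e9      # J = 0 level vs the rest: ~30 GHz
natural_width = h * 1.6e6  # natural width of the Zeeman states: ~1.6 MHz
zeeman_5kG = mu_B * 0.5    # Zeeman energy scale at 5 kG = 0.5 T

print(separation)      # ~1.2e-1 meV
print(natural_width)   # ~6.6e-6 meV, about four orders of magnitude smaller
print(zeeman_5kG)      # ~2.9e-2 meV; with Lande factors ~2 and |M| ~ 2 the
                       # shifts become comparable to the separation, hence
                       # the level crossings at kG field strengths
```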
Consequently, the only difference with respect to the standard multi-term SEE (see section 7.6 in Landi Degl'Innocenti & Landolfi 2004) is that, instead of considering a single radiation field for the whole multiplet, we distinguish between the radiation field in the red and blue components, forcing the quantum interference between magnetic states pertaining to different components to vanish. Therefore, instead of a single \(\overline{J}_{Q}^{K}\) radiation tensor common to all the multiplet sublevels, we now have \(\overline{J}_{Q}^{K}\)(red) or \(\overline{J}_{Q}^{K}\)(blue), resulting from the same average as in Eq. (1), but integrating over \(\phi(\nu)_{\rm red}\) and \(\phi(\nu)_{\rm blue}\) (the absorption profiles accounting only for the contributions of the red or the blue component), respectively. In our approach we follow the standard derivation of the equations and we diagonalize the atomic Hamiltonian in the incomplete Paschen-Back effect regime. Consequently, the RT coefficients in the RT equation have exactly the same formal expression as the corresponding coefficients of Landi Degl'Innocenti & Landolfi (2004). These relatively minor changes in the SEE allow us to consider a much broader set of physical scenarios.
We have implemented these SEE in a 1D RT code, which solves the NLTE problem of the generation and transfer of polarized radiation.
## 3 Numerical experiments
In this section we show the results of some numerical experiments that illustrate how accounting for differential radiation between the red and blue components of the 10830 multiplet, as well as for RT effects, can lead to strikingly different emergent Stokes profiles.
We compare the results obtained with our code with those obtained under the assumption of flat-spectrum and negligible impact of the RT effects on the atomic density matrix. For this physical scenario, the RT equations have the solution (see, e.g., Asensio Ramos et al. 2008):
\[\mathbf{I}=\left[\mathbf{1}+\psi_{\rm O}\mathbf{K}^{\prime}\right]^{-1}\left[\left({\rm e}^{-\tau}\mathbf{1}-\psi_{\rm M}\mathbf{K}^{\prime}\right)\mathbf{I}_{\rm inc}+(\psi_{\rm M}+\psi_{\rm O})\mathbf{S}\right]\,, \tag{2}\]
where \(\mathbf{1}\) is the unit matrix, \(\mathbf{K}^{\prime}=\mathbf{K}/\eta_{I}-\mathbf{1}\), with \(\mathbf{K}\) the propagation matrix and \(\eta_{I}\) the absorption coefficient for the intensity, \(\mathbf{S}\) is the source function vector, \(\tau\) is the optical thickness along the line of sight, and \(\mathbf{I}_{\rm inc}\) is the Stokes vector of the incident radiation (at the lower boundary of the slab). The coefficients \(\psi_{\rm O}\) and \(\psi_{\rm M}\) only depend on the optical thickness along the propagation direction at a particular frequency and angle, and their expressions can be found in Kunasz & Auer (1988). Due to its simplicity and straightforward evaluation, the linear Eq. (2) is commonly used in practical applications to invert spectropolarimetric data of the 10830 multiplet.
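A minimal sketch of Eq. (2) is given below. The \(\psi_{\rm M}\) and \(\psi_{\rm O}\) weights are our reconstruction of the standard linear short-characteristics coefficients attributed above to Kunasz & Auer (1988), and the sanity check uses the unpolarized limit \(\mathbf{K}^{\prime}=0\), where Eq. (2) must reduce to \(I=I_{\rm inc}\,{\rm e}^{-\tau}+S(1-{\rm e}^{-\tau})\).

```python
import numpy as np

# Constant-property slab solution of Eq. (2). Kp (= K/eta_I - 1), S, and
# I_inc are constants; psi_M and psi_O are the linear short-characteristics
# weights for a slab of optical thickness tau.
def slab_stokes(tau, Kp, S, I_inc):
    u0 = 1.0 - np.exp(-tau)
    u1 = 1.0 - (1.0 + tau) * np.exp(-tau)
    psi_M = u1 / tau              # weight of the upwind source function
    psi_O = u0 - u1 / tau         # weight of the local source function
    one = np.eye(4)
    rhs = (np.exp(-tau) * one - psi_M * Kp) @ I_inc + (psi_M + psi_O) * S
    return np.linalg.solve(one + psi_O * Kp, rhs)

# Unpolarized sanity check (Kp = 0): I = I_inc e^{-tau} + S (1 - e^{-tau}).
tau = 2.0
S = np.array([0.2, 0.0, 0.0, 0.0])
I_inc = np.array([1.0, 0.0, 0.0, 0.0])
print(slab_stokes(tau, np.zeros((4, 4)), S, I_inc)[0])
print(np.exp(-tau) + 0.2 * (1.0 - np.exp(-tau)))   # identical values
```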
As noted above, Eq. (2) is applicable if both \(\mathbf{K}^{\prime}\) and \(\mathbf{S}\) are constant along the ray of propagation. This is a good approximation if the optical thickness is below unity. Note that, while this approximation does include radiation transfer via Eq. (2), it is not self-consistent, as the radiation field is assumed fixed and constant throughout the whole extension of the slab. However, for larger optical thicknesses this approximation becomes unsuitable because RT starts playing a significant role (see Trujillo Bueno & Asensio Ramos 2007). The problem then becomes non-local and non-linear, and the notably different opacities between the red and blue components also lead to the non-fulfillment of the spectral flatness approximation.
Equation (2) is strictly valid for an optically thin slab illuminated with a spectrally flat incident radiation. We have checked that, in this limit, our calculations coincide with the result of Eq. (2) for magnetic fields from zero to several thousands of gauss (see Sect. 3.2).
### Impact on the radiation field anisotropy
In this experiment, we consider a slab with constant properties located 6 Mm above the solar surface, with its normal axis parallel to the solar radius. The optical thickness of the slab is \(\tau=2\) at the center of the red component of the 10830 multiplet. We solve the self-consistent RT problem in this slab model.
In this first experiment we assume that the slab is unmagnetized (\(\mathbf{B}=0\)), and we analyze the wavelength variation of the \(J_{0}^{0}\) and \(J_{0}^{2}\) (mean intensity and anisotropy, respectively) components of the radiation field tensor at different optical depths from the top, \(\tau=0\), to the bottom, \(\tau=2\), boundaries (see
Figure 1: Energy of the magnetic sublevels of the lower (left panel) and upper (right panel) terms of the 10830 Å multiplet as a function of the magnetic field strength. The natural widths of the upper-term levels are about \(10^{-5}\) meV, well below the plotting resolution. The zero energy offsets in each panel correspond to the mean energies of the respective terms.
Fig. 2). In Eq. (2), the radiation field tensor is constant throughout the slab and spectrally flat (red dashed line, hereafter non-RT model). However, the radiation field tensor components in the self-consistent solution (solid curves, hereafter RT model) show a strong dependence on height due to the combination of RT effects and the differential absorption and emission in the red and blue components of the multiplet.
At the top of the slab (\(\tau=0\), dark blue curve) the mean intensity spectrum (left panel of Fig. 2) resembles the typical 10830 absorption profiles. This is easily understood mostly in terms of the absorption, within the slab, of the intensity incident at the bottom boundary. At the bottom of the slab (\(\tau=2\), yellow curve), however, the mean intensity shows an excess with respect to the mean intensity of the incident continuum radiation. This is due to the radiation that is emitted from within the slab, traveling "downward".
Regarding the radiation field anisotropy (right panel of Fig. 2), we see a reduction with respect to the incident anisotropy, a reduction which is non-monotonic with the optical depth. This reduction is mainly a consequence of the horizontal RT within the slab: for inclined lines of sight, there is a larger amount of emitting material, which increases the negative contribution of the radiation coming from directions forming an angle with the slab's axis larger than the van Vleck angle (see, e.g., Trujillo Bueno 2001). For this reason, the largest differences between the non-RT and RT model's anisotropies are found at optical depths around \(\tau=1\), where the radiation is more likely to start escaping the slab along the vertical direction, while the more inclined rays are still optically thick.
From this experiment, it is clear that RT and the relaxation of the flat spectrum approximation can have a considerable impact on the radiation field tensor components which, as we show in the subsequent experiments, can significantly affect the emergent Stokes parameters.
### Impact on the emergent Stokes profiles
Even with a relatively small optical thickness of \(\tau=2\), NLTE RT effects significantly modify the radiation field anisotropy within the slab. Since this quantity is crucial for the emitted linear scattering polarization we can expect RT effects to have a significant impact on the emergent Stokes profiles as well.
We consider the same physical scenario as in Sect. 3.1, but with a uniform and horizontal (perpendicular to the slab's axis) magnetic field of \(B=10\,\mathrm{G}\). After solving the self-consistent NLTE problem, we calculate the emergent Stokes profiles for a line of sight parallel to the slab's axis. Note that if the slab and the incident illumination at the bottom boundary are axially symmetric, as would be the case were it not for the chosen magnetic field, there would be no polarization when observing along the slab's axis due to the symmetry; the magnetic field is thus necessary to generate scattering linear polarization via the Hanle effect in the chosen model and line of sight. This would correspond, for example, to the observation of a filament close to disk center. By choosing the reference direction for the linear polarization parallel to the magnetic field vector, the only non-zero Stokes parameters are \(I\) and \(Q\).
In Fig. 3 we show the emergent Stokes profiles for three cases: i) the non-RT model (orange solid curve), ii) the RT model with the approximation of a flat spectrum across the whole multiplet (dashed blue curve, hereafter RT-flat model), and iii) the RT model (solid blue curve).
Regarding the intensity (left panel of Fig. 3), the impact of the approximations is relatively minor. Whether or not a flat spectrum is assumed across the whole multiplet does not have a significant impact if one solves the self-consistent problem, and fully neglecting RT effects only has some impact on the depth of the red component. This difference could lead to a slight error in the determination of the optical depth or the Doppler width (equivalently, the temperature) of the slab, but we do not expect such a difference to be significant.
More interesting is what happens to the polarization (right panel of Fig. 3). The flat spectrum approximation in the self-consistent solution (RT-flat model) results in a significant increase of the emergent linear polarization, changing the ratio between the polarization signals of the blue and red components. Moreover, if RT is also neglected (non-RT model), the calculated emergent linear polarization is even larger, resulting in a more than a factor of three increase with respect to the RT model.
Consequently, both RT effects and the flat-spectrum approximation have a significant impact on the linear polarization profiles. This difference in the signal is directly related to the changes in the frequency independent anisotropy as defined in Eq. (1) with the suitable absorption profiles. In particular, the
Figure 2: Mean intensity (\(J_{0}^{0}\); left panel) and anisotropy (\(J_{0}^{2}\); right panel), normalized to the mean intensity (\(J_{0}^{0}\)) of the incident continuum, at different optical depths \(\tau\), measured from the top boundary at the center of the red component of the 10830 multiplet. The colors correspond to different optical depth layers of the fully consistent NLTE solution. The red dashed line corresponds to the incident radiation field (\(I_{\mathrm{inc}}\) in Eq. 2).
anisotropy in the red component, which determines the alignment of the red component's upper levels, is smaller in the RT model than in the non-RT model throughout the whole slab, and smaller than the anisotropy of the RT-flat model in the upper part of the slab (\(\tau\leq 1\)), from where most of the photons emerge. Curiously, the anisotropy of the blue component is significantly larger than that of the RT-flat model and of the red component in the RT model (although still smaller than the anisotropy in the non-RT model). However, the blue component's upper level is non-polarizable (\(J=0\)) and thus the polarization of this line is fully due to dichroism, that is, due to the atomic alignment of the lower level (Trujillo Bueno et al. 2002). It turns out that the impact of the red component on the lower level alignment is such that the alignment is smaller when relaxing the flat spectrum approximation, and therefore the emergent linear polarization in the RT model is consistently smaller across the whole multiplet.
We can further study the impact of these approximations by comparing the fractional linear polarization signal at the peaks of the blue and red components for different magnetic field strengths and for several optical depths (see Fig. 4). For the chosen model (disk-center line of sight, axially symmetric slab model), the horizontal magnetic field breaks the axial symmetry and induces linear polarization. Up to a few gauss, in the Hanle regime, the polarization increases with the magnetic field. For larger magnetic field strengths, the Hanle effect is in saturation and the linear polarization is insensitive to changes in the magnetic field strength (plateau in the linear polarization in Fig. 4). For magnetic field strengths in the hundreds of gauss, the Zeeman effect starts affecting the linear polarization (end of the plateau toward the largest field strengths in Fig. 4). For small optical depths (e.g., \(\tau=0.01\), optically thin limit, see top-left panel) the three approaches produce, as expected, the same polarization signal for every magnetic field. However, for increasingly larger optical depths, RT effects induce a significant reduction of the anisotropy and thus of the polarization signal, which is more significant the larger the optical depth. This behavior will saturate when the optical depth is large enough as to thermalize the bottom boundary of the slab. The relaxation of the flat-spectrum approximation also has an impact on the emergent linear polarization signals (compare the solid and dashed curves in Fig. 4), but the most significant reduction in the polarization emergent from the slab is due to RT effects (compare the dashed and dotted curves in the figure).
In summary, the NLTE solution leads, in this particular model, to a significant decrease of the anisotropy within the slab and to a decrease of the 10830 polarization amplitude. Consequently, the interpretation of observations based on the optically thin slab model with the flat-spectrum approximation, the non-RT model, could lead to significant errors in the determination of the slab's physical properties, especially the magnetic field vector. This error can be critical depending on the particular physical scenario: assume an observation of a solar filament with a horizontal magnetic field in the Hanle saturation regime (e.g., 20 gauss), for which we are able to delimit its height (if not, we would face a different problem in the almost complete degeneracy between height and magnetic field strength as inversion parameters). The non-RT model would overestimate the anisotropy, and the inversion algorithm would need to pick a magnetic field strength in the Hanle regime, where the linear polarization is still sensitive to the magnetic field strength, in order to find a smaller polarization signal that fits the observation for the overestimated anisotropy. We would then infer magnetic field strengths of fractions of a gauss (\(\mathcal{O}(10^{-1})\)), instead of a magnetic field in the saturation regime (\(\mathcal{O}(10^{1}\)-\(10^{2})\)). The opposite would happen in the prominence scenario, where a horizontal field depolarizes the zero-field scattering signal. In order to compensate for the excess in anisotropy from the non-RT assumption, the inferred magnetic field is increased, which could lead to the identification of field strengths of fractions of a gauss (\(\mathcal{O}(10^{-1})\)) as magnetic fields in the saturation regime (\(\mathcal{O}(10^{1}\)-\(10^{2})\)). We emphasize, however, that a non-RT model would also be unable to correctly fit the linear polarization profiles altogether. In the non-RT model, the ratio between the amplitudes of the red and blue components is fixed for each value of the signal of any of the two components. Because the ratio between these two signals with RT is different, there is no combination of parameters (in a constant property slab) such that the emergent Stokes parameters from Eq. (2) fit the RT profiles.
Although our conclusions are not model dependent,2 their quantification is. While the non-RT slab model shows these
Figure 3: Intensity \(I\) (left panel) and \(Q\) (right panel) line profiles, normalized to the continuum intensity, for both the constant-property slab model solution (orange) and the fully consistent NLTE solution (blue), for a disk-center observation of the \(\tau=2\) slab with a horizontal magnetic field of \(B=10\) G. The positive \(Q\) direction is parallel to the projection of the magnetic field vector onto the plane of the sky.
While the non-RT slab model shows these issues, the RT slab model finds itself in the opposite extreme, that is, where the optical depth tends to infinity in the horizontal direction. We must thus emphasize that, while our modeling exposes a potential problem in the inference of magnetic fields with the non-RT model, what we show in this paper is the worst-case scenario. First, the reduction of the anisotropy calculated in the slab is an upper limit. Secondly, the observation of circular polarization can alleviate the problem by providing a constraint on the longitudinal component of the magnetic field (we have intentionally chosen a physical scenario without circular polarization in this paper), even though this cannot fully solve issues related to the direction of the magnetic field vector and may even make it impossible to fit the four Stokes parameters simultaneously.
These findings are consistent with what is found in the observation and analysis of the 10830 multiplet in prominences and filaments. In general terms, as explained above, they can explain why prominences seem to show magnetic fields in the saturation regime, while quiet-Sun filaments seem to show magnetic fields of fractions or units of gauss, i.e., in the Hanle regime (see Trujillo Bueno & del Pino Alemán 2022, and references therein). Regarding the particular case of filaments, there are, as far as we know, only a few filament observations with the dichroic blue component signal, and the inversion is usually unable to achieve a completely satisfactory fit. Trujillo Bueno et al. (2002) show the fit to a filament profile, demonstrating the dichroic origin of the blue component's linear polarization signal; however, their fit does not match the blue component's intensity (see their Fig. 4). Lagg et al. (2004) show a fit to a profile in a flux emergence region, neglecting lower-level polarization, and are unable to fit the blue component's linear polarization (see their Figs. 4 and 5). Later, Asensio Ramos et al. (2008) showed a fit to the same profiles including lower-level polarization and, while improved, they are unable to simultaneously fit the linear polarization amplitude of both the blue and red components (see their Fig. 16, as well as Fig. 14 for an example with another filament observation). Diaz Baso et al. (2019a,b) found and studied this impossibility in the fitting, albeit their observations show peculiar polarization signals with the same sign in both red and blue components, which require additional complexity in the physical model beyond RT3. Other filament observations include those of Kuckein et al. (2012) and Xu et al. (2012), who found mostly Zeeman linear polarization profiles in filaments on top of active regions, which could thus be fit (see Figs. 7-9 in Kuckein et al. 2012 and Fig. 4 in Xu et al. 2012). Curiously, Kuckein et al. (2009) posit that a reduction factor of 0.2 was needed in the anisotropy in order to explain their filament observations, a reduction similar to the one we find in our model for the red component's anisotropy at \(\tau=1\) (\(\sim 0.17\), see Fig. 2). All in all, observations of filaments in the 10830 multiplet are relatively scarce, but they tend to show that the non-RT model may not be enough to find a satisfactory fit to the observed linear polarization profiles. Consequently, in order to correctly interpret observations of filaments with relatively large optical depth, we think that RT must be taken into account, that the flat-spectrum approximation must be relaxed, and that a model more complex than a constant-property slab is necessary to explain observations such as those by Diaz Baso et al. (2019a,b).
Footnote 3: We have not been able to find emergent Stokes profiles with same-sign signals with our code in a constant-property slab, except for particular combinations of magnetic fields and velocities resulting in amplitudes much smaller than those observed
## 4 Conclusions
We have investigated the impact of RT effects on the polarization of the 10830 multiplet. In particular, we have studied the effect that relaxing the flat-spectrum approximation (which requires considering that both components are illuminated by identical radiation fields) has on the anisotropy and on the emergent linear scattering polarization. To this end we have modified the SEE for the multi-term atom, neglecting quantum interference terms between the blue component's upper level and the two upper levels of the red component. Moreover, instead of calculating a single average radiation field for the whole multiplet (Eq. 1), we compute the equivalent quantity for each of the components, where the absorption profile is constructed by including only contributions to the absorptivity from magnetic line components pertaining to the blue or red component of the 10830 multiplet, respectively. We have implemented this modified set of equations into a 1D RT code and calculated the 10830 multiplet emergent Stokes profiles in a constant-property slab model in order to compare our results with those obtained with the usual modeling assumptions for these lines, namely, no RT and a flat spectrum across the whole multiplet (non-RT model).
In the optically thin limit, our self-consistent calculations coincide with those of the non-RT model, as expected. As we increase the optical depth of the slab, the results start to diverge, and significant differences appear already for moderate optical depths. First, allowing for RT within the slab significantly affects the radiation field. The mean intensity in the top (bottom) region of the slab shows a significant defect (excess) in the lines due to the absorption (emission) by the rest of the slab below (above). The radiation field anisotropy is instead diminished with respect to the non-RT case due to the significant contribution of the radiation propagating along the more inclined directions, which are optically thicker due to the geometry, as anticipated by Trujillo Bueno & Asensio Ramos (2007).
More important, especially for the diagnostics of the magnetic field, are the differences in the emergent Stokes profiles. For a slab of optical depth \(\tau=2\) we only find a small difference in the red component's intensity, which could slightly impact the determination of the optical depth and temperature (via the Doppler width). However, the difference is remarkable for the linear scattering polarization, of the order of a factor of three in the signal of both components. Due to the larger anisotropy in the non-RT model, which is diminished when horizontal RT within the slab is accounted for, the linear polarization is significantly overestimated. This can undoubtedly lead to an underestimation of the magnetic field or the filament height for this particular disk-center filament configuration, a difference that can be of orders of magnitude in the magnetic field strength in the worst-case scenarios. In fact, filament observations showing scattering polarization signals in the blue component, as far as we know, usually cannot be completely fit for both the blue and red components. Moreover, some observations show signals with the same sign in both the blue and red components, something we have not been able to reproduce in a constant-property slab even with RT, which could mean that the constant-property slab model (both RT and non-RT) is too simplistic to model these observations.
The impact of RT will be even more significant in full 3D geometry (e.g., Stepan & Trujillo Bueno 2013). First, a non-homogeneous volume of plasma with enough optical depth will not only reduce the anisotropy of the incoming radiation from the underlying disk, but also contribute to the breaking of axial symmetry, an additional source of linear polarization which in the commonly used slab model can only be accounted for by the magnetic field. Secondly, recent 3D NLTE RT calculations in an academic two-level atom model indicate that, already for optical depths as small as \(\tau=1\), RT plays an important role in spectral line formation (see Fig. 9 of Stepan et al. 2022).
Last but not least, we note that another important multiplet of He i, namely the D\({}_{3}\) multiplet at 5876 Å, is most likely optically thin in all the structures of the outer solar atmosphere and is therefore less prone to the effects discussed in this paper. However, since the lower term of D\({}_{3}\) is the upper term of 10830, both multiplets are coupled. Investigating the impact of the 10830 transfer on the D\({}_{3}\) line remains one of our research topics for the near future.
The results presented in this paper expose a potential problem of the simplified non-RT modeling of such complex structures. While its simplifications allow for the implementation of extremely fast inversion codes, one needs to be careful when the plasma conditions are such that these simplifications no longer hold. This emphasizes the relevance of developing novel diagnostic techniques that can account for important physical ingredients such as the full three-dimensional geometry and RT effects. For this reason, we are developing an inversion code based on the ideas presented in Stepan et al. (2022), and will continue the investigation started in this paper with significantly more generality (with 3D RT) in a forthcoming publication.
###### Acknowledgements.
We are grateful to Luca Belluzzi, Javier Trujillo Bueno, and Andres Asensio Ramos for a number of suggestions that helped us to improve the paper. We acknowledge the funding received from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (ERC Advanced Grant agreement No 742265). J.S. acknowledges the financial support of grant 19-20632S of the Czech Grant Foundation (GACR) and the support from project RVO:67985815 of the Astronomical Institute of the Czech Academy of Sciences. T.P.A.'s participation in the publication is part of the Project RYC2021-034006-I, funded by MCIN/AEI/10.13039/501100011033 and the European Union "NextGenerationEU"/PRTR. M.J.M.G.'s participation in this research has been supported by the project PGC2018-102108-B-I00 of the Spanish Ministry of Science and Innovation and by financial support through the Ramon y Cajal fellowship. We also acknowledge the community effort devoted to the development of the following open-source packages that were used in this work: numpy (numpy.org, Harris et al. 2020) and matplotlib (matplotlib.org, Hunter 2007). We have made the code publicly available in a GitHub repository (andrewa/He_1083_RT).
|
2307.05476 | Fisher-Weighted Merge of Contrastive Learning Models in Sequential
Recommendation | Along with the exponential growth of online platforms and services,
recommendation systems have become essential for identifying relevant items
based on user preferences. The domain of sequential recommendation aims to
capture evolving user preferences over time. To address dynamic preference,
various contrastive learning methods have been proposed to target data
sparsity, a challenge in recommendation systems due to the limited user-item
interactions. In this paper, we are the first to apply the Fisher-Merging
method to Sequential Recommendation, addressing and resolving practical
challenges associated with it. This approach ensures robust fine-tuning by
merging the parameters of multiple models, resulting in improved overall
performance. Through extensive experiments, we demonstrate the effectiveness of
our proposed methods, highlighting their potential to advance the
state-of-the-art in sequential learning and recommendation systems. | Jung Hyun Ryu, Jaeheyoung Jeon, Jewoong Cho, Myungjoo Kang | 2023-07-05T05:58:56Z | http://arxiv.org/abs/2307.05476v1 | # Fisher-Weighted Merge of Contrastive Learning Models in Sequential Recommendation
###### Abstract
Along with the exponential growth of online platforms and services, recommendation systems have become essential for identifying relevant items based on user preferences. The domain of sequential recommendation aims to capture evolving user preferences over time. To address dynamic preference, various contrastive learning methods have been proposed to target data sparsity, a challenge in recommendation systems due to the limited user-item interactions. In this paper, we are the first to apply the Fisher-Merging method to Sequential Recommendation, addressing and resolving practical challenges associated with it. This approach ensures robust fine-tuning by merging the parameters of multiple models, resulting in improved overall performance. Through extensive experiments, we demonstrate the effectiveness of our proposed methods, highlighting their potential to advance the state-of-the-art in sequential learning and recommendation systems.
Machine Learning, ICML
## 1 Introduction
With the exponential growth of online platforms and services, a significant amount of data is being generated daily, and recommendation systems have become crucial in utilizing this data effectively. These systems aim to identify relevant items based on user preferences and interests. As user preferences evolve over time, sequential recommendation has gained attention as a subfield in this area. We address the problem of sequential recommendation as follows.
Let \(\mathcal{U}=\{u_{1},u_{2},\cdots,u_{|\mathcal{U}|}\}\) be the set of users and \(\mathcal{V}=\{v_{1},v_{2},\cdots,v_{|\mathcal{V}|}\}\) be the set of items. The sequence of user-item interactions for \(u_{i}\) is a chronologically ordered list \(s_{i}=[v_{1}^{u_{i}},v_{2}^{u_{i}},\cdots,v_{t}^{u_{i}},\cdots,v_{n_{u_{i}}}^{u_{i}}]\), where user \(u_{i}\in\mathcal{U}\) interacts with item \(v_{t}^{u_{i}}\in\mathcal{V}\) at time step \(t\). The length of the sequence for user \(u_{i}\) is \(n_{u_{i}}\), and our objective is to build a model predicting the item with which the user will interact at the next time step, i.e.,
\[p\left(v_{n_{u_{i}}+1}^{u_{i}}=v\,\Big{|}\,s_{i}\right). \tag{1}\]
The previous methodologies typically employ similar model structures but utilize various learning frameworks (Xie et al., 2022; Qiu et al., 2022). Prior research has shown that ensemble methods yield significant benefits when multiple learning frameworks are employed (Gontijo-Lopes et al., 2021).
We propose a practical and feasible method to ensemble the parameters of models trained with different contrastive learning techniques in sequential recommendation. Building on previous research and experiments, the purpose of this study is to effectively aggregate the parameters \(\theta\) obtained under various learning frameworks and hyperparameter settings.
By assuming a posterior distribution over the parameters \(\theta_{m}\) of each \(m\)-th model (Sec. 3.1), we achieved more effective ensemble results. This approach allowed us to capture the uncertainty associated with each model's parameter estimates and leverage this information to enhance the ensemble process.
Figure 1: Overview of Parameter Merging. Sharing the same model architecture, the models differ in how the contrastive loss is constructed. We merge models using a weighted sum, where the weights are determined from the posterior distribution of each model's parameters, assumed to be Gaussian.
By considering the posterior distributions, we were able to account for the variability in parameter values across different models and obtain a more robust and comprehensive ensemble outcome.
## 2 Related Works
Researchers have explored various ensemble methods, including bootstrapping, bagging, and boosting, to improve model performance (Ganaie et al., 2022; Breiman, 1996; 2001; Natekin and Knoll, 2013; Liu et al., 2014). Ovadia et al. (Ovadia et al., 2019) demonstrated the accuracy of ensembles even in the presence of distribution shift, while Mustafa et al. (Mustafa et al., 2020) proposed a method that combines fine-tuned subsets of pre-trained models to achieve high accuracy and resilience to distribution shift. Parameter merging is another technique to reduce model size and computational requirements (Houlsby et al., 2019). However, ensemble methods often require additional training, which can be computationally expensive and time-consuming.
### Diverse Learning Framework
Wenzel et al. (2020) and Zaidi et al. (2021) investigated the role of random hyperparameters and architectures in ensembles. Gontijo et al. (2021) demonstrated the ensemble effect across various training methodologies: initialization, hyperparameter, architecture, framework, and dataset levels. Diverse training methodologies exhibit different generalization capabilities and ultimately lead to uncorrelated errors. Models tend to specialize in subdomains within the data, which highlights the crucial role of ensemble techniques in enhancing overall performance.
### Merging Methods
**Model Soup** (Wortsman et al., 2022) presents an effective approach for combining parameters without additional training, improving the performance of trained models by constructing a "recipe" composed of diverse models and averaging their parameters. The study introduces three methods for creating the recipe: the uniform soup, which averages the parameter values of all models; the greedy soup, which sequentially adds models based on their performance ranking; and the learned soup, which identifies the optimal model interpolation through training. These approaches enhance the overall performance of the model without the need for additional training.
**Fisher Merging.** Within the scope of related works, parameter merging is interpreted as a process that maximizes the joint likelihood of the posteriors of the model parameters (Matena and Raffel, 2022). A previous study (Wortsman et al., 2022) considers averaging as the scenario in which these posteriors are assumed to follow isotropic Gaussian distributions, with the joint likelihood maximized accordingly. To refine this approach, efforts have been made to approximate the posterior of the model using the Laplace approximation (Daxberger et al., 2021). In this case, each model's posterior is modeled as a Gaussian whose mean is the observed (trained) parameter and whose covariance is the inverse of the Fisher information matrix. By employing this formulation, the joint likelihood is calculated.
### Sequential Recommendation System
SASRec (Kang and McAuley, 2018) employs Transformer layers to dynamically assign weights to previous items. BERT4Rec (Sun et al., 2019) demonstrates an improvement by incorporating user behavior information from both directions using a bidirectional Transformer. CL4SRec (Xie et al., 2022) employed three data augmentation techniques, namely item cropping, item masking, and item reordering, to create pairs for contrastive learning. DuoRec (Qiu et al., 2022) integrated two types of contrastive loss. Firstly, it incorporated unsupervised augmentation using dropout-based model-level augmentation to generate positive pairs. Secondly, it incorporated supervised positive sampling, which creates pairs by treating sequences with the same target item as positive samples.
## 3 Methodology
We perform model ensemble based on different types of loss functions. BERT4Rec (Sun et al., 2019), CL4SRec (Xie et al., 2022), and DuoRec (Qiu et al., 2022) share the basic structure of BERT4Rec (Sun et al., 2019). However, they differ in how they construct positive pairs, a key component of their contrastive learning frameworks.
Figure 1 gives an overview of the parameter-merging process. Because the models share a common structure while being trained under diverse learning frameworks, we can leverage ensemble methods to our advantage. Furthermore, inspired by previous studies demonstrating the effectiveness of ensembling models trained with various learning methods, we apply the parameter-merging techniques described in Section 3.1, namely Parameter Averaging and Fisher-weighted Parameter Merging, to combine these models.
### Understanding Model Ensemble
We follow the work of Matena and Raffel (2022). Consider a scenario where we have \(M\) models with the same structure, \(\text{model}_{1},\text{model}_{2},\cdots,\text{model}_{M}\), with corresponding parameters \(\theta_{1},\theta_{2},\cdots,\theta_{M}\). Our objective is to find the parameter \(\theta^{*}\) that maximizes the joint likelihood of the posteriors of these parameters.
The posterior of \(\theta_{m}\) can be represented as \(p\left(\theta\mid\theta_{m}\right)\). Since obtaining this posterior directly is generally challenging, one can employ approximation methods such as the Laplace approximation (MacKay, 1992; Daxberger et al., 2021) and seek the parameter \(\theta^{*}\) accordingly. We interpret the process of finding \(\theta^{*}\) as maximizing the joint likelihood \(\sum_{m}\log p\left(\theta\mid\theta_{m}\right)\). Assuming that \(p\left(\theta\mid\theta_{m}\right)\) follows a Gaussian distribution, we set its mean to the observed \(\theta_{m}\) and examine the procedures for parameter averaging and Fisher merging separately, depending on how the variance is chosen.
**Averaging Parameters.** Assume that the posterior \(p\left(\theta\mid\theta_{m}\right)\) follows a Gaussian distribution \(\mathcal{N}\left(\hat{\theta}_{m},I\right)\), where \(\hat{\theta}_{m}\) represents the parameters of the trained \(m\)-th model and \(I\) denotes the identity matrix. In this case, the desired solution \(\theta^{*}\) is the average of the parameters of the candidate models, as shown in Eq. (2):
\[\theta^{*}=\operatorname*{arg\,max}_{\theta}\sum_{m}\log p\left(\theta\mid \theta_{m},I\right)=\frac{1}{M}\sum_{m}\theta_{m}. \tag{2}\]
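As a concrete illustration, the uniform merge of Eq. (2) is a one-liner over parameter dictionaries. A minimal sketch, assuming PyTorch-style state dicts whose tensors support arithmetic (function and variable names are ours):

```python
def average_parameters(state_dicts):
    """Uniform merge of Eq. (2): theta* = (1/M) * sum_m theta_m."""
    m = len(state_dicts)
    return {name: sum(sd[name] for sd in state_dicts) / m
            for name in state_dicts[0]}
```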
**Fisher Merging.** Let us consider the posterior \(p\left(\theta\mid\theta_{m}\right)\) following a Gaussian distribution \(\mathcal{N}\left(\hat{\theta}_{m},H^{-1}\right)\). Here, \(\hat{\theta}_{m}\) represents the parameters of the trained \(m\)-th model, and \(H\) corresponds to the Hessian matrix obtained through a second-order Taylor expansion of the log-posterior at its mode. It has been established that this Hessian coincides with the Fisher information; for computational efficiency, we only utilize the diagonal elements of the Fisher matrix (Kirkpatrick et al., 2017).
The desired solution \(\theta^{*}\) can be expressed as Eq. (3), which maximizes the Fisher-weighted joint likelihood:
\[\theta^{*}=\operatorname*{arg\,max}_{\theta}\sum_{m}\lambda_{m}\log p\left( \theta\mid\theta_{m},F_{m}\right), \tag{3}\]
where \(F_{m}=\mathbb{E}_{x}\,\mathbb{E}_{y\sim p_{\theta}\left(y\mid x\right)}\left[\nabla_{\theta}\log p_{\theta}\left(y\mid x\right)\,\nabla_{\theta}\log p_{\theta}\left(y\mid x\right)^{\top}\right]\). The closed-form solution for \(\theta^{*}\), which directly incorporates the Fisher matrix, is given in Eq. (4). In practice, we utilize an empirical estimate of the Fisher matrix, denoted \(\hat{F}\) (Kirkpatrick et al., 2017):
\[\theta^{*\left(j\right)}=\frac{\sum_{m}\lambda_{m}F_{m}^{\left(j\right)}\theta _{m}^{\left(j\right)}}{\sum_{m}\lambda_{m}F_{m}^{\left(j\right)}}, \tag{4}\]
where \(\hat{F}_{m}=\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}_{y\sim p_{\theta}\left(y\mid x_{i}\right)}\left(\nabla_{\theta}\log p_{\theta}\left(y\mid x_{i}\right)\right)^{2}\) and \(j=1,\cdots,\left|\theta\right|\), with all operations applied element-wise.
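The element-wise solution of Eq. (4) translates directly into code. A minimal sketch, assuming each model comes with a per-parameter diagonal Fisher estimate; the small `eps` guard against all-zero Fisher entries is our addition:

```python
def fisher_merge(state_dicts, fishers, lambdas=None, eps=1e-10):
    """Per-parameter weighted merge of Eq. (4) with diagonal Fisher weights."""
    if lambdas is None:
        lambdas = [1.0] * len(state_dicts)
    merged = {}
    for name in state_dicts[0]:
        num = sum(lam * f[name] * sd[name]
                  for lam, f, sd in zip(lambdas, fishers, state_dicts))
        den = sum(lam * f[name] for lam, f in zip(lambdas, fishers))
        merged[name] = num / (den + eps)  # element-wise, as in Eq. (4)
    return merged
```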
### Applying Model Ensemble
Expressing the Fisher matrix of Eq. (4) in terms of recommendation quantities, we can decompose it as follows:
\[\mathbb{E}_{x_{i}}\mathbb{E}_{y\sim p_{\theta}\left(y|x_{i}\right) }\left(\nabla_{\theta}\log p_{\theta}\left(y\mid x_{i}\right)\right)^{2} \tag{5}\] \[=\frac{1}{|\mathcal{U}|}\sum_{i}^{|\mathcal{U}|}\sum_{j}^{| \mathcal{V}|}p_{\theta}\left(v_{j}\mid s_{i}\right)\left(\nabla_{\theta}\log p _{\theta}\left(v_{j}\mid s_{i}\right)\right)^{2}.\]
There are two computational challenges associated with the above equation. First, calculations need to be performed for each individual sample \(s_{i}\). Second, calculations need to be performed for each item \(v_{j}\) within a single sample. These points act as a drawback in recommendation systems because of the large number of users and items in the data. For instance, the MovieLens-1M dataset (Harper and Konstan, 2015) contains about 6000 users and 3500 items, and performing Fisher matrix calculations that require differentiation with respect to \(\theta\) for each user and item becomes a computational burden.
#### 3.2.1 Sampling sequences
**Batch-wise Computation.** To address the first challenge of performing computations on individual samples, we reinterpret the equation and carry out the calculations on a batch basis. Note that \(p_{\theta}\left(v_{j}\mid s_{i}\right)\) can vary across samples \(s_{i}\); we therefore sort the samples by \(p_{\theta}\left(v_{j}\mid s\right)\) to address this variation. With BS denoting the batch size:
\[\sum_{i}^{|\mathcal{U}|}\sum_{j}^{|\mathcal{V}|}p_{\theta}\left(v_{j}\mid s_{i}\right)\left(\nabla_{\theta}\log p_{\theta}\left(v_{j}\mid s_{i}\right)\right)^{2}=\sum_{\text{BS}_{k}}\sum_{j}^{|\mathcal{V}|}\left(\sum_{i}^{\text{BS}_{k}}p_{\theta}\left(v_{j}\mid s_{i}\right)\right)\left(\nabla_{\theta}\sum_{i}^{\text{BS}_{k}}\log p_{\theta}\left(v_{j}\mid s_{i}\right)\right)^{2}. \tag{6}\]
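A sketch of one inner term of Eq. (6) in PyTorch (the model interface and tensor shapes are our assumptions): the gradient of the batch-summed log-probability of a single item \(j\) is computed once and then squared, instead of accumulating per-sample squared gradients:

```python
import torch

def batched_fisher_term(model, batch_seqs, item_j):
    """(grad of sum_i log p(v_j | s_i))^2 for one batch and one item (cf. Eq. 6)."""
    model.zero_grad()
    log_probs = torch.log_softmax(model(batch_seqs), dim=-1)  # (batch, |V|)
    log_probs[:, item_j].sum().backward()
    return {name: p.grad.detach() ** 2
            for name, p in model.named_parameters() if p.grad is not None}
```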
#### 3.2.2 Sampling items
To alleviate the computational burden associated with iterating over all \(j\) values, which scales with \(|\mathcal{V}|\), we employ a sampling-based approach within the methodology. This sampling strategy aims to reduce the computational cost while maintaining the representativeness of the calculations.
**Random Sampling.** We compute the sum over items by randomly sampling \(j\) from the full set of items. This yields a Fisher estimate without any specific assumptions or prior knowledge.
**Top-\(k\) Sampling.** The probabilities output by the model can be interpreted as the preference or likelihood of the recommended items for a given sample. Based on this interpretation, we select the \(n\) items most likely to interest the corresponding user, i.e., those with the highest \(p_{\theta}\left(v_{j}\mid s_{i}\right)\), and compute the Fisher matrix with these selected items as the focal points. By focusing on this subset of items expected to be of highest interest, we aim to capture the information relevant for optimizing the model's performance effectively.
\[\begin{split}&\sum_{j}^{\left|\mathcal{V}\right|}p_{\theta}\left(v_{j} \mid s\right)\left(\nabla_{\theta}\log p_{\theta}\left(v_{j}\mid s\right) \right)^{2}\\ &\approx\sum_{j}^{\text{top-}k}p_{\theta}\left(v_{j}\mid s\right) \left(\nabla_{\theta}\log p_{\theta}\left(v_{j}\mid s\right)\right)^{2}.\end{split} \tag{7}\]
**Model-based Sampling.** To select a subset of items for further analysis, we randomly sample items with weights given by their conditional probability \(p_{\theta}(v_{j}\mid s_{i})\) in the model's output. By favoring items with higher probabilities, we focus on a specific number of items that are more likely to align with the user's preferences or interests. With \(N\) denoting the sample size, this approximation can be represented as:
\[\begin{split}&\mathbb{E}_{y\sim p_{\theta}\left(y\mid x\right)} \left(\nabla_{\theta}\log p_{\theta}\left(y\mid x\right)\right)^{2}\\ &\approx\frac{1}{N}\sum_{v_{j}\sim p_{\theta}\left(v_{j}\mid s \right)}^{N}\left(\nabla_{\theta}\log p_{\theta}\left(v_{j}\mid s\right) \right)^{2}.\end{split} \tag{8}\]
**Calculate with target item.** We compute the Fisher matrix based on the target item alone, disregarding other items with limited direct relevance. Our rationale is to prioritize the target item's impact on the model's optimization process, as it is directly linked to the specific objective or task at hand. Consequently, we exclude items with minimal direct relevance to ensure a more targeted and meaningful computation of the Fisher matrix.
\[p_{\theta}\left(v_{j}^{*}\mid s\right)\left(\nabla_{\theta}\log p_{\theta} \left(v_{j}^{*}\mid s\right)\right)^{2}, \tag{9}\]
where \(v_{j}^{*}\) is the target item.
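The four item-selection strategies above can be summarized in one helper. A sketch (`probs` is the model's next-item distribution for one sequence; all names are illustrative):

```python
import torch

def sample_items(probs, method, n, target=None):
    """Return indices of the items used in the Fisher estimate for one sequence."""
    if method == "random":                 # uniform over the item catalog
        return torch.randperm(probs.numel())[:n]
    if method == "top-k":                  # n most probable items (Eq. 7)
        return torch.topk(probs, n).indices
    if method == "model":                  # sample from p_theta (Eq. 8)
        return torch.multinomial(probs, n, replacement=True)
    if method == "target":                 # only the target item (Eq. 9)
        return torch.tensor([target])
    raise ValueError(f"unknown method: {method}")
```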
## 4 Experiments
We use the MovieLens-1M dataset (Harper and Konstan, 2015) for our experiments. For each user, we have sequential data consisting of movies purchased in chronological order. We adopt the next-item prediction task (i.e., leave-one-out evaluation), following previous works (Sun et al., 2019; Xie et al., 2022; Qiu et al., 2022): the last movie in each user's sequence is held out for testing, and the movie preceding it is used for validation. During training, we adopt a masked language modeling approach similar to BERT (Devlin et al., 2018), where we mask certain movies in the sequentially ordered list and task the model with predicting them.
The evaluation method used in this study is the Normalized Discounted Cumulative Gain at 10 (NDCG@10), which is a ranking-based evaluation approach He et al. (2017). It ranks the top 10 items predicted by the model based on their perceived preference and considers the actual ranking of the preferred items. A higher NDCG value, closer to 1, indicates better performance. Different NDCG values can be obtained depending on the selection of items, such as from the full item pool, a random set of 100 items, or the top 100 most popular items.
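Because leave-one-out evaluation has exactly one relevant item per user, the ideal DCG equals 1, and NDCG@\(k\) reduces to the reciprocal log of the target's rank. A minimal sketch:

```python
import math

def ndcg_at_k(ranked_items, target_item, k=10):
    """NDCG@k with a single relevant item: 1/log2(rank + 2) if the target
    appears at 0-based position `rank` in the top k, else 0."""
    top_k = list(ranked_items[:k])
    if target_item in top_k:
        return 1.0 / math.log2(top_k.index(target_item) + 2)
    return 0.0
```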
### Results of Model Merging
We examine the results in Table 1 and Table 2. Table 1 presents the results obtained by training BERT4Rec (Sun et al., 2019), CL4SRec (Xie et al., 2022), and DuoRec (Qiu et al., 2022) solely from scratch and merging these models with the Fisher method. Table 2 presents the fine-tuning setting: we first train the baseline model without contrastive loss for 20 epochs, the convergence point of the baseline experiment (similar to BERT4Rec); each model, BERT4Rec, CL4SRec, and DuoRec, is then fine-tuned according to its respective method, and the results are merged with the Fisher method. In both settings, we fine-tune for one additional epoch after the merging process.
Fisher merging fails to improve on the individual models in the baseline setting, but when applied in the fine-tuning setting it leads to improved performance compared to the individual models. This aligns with previously reported findings (Ganaie et al., 2022) that individual models tend to outperform a merged model in the baseline setting. Notably, the Fisher merge in the fine-tuning setting achieves performance comparable to the individual models of the baseline setting, which the individual recipe models of the fine-tuning setting do not. The results also indicate that even for insufficiently trained models, such as CL4SRec in our setting, merging parameters yields performance comparable to the other models, demonstrating robustness.
### The Validity of Batch-wise Computation
We performed batch-wise computations with the aim of implementing an efficient Fisher matrix calculation. Compared to computing on individual samples, grouping samples into batches allowed us to achieve computational efficiency.
Figure 4 in Appendix A.3 illustrates the method for minimizing errors when performing calculations on a batch basis. The figure shows that, within a batch of 10 samples \(s_{i}\), the probabilities of item \(v_{j}\) decrease in a similar manner. By sorting the samples \(s_{i}\) based on the probability of \(v_{j}\), it is possible to minimize the error of Eq. (6) even when grouping them into batches. Furthermore, the figure illustrates the rationale behind top-\(k\) sampling: for the top-\(k\) items the probabilities hold meaningful information, whereas for the remaining items they are essentially zero.
### Effect of Sampling Methods and Size
To investigate the effect of the sampling methods, we conduct experiments varying both the number of sampled items and the sampling technique. Specifically, we consider three sample sizes, \(n=10\), \(n=30\), and \(n=50\), and four sampling methods: random sampling, top-\(k\) sampling, model-based sampling, and calculation on the target item. The results are reported in Table 3, which provides insight into the performance of each sampling method under different sample sizes and allows us to analyze their respective effects on the task at hand. Note that these results are calculated on batched data, in the fine-tuning setting of Table 2.
The experiments revealed effective ensemble results, particularly showcasing the robustness contributed by CL4SRec (Xie et al., 2022): despite its significantly lower individual performance, the Fisher-merged model that includes it remained robust. Among the sampling methods, top-\(k\) sampling performed best, which can be attributed to the model concentrating its probability mass on a few items, so that the top-\(k\) items effectively approximate the Fisher expectation. The model-based sampling method shows a more pronounced improvement as the sampling size increases compared to the other methods, which we attribute to it most directly following the expectation that defines Fisher merging. Interestingly, even though calculating the Fisher matrix on the target item uses only a single sample per sequence, it achieved good performance, demonstrating the effectiveness of sampling even at very small sizes.
### Computational Cost
Figure 2 shows the computational cost, in terms of time consumed, of calculating the Fisher matrix for a single model. Parameter merging involves computation additional to ordinary training, so it is important that this step be efficient; to achieve this, calculating the Fisher matrix in batch units and employing sampling are necessary.
| model | pos. | full | random | popular |
| --- | --- | --- | --- | --- |
| baseline | | **0.1398** | **0.5651** | **0.0482** |
| CL4SRec | | 0.0955 | 0.043 | 0.0429 |
| DuoRec | sup | 0.1348 | 0.5575 | 0.044 |
| DuoRec | unsup | 0.1301 | 0.5592 | 0.0438 |
| DuoRec | - | 0.1382 | 0.5588 | 0.0464 |
| Fisher | | 0.1289 | 0.5495 | 0.0472 |

Table 1: Results of parameter merging; Fisher merge in the baseline setting. The recipes used for merging were trained for the same number of epochs. 'pos.' refers to the method of constructing positive pairs: 'sup' denotes supervised augmentation and '-' denotes combined supervised and unsupervised augmentation (Subsection 2.3).
| model | pos. | full | random | popular |
| --- | --- | --- | --- | --- |
| baseline | | 0.135 | 0.5573 | 0.0426 |
| CL4SRec | | 0.0585 | 0.0513 | **0.0466** |
| DuoRec | sup | 0.1346 | 0.5547 | 0.0454 |
| DuoRec | unsup | 0.1358 | 0.5594 | 0.0445 |
| DuoRec | - | 0.1351 | 0.554 | 0.0423 |
| Fisher | | **0.1386** | **0.5618** | 0.0428 |

Table 2: Results of parameter merging; Fisher merge in the fine-tune setting. The recipes used for merging were trained on the baseline model (without contrastive loss) and fine-tuned with each method.
Figure 2: Measured Time Consumed for Each Sampling Method and Size. We sample items from the MovieLens-1M dataset. Time is measured in a batch-wise setting with a batch size of 256.
It is observed that, except for the calculation on the target item, the computation time increases linearly with the sampling size. For the calculation on the target item, the sampling size remains fixed at 1, since each sequence has a single target item. Our approach is thus significant in that it approximates the Fisher matrix calculation with a much smaller number of items compared to calculating it on the entire item set (around 3000 items).
### Visualization of Merged Weights
We present a visual illustration to aid the intuitive understanding of the merged weights. Figure 3 corresponds to the fine-tuning setting of Table 2: the three centroids are the weights of the individual models, and the visualized plane contains these three weights. The scattered points, projected onto the plane, depict 100 samples drawn from \(\mathcal{N}(\theta_{m},F_{m})\). The baseline weight exhibits the largest variance, which can be attributed to the experimental setup in which the baseline is pre-trained and then fine-tuned with CL4SRec (Xie et al., 2022) and DuoRec (Qiu et al., 2022). The weights obtained through uniform merging are the average of the three centroids, while Fisher merging also takes into account the variances of these recipe weights. The Fisher-merged weights thus account for the posterior variance via the Laplace approximation and provide a good initial point for fine-tuning.
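The projection behind Figure 3 can be reproduced with basic linear algebra: build an orthonormal basis of the plane through the three model weights and express every flattened parameter vector in that basis. A sketch with NumPy (array shapes are our assumption):

```python
import numpy as np

def plane_coordinates(vectors, anchors):
    """Project (n, d) parameter vectors onto the plane through three (d,) anchors."""
    origin, a1, a2 = anchors
    e1 = (a1 - origin) / np.linalg.norm(a1 - origin)
    v = (a2 - origin) - ((a2 - origin) @ e1) * e1   # Gram-Schmidt step
    e2 = v / np.linalg.norm(v)
    rel = vectors - origin
    return np.stack([rel @ e1, rel @ e2], axis=1)   # (n, 2) plane coordinates
```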
## 5 Conclusion
We apply an ensemble technique, Fisher merging, to sequential models, enabling robust fine-tuning through parameter merging. Our experimental results demonstrate the effectiveness of the proposed methods in improving recommendation performance. These contributions have the potential to advance the field of sequential learning and recommendation systems, offering valuable insights for future research and practical applications.
| method | sample size | full NDCG@10 | full NDCG@20 | random NDCG@10 | random NDCG@20 | popular NDCG@10 | popular NDCG@20 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| baseline | | 0.135 | 0.1601 | 0.5573 | 0.5786 | 0.0426 | 0.0706 |
| CL4SRec | | 0.0585 | 0.0751 | 0.0513 | 0.043 | **0.0466** | 0.0701 |
| DuoRec (sup.) | | 0.1346 | 0.1591 | 0.5547 | 0.58 | 0.0454 | 0.068 |
| DuoRec (unsup.) | | 0.1358 | 0.1609 | 0.5594 | 0.5782 | 0.0445 | 0.0742 |
| DuoRec (sup.&unsup.) | | 0.1351 | 0.1599 | 0.554 | 0.5732 | 0.0423 | 0.0724 |
| random sampling | 10 | 0.1379 | 0.1638 | 0.5606 | 0.5825 | 0.0457 | 0.0691 |
| random sampling | 30 | 0.1366 | 0.1624 | 0.5584 | 0.58 | 0.0477 | **0.0726** |
| random sampling | 50 | 0.1386 | 0.1636 | 0.5598 | 0.5813 | 0.0419 | 0.0419 |
| top-k sampling | 10 | 0.1364 | 0.1624 | 0.5602 | 0.5817 | 0.0446 | 0.0689 |
| top-k sampling | 30 | 0.1373 | 0.1616 | **0.5637** | **0.5835** | 0.0457 | 0.0708 |
| top-k sampling | 50 | **0.1387** | 0.1635 | 0.5592 | 0.5807 | 0.0424 | 0.0672 |
| model-based sampling | 10 | 0.1358 | 0.1619 | 0.5564 | 0.5782 | 0.044 | 0.0696 |
| model-based sampling | 30 | 0.1385 | **0.1646** | 0.5579 | 0.5784 | 0.0446 | 0.0689 |
| model-based sampling | 50 | 0.138 | 0.1632 | 0.5605 | 0.5814 | 0.0465 | 0.0719 |
| calculate on target item | 1 | 0.1386 | 0.1628 | 0.5618 | 0.5806 | 0.0428 | 0.0725 |

Table 3: Effect of Sampling Method and Sampling Size. We merge models in the fine-tune setting of Table 2, using four sampling methods (random sampling, top-k sampling, model-based sampling, and calculate on target item) and three sampling sizes (n = 10, 30, 50). **Bold** represents the best variant in each evaluation setting, and underline indicates the second best.
Figure 3: Visualization of the weights of the merged models. On the plane containing the 64-dimensional parameters of the three models, we visualize their weights, 100 samples from each posterior, and the merged parameters.
## Acknowledgements
This work was supported by the NRF grant [2012R1A2C3010887] and the MSIT/IITP [1711117093, 2021-0-00077, 2021-0-01343, Artificial Intelligence Graduate School Program (SNU)].
|
2304.02891 | ViralVectors: Compact and Scalable Alignment-free Virome Feature
Generation | The amount of sequencing data for SARS-CoV-2 is several orders of magnitude
larger than any virus. This will continue to grow geometrically for SARS-CoV-2,
and other viruses, as many countries heavily finance genomic surveillance
efforts. Hence, we need methods for processing large amounts of sequence data
to allow for effective yet timely decision-making. Such data will come from
heterogeneous sources: aligned, unaligned, or even unassembled raw nucleotide
or amino acid sequencing reads pertaining to the whole genome or regions (e.g.,
spike) of interest. In this work, we propose \emph{ViralVectors}, a compact
feature vector generation from virome sequencing data that allows effective
downstream analysis. Such generation is based on \emph{minimizers}, a type of
lightweight "signature" of a sequence, used traditionally in assembly and read
mapping -- to our knowledge, the first use of minimizers in this way. We validate
our approach on different types of sequencing data: (a) 2.5M SARS-CoV-2 spike
sequences (to show scalability); (b) 3K Coronaviridae spike sequences (to show
robustness to more genomic variability); and (c) 4K raw WGS reads sets taken
from nasal-swab PCR tests (to show the ability to process unassembled reads).
Our results show that ViralVectors outperforms current benchmarks in most
classification and clustering tasks. | Sarwan Ali, Prakash Chourasia, Zahra Tayebi, Babatunde Bello, Murray Patterson | 2023-04-06T06:46:17Z | http://arxiv.org/abs/2304.02891v2 | # ViralVectors: Compact and Scalable Alignment-free Virome Feature Generation
###### Abstract
The amount of sequencing data for SARS-CoV-2 is several orders of magnitude larger than any virus. This will continue to grow geometrically for SARS-CoV-2, and other viruses, as many countries heavily finance genomic surveillance efforts. Hence, we need methods for processing large amounts of sequence data to allow for effective yet timely decision-making. Such data will come from heterogeneous sources: aligned, unaligned, or even unassembled raw nucleotide or amino acid sequencing reads pertaining to the whole genome or regions (e.g., spike) of interest. In this work, we propose _ViralVectors_, a compact feature vector generation from virome sequencing data that allows effective downstream analysis. Such generation is based on _minimizers_, a type of lightweight "signature" of a sequence, used traditionally in assembly and read mapping -- to our knowledge, the first use of minimizers in this way. We validate our approach on different types of sequencing data: (a) 2.5M SARS-CoV-2 spike sequences (to show scalability); (b) 3K Coronaviridae spike sequences (to show robustness to more genomic variability); and (c) 4K raw WGS reads sets taken from nasal-swab PCR tests (to show the ability to
process unassembled reads). Our results show that ViralVectors outperforms current benchmarks in most classification and clustering tasks.
## 1 Introduction
The concept of _genomic surveillance_ has existed for at least a decade [21]; however, the ongoing COVID-19 pandemic has made this an almost household term. Because this pandemic became global at a time when sequencing technologies are quickly advancing [55], the number of SARS-CoV-2 viral genomes (viromes) available on public databases such as GISAID [22] is orders of magnitude greater than for any sequenced virus in history. Not only are such volumes of data posing problems for the current algorithms used to determine the dynamics of a virus from sequencing information (_e.g._, [23]), but many countries have also committed extensive budgets to vastly increase sequencing infrastructure for genomic surveillance efforts due to the pandemic. This means that the number of virome (and other molecular) sequences available in the near future will again be orders of magnitude greater than the current number of SARS-CoV-2 sequences. This inundation of sequencing data will come mostly in the form of raw sequencing reads from heterogeneous short- and long-read sequencing technologies, because even mapping and assembly pipelines will be overwhelmed by its sheer amount.
For this, we will need ways to swiftly extract meaningful information from sequencing data for decision-making. Furthermore, such approaches will need to scale to huge numbers of sequences -- the number of SARS-CoV-2 sequences accessible is already in the millions [22]. These technologies will have to be both specific and sensitive enough to detect a wide range of viruses [37]. Finally, such approaches must be able to extract this information efficiently from a variety of heterogeneous genomic or proteomic data sources with varying levels of refinement, ranging from multiply aligned consensus sequences to raw unassembled sequencing reads [52]. In this work, we offer a solution to this problem in the form of a method we call _ViralVectors_, which generates a compact and scalable feature vector representation of viral genome (virome) data, sourced from aligned, unaligned, or unassembled raw sequencing reads. Such a representation captures the necessary information from the virome yet is lightweight enough to allow quick extraction and fast performance of downstream machine learning techniques, such as classification and clustering. We show that this method obtains accuracy and speeds comparable to current benchmarks on a variety of datasets, including: (a) a dataset of 2.5 million consensus SARS-CoV-2 spike protein sequences, to demonstrate its scalability to millions of sequences; (b) a set of 3.3 thousand spike protein sequences from different genera and species of the Coronaviridae family, to demonstrate its robustness in the presence of a larger degree of genomic variability; and (c) a set of raw whole-genome sequencing read sets from the samples of a nasal-swab PCR
test of 4.3 thousand different COVID-19 patients, to demonstrate its ability to process even unassembled raw sequencing reads.
The key concept that allows us to have such a compact and scalable feature vector generation is that of a _minimizer_ [41], a form of lightweight "signature" of a sequence, obtained by sampling the sequence. The notion of minimizer is close to that of a \(k\)-mer [14], but it is even more lightweight -- minimizers are, in fact, sampled from the \(k\)-mers (see Figure 1). More formally, given a sequence, the first step is to take mers (substrings) of length \(k\) (_i.e._, \(k\)-mers). Then an \(m\)-mer is extracted from each \(k\)-mer (where \(m<k\)): the \(m\)-mer that is lexicographically minimum among all \(m\)-mers of the \(k\)-mer, considering both forward and reverse order. Minimizers, similarly to \(k\)-mers, have had a history of success in the domain of _de novo_ assembly [18] and even read mapping [31], with the "seed-and-extend" approach -- they have even had success in quickly counting \(k\)-mers [43]. In this work, we use these "seeds" directly in designing a compact feature vector representation from the minimizers, which is then used as input to typical machine learning algorithms for classification and clustering purposes.
Some effort has been made in the literature to classify and cluster biological sequences [1; 2; 5; 8; 29; 47]. Although existing methods successfully achieve high classification accuracy, it is unclear whether these approaches are robust and scalable to larger datasets (millions of sequences). A kernel-based approach is proposed in [20] for sequence classification using \(k\)-mers.
Figure 1: Example of \(k\)-mers (\(k=10\)) and minimizers (\(m=3\)) of the amino acid sequence “MDPEGRKMLSVBSLRDSY”. For a given \(k\)-mer, its minimizer is the \(m\)-mer that is lexicographically minimum among forward and reverse (sorted) order of all \(m\)-mers within this \(k\)-mer.
Although their _approximate kernel_-based method is fast in terms of computing the pairwise distance between two sequences, generating the Gram (kernel) matrix is a memory-intensive operation. Storing an \(n\times n\) matrix (where \(n\) is the number of sequences) in memory is practically impossible with millions of sequences: for \(n=2.5\) million sequences, an \(n\times n\) matrix of 8-byte entries would occupy \((2.5\times 10^{6})^{2}\times 8\approx 5\times 10^{13}\) bytes, i.e., about 50 TB. A one-hot encoding-based approach has also been proposed; although its authors [29] successfully classify coronavirus hosts using one-hot encoding, that method likewise does not scale to "Big Data" [2; 8]. In this paper, we propose ViralVectors, a compact and scalable feature vector generation method tailored to biological sequences, which can be used as input to any machine learning algorithm for classification and clustering purposes. We show that our proposed feature vector generation approach is general and can be applied to different types of biological sequences. ViralVectors is not only scalable but also achieves higher predictive performance compared to traditional one-hot and \(k\)-mers-based feature embedding methods. Our contributions in this paper are as follows:
1. We propose an embedding approach called ViralVectors that outperforms the baseline feature embedding methods in terms of predictive accuracy.
2. We show the scalability of ViralVectors on larger datasets by using \(\approx 2.5\) million sequences from the GISAID website.
3. We show that the models proposed in the literature to obtain feature embeddings [2; 8; 29] are not robust on these larger datasets.
4. We show that our proposed model is general and can be applied to different types of sequences.
5. We perform clustering on ViralVectors-based embedding and show that the resultant clustering is better than the traditional Pangolin tool-based clustering and traditional one-hot and \(k\)-mers based embedding methods for clustering sequences.
6. We show the effectiveness of our compact feature vector representation by performing classification and clustering algorithms on the short reads data extracted from NCBI.
7. Using t-SNE plots, we show that the ViralVectors-based embedding may preserve the overall structure of the data while storing less information than one-hot and \(k\)-mers-based embeddings.
8. We perform statistical analysis to understand the behavior of data and predictive models.
The rest of the paper is organized as follows: Section 2 contains the related work for the given research problem. Our proposed ViralVectors-based embedding is explained in detail in Section 3. Section 4 shows the dataset detail and experimental setup information. Our results are given in Section 5. The statistical analysis of the data and different feature vectors are given in Section 6. Finally, we conclude our paper in Section 7.
## 2 Related Work
There exist several machine learning approaches based on \(k\)-mers for classification and clustering [2; 8; 9; 45], as well as more classical algorithms for sequence classification [52]. There is also a rich literature of alignment-free sequence comparison techniques [4; 11; 12; 13; 37]. Finally, there have been some recent theoretical and practical developments on minimizers [33; 56]. Although these methods have proven useful in their respective studies, it is not clear if they can be extended to larger datasets without compromising the predictive performance of the proposed models [7; 36]. Authors in [34] propose a new set of amino acid descriptors called principal components score Vectors of Hydrophobic, Steric, and Electronic properties (VHSE) to store the chemical information relating to biological activities. A position-specific scoring matrix based approach for protein secondary structure prediction is proposed in [26], which uses a two-stage neural network. Authors in [19] compare classical encoding matrices such as one-hot encoding, VHSE8, and BLOSUM62 for end-to-end learning of amino acid embeddings for different machine learning tasks.
In [45], authors used their approach primarily on HIV, but also on experiments with dengue, influenza A, hepatitis B, and hepatitis C. A similar approach that uses mismatch kernels with support vector machines is proposed in [30] for classifying protein sequences. String kernels are also commonly used for the classification of biological sequences, nucleotides, as well as amino acid sequences [49].
Among the theoretical work on minimizers [56], there is a tight coupling between universal hitting sets and minimizer schemes: minimizer schemes with low density (_i.e._, efficient schemes) correspond to universal hitting sets of small size. Local schemes are a generalization of minimizer schemes and can be used as replacements for them, with the possibility of being much more efficient. This suggests even further possible future improvements to our feature vector generation.
Another issue that might affect the efficiency of underlying classification and clustering algorithms is data dimensionality. A typical technique to minimize data dimensionality is feature selection and dimensionality reduction. Several methods (supervised and unsupervised) have been proposed in the literature, such as ridge regression [24], lasso regression [48], and principal component analysis (PCA) [51], to get low-dimensional feature vector representations. These methods not only improve the runtime of underlying classification and clustering algorithms but also improve their predictive performance. Authors in [2] perform clustering on SARS-CoV-2 spike sequences and show that clustering performance can be improved by using lasso and ridge regression. However, the major problem with all these methods is that they are not scalable to larger datasets; hence, they cannot be applied in real-world settings where we can have millions of sequences. Authors in [8] use an approximate kernel method for spike sequence classification. But,
since the kernel computation is memory intensive, their proposed model does not scale to more than 7000 sequences.
## 3 Proposed Approach
In this section, we discuss our proposed method, called ViralVectors, which computes minimizers from the viral genome (virome) sequences. From these minimizers, we can then generate fixed-length feature vectors.
In the literature, it has been proven that \(k\)-mers-based frequency vectors perform better than baselines such as one-hot-encoding (OHE) [2; 8]. However, a major problem with the \(k\)-mers-based approach is that for long sequences, there can be a large number of \(k\)-mers that are common to all sequences [53]. Supporting such \(k\)-mers in the frequency vector does not contribute much towards the predictive capability of the downstream classification algorithms, while, at the same time, these "redundant" \(k\)-mers contribute heavily to the runtime. This is because, for each \(k\)-mer, we need to find the location (also called bin) in the frequency vector that is associated with it. This bin searching can take as much time as the length of the frequency vector in the worst case (e.g., when the bin for a \(k\)-mer is the last one in the frequency vector). Performing the bin search operation for such \(k\)-mers that are common to all (or most) of the sequences may not be an efficient approach. This hints at the need to have more compact numerical feature vector representations of the amino acids that not only preserve the quality of the downstream predictions but also reduce the runtime of this bin-searching.
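For intuition, a minimal sketch of the \(k\)-mer counting step (our illustration): the hash lookup here plays the role of the bin search discussed above, and it must be paid for every window, including the \(k\)-mers shared by all sequences:

```python
from collections import Counter

def kmer_counts(seq, k):
    """Count every k-mer of seq; one bin lookup per window."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
```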
ViralVectors is a compact feature vector generation method that resolves some of the problems mentioned above by using the notion of a _minimizer_ [41]. For a given \(k\)-mer, a minimizer is the \(m\)-mer (\(m<k\)) that is lexicographically smallest among the \(m\)-mers of that \(k\)-mer, considering both forward and reverse order. Instead of storing the \(k\)-mers themselves, ViralVectors stores the minimizers from these \(k\)-mers, as in Figure 1. Since \(m<k\), we ignore most of the amino acids in the \(k\)-mers and preserve only a fraction of the \(m\)-mers, which saves time on bin searching.
See Algorithm 1 for the pseudocode of this minimizer generation. It considers a sequence \(s\), computes the first \(k\)-mer, and slides a window over that \(k\)-mer to find its set of \(m\)-mers. It then compares all the \(m\)-mers from this first \(k\)-mer to find the lexicographically minimum one (in forward and reverse order) and saves it in the set of minimizers (else clause starting on line 15). In subsequent iterations, when producing the \(m\)-mers of each \(k\)-mer, it only needs to compare the minimum \(m\)-mer from the last iteration to the newest \(m\)-mer of the current \(k\)-mer: if the new one is smaller than the current minimum, it is added to the minimizers set; otherwise the algorithm continues (if clause starting on line 7). Note that the else clause starting on line 15 is invoked in two cases: (1) when the algorithm is on its first iteration (idx = 0), and (2) when the current minimizer is at the front of the queue (idx = 1). Because this else clause does not get called too often on
average, in the average case, the complexity of computing minimizers with this algorithm is \(O(|s|)\), even though the worst case is \(O(k\cdot|s|)\), as mentioned in [31]. One can verify that the minimizers of Figure 1 are produced by Algorithm 1. To compute the minimizers from long reads, we use a \(k=9\) and an \(m=3\) (selected using standard validation set approach [17]).
```
Input: Sequence s and integers k and m
Output: Set of minimizers

Function ComputeMinimizer(s, k, m):
    minimizers = ∅
    queue = []   ▷ queue of all m-mers in the current window of size k
    idx = 0      ▷ index in queue of the current minimizer
    for i ← 1 to |s| − k + 1 do
        kmer = s[i : i+k]                    ▷ current window of size k
        if idx > 1 then
            queue.dequeue()                  ▷ discard m-mer from the front
            mmer = s[i+k−m : i+k]            ▷ new m-mer to add
            idx ← idx − 1                    ▷ shift index of current minimizer
            mmer = min(mmer, reverse(mmer))  ▷ lexicographically smallest of forward/reverse
            queue.enqueue(mmer)              ▷ add new m-mer to the back
            if mmer < queue[idx] then
                idx = k − m                  ▷ the new m-mer becomes the current minimizer
        else
            queue = []                       ▷ reset the queue, start from scratch
            idx = 0
            for j ← 1 to k − m + 1 do
                mmer = kmer[j : j+m]         ▷ compute each m-mer
                mmer = min(mmer, reverse(mmer))
                queue.enqueue(mmer)
                if mmer < queue[idx] then
                    idx = j                  ▷ keep track of (index of) current minimizer
        minimizers ← minimizers ∪ {queue[idx]}   ▷ add current minimizer
    return minimizers
```
**Algorithm 1** Minimizer Computation
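For concreteness, the following Python sketch computes the same set of minimizers in a simplified rescan-per-window fashion; it forgoes the queue bookkeeping of Algorithm 1, so it always runs in the worst-case \(O(k\cdot|s|)\) time rather than the average-case \(O(|s|)\). The function name and the toy amino-acid string are illustrative, not code from the paper.

```python
def compute_minimizers(seq: str, k: int = 9, m: int = 3) -> set:
    """Collect the minimizer of every length-k window of `seq`."""
    minimizers = set()
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        # canonical m-mer: lexicographically smaller of forward and reverse
        window_mmers = (
            min(kmer[j:j + m], kmer[j:j + m][::-1])
            for j in range(k - m + 1)
        )
        minimizers.add(min(window_mmers))
    return minimizers

print(sorted(compute_minimizers("MFVFLVLLPLVSSQC", k=9, m=3)))
```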
After generating the minimizers for each sequence, we generate the numerical feature vector representation (i.e., ViralVectors) that contains the frequency/count of the minimizers within each sequence. The length of the feature vector for the minimizers is the same as with \(k\)-mers: each minimizer is mapped to its bin in the vector, and the count of that bin is incremented. A sketch of this frequency-vector generation is shown below.
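The following Python sketch illustrates this step; the function name, the toy list of minimizers, and the use of \(m=3\) bins over the 21-letter amino acid alphabet are illustrative assumptions, not code from the paper.

```python
from itertools import product

def minimizer_frequency_vector(minimizers, alphabet: str, m: int = 3):
    """Fixed-length count vector over all |alphabet|^m possible m-mers."""
    # bin index for every possible m-mer (the 'bin matching' step)
    bins = {"".join(p): i for i, p in enumerate(product(alphabet, repeat=m))}
    vec = [0] * len(bins)
    for mmer in minimizers:
        vec[bins[mmer]] += 1
    return vec

amino_acids = "ACDEFGHIKLMNPQRSTVWXY"  # 21 symbols, as in the GISAID data
vec = minimizer_frequency_vector(["ACD", "CDE", "ACD"], amino_acids)
print(len(vec), sum(vec))  # 9261 bins (= 21**3), 3 counted minimizers
```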
Traditional machine learning models, such as Support Vector Machine (SVM) and Naive Bayes, have been shown to perform efficiently on smaller datasets [8; 29]. However, they are not very scalable to millions of sequences [2]. For this purpose, it is necessary to reduce the dimensionality of the ViralVectors (minimizers-based feature vectors) and the \(k\)-mers-based frequency vectors so that the overall model is scalable to "Big Data". Traditional methods for dimensionality reduction, such as principal component analysis, ridge regression, and lasso regression, are very expensive in terms of runtime and are not scalable to bigger datasets. Therefore, the scalability of machine learning algorithms is a major issue in real-world scenarios.
To deal with the scalability issue, one option is to use kernel-based algorithms that compute a gram matrix (similarity matrix) which can later be used as an input for kernel-based classifiers such as SVM. However, using the exact algorithm to compute the pair-wise distance between sequences can be very expensive. To make the kernels faster, we can use the so-called kernel trick.
Definition 1 (Kernel Trick): It is used to generate features for an algorithm that depends on the inner product between only the pairs of input vectors. It avoids the need to map the input data (explicitly) to a high-dimensional feature space.
The kernel trick relies on the following statement: _any positive definite function \(f(x,y)\), where \(x,y\in\mathcal{R}^{d}\), defines a lifting \(\phi\) and an inner product, which allows the inner product between the lifted data points to be computed quickly_ [40]. More formally: \(\langle\phi(x),\phi(y)\rangle=f(x,y)\). The major problem with the kernel approach is that, for large training data, it suffers from large initial computational and storage costs. To deal with this drawback, we use an approximate algorithm called Random Fourier Features (RFF) [40] in this paper. RFF maps the input data to a low-dimensional (randomized) feature space (a Euclidean inner product space). More formally: \(a:\mathcal{R}^{d}\rightarrow\mathcal{R}^{D}\). In this way, we approximate the inner product between a pair of transformed points. More formally:
\[f(x,y)=\langle\phi(x),\phi(y)\rangle\approx a(x)^{\prime}a(y) \tag{1}\]
In Equation (1), \(a\) is the low-dimensional representation (unlike the lifting \(\phi\)). In this way, we can transform the original feature vectors with \(a\), which acts as an approximate low-dimensional representation of the original feature vector. This low-dimensional feature embedding can then be used as input for classification, clustering, and regression tasks. Note that we apply RFF to both the \(k\)-mers-based and ViralVectors-based embeddings to make them scalable to multi-million-sequence data. The dimensionality of the approximate representation (from RFF) is taken as \(500\) (decided using the standard validation set approach [17]).
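As a minimal sketch of this step, one can use scikit-learn's RBFSampler, which implements RFF for the RBF kernel; the gamma value and the random stand-in data below are illustrative assumptions.

```python
import numpy as np
from sklearn.kernel_approximation import RBFSampler

rng = np.random.default_rng(0)
X = rng.random((1000, 9261))   # stand-in for minimizer frequency vectors

# Map to a 500-dimensional randomized feature space so that
# z(x) . z(y) approximates the RBF kernel k(x, y).
rff = RBFSampler(gamma=1.0, n_components=500, random_state=0)
Z = rff.fit_transform(X)
print(Z.shape)                 # (1000, 500)
```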
## 4 Experimental Setup
In this section, we first discuss the datasets that we use in the experiments. After that, we discuss the classification and clustering algorithms used in the experiments. In the end, we give details about the evaluation metrics for each algorithm. All experiments are conducted on an Intel(R) Xeon(R) CPU E7-4850 v4 @ 2.10GHz system running 64-bit Ubuntu (16.04.7 LTS Xenial Xerus) with 3023 GB of memory. The implementations of ViralVectors, Spike2Vec, and OHE are done in Python. For the classification algorithms, we use \(10\%\) of the data for training and \(90\%\) for testing [8]. The purpose of using a smaller training set is to evaluate the performance we can achieve while using minimal data for training. From the \(90\%\) testing set, we use \(10\%\) as a validation set (for hyperparameter tuning) and \(80\%\) as a held-out testing set. All hyperparameters, including \(k\) and \(m\), are tuned using this \(10\%\) validation set. This training and testing split process is repeated \(5\) times, and we report average results.
### Dataset Statistics
In this paper, we use three different datasets. The first dataset we use is a set of full-length consensus spike protein (amino acid) sequences from GISAID [6], which is the largest known database of SARS-CoV-2 sequences. In this data, we use the spike protein sequences of COVID-19 viral samples from all around the world (see Figure 2c). We collected a total of \(2,519,386\) spike protein sequences covering \(1327\) variants. Figure 2c contains the distribution of the well-represented (22) COVID-19 variants in our GISAID dataset, which comprises \(1,995,195\) sequences (after preprocessing) in total (out of \(\approx 2.5\) million sequences).
The second data source that we are using is retrieved from the NIAD Virus Pathogen Database and Analysis Resource (ViPR) [3; 39], which contains full-length spike protein sequences of different genera and species under the Coronavirus family, and the goal is to predict which host it is most likely to affect (humans, bats, camels, etc.) -- something which can be done fairly reliably using the spike sequence alone [3; 29]. The distribution of the affected hosts of this ViPR dataset is given in Figure 2a.
The third data source that we use is a collection of raw whole-genome sequencing read sets from nasal-swab PCR tests of COVID-19-infected humans, collected from the NCBI website 1. We collected \(4,387\) such sets of reads in total. The distribution of the variants (on a per-sample basis) in the NCBI short-read sets is given in Figure 2b. Note that for this last dataset, in order to assign a variant label to each sample (the first two sets of sequences have variants identified), we needed to align the corresponding set of reads to the reference genome and call the state-of-the-art Pango tool [38]. Note, however, that since ViralVectors is an alignment-free approach, we obtain a fixed-length feature vector directly from the reads themselves.
The SARS-CoV-2 reference genome sequence (INSDC accession number GCA_009858895.3, sequence MN908947) used in this study was obtained from the Ensembl COVID-19 browser database [25]. It is a complete genome of 29903 bp, the reference assembly of the viral RNA genome isolated from the first cases in Wuhan-Hu-1, China [54], and it has been widely used as the standard reference [16].
### Data Visualization
To see if there is any natural clustering in the data, we computed the 2D representation of the feature vectors using the t-distributed stochastic neighbor embedding (t-SNE) approach [50] and plot the 2D numerical data using scatter plots. The t-SNE plots for the ViPR dataset are given in Figure 3. We can observe that most of the hosts formed separate (sometimes multiple) clusters. This means that ViPR data are well separated. Another important point to note here is that in the case of ViralVectors (minimizer-based frequency vectors), the overall structure of the data remains the same while using only a fraction of information as compared to OHE. Note that the time complexity of t-SNE is \(O(n^{2})\). Therefore, it cannot be applied easily on 2.5 million GISAID sequences.
### Classification and Clustering Algorithms
After generating the numerical feature vector representation, the next step is to evaluate the quality of those feature vectors. For this purpose, we use different classification and clustering algorithms.
For the classification tasks, we use different machine learning (ML) algorithms: Naive Bayes (NB), Logistic Regression (LR), Ridge Classifier (RC), Multi-layer Perceptron (MLP), K-Nearest Neighbors (KNN) with \(k=5\) (decided using the standard validation set approach [17]), Random Forest (RF), and Decision Tree (DT).
We also use a model built with the sequential constructor from the Keras package (also called the Keras classifier). It contains a fully connected network with one hidden layer whose number of neurons equals the length of the feature vector. The activation function for the hidden layer is the rectifier (ReLU), while we use the softmax activation function for the output layer. We also use the efficient Adam gradient descent optimization approach with the "sparse categorical cross entropy" loss function because we are dealing with multi-class classification problems; it computes the cross-entropy loss between the labels and predictions. The batch size for the experiments is 100, and the number of epochs is 10 for the training of our DL model. Note that we use "sparse categorical cross entropy" rather than plain "categorical cross entropy" because we use integer labels rather than one-hot representations of the labels.
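A minimal Keras sketch matching this description; the input dimensionality and the number of classes below are placeholders (in practice they follow the embedding length and the label set).

```python
from tensorflow import keras

def build_classifier(input_dim: int, num_classes: int) -> keras.Model:
    # one hidden layer with as many neurons as the feature-vector length
    model = keras.Sequential([
        keras.Input(shape=(input_dim,)),
        keras.layers.Dense(input_dim, activation="relu"),
        keras.layers.Dense(num_classes, activation="softmax"),
    ])
    # integer labels, hence sparse categorical cross entropy
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_classifier(input_dim=500, num_classes=22)
# model.fit(X_train, y_train, batch_size=100, epochs=10)
```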
For clustering analysis, the goal is to group the data into subgroups that share some degree of similarity. For clustering purposes, we use the \(k\)-means algorithm, and we use the Elbow method to select the optimal number of clusters [2]: clustering is performed for different numbers of clusters (ranging from 2 to 100) to examine the trade-off between the runtime and the sum of squared error (distortion score). For the GISAID data, we take 22 as the optimal number of clusters. For the ViPR data, we use
Figure 2: (a) Host distribution in the ViPR dataset, (b) Variant distribution in the NCBI short read dataset, and (c) Variants distribution in the GISAID dataset.
18 as an optimal number of clusters. Similarly, for the NCBI short-reads data, we use 7 as the optimal number of clusters.
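A sketch of the Elbow procedure with scikit-learn's KMeans; the stand-in data and the candidate cluster counts below are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
Z = rng.random((5000, 500))    # stand-in for the RFF-reduced embeddings

# Elbow method: distortion (inertia) vs. number of clusters.
for n_clusters in (2, 7, 18, 22, 50, 100):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(Z)
    print(n_clusters, km.inertia_)
```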
### Evaluation Metrics
To evaluate the performance of ViralVectors, we perform classification and clustering. We report average accuracy, precision, recall, weighted F1, macro F1, and ROC-AUC. For metrics designed for binary classification, we apply the one-vs-rest approach to use them for multi-class classification. We also show the runtime of different classification algorithms. We ran experiments 5 times for the classification task and reported average results. To evaluate the clustering method, we use F1 (weighted), Silhouette Coefficient [42], Calinski-Harabasz Score [10], and Davies-Bouldin Score [15].
1. The Silhouette Coefficient refers to an approach that is used for the validation and interpretation of consistency within clusters of a given dataset. Its value ranges from \(-1\) to \(1\), where \(1\) indicates the best and \(-1\) the worst clustering.
2. The Calinski-Harabasz Score is the ratio between the within-cluster dispersion and the between-cluster dispersion (a higher score is better).
Figure 4: t-SNE plots for the GISAID data drawn using ViralVectors with (a) original variants as labels, and (b) labels from \(k\)-means.
Figure 3: t-SNE plots for ViPR dataset using (a) OHE, (b) Spike2Vec, and (c) ViralVectors.
3. The Davies-Bouldin Score computes the average similarity between clusters. In this metric, similarity is a measure that compares the distance between clusters with the size of the clusters themselves (a lower score is better, as it means that the clusters are well separated from each other).
### Baseline Models
We use different baseline and recent state-of-the-art (SOTA) methods (designed for SARS-CoV-2 sequences) to compare the results with ViralVectors. The baseline model that we are using is One Hot Embedding (OHE) [29] while the recent SOTA methods are Spike2Vec [6], PWM2Vec [3], and Pango Tool [38].
#### 4.5.1 One Hot Embedding (OHE) [29]
Since most machine learning methods do not work directly with biological sequences, it is important to convert them into a numerical representation. A traditional method to convert sequential information into a numerical representation is one-hot embedding [8; 29]. Given a finite set of symbols in a sequence (_e.g._, a spike sequence), we call this set an alphabet, denoted by \(\Sigma\). In the GISAID amino acid sequences, for example, we have 21 unique characters "_ACDEFGHIKLMNPQRSTVWXY_" (_i.e._, amino acids). To design a fixed-length feature vector representation, we generate a binary vector of length 21 for each amino acid, which contains a value of 1 at the position of that specific character and zeros everywhere else. In the end, we concatenate all these vectors to get the final feature vector representation for a given sequence. In the GISAID amino acid sequences, since the length of each spike amino acid sequence is 1273, the length of each OHE-based vector is \(1273\times 21=26,733\) (more detail on the dataset can be found in Section 4.1). For the ViPR data, since the length of each spike amino acid sequence (after alignment) is 3498 (and the number of unique characters is 24), the length of the OHE vector is \(3498\times 24=83,952\). In the case of the NCBI raw short-read sequencing data, OHE does not apply, since we have variable-length unmapped reads rather than a single fixed-length sequence. After generating the feature vectors, we can give these vectors as input to machine learning algorithms for classification and clustering purposes.
Remark 1: Note that one problem with OHE is that it required all sequences in a data to be of fixed-length [2; 8].
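A minimal sketch of this encoding; the toy sequence below is illustrative.

```python
import numpy as np

def one_hot_encode(seq: str, alphabet: str) -> np.ndarray:
    """Concatenate one per-character indicator vector of length |alphabet|."""
    index = {ch: i for i, ch in enumerate(alphabet)}
    out = np.zeros((len(seq), len(alphabet)))
    out[np.arange(len(seq)), [index[ch] for ch in seq]] = 1.0
    return out.ravel()

alphabet = "ACDEFGHIKLMNPQRSTVWXY"
print(one_hot_encode("MFVF", alphabet).shape)  # (84,) = 4 * 21
```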
#### 4.5.2 Spike2Vec [6]
Since OHE does not work with variable-length sequences, a popular alignment-free method is using \(k\)-mers to preserve the order of amino acids and then
generating a fixed-length feature vector that contains the frequency of each \(k\)-mer in a virome sequence. In this setting, the first step is to compute the substrings (called mers) of length \(k\), where \(k\) is a user-defined parameter. The \(k\)-mers are generated using a sliding window approach with an increment of 1 (see Figure 1). The total number of \(k\)-mers that can be generated from a virome sequence is \(N-k+1\), where \(N\) is the length of the sequence.
Fixed-length Representation: Since each virome sequence can have a different number of \(k\)-mers, it is important to generate a fixed-length numerical representation so that classification and clustering algorithms can be applied. For this purpose, we design a feature vector of length \(|\Sigma|^{k}\) (where \(\Sigma\) is the alphabet and \(k\) is the user-defined parameter for \(k\)-mers) that contains the frequency/count of each \(k\)-mer within a sequence. In this paper, we take \(k=3\) for all experiments unless specifically mentioned otherwise (decided using the standard validation set approach [17]). In the GISAID dataset, since the alphabet size is 21, the length of the Spike2Vec-based feature vector is \(21^{3}=9261\). For the ViPR dataset, the length of the Spike2Vec-based vector is \(25^{3}=15625\), and for the NCBI short-read data, the length of the Spike2Vec-based vector is \(24^{3}=13824\).
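A sketch of the sliding-window \(k\)-mer extraction; the toy sequence below is illustrative.

```python
def kmers(seq: str, k: int = 3):
    """All overlapping k-mers (sliding window, step 1): N - k + 1 of them."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

print(kmers("MFVFLV", 3))  # ['MFV', 'FVF', 'VFL', 'FLV']
```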
#### 4.5.3 PWM2Vec [3]
When using the Spike2Vec method, the frequency vectors obtained are of comparatively low dimensionality but are still high dimensional. Moreover, while generating the frequency vectors, matching the \(k\)-mers to the appropriate location/bin in the vector (bin matching) can be computationally expensive. To solve these issues, PWM2Vec [3] can be used. It is a recently proposed method for producing a fixed-length numerical feature vector based on the well-known position weight matrix (PWM) notion [46]. PWM2Vec creates a PWM from the sequence's \(k\)-mers, and the final feature vector contains the score of each \(k\)-mer in the PWM. This enables the method to use the \(k\)-mers' ability to capture localization information while also capturing the significance of each amino acid's position in the sequence (information that is lost when computing a \(k\)-mer frequency vector). By combining these pieces of information, a compact and general feature embedding can be created that can be used for a variety of downstream machine-learning tasks.
#### 4.5.4 Pango Tool [38]
For clustering purposes, we also use the state-of-the-art clustering benchmark, the Pango tool [38]. Since the Pango tool takes multiply aligned sequences as input, we needed to align each read set to the reference genome, call (genomic) variants, and introduce these variants into the reference sequence to generate a consensus sequence that represents this particular sample -- the pipeline is available as a Snakefile [35] in our shared code repository above. The SARS-CoV-2 reference genome sequence (INSDC accession \(GCA\_009858895.3\), sequence MN908947) used in this study was obtained from the Ensembl COVID-19 browser database [27; 28]. It is a complete genome of 29903 bp, the reference assembly of the viral RNA genome isolated from the first cases in Wuhan-Hu-1, China [54], and it has been widely used as the standard reference [44].
## 5 Results and Discussion
In this section, we show the results for different classification and clustering algorithms on all feature vector embedding approaches for all datasets.
### Classification Results
We start by showing results for the classification of the GISAID dataset. Table 3 shows the results for different embedding methods and classification algorithms (naive Bayes, logistic regression, ridge classifier) on the GISAID dataset for the classification of variants. We can observe that the Keras classifier with ViralVectors-based embedding (minimizer with RFF) outperforms the other embedding methods for all evaluation metrics. However, in terms of runtime, Ridge Classifier with ViralVectors is performing better than all other methods.
To show the generalizability of our proposed feature embedding (ViralVectors), we use the same feature embeddings with country and continent information separately as class labels and perform classification using the same experimental settings (as in Table 3). The results for country classification and continent classification are given in Table 1 and Table 2, respectively. We can observe that in both scenarios, the DL-based classifier with ViralVectors-based feature embedding outperforms all other methods for all evaluation metrics. Note that we computed results for only 3 classifiers for the GISAID dataset due to the high computational cost of other classifiers, such as MLP and KNN: since they were taking too long to compute results on this \(\approx 2.5\) million spike sequence dataset, we only used the classifiers that were best in terms of runtime. Table 4 shows the classification results for the ViPR dataset. Since the size of this dataset is smaller, we also use the classifiers that were not able to compute results for the GISAID dataset. We can observe that the ViralVectors-based embedding with the logistic regression classifier outperforms the other embedding approaches for all but one evaluation metric.
We can see that for the ViPR dataset, PWM2Vec is giving better clustering results in terms of all but one internal clustering evaluation metric. For the NCBI short reads data, although Pangolin is better in terms of Silhouette Coefficient (higher value is better), the ViralVectors-based feature embedding performs better in terms of Calinski-Harabasz Score (higher value is better) and Davies-Bouldin Score (a lower value is better).
We also use F1 (weighted) to further evaluate the clustering performance of \(k\)-means on the GISAID dataset. Based on the F1 score, since we do not have the ground truth clustering labels, we assign a label to every cluster based on the majority variant in that cluster. The F1 scores for the top 5 variants are shown in Table 6.
We can observe that ViralVectors-based feature embedding is showing the highest F1 score as compared to the other embedding methods. Note that the F1 score is on the lower side for all embedding methods in the case of the Beta variant. This is because of the lower proportion of the Beta variant in the dataset as given in Figure 2c. Since the number of sequences corresponding to the Beta variant as a label is fewer, the underlying clustering algorithm is unable to capture all the patterns in the sequences.
To visually evaluate the performance of \(k\)-means clustering, we compute the 2D numerical representation for a subset of GISAID data using the t-SNE algorithm. For each corresponding sequence, we color the actual variants
\begin{table}
\begin{tabular}{c l c c c c c c|c} \hline \hline
Embed. Method & ML Algo. & Acc. & Prec. & Recall & F1 weigh. & F1 Macro & ROC-AUC & Train. runtime (sec.) \\ \hline \hline
\multirow{4}{*}{OHE [29]} & NB & 0.11 & 0.44 & 0.11 & 0.11 & 0.10 & 0.55 & 1308.4 \\
 & LR & 0.40 & 0.46 & 0.40 & 0.33 & 0.15 & 0.55 & 2361.8 \\
 & RC & 0.40 & 0.38 & 0.40 & 0.31 & 0.11 & 0.54 & 746.4 \\
 & Keras Classifier & 0.49 & 0.53 & 0.49 & 0.43 & 0.24 & 0.6 & 28914.8 \\ \hline
\multirow{4}{*}{Spike2Vec [6]} & NB & 0.13 & 0.41 & 0.13 & 0.15 & 0.10 & 0.55 & 1315.3 \\
 & LR & 0.40 & 0.45 & 0.40 & 0.33 & 0.16 & 0.55 & 2736.8 \\
 & RC & 0.39 & 0.37 & 0.39 & 0.31 & 0.11 & 0.54 & 779.4 \\
 & Keras Classifier & 0.50 & 0.54 & 0.50 & 0.45 & 0.28 & 0.59 & 10383.6 \\ \hline
\multirow{4}{*}{PWM2Vec [3]} & NB & 0.14 & 0.42 & 0.14 & 0.16 & 0.10 & 0.54 & 601.5 \\
 & LR & 0.40 & 0.46 & 0.40 & 0.34 & 0.16 & 0.56 & 860.7 \\
 & RC & 0.41 & 0.38 & 0.39 & 0.32 & 0.12 & 0.55 & **140.4** \\
 & Keras Classifier & 0.50 & 0.55 & 0.50 & 0.46 & 0.29 & 0.60 & 466.8 \\ \hline
\multirow{4}{*}{ViralVectors} & NB & 0.13 & 0.43 & 0.13 & 0.17 & 0.11 & 0.56 & 2683.72 \\
 & LR & 0.43 & 0.48 & 0.41 & 0.35 & 0.17 & 0.57 & 4706.32 \\
 & RC & 0.40 & 0.39 & 0.40 & 0.32 & 0.14 & 0.54 & 1492.79 \\
 & Keras Classifier & **0.51** & **0.56** & **0.51** & **0.47** & **0.31** & **0.63** & 13616.17 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Country Classification Results (10% training set and 90% testing set) for 27 countries (2384646 spike sequences) in the GISAID dataset. The best values are shown in bold.
(true labels) for that sequence in Figure 4 (a) and compare it with the labels obtained after applying \(k\)-means clustering (on the same 2D t-SNE based representation) in Figure 4 (b). We can observe that with the \(k\)-means, most of the variants are forming separate clusters. One interesting insight is that some variants form more than one cluster. This means that they may be going away from that original variant and developing a new variant, which may be at some initial stage.
We also show the contingency tables for all datasets and embedding methods after applying \(k\)-means. The contingency tables for the GISAID data are shown in Table 7, Table 8, Table 9 for OHE, Spike2Vec, and ViralVectors, respectively.
\begin{table}
\begin{tabular}{c l c c c c c|c|c} \hline \hline Embed. & \multirow{2}{*}{ML Algo.} & \multirow{2}{*}{Acc.} & \multirow{2}{*}{Prec.} & \multirow{2}{*}{Recall} & \multicolumn{2}{c}{F1} & \multicolumn{2}{c}{F1} & \multicolumn{2}{c|}{ROC-} & \multicolumn{1}{c}{Train.} \\ Method & & & & & & weigh. & Macro & AUC & runtime \\ \hline \hline \multirow{8}{*}{OHE [29]} & NB & 0.96 & 0.96 & 0.96 & 0.95 & 0.60 & 0.80 & 74.26 \\ & MLP & 0.95 & 0.95 & 0.95 & 0.95 & 0.50 & 0.78 & 88.76 \\ & KNN & 0.92 & 0.90 & 0.92 & 0.90 & 0.31 & 0.66 & 164.42 \\ & RF & 0.96 & 0.96 & 0.96 & 0.95 & 0.61 & 0.81 & 2.76 \\ & LR & 0.96 & 0.96 & 0.95 & 0.94 & 0.62 & 0.82 & 4.80 \\ & DT & 0.94 & 0.94 & 0.94 & 0.94 & 0.48 & 0.82 & 2.17 \\ \hline \multirow{8}{*}{Spike2Vec [6]} & NB & 0.95 & 0.95 & 0.95 & 0.95 & 0.42 & 0.71 & 5.45 \\ & MLP & 0.94 & 0.93 & 0.94 & 0.94 & 0.41 & 0.73 & 8.65 \\ & KNN & 0.92 & 0.91 & 0.92 & 0.90 & 0.31 & 0.65 & 1.07 \\ & RF & 0.95 & 0.94 & 0.95 & 0.95 & 0.46 & 0.72 & 0.42 \\ & LR & 0.95 & 0.94 & 0.95 & 0.95 & 0.47 & 0.73 & 0.81 \\ & DT & 0.93 & 0.92 & 0.93 & 0.93 & 0.38 & 0.74 & 0.26 \\ \hline \multirow{8}{*}{PWMZVec [3]} & NB & 0.90 & 0.93 & 0.90 & 0.91 & 0.51 & 0.78 & 1.27 \\ & MLP & 0.94 & 0.95 & 0.95 & 0.95 & 0.52 & 0.79 & 13.32 \\ & KNN & 0.93 & 0.93 & 0.93 & 0.92 & 0.51 & 0.74 & 6.33 \\ & RF & 0.93 & 0.94 & 0.94 & 0.94 & 0.63 & 0.82 & 3.09 \\ & LR & 0.95 & 0.94 & 0.95 & 0.95 & 0.62 & 0.81 & 26.77 \\ & DT & 0.94 & 0.95 & 0.95 & 0.95 & 0.50 & 0.81 & 1.95 \\ \hline \multirow{8}{*}{ViralVectors} & NB & 0.95 & 0.94 & 0.95 & 0.94 & 0.43 & 0.71 & 5.35 \\ & MLP & 0.95 & 0.93 & 0.94 & 0.93 & 0.44 & 0.72 & 7.28 \\ & KNN & 0.90 & 0.88 & 0.90 & 0.88 & 0.25 & 0.63 & 1.05 \\ & RF & 0.95 & 0.95 & 0.95 & 0.95 & 0.64 & 0.82 & 0.49 \\ & LR & 0.97 & **0.97** & **0.97** & **0.97** & **0.65** & **0.83** & 0.44 \\ & DT & 0.95 & 0.92 & 0.92 & 0.92 & 0.38 & 0.70 & **0.24** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Host Classification Results on ViPR data (10% training and 90% testing) for 3348 sequences. The best values are shown in bold.
\begin{table}
\begin{tabular}{l l c c c c c|c} \hline \hline Embed. & \multirow{2}{*}{ML Algo.} & \multirow{2}{*}{Acc.} & \multirow{2}{*}{Prec.} & \multirow{2}{*}{Recall} & \multicolumn{2}{c}{F1} & \multicolumn{2}{c|}{F1} & \multicolumn{2}{c|}{F1} & \multicolumn{2}{c|}{ROC-} & \multicolumn{1}{c}{Train.} \\ Method & Algo. & & & & weigh. & Macro & AUC & runtime \\ \hline \hline \multirow{8}{*}{OHE [29]} & NB & 0.30 & 0.58 & 0.30 & 0.38 & 0.18 & 0.59 & 2164.5 \\ & LR & 0.57 & 0.50 & 0.57 & 0.49 & 0.19 & 0.57 & 2907.5 \\ & RC & 0.56 & 0.48 & 0.56 & 0.48 & 0.17 & 0.56 & 1709.2 \\ & Keras Classifier & 0.61 & 0.58 & 0.61 & 0.56 & 0.24 & 0.61 & 28971.5 \\ \hline \multirow{8}{*}{Spike2Vec [6]} & NB & 0.42 & 0.79 & 0.42 & 0.52 & 0.39 & 0.68 & 2056.0 \\ & LR & 0.68 & 0.69 & 0.68 & 0.65 & 0.49 & 0.69 & 2429.1 \\ \cline{1-1} & RC & 0.67 & 0.68 & 0.67 & 0.63 & 0.44 & 0.67 & 1294.2 \\ \cline{1-1} & Keras Classifier & 0.86 & 0.87 & 0.86 & 0.83 & 0.69 & 0.83 & 13296.2 \\ \hline \multirow{8}{*}{PWMZVec [3]} & NB & 0.43 & 0.79 & 0.43 & 0.53 & 0.40 & 0.68 & 590.13 \\ \cline{1-1} & LR & 0.69 & 0.69 & 0.69 & 0.66 & 0.50 & 0.69 & 858.06 \\ \cline{1-1} & RC & 0.70 & 0.70 & 0.70 & 0.66 & 0.48 & 0.69 & **138.74** \\ \cline{1-1} & Keras Classifier & 0.80 & 0.78 & 0.80 & 0.78 & 0.47 & 0.74 & 460.28 \\ \hline \multirow{8}{*}{ViralVectors} & NB & 0.46 & 0.81 & 0.46 & 0.55 & 0.42 & 0.71 & 2014.5 \\ \cline{1-1} & LR & 0.71 & 0.70 & 0.71 & 0.67 & 0.52 & 0.71 & 2328.4 \\ \cline{1-1} & RC & 0.71 & 0.70 & 0.71 & 0.66 & 0.49 & 0.70 & 1102.3 \\ \cline{1-1} & Keras Classifier & **0.87** & **0.88** & **0.87** & **0.85** & **0.71** & **0.85** & 11234.1 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Variants Classification Results for GISAID data (10% training and 90% testing) for top 22 variants (1995195 spike sequences). The best values are shown in bold.
## 6 Statistical Analysis
We use information gain (IG) to evaluate the importance of different amino acid positions in the prediction of class labels (hosts). The IG is defined as follows: \(IG(Class,position)=H(Class)-H(Class|position)\), where \(H=\sum_{i\in Class}-p_{i}\log p_{i}\) is the entropy and \(p_{i}\) is the probability of class \(i\). Figure 5 (d) shows the IG values (for the ViPR dataset) of different amino acid positions with respect to the class labels. We can observe that most of
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
 & \multicolumn{5}{c}{F1 Score (Weighted) for Different Variants} \\ \cline{2-6}
Methods & Alpha & Beta & Delta & Gamma & Epsilon \\ \hline \hline
OHE [29] & 0.041 & 0.041 & 0.544 & 0.643 & 0.057 \\
Spike2Vec [6] & 0.997 & 0.034 & 0.854 & 0.968 & 0.221 \\
PWM2Vec [3] & 0.998 & 0.043 & 0.859 & 0.969 & 0.237 \\
ViralVectors & **0.999** & **0.056** & **0.867** & **0.970** & **0.246** \\ \hline \hline
\end{tabular}
\end{table}
Table 6: F1 score by applying the \(k\)-means clustering algorithm on all 1327 variants (2519386 spike sequences) in the GISAID dataset. The best values are shown in bold.
the amino acid positions have IG values on the higher side, which suggests that the machine learning models can achieve higher predictive accuracy, since the majority of positions carry class-discriminative information.
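A minimal sketch of this IG computation for a single alignment position; the toy column and host labels below are illustrative.

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def information_gain(column, labels):
    """IG(Class, position) = H(Class) - H(Class | position)."""
    h_class = entropy(labels)
    h_cond = 0.0
    for aa in np.unique(column):
        mask = column == aa
        h_cond += mask.mean() * entropy(labels[mask])
    return h_class - h_cond

col = np.array(list("AACCA"))                              # one position
y = np.array(["human", "human", "bat", "bat", "human"])    # host labels
print(information_gain(col, y))
```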
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c c c c c c c c c c c} \hline \hline & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 \\ \hline \hline bat & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 351 & 0 & 0 & 0 & 0 & 0 & 0 \\ avian & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 366 & 0 & 0 \\ bovine & 0 & 0 & 0 & 0 & 0 & 259 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ camel & 0 & 154 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ canine & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 17 & 0 & 0 & 0 \\ feline & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 5 & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ cattle & 0 & 11 & 4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 102 & 0 \\ dolphin & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 53 & 0 \\ equine & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 135 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ fish & 57 & 20 & 4 & 0 & 27 & 0 & 1 & 7 & 0 & 2 & 0 & 3 & 0 & 2 & 3 & 1 & 1 & 0 \\ hedgehog & 0 & 63 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ human & 0 & 0 & 5 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 3 & 0 \\ pangolin & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 199 & 2 & 0 & 0 & 0 & 0 & 0 & 0 \\ python & 0 & 117 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ rat & 47 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 45 & 0 & 0 & 0 & 0 & 0 & 0 \\ swine & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 25 & 0 & 0 & 0 & 0 & 0 & 0 \\ turtle & 23 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 12 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ wascal & 0 & 0 & 0 & 79 & 1 & 5 & 1 & 0 & 0 & 0 & 5 & 0 & 2 & 28 & 0 & 0 & 5 & 5 & 0 & 0 \\ \hline \hline \end{tabular}
\end{table}
Table 10: Contingency tables of variants vs clusters after applying k-means on the OHE-based feature embedding on ViPR data.
Most of the amino acids are contributing towards the prediction of class labels. This type of analysis helps us to understand the importance of different features in the data, and we can eventually ignore or remove the uninformative features from the data in order to improve the predictive performance.
### SHAP Analysis
We also use SHAP analysis [32] to understand how significant each factor is in determining the final label prediction of the model outputs. For this purpose, SHAP analysis runs a large number of predictions and compares a variable's impact against the other features. The SHAP analysis (for the ViPR dataset)
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c c c c c} \hline \hline & & & & & & & & & & & & & & & & & & & & & & & \\ \cline{2-15} Host & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 \\ \hline \hline bat & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 291 & 0 & 0 & 0 & 0 & 0 \\ avian & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 139 & 0 & 0 & 0 & 0 & 0 \\ bovine & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 237 & 0 & 0 \\ camel & 0 & 118 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ canine & 0 & 0 & 79 & 1 & 5 & 1 & 0 & 0 & 0 & 5 & 0 & 2 & 25 & 0 & 6 & 5 & 0 & 0 \\ feline & 0 & 0 & 0 & 0 & 0 & 259 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ cattle & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 28 & 0 & 0 & 0 & 0 & 0 \\ dolphin & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 102 & 0 & 0 \\ equine & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ fish & 0 & 214 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ hedgehog & 9 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 53 & 0 & 0 \\ human & 0 & 0 & 0 & 0 & 0 & 0 & 4 & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 28 & 0 & 0 \\ pangolin & 90 & 33 & 6 & 5 & 0 & 6 & 1 & 9 & 0 & 7 & 0 & 2 & 13 & 2 & 0 & 2 & 19 & 4 & 1 & 6 \\ python & 34 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 45 & 0 & 0 & 0 & 0 & 0 & 0 \\ rat & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 20 & 0 & 0 & 0 & 0 & 0 \\ swine & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 45 & 0 & 0 & 0 & 0 & 0 & 0 \\ turtle & 3 & 0 & 0 & 0 & 0 & 27 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ weasel & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 129 & 0 & 0 \\ \hline \hline
\end{tabular}
\end{table}
for different feature embeddings (Spike2Vec, PWM2Vec, and ViralVectors) is shown in Figure 5. Note that for OHE, we got memory errors because of the high dimensionality of the feature vectors, which is why we have not included a SHAP analysis figure for OHE. We can observe that for the human label, the majority of the top contributing amino acids take part, which shows that humans are easier to classify compared to the other labels. For Spike2Vec and ViralVectors, the label Bat is the second most important host;
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & \multicolumn{6}{c}{k-means (Cluster IDs)} \\ \cline{2-7} Variant & 0 & 1 & 2 & 3 & 4 & 6 & 7 \\ \hline \hline A.2 & 50 & 150 & 0 & 0 & 86 & 4 & 0 \\ B.1 & 45 & 739 & 2 & 0 & 260 & 15 & 3 \\ B.1.1 & 25 & 126 & 0 & 1 & 73 & 2 & 3 \\ B.1.1.7 & 52 & 596 & 1 & 0 & 547 & 6 & 0 \\ B.1.177 & 54 & 513 & 0 & 0 & 452 & 0 & 0 \\ B.1.2 & 6 & 167 & 0 & 0 & 51 & 1 & 0 \\ B.1.617.2 & 26 & 141 & 31 & 0 & 114 & 45 & 0 \\ \hline \hline \end{tabular}
\end{table}
Table 15: Contingency tables of variants vs clusters after applying k-means on the ViralVectors-based feature embedding on NCBI short read data.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & \multicolumn{6}{c}{k-means (Cluster IDs)} \\ \cline{2-7} Variant & 0 & 1 & 2 & 3 & 4 & 6 & 7 \\ \hline \hline A.2 & 91 & 0 & 14 & 0 & 185 & 0 & 0 \\ B.1 & 209 & 0 & 23 & 0 & 827 & 2 & 3 \\ B.1.1 & 70 & 1 & 13 & 0 & 142 & 3 & 1 \\ B.1.1.7 & 562 & 0 & 14 & 1 & 625 & 0 & 0 \\ B.1.177 & 425 & 0 & 7 & 0 & 587 & 0 & 0 \\ B.1.2 & 18 & 0 & 3 & 0 & 204 & 0 & 0 \\ B.1.617.2 & 93 & 3 & 28 & 30 & 162 & 0 & 41 \\ \hline \hline \end{tabular}
\end{table}
Table 13: Contingency tables of variants vs clusters after applying k-means on the OHE-based feature embedding on NCBI short read data.
for PWM2Vec, the label Swine is the second most important host. This type of analysis can help us to decide which labels we should focus more on to increase the predictive performance of the underlying machine learning classifiers. The code for SHAP analysis is available online 2.
Footnote 2: [https://github.com/slundberg/shap](https://github.com/slundberg/shap)
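A minimal sketch of how such an analysis can be run with the shap package; the stand-in data and the choice of a random forest model are illustrative assumptions (the paper does not state which trained model backs its SHAP runs).

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 20))             # stand-in embedding matrix
y = rng.integers(0, 3, size=200)      # stand-in host labels

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)   # exact and fast for tree ensembles
shap_values = explainer.shap_values(X)  # per-class attribution per feature

shap.summary_plot(shap_values, X)       # global feature-importance view
```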
Figure 5: SHAP Analysis (for ViPR dataset) for top amino acids using different embedding methods (a) Spike2Vec, (b) PWM2Vec, and (c) ViralVectors. Figure (d) shows Information Gain values for the ViPR dataset.
## 7 Conclusion
We propose an efficient, scalable, and compact feature embedding method, called ViralVectors, that can encode the sequential information of viromes into fixed-length vectors. Results for different datasets with multiple classification and clustering algorithms show that ViralVectors is not only scalable to millions of sequences but is also a general approach that can be applied in many settings, and it outperforms traditional methods in most cases. One possible extension for the future is to extract multiple minimizers from each short read in the case of the NCBI data and then compare this with the ViralVectors-based embedding in which we take just one minimizer from each short read, regardless of its size. Another possible extension is to propose an approximate algorithm to generate the frequency vector to further reduce the computational overhead.
|
2303.09963 | Microstrip Patch Antenna Design at 10 GHz for X Band Applications | Microstrip patch antennas are used in satellite imaging systems, wireless
communication equipment, military radios, GPS (Global Positioning System) and
GSM (Global System for Mobile Communications) applications. Its advantages are
its small size and light weight, thin structure, low power consumption, use in
dual frequency applications, and patching in various geometric shapes.
Developing technology has facilitated and accelerated the production of
microstrip antennas. In this study, microstrip antenna design operating at 10
GHz frequency for X band applications has been made. X band is used for air
traffic control, weather traffic control, vessel traffic control, defense
tracking and vehicle speed detection, terrestrial communications and
networking, space communications and amateur radio. HFSS program was used in
antenna design. AWR program was used to find transmission line parameters. In
addition, MATLAB program was used to calculate some parameters. First of all,
information is given about the working principle of the antenna, the selected
dielectric layer and the working frequency. Schematic drawings of the designed
antenna were made from above and from the side. S11 reflection coefficient
magnitude graphs are drawn below and above the operating frequency. The
radiation pattern is drawn for the E-plane and H-plane at the operating
frequency. 3-D (dimensional) plot of antenna gain at operating frequency is
drawn. The simulations performed have shown that the designed antenna works
successfully. | Mehmet Karahan, Mertcan Inal, Alperen Dilmen, Furkan Lacinkaya, Ahmet Nuri Akay, Cosku Kasnakoglu | 2023-03-17T13:36:36Z | http://arxiv.org/abs/2303.09963v5 | # Microstrip Patch Antenna Design at 10
###### Abstract
Microstrip patch antennas are used in satellite imaging systems, wireless communication equipment, military radios, GPS (Global Positioning System) and GSM (Global System for Mobile Communications) applications. Its advantages are its small size and light weight, thin structure, low power consumption, use in dual frequency applications, and patching in various geometric shapes. Developing technology has facilitated and accelerated the production of microstrip antennas. In this study, microstrip antenna design operating at 10 GHz frequency for X band applications has been made. X band is used for air traffic control, weather traffic control, vessel traffic control, defense tracking and vehicle speed detection, terrestrial communications and networking, space communications and amateur radio. HFSS program was used in antenna design. AWR program was used to find transmission line parameters. In addition, MATLAB program was used to calculate some parameters. First of all, information is given about the working principle of the antenna, the selected dielectric layer and the working frequency. Schematic drawings of the designed antenna were made from above and from the side. S11 characteristic graphs are drawn below and above the operating frequency. The radiation pattern is drawn for the E-plane and H-plane at the operating frequency. 3-D (dimensional) plot of antenna gain at operating frequency is drawn. The simulations performed have shown that the designed antenna works successfully.
antenna radiation patterns, antenna measurements, gain measurement, slot antennas, wireless communication
## 1 Introduction
Microstrip patch antennas are low-profile antennas. They are used in low-profile applications at frequencies above 100 MHz (Singh & Tripathi, 2011). A metal patch mounted above a ground plane with a dielectric material in between forms a microstrip antenna (Deepa et al., 2022). The patch on the upper surface is made of conductive materials such as copper or gold (Bisht et al., 2014). The geometric shape of the conductor may vary according to the design requirements; shapes such as square, rectangle, ellipse, and ring can be used (Shome et al., 2019). Microstrip antennas can be used in different applications such as aircraft, spacecraft, satellites, missiles, mobile radios, and wireless communications (Mishra, 2016). Microstrip patch antennas can also be used in unmanned aerial vehicles due to their miniaturized dimensions (Karahan & Kasnakoglu, 2021).
In this research, a microstrip patch antenna design operating at 10 GHz frequency was carried out. A computer program called HFSS was used for antenna design. AWR program
was used to obtain the transmission line parameters, and the MATLAB program was used to make the necessary mathematical calculations. S11 characteristic graphs were drawn at intervals below and above the antenna's operating frequency. S11 is a measure of how much power is reflected back at the antenna port due to mismatch from the transmission line (Iqbal et al., 2021). The antenna's radiation pattern is drawn for the E-plane and H-plane at the 10 GHz operating frequency, and a 3-D plot of the antenna gain at 10 GHz is shown. The obtained simulation results prove that the designed microstrip patch antenna works successfully.
A typical microstrip patch antenna is shown in Figure 1. The dielectric substrate carrying the patch is non-magnetic. A small dielectric constant of the substrate causes the fringing fields to increase, which affects the radiation. In general, when designing the antenna, a dielectric constant between 2.2 and 12 is preferred (Hashim et al., 2022). The length L, width W and thickness H characterize this type of antenna.
### Patch antenna excitation
Transmission line feeding, coaxial cable feeding or inset (embedded) feeding can be used for patch excitation. In this research, inset feeding was used due to space constraints. In this method, the input impedance is lowered from its no-inset value \(Z_{in}(0)\) to the desired value \(Z_{in}(R)\) by recessing the feed point a distance R into the patch (Figure 2). Its formula is given in equation 1.
\[Z_{\rm in}(R)=\cos^{2}(\pi R/L)\,Z_{\rm in}(0) \tag{1}\]
### Working principle
Excitation of the conductive patch causes electromagnetic waves to travel from the edges of the patch toward the ground. Waves reflected from the ground propagate into space. The fields formed at the edges of the conductive patch are called fringing fields, and this phenomenon is called the fringing effect (Figure 3). The radiation of the
Figure 1: A typical microstrip patch antenna
Figure 2: Schematic representation of inset feeding.
antenna occurs as a result of this effect. Waves perpendicular to the patch cancel each other and do not radiate, while the waves fringing from the edges produce the radiation.
## 2 Specifying Selected Frequency and Dielectric Layer
It was stated in the design specifications that the communication system uses certain frequencies in the 10-12 GHz range, so 10 GHz within this range was chosen as the center frequency. For the dielectric layer, the RO4003 material was chosen because of its high-frequency performance, low loss and widespread use in microstrip antenna designs (Khan & Nema, 2012). The dielectric constant of this material is 3.4 and its loss tangent is 0.002.
## 3 Design Procedure
In this research, the HFSS program was used for simulation and modeling purposes. The AWR program was used to organize some graphs and find the transmission line parameters. In addition, the MATLAB program was used for some calculations.
First of all, the dimensions of the antenna were determined. The operating frequency of the antenna is determined by L (length). The center frequency is calculated approximately as in equation 2, where c is the speed of light.
\[f_{c}\approx\frac{c}{2L\sqrt{\varepsilon_{r}}}=\frac{1}{2L\sqrt{\mu_{0}\varepsilon_{0}\varepsilon_{r}}} \tag{2}\]
Equation 3 is obtained by solving equation 2 for L.
\[L\approx\frac{c}{2f_{c}\sqrt{\varepsilon_{r}}} \tag{3}\]
When the equations were solved using the MATLAB program, L = 7.96 mm was found. Another parameter of the antenna, w (the patch width), is determined by the following formula:
\[w=\frac{c}{2f_{c}}\sqrt{\frac{2}{\varepsilon_{r}+1}} \tag{4}\]
When the equations were solved with the MATLAB program, w = 9.94 mm.
Equation 5 was used to calculate h (the substrate height). When this equation was solved with MATLAB, it was found that h = 0.96 mm.
\[h=\frac{0.0606\,\lambda}{\sqrt{\varepsilon_{r}}} \tag{5}\]
As \(\varepsilon r\) (dielectric constant) decreases, the effective length of the antenna also changes due to the increase in fringing areas. There may be deviations in \(fc\) (center frequency) due to these
Figure 3: Schematic representation of fringing areas.
changes. Therefore, the antenna effective length (L\({}_{\rm eff}\)), normalized extension in length (\(\Delta\)L) and effective dielectric constant (\(\varepsilon r_{\it eff}\)) are additionally calculated below:
\[\varepsilon_{r,\mathrm{eff}}=\frac{\varepsilon_{r}+1}{2}+\frac{\varepsilon_{r}-1}{2}\left[1+12\,\frac{h}{w}\right]^{-1/2} \tag{6}\]
\[\Delta L=0.412\,h\,\frac{(\varepsilon_{r,\mathrm{eff}}+0.3)(w/h+0.264)}{(\varepsilon_{r,\mathrm{eff}}-0.258)(w/h+0.8)} \tag{7}\]
\[L_{\mathrm{eff}}=L+2\Delta L \tag{8}\]
When the above equations are solved with the MATLAB program, the results \(\varepsilon_{r,\mathrm{eff}}=3.1417\), \(\Delta L=0.4513\) mm, and \(L_{\mathrm{eff}}=8.8582\) mm are found.
For the inset (embedded) feeding, the following equation is obtained from equation (1), starting from the no-inset input impedance \(Z_{\rm in}(0)=204.75\,\Omega\).
\[R=\cos^{-1}\!\left(\sqrt{\frac{Z_{\rm in}(R)}{Z_{\rm in}(0)}}\right)\frac{L}{\pi} \tag{9}\]
Then, using the MATLAB program, R\(=2.6689\) mm was calculated. The width value at the embedded feed was calculated as \(\rm w=0.3313\) mm. For the design of the microstrip line, parameters such as line length, line width, line height were found by using the Microstrip section of the AWR program. These parameters are shown in Figure 4.
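For reference, the design equations above can be evaluated with a short script. The following Python sketch (the paper used MATLAB) implements Equations (3)-(9); the computed values may differ slightly from the reported ones depending on the exact constants used (e.g., the precise values taken for \(c\) and \(\varepsilon_{r}\)).

```python
import numpy as np

c = 3e8      # speed of light (m/s)
fc = 10e9    # center frequency (Hz)
er = 3.4     # dielectric constant of RO4003

L = c / (2 * fc * np.sqrt(er))                                   # Eq. (3)
w = (c / (2 * fc)) * np.sqrt(2 / (er + 1))                       # Eq. (4)
h = 0.0606 * (c / fc) / np.sqrt(er)                              # Eq. (5)

er_eff = (er + 1) / 2 + (er - 1) / 2 * (1 + 12 * h / w) ** -0.5  # Eq. (6)
dL = 0.412 * h * ((er_eff + 0.3) * (w / h + 0.264)) \
     / ((er_eff - 0.258) * (w / h + 0.8))                        # Eq. (7)
L_eff = L + 2 * dL                                               # Eq. (8)

Zin0, ZinR = 204.75, 50.0   # no-inset input impedance, target impedance
R = np.arccos(np.sqrt(ZinR / Zin0)) * L / np.pi                  # Eq. (9)

for name, val in [("L", L), ("w", w), ("h", h), ("dL", dL),
                  ("L_eff", L_eff), ("R", R)]:
    print(f"{name} = {val * 1e3:.4f} mm")
print(f"er_eff = {er_eff:.4f}")
```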
## 4 Top and Side Schematic Drawings of the Designed Antenna
The designed antenna is shown schematically in Figure 5, showing the design parameters and dimensions. It is seen that the total volume rule (1.6 cm x 1.6 cm x 1 mm) given in the design specifications is followed here. Of the values calculated in Section 3, all but L remained the same. The reason for the change in L is the change in \(L_{eff}\) due to the fringing fields, as emphasized earlier. L was found by tuning it in the HFSS program to achieve the center frequency, as shown by its graph in the following sections. The patch and ground plane
Figure 4: Calculation of microstrip line parameters in AWR.
parts shown in the figure are taken as PEC (perfect electrical conductor), and the thickness of the conductive surfaces is neglected.
## 5 Simulations
In this section, S11 characteristic graphs, antenna input impedance graphs, radiation patterns for the E plane and H plane, and antenna gain graphs are drawn.
### Plotting the S11 characteristic and showing the bandwidth by frequency in the range of 500 MHz below and above the operating frequency
When \(S11\) is plotted between 500 MHz below and above the determined operating frequency of 10 GHz, as seen in Figure 6, the operating frequency of the antenna has changed due to the fringing areas. Fringing areas cause the effective length to change as mentioned before. Therefore, in HFSS, L length was manually changed and an L providing 10 GHz was obtained (Figure 7). After obtaining Figure 7, the bandwidth at -10 dB is found as in equation 10.
\[f_{2}-f_{1}=(10.1307-9.8475)\ (GHz)=0.2832\ GHz \tag{10}\]
The bandwidth was calculated as in equation 11. This complies with the requirement of design specifications that the -10dB bandwidth (BW) of the desired antenna should be at least 1.6%.
\[\text{BW\%}=[(\text{f}_{2}-\text{f}_{1})/10]100=2.83\% \tag{11}\]
Figure 5: Schematic views of the designed antenna. a) XY (above) view b) YZ (side) view c) XZ (side) view.
### Plotting antenna input impedance in the range 250 MHz below and above operating frequency
In Figure 8, the real part, the imaginary part and the magnitude of the antenna input impedance are plotted between 9.75 GHz and 10.25 GHz. As can be seen, at 10 GHz the real impedance is \(64\Omega\) and the imaginary impedance is very close to zero. In general, it can be seen that the antenna input impedance is around \(50\Omega\) between 9.75 and 10.15 GHz.
Figure 6: S11 characteristic according to the frequency in the range of 500 MHz below and above the operating frequency for the antenna according to Section 3, showing the bandwidth.
Figure 7: Plotting the S11 characteristic according to the frequency in the range 500 MHz below and above the operating frequency for the manually found \(L\) and showing the bandwidth.
Figure 9 shows the impedance graph of the antenna's input port. Across the 9.75 GHz to 10.25 GHz range, the real part of the port impedance is \(50\Omega\) and the imaginary part is 0 at all frequency values.
### Plotting the radiation pattern for the E-Plane and the H-Plane at 10 GHz operating frequency
Considering the direction of the electric field and the radiation direction, the E plane is the YZ plane, i.e. \(\phi=\pi/2\) plane, and the H plane is the \(\phi=0\) plane. Considering these, radiation patterns at 10 GHz are drawn for E and H planes, respectively, in Figure 10 and Figure 11.
Figure 8: Antenna input impedance graph in the 9.75 GHz and 10.25 GHz range.
Figure 10: Radiation pattern for the E plane at operating frequency.
Figure 9: Impedance graph of antenna input port in the range of 9.75 GHz and 10.25 GHz.
### 3D plotting of antenna gain at operating frequency
Antenna gain at 10 GHz is plotted in 3D in Figure 12. As can be seen, the antenna gain is higher than 5 dB.
Figure 11: Radiation pattern for the H plane at operating frequency.
Figure 12: 3D antenna gain at operating frequency.
### Plotting antenna gain at 250 MHz above and below operating frequency
Antenna gain in the 9.75 to 10.25 GHz range is plotted in Figure 13 as a function of \(\theta\). As can be seen, the antenna gain is equal to 6.8 dB when \(\theta=0\).
### Parameters of the designed antenna
The parameters of the designed antenna are given in Table 1.
Figure 13: Antenna gain as a function of \(\theta\).
## 6 Conclusion
In this study, the design of a microstrip patch antenna at 10 GHz for X band applications is explained. First, the usage areas, structure and working principle of the microstrip patch antenna are described. The HFSS, AWR and MATLAB programs were used in the antenna design, and the equations used in the design are explained one by one. Using the MATLAB program, these equations were solved and the parameter values were found. Schematic drawings of the antenna are given from the top and from the side. In the simulation section, S11 characteristic graphs, input impedance graphs, E- and H-plane radiation patterns and antenna gain graphs are drawn. The parameters used in the antenna design are presented in a table. The simulation results show that the antenna works as desired and meets the X band design criteria.
|
2305.04111 | Efficient and Degree-Guided Graph Generation via Discrete Diffusion
Modeling | Diffusion-based generative graph models have been proven effective in
generating high-quality small graphs. However, they need to be more scalable
for generating large graphs containing thousands of nodes desiring graph
statistics. In this work, we propose EDGE, a new diffusion-based generative
graph model that addresses generative tasks with large graphs. To improve
computation efficiency, we encourage graph sparsity by using a discrete
diffusion process that randomly removes edges at each time step and finally
obtains an empty graph. EDGE only focuses on a portion of nodes in the graph at
each denoising step. It makes much fewer edge predictions than previous
diffusion-based models. Moreover, EDGE admits explicitly modeling the node
degrees of the graphs, further improving the model performance. The empirical
study shows that EDGE is much more efficient than competing methods and can
generate large graphs with thousands of nodes. It also outperforms baseline
models in generation quality: graphs generated by our approach have more
similar graph statistics to those of the training graphs. | Xiaohui Chen, Jiaxing He, Xu Han, Li-Ping Liu | 2023-05-06T18:32:27Z | http://arxiv.org/abs/2305.04111v4 | # Efficient and Degree-Guided Graph Generation via Discrete Diffusion Modeling
###### Abstract
Diffusion-based graph generative models are effective in generating high-quality small graphs. However, it is hard to scale them to large graphs that contain thousands of nodes. In this work, we propose EDGE, a new diffusion-based graph generative model that addresses generative tasks for large graphs. The model is developed by reversing a discrete diffusion process that randomly removes edges until obtaining an empty graph. It leverages graph sparsity in the diffusion process to improve computational efficiency. In particular, EDGE only focuses on a small portion of graph nodes and only adds edges between these nodes. Without compromising modeling ability, it makes much fewer edge predictions than previous diffusion-based generative models. Furthermore, EDGE can explicitly model the node degrees of training graphs and then gain performance improvement in capturing graph statistics. The empirical study shows that EDGE is much more efficient than competing methods and can generate large graphs with thousands of nodes. It also outperforms baseline models in generation quality: graphs generated by the proposed model have graph statistics more similar to those of training graphs.
## 1 Introduction
There is a long history of using random graph models (Newman et al., 2002) to model large graphs. Traditional models such as the Erdos-Renyi (ER) model (Erdos et al., 1960), the Stochastic-Block Model (SBM) (Holland et al., 1983), and Exponential-family Random Graph Models (Lusher et al., 2013) are often used to model existing graph data and focus on prescribed graph structures. Besides modeling existing data, one interesting problem is to generate new graphs that simulate existing ones (Ying and Wu, 2009), which has applications such as network data sharing. In generative tasks (Chakrabarti and Faloutsos, 2006), traditional models often fall short in describing complex structures. A promising direction is to use deep neural models to generate large graphs.
There are only a few deep generative models designed for generating large graphs: NetGAN (Bojchevski et al., 2018) and CELL (Rendsburg et al., 2020) are two examples. However, recent research (Chanpuriya et al., 2021) shows that these two models are edge-independent models and have a theoretical limitation: they cannot reproduce several important statistics (e.g., triangle counts and clustering coefficients) in their generated graphs unless they memorize the training graph. A number of other models (Chanpuriya et al., 2021), including Variational Graph Autoencoders (VGAE) (Kipf and Welling, 2016) and GraphVAE (Simonovsky and Komodakis, 2018), are also edge-independent and share the same limitation.
Diffusion-based generative models (Liu et al., 2019; Niu et al., 2020; Jo et al., 2022; Chen et al., 2022) have gained success in modeling small graphs. These models generate a graph in multiple steps and are NOT edge-independent because edges generated in later steps depend on previously generated edges. They are more flexible than one-shot models (Kipf and Welling, 2016; Madhawa et al., 2019; Lippe and Gavves, 2020), which directly predict an adjacency matrix in one step. They also have an advantage over auto-regressive graph models (You et al., 2018; Liao et al., 2019), as diffusion-based models are invariant to node permutations and do not have long-term memory issues. However, diffusion-based models are only designed for tasks with small graphs (usually with less than one hundred nodes).
This work aims to scale diffusion-based generative models to large graphs. The major issue of a diffusion-based model is that it must compute a latent vector or a probability for each node pair in a graph at each diffusion step (Niu et al., 2020; Jo et al., 2022) - the computation cost is \(O(TN^{2})\) if the model generates a graph with \(N\) nodes using \(T\) steps. The learning task becomes challenging when \(N\) is large. At the same time, large graphs increase the difficulties for a model to capture global graph statistics such as clustering coefficients. As a result, the model performance degrades when the training graphs' sizes scale up.
We propose _Efficient and Degree-guided graph GEnerative model_ (EDGE) based on a discrete diffusion process. The development of EDGE has three innovations. First, we encourage the sparsity of graphs in the diffusion process by setting the empty graph as the convergent "distribution".
Then the diffusion process only removes edges and can be viewed as an edge-removal process. The increased sparsity of the graphs in the process dramatically reduces the computation - this is because the message-passing neural network (MPNN) (Kipf and Welling, 2016a) used in the generative model needs to run on these graphs, and its runtime is linear in the number of edges. Second, the generative model, which is the reverse of the edge-removal process, only predicts edges for a small portion of "active nodes" that have edge changes in the original edge-removal process. This strategy decreases the number of predictions made by the MPNN and also its computation time. More importantly, this new design is naturally derived from the aforementioned edge-removal process without modifying its forward transition probabilities. Third, we model the node degrees of training graphs explicitly. By characterizing the node degrees, the statistics of the generated graphs become much closer to those of the training graphs. While other diffusion-based graph models struggle to even train or sample on large graphs, our approach can efficiently generate large graphs with desired statistical properties. We summarize our contributions as follows:
* we use empty graphs as the convergent distribution in a discrete diffusion process to reduce computation;
* we propose a new generative process that only predicts edges between a fraction of nodes in graphs;
* we explicitly model node degrees in the probabilistic framework to improve graph statistics of generated graphs; and
* we conduct an extensive empirical study and show that our method can efficiently generate large graphs with desired statistics.
## 2 Background
This work considers graph generative models that sample adjacency matrices to generate graphs. Let \(\mathcal{A}^{N}\) denote the space of adjacency matrices of size \(N\). We consider simple graphs without self-loops or multi-edges, so an adjacency matrix \(\mathbf{A}\in\mathcal{A}^{N}\) is a binary symmetric matrix with a zero diagonal. A generative model defines a distribution over \(\mathcal{A}^{N}\).
In this work, we construct a generative model based on a discrete diffusion process (Austin et al., 2021; Hoogeboom et al., 2021; Vignac et al., 2022). Let \(\mathbf{A}^{0}\) denote a graph from the data, then the diffusion process defined by \(q(\mathbf{A}^{t}|\mathbf{A}^{t-1})\) corrupts \(\mathbf{A}^{0}\) in \(T\) steps and forms a trajectory \((\mathbf{A}^{0},\mathbf{A}^{1},\dots,\mathbf{A}^{T})\). We treat \((\mathbf{A}^{1},\dots,\mathbf{A}^{T})\) as latent variables, then \(q(\mathbf{A}^{1},\dots,\mathbf{A}^{T}|\mathbf{A}^{0})=\prod_{t=1}^{T}q( \mathbf{A}^{t}|\mathbf{A}^{t-1})\). As \(T\to\infty\), \(q(\mathbf{A}^{T})\) approaches a convergent distribution, which is often a simple one with easy samples. We often choose a large enough \(T\) so that \(q(\mathbf{A}^{T})\) is a good approximation of the convergent distribution.
We model these trajectories with a denoising model \(p_{\theta}(\mathbf{A}^{t-1}|\mathbf{A}^{t})\) parameterized by \(\theta\), then the model has a joint \(p_{\theta}(\mathbf{A}^{0:T})=p(\mathbf{A}^{T})\prod_{t=1}^{T}p_{\theta}( \mathbf{A}^{t-1}|\mathbf{A}^{t})\) and a marginal \(p_{\theta}(\mathbf{A}^{0})\) that describes the data distribution. Here \(p(\mathbf{A}^{T})\) is the convergent distribution in \(q\).
Usually, \(q(\mathbf{A}^{t}|\mathbf{A}^{t-1})\) must permit easy probability calculations. One choice is to treat each edge independently, and
\[q(\mathbf{A}^{t}|\mathbf{A}^{t-1}) =\prod_{i,j:i<j}\mathcal{B}(\mathbf{A}^{t}_{i,j};(1-\beta_{t}) \mathbf{A}^{t-1}_{i,j}+\beta_{t}p) \tag{1}\] \[:=\mathcal{B}(\mathbf{A}^{t};(1-\beta_{t})\mathbf{A}^{t-1}+\beta _{t}p).\]
Here \(\mathcal{B}(x;\mu)\) represents the Bernoulli distribution over \(x\) with probability \(\mu\). We also use \(\mathcal{B}(\mathbf{A};\mu)\) to represent the probability of independent Bernoulli variables arranged in a matrix. The diffusion rate \(\beta_{t}\) determines the probability of resampling the entry \(\mathbf{A}^{t}_{i,j}\) from a Bernoulli distribution with probability \(p\), instead of keeping the entry \(\mathbf{A}^{t-1}_{i,j}\).
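As a concrete illustration, the following NumPy sketch (ours, not from the paper's released code) implements one transition of Eq. (1): each node pair keeps its previous entry with probability \(1-\beta_t\) and is resampled from a Bernoulli(\(p\)) otherwise.

```python
import numpy as np

def forward_transition(A_prev: np.ndarray, beta_t: float, p: float, rng=None) -> np.ndarray:
    """One step of q(A^t | A^{t-1}): resample each edge slot with probability beta_t."""
    rng = rng or np.random.default_rng()
    n = A_prev.shape[0]
    iu = np.triu_indices(n, k=1)                 # all node pairs i < j
    keep = rng.random(iu[0].size) >= beta_t      # keep the old entry w.p. 1 - beta_t
    fresh = rng.random(iu[0].size) < p           # fresh Bernoulli(p) draws
    upper = np.where(keep, A_prev[iu].astype(bool), fresh)
    A_t = np.zeros((n, n), dtype=np.int8)
    A_t[iu] = upper
    return A_t + A_t.T                           # symmetric, zero diagonal
```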
This diffusion process has two special properties that are needed for model fitting. First, we can sample \(\mathbf{A}^{t}\) at any time step \(t\) directly from \(\mathbf{A}^{0}\). Let \(\alpha_{t}=1-\beta_{t}\) and \(\bar{\alpha}_{t}=\prod_{\tau=1}^{t}\alpha_{\tau}\),
\[q(\mathbf{A}^{t}|\mathbf{A}^{0})=\mathcal{B}(\mathbf{A}^{t};\bar{\alpha}_{t} \mathbf{A}^{0}+(1-\bar{\alpha}_{t})p). \tag{2}\]
The diffusion rates \(\beta_{t}\)-s are defined in a way such that \(\bar{\alpha}_{T}\) is almost \(0\), then \(\mathbf{A}^{T}\) is almost independent from \(\mathbf{A}^{0}\), i.e., \(q(\mathbf{A}^{T}|\mathbf{A}^{0})\approx p(\mathbf{A}^{T})\equiv\mathcal{B}( \mathbf{A}^{T};p)\). The configuration of \(\beta_{t}\)-s is called _noise scheduling_. In the context of graph generation, \(p(\mathbf{A}^{T})\) is the Erdos-Renyi graph model \(G(N,p)\)(Erdos et al., 1960), with \(p\) being the probability of forming an edge between two nodes.
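Eq. (2) lets the training code corrupt \(\mathbf{A}^0\) to an arbitrary step in a single draw. A sketch, where the linear beta schedule is an illustrative assumption rather than the paper's exact choice:

```python
import numpy as np

def alpha_bars(T: int) -> np.ndarray:
    """bar_alpha_t for t = 1..T under an illustrative linear beta schedule."""
    betas = np.linspace(1e-4, 5e-2, T)
    return np.cumprod(1.0 - betas)

def sample_At_from_A0(A0: np.ndarray, t: int, abar: np.ndarray, p: float, rng=None) -> np.ndarray:
    """Draw A^t ~ B(bar_alpha_t * A^0 + (1 - bar_alpha_t) * p), following Eq. (2)."""
    rng = rng or np.random.default_rng()
    mu = abar[t - 1] * A0 + (1.0 - abar[t - 1]) * p   # per-entry edge probability
    upper = np.triu(rng.random(A0.shape) < mu, k=1)   # sample the upper triangle only
    return (upper | upper.T).astype(np.int8)
```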
Second, we can compute the posterior of the forward transition when conditioning on \(\mathbf{A}^{0}\):
\[q(\mathbf{A}^{t-1}|\mathbf{A}^{t},\mathbf{A}^{0})=\frac{q(\mathbf{A}^{t}| \mathbf{A}^{t-1})q(\mathbf{A}^{t-1}|\mathbf{A}^{0})}{q(\mathbf{A}^{t}| \mathbf{A}^{0})}. \tag{3}\]
Since all the terms on the right-hand side are known, the posterior can be computed analytically.
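Because the entries are independent, the posterior in Eq. (3) factorizes over node pairs. A per-entry sketch (ours) of the Bayes computation, built from Eqs. (1) and (2):

```python
def transition_lik(a_t: int, a_prev: int, beta_t: float, p: float) -> float:
    """q(A^t_ij = a_t | A^{t-1}_ij = a_prev), per Eq. (1)."""
    mu = (1.0 - beta_t) * a_prev + beta_t * p
    return mu if a_t == 1 else 1.0 - mu

def posterior_edge_prob(a_t: int, a_0: int, beta_t: float, abar_tm1: float, p: float) -> float:
    """q(A^{t-1}_ij = 1 | A^t_ij = a_t, A^0_ij = a_0), per Eq. (3)."""
    prior1 = abar_tm1 * a_0 + (1.0 - abar_tm1) * p    # q(A^{t-1}_ij = 1 | A^0_ij), Eq. (2)
    num = transition_lik(a_t, 1, beta_t, p) * prior1
    den = num + transition_lik(a_t, 0, beta_t, p) * (1.0 - prior1)
    return num / den
```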
The generative model \(p_{\theta}(\mathbf{A}^{0:T})\) is trained by maximizing a variational lower bound of \(\log p_{\theta}(\mathbf{A}^{0})\) (Ho et al., 2020; Hoogeboom et al., 2021; Austin et al., 2021). Intuitively, \(p_{\theta}(\mathbf{A}^{t-1}|\mathbf{A}^{t})\) is learned to match the posterior of the forward transition, \(q(\mathbf{A}^{t-1}|\mathbf{A}^{t},\mathbf{A}^{0})\).
During generation, we sample \(\mathbf{A}^{T}\sim p(\mathbf{A}^{T})\) and then "de-noise" it iteratively with \(p_{\theta}(\mathbf{A}^{t-1}|\mathbf{A}^{t})\) to get an \(\mathbf{A}^{0}\) sample.
## 3 Method
### Diffuse graphs to empty graphs - a motivation
With the main purpose of improving computational efficiency, we advocate setting \(p=0\) and using \(G(N,0)\) as the convergent distribution. This configuration increases the sparsity of the adjacency matrices in diffusion trajectories, thus reducing
computation. We consider the amount of computation in the denoising model \(p_{\theta}(\mathbf{A}^{t-1}|\mathbf{A}^{t})\) from two aspects: the computation on the input \(\mathbf{A}^{t}\) and the number of entries to be predicted in the output \(\mathbf{A}^{t-1}\).
We first consider the computation on the input side. We assume that the denoising model \(p_{\theta}(\mathbf{A}^{t-1}|\mathbf{A}^{t})\) is constructed with an MPNN. Suppose the input graph \(\mathbf{A}^{t}\) has \(M^{t}\) edges, then a typical MPNN needs to perform \(O(M^{t})\) message-passing operations to compute node vectors - here we treat hidden sizes and the number of network layers as constants. The total number of message-passing operations over the trajectory is \(O(\sum_{t=1}^{T}M^{t})\). After some calculations, we show that
\[\sum_{t=1}^{T}M^{t}=M^{0}\sum_{t=1}^{T}\bar{\alpha}_{t}+\frac{N(N-1)p}{2}\sum_{t=1}^{T}\left(1-\bar{\alpha}_{t}\right). \tag{4}\]
By setting \(p=0\), we eliminate the second term and reduce the number of edges in the graphs of the diffusion trajectory by a significant factor, so the MPNN performs far fewer message-passing operations.
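Evaluating Eq. (4) numerically makes the saving explicit; in this sketch the schedule, graph size, and edge count are illustrative assumptions:

```python
import numpy as np

def expected_total_edges(M0: int, N: int, p: float, abar: np.ndarray) -> float:
    """E[sum_t M^t] from Eq. (4); the second term vanishes when p = 0."""
    return M0 * abar.sum() + 0.5 * N * (N - 1) * p * (1.0 - abar).sum()

abar = np.cumprod(1.0 - np.linspace(1e-4, 5e-2, 128))          # T = 128, linear schedule
print(expected_total_edges(10_000, 2_000, p=0.0, abar=abar))   # empty-graph limit G(N, 0)
print(expected_total_edges(10_000, 2_000, p=0.5, abar=abar))   # dense limit G(N, 0.5)
```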
We then analyze the number of entries we need to predict in the output \(\mathbf{A}^{t-1}\). When \(p=0\), the forward process is an edge-removal process, and the degree of a node is non-increasing across any forward transition. A node whose degree changes from \(t-1\) to \(t\) is considered "active". When a node is inactive in the transition from \(t-1\) to \(t\), all edges incident to it are kept at \(t\). Figure 1 shows the average number of active nodes for each forward transition. We observe that active nodes make up only a small fraction of the total when the convergent distribution is \(G(N,0)\).
While a previous diffusion-based model makes predictions for all node pairs, the observation above indicates that we can save computation by making predictions only for pairs of active nodes. In particular, the denoising model can first infer which nodes are active in each step and then only predict edges between active nodes. Below we will develop such a model and only consider the diffusion process with \(p=0\).
### A diffusion-based model that explicitly models active nodes
We treat the "active nodes" as latent variables \(\mathbf{s}^{1:T}\) and incorporate them into both the forward and reverse processes. Let \(\mathbf{d}^{t}=\mathrm{deg}(\mathbf{A}^{t})\) be the node degree vector of \(\mathbf{A}^{t}\); then \(\mathbf{s}^{t}:=\mathbb{1}[\mathbf{d}^{t-1}\neq\mathbf{d}^{t}]\) is a binary vector indicating which nodes are active (i.e., have a degree change) in the transition from \(t-1\) to \(t\). In the following, we redefine the forward and reverse processes.
Forward process.With latent variables \(\mathbf{s}^{1:T}\), we show that the forward process can be rewritten into the following decomposition:
\[q(\mathbf{A}^{1:T},\mathbf{s}^{1:T}|\mathbf{A}^{0})\!=\!\prod_{t=1}^{T}q( \mathbf{A}^{t}|\mathbf{A}^{t-1})q(\mathbf{s}^{t}|\mathbf{A}^{t-1},\mathbf{A} ^{t}). \tag{5}\]
The forward process does not change by including \(\mathbf{s}^{1:T}\) since the value of \(\mathbf{s}^{t}\) is determined by \(\mathbf{A}^{t-1}\) and \(\mathbf{A}^{t}\). This allows us to use still the forward transition \(q(\mathbf{A}^{t}|\mathbf{A}^{t-1})\) to draw the entire sequence.
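Since \(\mathbf{s}^{t}\) is deterministic given consecutive adjacency matrices, extracting it during training is a two-line computation; a sketch:

```python
import numpy as np

def active_nodes(A_prev: np.ndarray, A_t: np.ndarray) -> np.ndarray:
    """s^t_i = 1[deg_i(A^{t-1}) != deg_i(A^t)]: the active-node indicator."""
    return (A_prev.sum(axis=1) != A_t.sum(axis=1)).astype(np.int8)
```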
Reverse process.We decompose the denoising model as follows:
\[p_{\theta}(\mathbf{A}^{0:T},\mathbf{s}^{1:T})=p(\mathbf{A}^{T})\prod_{t=1}^{T}p_{\theta}(\mathbf{A}^{t-1}|\mathbf{A}^{t},\mathbf{s}^{t})\,p_{\theta}(\mathbf{s}^{t}|\mathbf{A}^{t}). \tag{6}\]
Here both \(p_{\theta}(\mathbf{A}^{t-1}|\mathbf{A}^{t},\mathbf{s}^{t})\) and \(p_{\theta}(\mathbf{s}^{t}|\mathbf{A}^{t})\) are learnable distributions. Intuitively, the denoising model first predicts which nodes are active (\(\mathbf{s}^{t}\)) and then generates edges between them to obtain \(\mathbf{A}^{t-1}\). Since we only predict edges between the active nodes indicated by \(\mathbf{s}^{t}\), all edges incident to inactive nodes are carried over from \(\mathbf{A}^{t}\) to \(\mathbf{A}^{t-1}\) directly.
Our EDGE model is specified by (6). The generative framework is demonstrated in Figure 2.
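To make the sampling order in Eq. (6) concrete, here is a schematic reverse step; `node_model` and `edge_model` are hypothetical black-box callables standing in for the learned distributions, and the symmetrization details are our assumptions:

```python
import numpy as np

def reverse_step(A_t: np.ndarray, node_model, edge_model, rng=None) -> np.ndarray:
    """Sample s^t ~ p_theta(s^t | A^t), then A^{t-1} ~ p_theta(A^{t-1} | A^t, s^t)."""
    rng = rng or np.random.default_rng()
    p_s = node_model(A_t)                        # shape (N,): P(s^t_i = 1)
    active = np.flatnonzero(rng.random(p_s.size) < p_s)
    A_prev = A_t.copy()                          # edges at inactive nodes carry over
    if active.size >= 2:
        p_e = edge_model(A_t, active)            # (K, K) probabilities for active pairs
        new = np.triu(rng.random(p_e.shape) < p_e, k=1)
        new = (new | new.T).astype(A_prev.dtype)
        sub = np.ix_(active, active)
        A_prev[sub] = np.maximum(A_prev[sub], new)  # reverse of edge removal: add edges
    return A_prev
```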
Figure 1: Dynamics of a discrete diffusion process with \(p=0\) and “active” nodes in the process on the Cora dataset: (a) the diffusion process with \(p=0\) is an edge-removal process. The reverse of it is a generative procedure that constructs a graph by gradually adding edges to an empty graph. (b) under linear noise scheduling, the number of “active” nodes (that have their edges removed at a time step) is less than one-tenth of the total number of nodes.
### Learning the reverse process
We optimize the model parameters \(\theta\) by maximizing the variational lower bound (VLB) of \(\log p(\mathbf{A}^{0})\). Following Sohl-Dickstein et al. (2015); Ho et al. (2020), the VLB is:
\[\begin{split}\mathcal{L}(\mathbf{A}^{0};\theta)&=\mathbb{E}_{q}\left[\log\frac{p_{\theta}(\mathbf{A}^{0:T},\mathbf{s}^{1:T})}{q(\mathbf{A}^{1:T},\mathbf{s}^{1:T}|\mathbf{A}^{0})}\right]\\ &=\mathbb{E}_{q}\Bigg{[}\log\frac{p(\mathbf{A}^{T})}{q(\mathbf{A}^{T}|\mathbf{A}^{0})}+\underbrace{\log p_{\theta}(\mathbf{A}^{0}|\mathbf{A}^{1},\mathbf{s}^{1})}_{\text{reconstruction term }\mathcal{L}_{\text{rec}}}+\sum_{t=2}^{T}\underbrace{\log\frac{p_{\theta}(\mathbf{A}^{t-1}|\mathbf{A}^{t},\mathbf{s}^{t})}{q(\mathbf{A}^{t-1}|\mathbf{A}^{t},\mathbf{s}^{t},\mathbf{A}^{0})}}_{\text{edge prediction term }\mathcal{L}_{\text{edge}}(t)}+\sum_{t=1}^{T}\underbrace{\log\frac{p_{\theta}(\mathbf{s}^{t}|\mathbf{A}^{t})}{q(\mathbf{s}^{t}|\mathbf{A}^{t},\mathbf{A}^{0})}}_{\text{node selection term }\mathcal{L}_{\text{node}}(t)}\Bigg{]}.\end{split} \tag{7}\]
Appendix B.1 shows the detailed derivation. The first term contains no learnable parameters. The second term measures the reconstruction likelihood. For the edge prediction term \(\mathcal{L}_{\text{edge}}(t)\), unlike Sohl-Dickstein et al. (2015); Ho et al. (2020), the posterior \(q(\mathbf{A}^{t-1}|\mathbf{A}^{t},\mathbf{s}^{t},\mathbf{A}^{0})\) is hard to compute, and there is no closed form for this term. Since the entropy \(\mathbb{H}[q(\mathbf{A}^{t-1}|\mathbf{A}^{t},\mathbf{s}^{t},\mathbf{A}^{0})]\) is a constant, we only optimize the cross-entropy term in \(\mathcal{L}_{\text{edge}}(t)\) via Monte Carlo estimates. We leave the work of variance reduction to the future.
For the node selection term \(\mathcal{L}_{\text{node}}(t)\), we show that \(q(\mathbf{s}^{t}|\mathbf{A}^{t},\mathbf{A}^{0})\) has a closed-form expression. In particular, we first derive the posterior of the node degree distribution \(q(\mathbf{d}^{t-1}|\mathbf{A}^{t},\mathbf{A}^{0})\) as follows:
\[q(\mathbf{d}^{t-1}|\mathbf{A}^{t},\mathbf{A}^{0})=q(\mathbf{d}^{ t-1}|\mathbf{d}^{t},\mathbf{d}^{0})=\prod_{i=1}^{N}q(\mathbf{d}_{i}^{t-1}| \mathbf{d}_{i}^{t},\mathbf{d}_{i}^{0}),\] \[\text{where }q(\mathbf{d}_{i}^{t-1}|\mathbf{d}_{i}^{t},\mathbf{d}_{i}^{0} )=\operatorname{Bin}(k=\Delta_{i}^{t},n=\Delta_{i}^{0},p=\gamma_{t}),\] \[\text{with }\Delta_{i}^{t}=\mathbf{d}_{i}^{t-1}-\mathbf{d}_{i}^{t}, \ \Delta_{i}^{0}=\mathbf{d}_{i}^{0}-\mathbf{d}_{i}^{t},\ \gamma_{t}=\frac{\beta_{t}\bar{\alpha}_{t-1}}{1-\bar{ \alpha}_{t}}. \tag{8}\]
Here \(\operatorname{Bin}(k;n,p)\) is a binomial distribution parameterized by \(n\) and \(p\). Intuitively, a node degree \(\mathbf{d}_{i}^{t-1}\) depends only on the node's degrees \(\mathbf{d}_{i}^{0}\) and \(\mathbf{d}_{i}^{t}\) at steps \(0\) and \(t\). The actual edges do not affect the degree probability, since each edge is added or removed independently. We provide a formal proof and discuss the forward node degree distribution in Appendix A.2.
Since \(\mathbf{s}_{i}^{t}=\mathbb{1}[\mathbf{d}_{i}^{t-1}\neq\mathbf{d}_{i}^{t}]\), we can compute the probability \(q(\mathbf{s}_{i}^{t}=1|\mathbf{d}_{i}^{t},\mathbf{d}_{i}^{0})\), which is \(1-q(\mathbf{d}_{i}^{t-1}=\mathbf{d}_{i}^{t}|\mathbf{d}_{i}^{t},\mathbf{d}_{i}^{0})\). Finally, we obtain the closed-form posterior:
\[q(\mathbf{s}^{t}|\mathbf{d}^{t},\mathbf{d}^{0})=\prod_{i=1}^{N} q(\mathbf{s}_{i}^{t}|\mathbf{d}_{i}^{t},\mathbf{d}_{i}^{0}),\text{ where } \tag{9}\] \[q(\mathbf{s}_{i}^{t}|\mathbf{d}_{i}^{t},\mathbf{d}_{i}^{0})= \mathcal{B}\big{(}\mathbf{s}_{i}^{t};1-(1-\gamma_{t})^{\Delta_{i}^{0}} \big{)}.\]
The KL divergence \(\mathcal{L}_{\text{node}}(t)\) thus reduces to comparisons between Bernoulli distributions.
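Both posteriors are cheap to evaluate. A sketch of Eqs. (8) and (9) (ours), using SciPy's binomial PMF; the indexing convention for the schedule arrays is an assumption:

```python
import numpy as np
from scipy.stats import binom

def gamma(t: int, betas: np.ndarray, abar: np.ndarray) -> float:
    """gamma_t = beta_t * bar_alpha_{t-1} / (1 - bar_alpha_t), as in Eq. (8)."""
    abar_prev = abar[t - 2] if t > 1 else 1.0    # bar_alpha_0 = 1
    return betas[t - 1] * abar_prev / (1.0 - abar[t - 1])

def degree_posterior(d_t: int, d_0: int, g: float):
    """Support and PMF of q(d^{t-1}_i | d^t_i, d^0_i): Binomial(d^0_i - d^t_i, gamma_t)."""
    k = np.arange(d_0 - d_t + 1)                 # Delta^t_i = d^{t-1}_i - d^t_i
    return d_t + k, binom.pmf(k, d_0 - d_t, g)

def active_prob(d_t: np.ndarray, d_0: np.ndarray, g: float) -> np.ndarray:
    """q(s^t_i = 1 | d^t_i, d^0_i) = 1 - (1 - gamma_t)^(d^0_i - d^t_i), Eq. (9)."""
    return 1.0 - (1.0 - g) ** (d_0 - d_t)
```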
### Degree-guided graph generation
A graph's node degrees are often strongly correlated with its other statistics, so it is important for a generative model to capture the node degrees of the training graphs. Here we directly incorporate degree information into the proposed generative model.
We explicitly model node degrees \(\mathbf{d}^{0}\) of a graph \(\mathbf{A}^{0}\) as a random variable, then the forward process becomes
\[q(\mathbf{A}^{1:T},\mathbf{d}^{0}|\mathbf{A}^{0})=q(\mathbf{A}^{1:T}|\mathbf{A}^{0})\,q(\mathbf{d}^{0}|\mathbf{A}^{0}). \tag{10}\]
Here \(q(\mathbf{d}^{0}|\mathbf{A}^{0})=1\) because \(\mathbf{d}^{0}\) is determined by \(\mathbf{A}^{0}\). We also include \(\mathbf{d}^{0}\) into the generative model \(p(\mathbf{A}^{0},\mathbf{d}^{0})\). If the model guarantees that \(\mathbf{d}^{0}\) is the node degrees of \(\mathbf{A}^{0}\), then \(p_{\theta}(\mathbf{A}^{0})=p_{\theta}(\mathbf{A}^{0},\mathbf{d}^{0})\) still models graph data \(\mathbf{A}^{0}\). Even if \(p_{\theta}(\mathbf{A}^{0},\mathbf{d}^{0})\) allows \(\mathbf{d}^{0}\) to differ a little from the true node degrees, it is still valid to maximize the likelihood \(p_{\theta}(\mathbf{A}^{0},\mathbf{d}^{0}=\mathbf{A}^{0}\mathbf{1})\) because model training will encourage the model to generate \(\mathbf{A}^{0}\) and \(\mathbf{d}^{0}\) to match each other. We decompose the model by:
\[p_{\theta}(\mathbf{A}^{0},\mathbf{d}^{0})=p_{\theta}(\mathbf{d}^{0})p_{\theta}( \mathbf{A}^{0}|\mathbf{d}^{0}). \tag{11}\]
Figure 2: Forward and reverse processes. For the forward process, \(\mathbf{A}^{t}\) is sampled from \(q(\mathbf{A}^{t}|\mathbf{A}^{t-1})\), then \(\mathbf{s}^{t}\) is deterministically generated given \(\mathbf{A}^{t-1}\) and \(\mathbf{A}^{t}\). For the reverse process, \(\mathbf{s}^{t}\) is first sampled from a node selection distribution \(p_{\theta}(\mathbf{s}^{t}|\mathbf{A}^{t})\), then \(\mathbf{A}^{t-1}\) is sampled from the parameterized distribution \(p_{\theta}(\mathbf{A}^{t-1}|\mathbf{A}^{t},\mathbf{s}^{t})\).
With this decomposition, we first sample arbitrary node degrees \(\mathbf{d}^{0}\) from \(p_{\theta}(\mathbf{d}^{0})\) and then generate a graph under the degree constraint (see Alg. 1). Correspondingly, the denoising model becomes
\[p_{\theta}(\mathbf{A}^{0:T},\mathbf{s}^{1:T},\mathbf{d}^{0})=p_{ \theta}(\mathbf{d}^{0})p_{\theta}(\mathbf{A}^{0:T},\mathbf{s}^{1:T}|\mathbf{d} ^{0}). \tag{12}\]
We separate the optimizations for the node degree model \(p_{\theta}(\mathbf{d}^{0})\) and the graph denoising model \(p_{\theta}(\mathbf{A}^{0:T},\mathbf{s}^{1:T}|\mathbf{d}^{0})\). The entire training objective is
\[\mathcal{L}(\mathbf{A}^{0},\mathbf{d}^{0};\theta)=\mathbb{E}_{q}\left[\underbrace{\log p_{\theta}(\mathbf{d}^{0})}_{\mathcal{L}(\mathbf{d}^{0};\theta)}+\underbrace{\log\frac{p_{\theta}(\mathbf{A}^{0:T},\mathbf{s}^{1:T}|\mathbf{d}^{0})}{q(\mathbf{A}^{1:T},\mathbf{s}^{1:T}|\mathbf{A}^{0})}}_{\mathcal{L}(\mathbf{A}^{0}|\mathbf{d}^{0};\theta)}\right].\]
(See Appendix B.2 for the detailed derivation.) For \(\mathcal{L}(\mathbf{d}^{0};\theta)\), we treat the learning of the node degree distribution as a sequence modeling task. The decomposition of \(\mathcal{L}(\mathbf{A}^{0}|\mathbf{d}^{0};\theta)\) remains the same as Eqn. (7), except that all terms related to the graph denoising model are now conditioned on \(\mathbf{d}^{0}\). In particular, for the node selection distribution, we consider a special parameterization by setting \(p_{\theta}(\mathbf{s}^{t}|\mathbf{A}^{t},\mathbf{d}^{0}):=q(\mathbf{s}^{t}|\mathbf{d}^{t},\mathbf{d}^{0})\) in Eqn. (9). Note that the node selection distribution now contains no learnable parameters. Moreover, since the KL divergence \(\mathcal{L}_{\text{node}}(t)\) is now zero, we can further simplify \(\mathcal{L}(\mathbf{A}^{0}|\mathbf{d}^{0};\theta)\) to
\[\mathcal{L}(\mathbf{A}^{0}|\mathbf{d}^{0};\!\theta)\!=\!\mathbb{E}_{q} \!\left[\log\!\frac{p(\mathbf{A}^{T})}{q(\mathbf{A}^{T}|\mathbf{A}^{0})} \!+\!\log p_{\theta}(\mathbf{A}^{0}|\mathbf{A}^{1},\mathbf{s}^{1},\mathbf{d} ^{0})\right.\] \[\left.+\sum_{t=2}^{T}\log\frac{p_{\theta}(\mathbf{A}^{t-1}| \mathbf{A}^{t},\mathbf{s}^{t},\mathbf{d}^{0})}{q(\mathbf{A}^{t-1}|\mathbf{A} ^{t},\mathbf{s}^{t},\mathbf{A}^{0})}\right]\!. \tag{13}\]
In our framework, the node degree constraint \(\mathbf{d}^{0}\) is mainly imposed through \(p_{\theta}(\mathbf{s}^{t}|\mathbf{A}^{t},\mathbf{d}^{0})\): only nodes with a degree below the specified degree \(\mathbf{d}^{0}\) may be selected to participate in the edge prediction. On the other hand, although the exact probability \(p_{\theta}(\mathbf{A}^{t-1}|\mathbf{A}^{t},\mathbf{s}^{t},\mathbf{d}^{0})\) includes information about the maximum number of edges (\(\mathbf{d}^{0}-\mathbf{d}^{t}\)) that can still be added to each node, this is difficult to track during edge formation. Here we simply augment the inputs to the neural network with \(\mathbf{d}^{0}\). In practice, we find that the specified node degrees \(\mathbf{d}^{0}\) accurately control the actual node degrees of the generated graphs.
Degree-guided generation turns out to be very useful in modeling large graphs. We reason that the \(\mathbf{d}^{0}\) significantly reduces the possible trajectories a graph can evolve along, thus reducing the modeling complexity.
```
Input: Empty graph A^T, graph model p_theta(A^{t-1} | A^t, s^t),
       degree sequence model p_theta(d^0), and diffusion steps T
Output: Generated graph A^0
Draw d^0 ~ p_theta(d^0)
for t = T, ..., 1 do
    Draw s^t ~ q(s^t | deg(A^t), d^0)
    Draw A^{t-1} ~ p_theta(A^{t-1} | A^t, s^t)
end for
```
**Algorithm 1** Degree-guided graph generation
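A Python rendering of Algorithm 1 (a sketch; `degree_model` and `graph_model` are placeholders for the learned components, and the \(\gamma_t\) values are assumed precomputed from the noise schedule):

```python
import numpy as np

def generate(T: int, N: int, gammas: np.ndarray, degree_model, graph_model, rng=None):
    """Degree-guided sampling: draw d^0, then denoise the empty graph A^T for T steps."""
    rng = rng or np.random.default_rng()
    d0 = degree_model()                           # target degree sequence, shape (N,)
    A = np.zeros((N, N), dtype=np.int8)           # A^T is the empty graph
    for t in range(T, 0, -1):
        d_t = A.sum(axis=1)
        p_active = 1.0 - (1.0 - gammas[t - 1]) ** (d0 - d_t)   # Eq. (9); parameter-free
        s_t = rng.random(N) < p_active
        A = graph_model(A, s_t, d0, t)            # A^{t-1} ~ p_theta(. | A^t, s^t, d^0)
    return A
```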
### Implementation
We briefly describe the implementation of \(p_{\theta}(\mathbf{s}^{t}|\mathbf{A}^{t})\), \(p_{\theta}(\mathbf{A}^{t-1}|\mathbf{A}^{t},\mathbf{s}^{t})\), and \(p_{\theta}(\mathbf{d}^{0})\). Note that we use the same network architecture for \(p_{\theta}(\mathbf{A}^{t-1}|\mathbf{A}^{t},\mathbf{s}^{t})\) and \(p_{\theta}(\mathbf{A}^{t-1}|\mathbf{A}^{t},\mathbf{s}^{t},\mathbf{d}^{0})\), except that the inputs to the latter include \(\mathbf{d}^{0}\). We treat \(p_{\theta}(\mathbf{s}^{t}|\mathbf{A}^{t})\) as a node classification problem and \(p_{\theta}(\mathbf{A}^{t-1}|\mathbf{A}^{t},\mathbf{s}^{t})\) as a link prediction problem. Both components share the same MPNN, which takes \(\mathbf{A}^{t}\) as the input and computes node representations \(\mathbf{Z}^{t}\in\mathbb{R}^{N\times d_{\text{h}}}\) for all nodes. The hidden dimension \(d_{\text{h}}\) is a hyper-parameter. One network head uses \(\mathbf{Z}^{t}\) to predict \(\mathbf{s}^{t}\), and another uses \(\mathbf{Z}^{t}[\mathbf{s}^{t}]\) to predict links between the active nodes indicated by \(\mathbf{s}^{t}\). For the node degree model \(p_{\theta}(\mathbf{d}^{0})\), if there are multiple graphs in the dataset, we use a recurrent neural network (RNN) to fit the histogram of node degrees. If there is only one graph with node degrees \(\mathbf{d}^{0}\), then we set \(p_{\theta}(\mathbf{d}^{0})=1\) directly. Implementation details are in Appendix C.
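A minimal PyTorch sketch of this shared-backbone design; the specific layers below (a degree embedding, adjacency-based message passing, a linear node head, and a bilinear edge head) are our assumptions for illustration, not the released architecture:

```python
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Shared MPNN with a node head (predicts s^t) and an edge head (active pairs)."""
    def __init__(self, d_h: int = 128, n_layers: int = 3):
        super().__init__()
        self.embed = nn.Linear(1, d_h)             # embed node degree as input feature
        self.layers = nn.ModuleList(nn.Linear(d_h, d_h) for _ in range(n_layers))
        self.node_head = nn.Linear(d_h, 1)         # logits for s^t
        self.edge_head = nn.Bilinear(d_h, d_h, 1)  # pairwise logits

    def forward(self, A: torch.Tensor, active: torch.Tensor):
        Z = self.embed(A.sum(dim=1, keepdim=True).float())
        for layer in self.layers:
            Z = torch.relu(layer(A.float() @ Z))   # one round of message passing
        node_logits = self.node_head(Z).squeeze(-1)            # shape (N,)
        Za = Z[active]                                         # shape (K, d_h)
        K = Za.shape[0]
        left = Za.unsqueeze(1).expand(K, K, -1).reshape(K * K, -1)
        right = Za.unsqueeze(0).expand(K, K, -1).reshape(K * K, -1)
        edge_logits = self.edge_head(left, right).reshape(K, K)
        return node_logits, edge_logits
```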
### Model analysis
Complexity analysis. Let the integer \(M\) represent the number of edges in a graph, and let \(K\) be the maximum number of active nodes during the reverse process. In each generation step \(t\), the MPNN needs \(O(M)\) operations to compute node representations, \(O(N)\) operations to predict \(\mathbf{s}^{t}\), and \(O(K^{2})\) operations to predict links between the \(K\) active nodes. The factor \(K\) depends on the noise scheduling: we find that \(K\) is smaller than \(N\) by at least one order of magnitude when the noise scheduling is linear. Over a total of \(T\) generation steps, the overall running time is \(O\big{(}T\max(K^{2},M)\big{)}\). As a comparison, previous diffusion-based models need running time \(O(TN^{2})\) because they must make \(O(N^{2})\) link predictions at each time step.
Expressivity analysis.EDGE modifies a graph for multiple iterations to generate a sample. In each iteration, it adds new edges to the graph based on the graph structure in the prior iteration. Therefore, EDGE is NOT an edge-independent model and does not have the limitation analyzed by Chanpuriya et al. (2021), thus it has a theoretical advantage over previous one-shot generative models.
The ability of EDGE might be affected by the underlying MPNN, which may not be able to distinguish different graph structures due to expressivity issues (Xu et al., 2018). However, this issue can be overcome by choosing more expressive GNNs (Sato, 2020). We defer such discussion to future work.
## 4 Related Work
Edge-independent models, which assume edges are formed independently with some probabilities, are prevalent in probabilistic models for large networks. These models include classical models such as ER graph models (Erdos et al., 1960) and SBMs (Holland et al., 1983), and recent neural models such as variational graph auto-encoders (Kipf and Welling, 2016; Mehta et al., 2019; Li et al., 2020; Chen et al., 2022), NetGAN, and its variant (Bojchevski et al., 2018; Rendsburg et al., 2020). Recent work shows that these models cannot reproduce desired statistics of the target network, such as triangle counts, clustering coefficients, and square counts (Seshadhri et al., 2020; Chanpuriya et al., 2021).
Deep auto-regressive (AR) graph models (Li et al., 2018; You et al., 2018; Liao et al., 2019; Zang and Wang, 2020; Han et al., 2023) generate a graph by sequentially filling in its adjacency matrix. These algorithms are slow because they need to make \(N^{2}\) predictions. Dai et al. (2020) propose a method that leverages graph sparsity and predicts only the non-zero entries in the adjacency matrix. Long-term memory is a typical issue of these sequential models, so it is hard for them to model global graph properties. Moreover, these models are not invariant with respect to the node orders of training graphs, and special techniques (Chen et al., 2021; Han et al., 2023) are often needed to train them.
Diffusion-based generative models have been shown to be powerful in generating high-quality graphs (Niu et al., 2020; Liu et al., 2019; Jo et al., 2022; Haefeli et al., 2022; Chen et al., 2022b; Vignac et al., 2022; Kong et al.). By "tailoring" a graph over multiple steps, these models can capture edge correlations. They overcome the limitations of auto-regressive models as well. However, all previous diffusion-based models focus on generation tasks with small graphs. This work aims to scale diffusion-based models to large graphs.
## 5 Experiments
We empirically evaluate our proposed approach from two perspectives: whether it can capture statistics of training graphs and whether it can generate graphs efficiently.
### Experimental setup
Datasets.We conduct experiments on both generic graph datasets and large networks. The generic graph datasets consist of multiple graphs of varying sizes. Here we consider Community and Ego datasets (You et al., 2018), all of which contain graphs with hundreds of nodes. We also consider four real-world networks, Polblogs (Adamic and Glance, 2005), Cora (Sen et al., 2008), Road-Minnesota (Rossi and Ahmed, 2015), and PPI (Stark et al., 2010). Each of these networks contains thousands of nodes. We also use the QM9 dataset (Ramakrishnan et al., 2014) to demonstrate that EDGE can be easily extended to generate graphs with attributes. The statistics of the datasets are shown in Table 1.
Baselines. For generic graphs, we compare EDGE to six recent deep generative graph models: two auto-regressive graph models, GraphRNN (You et al., 2018) and GRAN (Liao et al., 2019); three diffusion-based models, GDSS (Jo et al., 2022), DiscDDPM (Haefeli et al., 2022), and DiGress (Vignac et al., 2022); and one flow-based model, GraphCNF (Lippe and Gavves, 2020). For large networks, we follow Chanpuriya et al. (2021) and use six edge-independent models: VGAE (Kipf and Welling, 2016), CELL (Rendsburg et al., 2020), TSVD (Seshadhri et al., 2020), and three methods proposed by Chanpuriya et al. (2021) (CCOP, HDOP, Linear). We also include GraphRNN as a baseline because it is still affordable to train on large networks. For the QM9 dataset, we compare EDGE against GDSS (Jo et al., 2022) and DiGress (Vignac et al., 2022). The implementation of our model is available at github.com/tufts-ml/graph-generation-EDGE.
Evaluation. We examine the generated generic graphs with both structure-based and neural-based metrics. For structure-based metrics, we evaluate the Maximum Mean Discrepancy (MMD) (Gretton et al., 2012) between test graphs and generated graphs in terms of degrees, clustering coefficients, and orbit counts (You et al., 2018). For neural-based metrics, we evaluate the FID and the MMD RBF metrics proposed by Thompson et al. (2022). All implementations of the evaluation are provided by Thompson et al. (2022). For all these metrics, smaller is better.
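For orientation, a degree-based MMD can be sketched as follows; we assume an RBF kernel over normalized degree histograms, whereas the benchmark code cited above may use other kernels (e.g., EMD-based):

```python
import numpy as np

def degree_hist(A: np.ndarray, max_deg: int) -> np.ndarray:
    """Normalized degree histogram of one graph."""
    h = np.bincount(A.sum(axis=1).astype(int), minlength=max_deg + 1)[: max_deg + 1]
    return h / h.sum()

def mmd2_rbf(X: np.ndarray, Y: np.ndarray, sigma: float = 1.0) -> float:
    """Squared MMD between two sets of histogram vectors under a Gaussian kernel."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()
```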
For each large network, we follow Chanpuriya et al. (2021) and evaluate how well the graph statistics of the generated network match the ground truths, i.e., the statistics computed from the training data. We consider the following statistics: the power-law exponent of the degree sequence (PLE); normalized triangle counts (NTC); the global clustering coefficient (CC) (Chanpuriya et al., 2021); the characteristic path length (CPL); and the assortativity coefficient (AC) (Newman, 2002). We also report the edge overlap ratio (EO) between the generated network and the original one to check to what degree a model memorizes the graph. A graph generated by a good model should have statistics similar to true values
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \#nodes & \#edges & \#graphs & feature \\ \hline Community & [60, 160] & [231, 1,965] & 510 & \\ Ego & [50, 399] & [57, 1,071] & 757 & \\ QM9 & [1,9] & [0, 28] & 133,885 & ✓ \\ Polblogs & 1,222 & 16,714 & 1 & \\ Cora & 2,485 & 5,069 & 1 & \\ Road-MN & 2,640 & 6,604 & 1 & \\ PPI & 3,852 & 37,841 & 1 & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Dataset statistics
computed from the training graph. At the same time, it should have a small EO with the training network, which means that the model should not simply memorize the input data.
For the QM9 dataset, we evaluate the Validity, Uniqueness, Fréchet ChemNet Distance (Preuer et al., 2018), and Scaffold similarity (Bemis and Murcko, 1996) of the samples generated by the baselines and by our proposed method. We use the molsets library (Polykovskiy et al., 2020) to implement the evaluation.
### Evaluation of sample quality
Generic graph generation. Table 2 summarizes the evaluation of generated graphs on the Community and Ego datasets. The best performances are in bold, and the second-best are underlined. EDGE outperforms all baselines on 8 out of 10 metrics. For the other two metrics, EDGE performs only slightly worse than the best. We hypothesize that EDGE gains an advantage by modeling node degrees, because they are informative of the graph structure.
Large network generation. Unlike those of edge-independent models, the edge overlap ratios of GraphRNN and our approach are not tunable. To make a fair comparison, we report the performance of the edge-independent models that have a similar or higher EO than GraphRNN and EDGE. Table 3 shows the statistics of the network itself (labeled "True") and the statistics computed from the generated graphs. The statistics nearest to the true values are considered the best performances and are shown in bold; second-best performances are underlined.
The proposed approach shows superior performance on all four networks. The improvements on the Polblogs and PPI networks are clear. On the Road-Minnesota dataset, EDGE has a much smaller EO than the edge-independent models, while its performance in capturing graph statistics is similar to theirs. On the Cora dataset, EDGE also has an EO much smaller than that of the edge-independent models, while slightly improving over them. Road-Minnesota and Cora are both sparse networks, where the message-passing neural model may not work at its full strength. We notice that GraphRNN cannot even compete with the edge-independent models. We also visualize the generated graphs for Polblogs in Figure 4.
### Efficiency
We compare the sampling efficiency of EDGE against other deep generative graph models. To make a consistent comparison across all datasets, we record the average time for a model to sample one graph; the average sampling time for each dataset is taken over 128 runs. Figure 3 shows the relationship between sampling time and graph size. Except for GraphRNN, all baseline neural models can only generate graphs for the Community and Ego datasets, which contain 110 and 144 nodes on average. On the Community dataset, our approach is only 0.5 s slower than GraphCNF. On large graphs, our model has a clear advantage in running time. Note that our model spends less time on an Ego graph than on a Community graph, even though an Ego graph on average contains more nodes than a Community graph. This is because the computation of our model scales with the number of edges, and Ego graphs are often sparser than Community graphs.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{Community} & \multicolumn{4}{c}{Ego} \\ & \multicolumn{2}{c}{Structure-based (MMD)} & \multicolumn{2}{c}{Neural-based} & \multicolumn{2}{c}{Structure-based (MMD)} & \multicolumn{2}{c}{Neural-based} \\ & Deg. & Clus. & Orb. & FID & RBF MMD & Deg. & Clus. & Orb. & FID & RBF MMD \\ \hline GRNN & 0.1440 & 0.0535 & **0.0198** & 8.3869 & 0.1591 & 0.0768 & 1.1456 & 0.1087 & 90.5655 & 0.6827 \\ GRAN & 0.1022 & 0.0894 & **0.0198** & 64.1145 & 0.0749 & 0.5778 & 0.3360 & **0.0406** & 489.9598 & 0.2633 \\ \hline GraphCNF & 0.1129 & 1.2882 & **0.0197** & 29.1526 & 0.1341 & 0.1010 & 0.7654 & 0.0820 & 18.7929 & 0.0896 \\ GDSS & 0.0535 & 0.2072 & **0.0196** & 6.5531 & 0.0443 & 0.8189 & 0.6032 & 0.3315 & 60.6100 & 0.4331 \\ DiscDDPM & 0.1238 & 0.6549 & 0.0246 & 8.6321 & 0.0840 & 0.4613 & 0.1681 & 0.0633 & 42.7994 & 0.1561 \\ DiGress & 0.0409 & **0.0167** & 0.0298 & 3.4261 & 0.0460 & 0.0708 & **0.0092** & 0.1205 & 18.6794 & **0.0489** \\ \hline EDGE & **0.0175** & 0.0689 & **0.0198** & **2.2378** & **0.0227** & **0.0579** & 0.1773 & **0.0519** & **15.7614** & **0.0658** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Generation performance on generic graphs. We used unpaired t-tests to compare the results; the numbers in bold indicate the method is better at the 5% significance level, and the second-best method is underlined. We provide standard deviation in Appendix F.
Figure 3: Sampling speed comparison over different models.
### Generative performance on QM9 dataset
We further investigate EDGE's ability to generate graphs with node and edge attributes. To include node attributes, we extend the basic EDGE model with a hierarchical generation process that also samples node attributes. We give the details of this extension in Appendix E. We evaluate the extended EDGE model on the QM9 dataset and compare it with other neural baselines. The results in Table 4 show that the extended EDGE model has a performance comparable with that of DiGress. Note that DiGress is specially designed for molecule generation, and our model runs much faster than DiGress.
### Ablation studies
Diffusion variants. The random variables \(\mathbf{s}^{1:T}\) and \(\mathbf{d}^{0}\) play important roles in EDGE's good performance, and we verify this through an ablation study on the Polblogs dataset. We use four diffusion configurations: 1) setting \(G(N,0.5)\) as the convergent distribution and directly using an MPNN as the denoising model \(p_{\theta}(\mathbf{A}^{t-1}|\mathbf{A}^{t})\); 2) setting \(G(N,0)\) as the convergent distribution and directly using an MPNN as the denoising model (without modeling active nodes or degree guidance); 3) the EDGE model without degree guidance; and 4) the full EDGE model. Table 5 shows the performance of the four models. If we set the convergent distribution to \(G(N,0.5)\), we cannot even train such a model, since it requires an excessively large amount of GPU memory. This justifies our use of \(G(N,0)\) as the convergent distribution. The introduction of \(\mathbf{s}^{1:T}\) (Section 3.2) significantly improves the sampling speed. Finally, the EDGE approach, which explicitly models node degrees \(\mathbf{d}^{0}\) and generates graphs with degree guidance, further improves the generative performance.
Diffusion steps vs. model performance. In EDGE, the number of diffusion steps \(T\) decides how many nodes actively participate in the edge prediction. Here we investigate how it affects the model performance under linear noise scheduling.
Specifically, we train our model on the four large networks with \(T\in\{64,128,256,512,1024\}\) and report the model performance in Table 6. Unlike traditional diffusion models, in which more diffusion steps usually yield better performance, a large \(T\) does not always improve our model's performance. For instance, \(T=64\) gives the best performance on the Cora and Road-Minnesota datasets. Our explanation for this observation is the high level of sparsity in the training graphs: if the total number of generation steps \(T\) is large, the model can only identify a few active nodes and predict edges between them at each time step. The model then faces a highly imbalanced classification problem, which may lead to poor model convergence. This issue is not observed for relatively denser graphs, e.g., the Polblogs and PPI datasets, which require a relatively large \(T\) to guarantee good model performance. When \(T\) is large enough (\(T=128\) for Polblogs and \(T=256\) for PPI), further increasing \(T\) does not improve the model performance.
## 6 Conclusion
In this work, we propose EDGE, a generative graph model based on a discrete diffusion process. By leveraging the sparsity in the diffusion process, EDGE significantly improves computational efficiency and scales to graphs with thousands of nodes. By explicitly modeling node degrees, EDGE improves its ability to capture important statistics of training graphs. Our extensive empirical study shows that EDGE achieves superior performance on benchmark graph generation tasks in terms of both computational efficiency and generation quality.
## Acknowledgment
We thank anonymous reviewers for their valuable feedback. Xiaohui Chen and Li-Ping Liu are partially supported by the National Science Foundation under Grant No. 2239869.
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline & & EO & PLE & NTC & CC & CPL & AC \\ \hline
\multirow{6}{*}{Polblogs} & True & 100 & 1.414 & 1 & 0.226 & 2.738 & -0.221 \\
 & 64 & 1.8 & 1.380 & 1.148 & 0.235 & 2.800 & -0.202 \\
 & 128\({}^{*}\) & 14.9 & 1.386 & 1.030 & 0.238 & 2.747 & -0.238 \\
 & 256\({}^{*}\) & 16.5 & 1.398 & 0.977 & 0.217 & 2.647 & -0.214 \\
 & 512\({}^{*}\) & 15.0 & 1.398 & 0.923 & 0.218 & 2.635 & -0.268 \\
 & 1024\({}^{*}\) & 16.5 & 1.400 & 0.991 & 0.219 & 2.665 & -0.246 \\ \hline
\multirow{6}{*}{Cora} & True & 100 & 1.885 & 1 & 0.090 & 6.311 & -0.071 \\
 & 64\({}^{*}\) & 0.9 & 1.755 & 0.446 & 0.034 & 4.995 & -0.046 \\
 & 128 & 1.1 & 1.747 & 0.555 & 0.042 & 5.017 & -0.050 \\
 & 256 & 0.8 & 1.753 & 0.360 & 0.027 & 4.818 & -0.041 \\
 & 512 & 0.8 & 1.753 & 0.360 & 0.027 & 4.818 & -0.042 \\
 & 1024 & 0.9 & 1.762 & 0.348 & 0.027 & 4.778 & -0.034 \\ \hline
\multirow{6}{*}{Road-MN} & True & 100 & 2.147 & 1 & 0.028 & 35.349 & -0.187 \\
 & 64\({}^{*}\) & 0.8 & 1.910 & 0.962 & 0.011 & 9.125 & -0.063 \\
 & 128 & 1.2 & 1.803 & 1.232 & 0.041 & 6.501 & -0.030 \\
 & 256 & 0.8 & 1.953 & 1.057 & 0.014 & 7.471 & -0.005 \\
 & 512 & 1.3 & 1.965 & 1.472 & 0.020 & 7.710 & -0.006 \\
 & 1024 & 1.2 & 1.983 & 2.491 & 0.035 & 7.906 & -0.034 \\ \hline
\multirow{6}{*}{PPI} & True & 100 & 1.462 & 1 & 0.092 & 3.095 & -0.099 \\
 & 64\({}^{*}\) & 7.4 & 1.421 & 2.455 & 0.116 & 3.498 & -0.116 \\
 & 128 & 6.2 & 1.419 & 1.503 & 0.126 & 3.384 & -0.147 \\
 & 256\({}^{*}\) & 7.5 & 1.449 & 0.981 & 0.091 & 3.028 & -0.107 \\
 & 512\({}^{*}\) & 7.0 & 1.438 & 1.101 & 0.099 & 3.244 & -0.107 \\
 & 1024\({}^{*}\) & 7.1 & 1.441 & 0.925 & 0.074 & 3.150 & -0.101 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Large diffusion steps \(T\) do not necessarily improve model performance. Good diffusion steps are labeled with “*”.
Figure 4: Visualization of samples for the Polblogs dataset. We observe that only CELL, TSVD, and EDGE can learn the basic structure of the ground-truth network, while other baselines fail. The network sampled from EDGE appears to be more similar to the training graph. |
2308.12957 | Semi-analytical Framework for Modeling Strong Coupling of Quantum
Emitters in Electromagnetic Resonators | We present a semi-analytical framework for studying interactions between
quantum emitters and general electromagnetic resonators. The method relies on
the Lippmann-Schwinger equation to calculate the complex resonance frequencies
of the coupled system based only on a single calculation for the
electromagnetic resonator without the quantum emitter and with no fitting
parameters. This is in stark contrast to standard approaches in the literature,
in which the properties of the coupled system are fitted from calculated
spectra. As an application example, we consider a recent dielectric cavity
design featuring deep subwavelength confinement of light. We find the expected
anti-crossing of the emitter and cavity resonance frequencies, and comparing to
independent reference calculations, we find an extraordinary quantitative
agreement with a relative error below one part in ten thousand. In order to
unambiguously connect with the Jaynes-Cummings model, we derive an explicit
expression relating the classical description of the emitter, as modeled by a
spherical inclusion with a Lorentzian material response, to the dipole moment
of the corresponding quantum optical model. The combined framework therefore
enables classical calculations to be used for evaluating the coupling strength
entering quantum optical theories in a transparent way. | Mohammad Abutoama, George Kountouris, Jesper Mørk, Philip Trøst Kristensen | 2023-08-24T17:53:13Z | http://arxiv.org/abs/2308.12957v1 | Semi-analytical Framework for Modeling Strong Coupling of Quantum Emitters in Electromagnetic Resonators
###### Abstract
We present a semi-analytical framework for studying interactions between quantum emitters and general electromagnetic resonators. The method relies on the Lippmann-Schwinger equation to calculate the complex resonance frequencies of the coupled system based only on a single calculation for the electromagnetic resonator without the quantum emitter and with no fitting parameters. This is in stark contrast to standard approaches in the literature, in which the properties of the coupled system are fitted from calculated spectra. As an application example, we consider a recent dielectric cavity design featuring deep subwavelength confinement of light. We find the expected anti-crossing of the emitter and cavity resonance frequencies, and comparing to independent reference calculations, we find an extraordinary quantitative agreement with a relative error below one part in ten thousand. In order to unambiguously connect with the Jaynes-Cummings model, we derive an explicit expression relating the classical description of the emitter, as modeled by a spherical inclusion with a Lorentzian material response, to the dipole moment of the corresponding quantum optical model. The combined framework therefore enables classical calculations to be used for evaluating the coupling strength entering quantum optical theories in a transparent way.
## Introduction
Efficient light-matter interaction at the nanoscale is of high interest for many important applications such as single-photon sources [1-7] for quantum information and communication technology [8-11], which often rely on the effective interfacing of quantum emitters (QEs) with optical cavities. When a QE is placed in a nanostructured environment, such as an optical cavity, the light-matter interaction can be modified relative to that in free space. For relatively weak coupling, the spontaneous emission rate is enhanced by the so-called Purcell effect [12]. In the strong coupling limit, on the other hand, the dynamics are characterized by coherent energy exchange between the QE and the cavity field, which can be observed in the frequency domain through a splitting of the spectrum into two peaks. This is illustrated in Fig. 1, which shows the absorption cross-section spectrum of a strongly coupled system consisting of a QE in the center of an optical cavity.
Light-matter interaction in a QE-optical cavity hybrid system can be theoretically studied using different frameworks and under different approximations. In particular, a two-level system is known to behave like a Lorentz oscillator in the weak excitation limit [13] of linear response, and the use of a small volume of material with a Lorentzian frequency response is a popular method of modeling QE-optical cavity hybrid systems by mapping the problem onto the model system of two coupled harmonic oscillators [14-20]. In this physically appealing approach, the dynamics of the coupled system can be seen as arising from the coherent exchange of energy between the QE and the optical cavity, and the approach therefore effectively provides a connection between classical electrodynamics and the Jaynes-Cummings model of cavity quantum electrodynamics. Despite the considerable success and use of this approach, the mapping of the continuous classical electromagnetic field problem onto the discrete oscillators suffers from the lack of an underlying theory, so that, in practice, phenomenological fitting parameters are required to account for experimental results. To explicitly define the discrete oscillators and thereby avoid the use of fitting parameters, we note that it is by now understood that the general response of electromagnetic resonators can be conveniently and quite naturally described using the natural resonances of the electromagnetic system, the so-called quasi-normal modes (QNMs) [21-24] of the system, which are also known as resonant states [25-26]. From this point of view, the introduction of a Lorentz oscillator material within or near the electromagnetic resonator leads to the formation of an additional QNM [27-28], which is the origin of the additional peak in the spectrum, cf. Fig. 1. The coupling strength of the coupled system can thus be analyzed by investigating the complex frequency spectrum.
In this work, we theoretically examine the interaction between a single QE and an optical cavity and present a semi-analytical approach to calculate the coupling strength based only on the QNM of the bare cavity. The model relies on the Lippmann-Schwinger (LS) equation [29] to calculate the QNM frequencies of the coupled system with no fitting parameters. The proposed calculation scheme is valid for general electromagnetic resonators, be they dielectric,
Figure 1: **(a)** General schematic of a QE-optical cavity hybrid system. **(b)** Top: Relative absorption cross-section of the bare cavity without (blue) and with (black) an embedded QE. The bare cavity spectrum features a single resonance peak, which corresponds to a single complex frequency \(\widetilde{\omega}_{\rm c}\). The spectrum for the coupled system, on the other hand, shows two peaks, each corresponding to a complex frequency and for which the frequency splitting is proportional to the coupling strength \(g\). Bottom: Complex frequency spectrum showing the discrete frequencies (circles) as well as the approximate values for the coupled system as obtained by the semi-analytical approach (stars).
plasmonic, or hybrid. For the calculations in this work, we consider a particular optical cavity belonging to a relatively new class of dielectric cavities with deep subwavelength confinement, that are of contemporary interest both theoretically and experimentally [30-46]. This class of cavities, which we refer to as extreme dielectric confinement (EDC) cavities, opens intriguing possibilities for significantly enhancing light-matter interaction. Experimental results for a silicon EDC cavity designed using topology optimization were presented in Ref. [47], and a thorough numerical investigation using a simplified version of this design is available in Ref. [48]. Our calculations in this work are based on a slightly modified version of this design, as shown in Fig. 1.
The paper starts with the mathematical formulation of the scattering problem of interest. We begin with a brief description of the use of the LS equation for calculating the QNMs of the coupled system, after which we present a special case where we recover the result of the Jaynes-Cummings model and thereby provide an explicit expression connecting the dipole moment to the properties of the dispersive material forming the QE. In the following section, we provide an application example, in which we demonstrate the expected anti-crossing of the resonances when varying the QE-cavity detuning and compare the predictions of the semi-analytical approach to full electromagnetic reference calculations as well as to the predictions from the Jaynes-Cummings model. Summary and conclusions are given in the last section.
## Results and Discussion
### Formulation
In this section, we describe the semi-analytical model for analyzing the scattering problem in Fig. 1(a) in which a QE is placed in the center of the optical cavity and thereby forms a strongly-coupled QE-optical cavity hybrid system. In particular, we will show that the QNM resonances of the coupled system, which describe the peaks of the spectrum in Fig. 1(b), can be calculated using the QNM of the bare optical cavity with no QE. Throughout the manuscript, we consider non-magnetic and piecewise isotropic materials for simplicity, but we note that the method generalizes immediately to more complicated material models.
The starting point is the LS equation, which provides the solution to a general scattering problem in terms of the background electric field Green function \(\mathbf{G}_{\mathrm{B}}(\mathbf{r},\mathbf{r}^{\prime},\omega)\) through an integral over all space of the form [24, 29]
\[\mathbf{E}_{\mathrm{tot}}(\mathbf{r},\omega)=\mathbf{E}_{\mathrm{in}}(\mathbf{r},\omega)+\frac{\omega^{2}}{c^{2}}\int\mathbf{G}_{\mathrm{B}}(\mathbf{r},\mathbf{r}^{\prime},\omega)\Delta\varepsilon(\mathbf{r}^{\prime},\omega)\mathbf{E}_{\mathrm{tot}}(\mathbf{r}^{\prime},\omega)\,\mathrm{d}V^{\prime}, \tag{1}\]
in which \(\mathbf{E}_{\mathrm{tot}}(\mathbf{r},\omega)\) denotes the total electric field, \(\mathbf{E}_{\mathrm{in}}(\mathbf{r},\omega)\) is the incoming electric field, \(c\) is the speed of light in vacuum, and \(\Delta\varepsilon(\mathbf{r},\omega)=\varepsilon_{\mathrm{R}}(\mathbf{r},\omega)-\varepsilon_{\mathrm{B}}(\mathbf{r},\omega)\) denotes the difference between the total permittivity distribution \(\varepsilon_{\mathrm{R}}(\mathbf{r},\omega)\) and the background permittivity \(\varepsilon_{\mathrm{B}}(\mathbf{r},\omega)\), both of which are in general functions of position \(\mathbf{r}\) and angular frequency \(\omega\).
Importantly, the incoming electric field is a solution to the wave equation in a geometry described by the background permittivity, so that all scattering is caused by the change in permittivity \(\Delta\varepsilon(\mathbf{r},\omega)\). Similarly, the background Green function is the Green function for the geometry described by the background permittivity. In typical scattering calculations, one will often choose the background permittivity to be constant, in which case the background Green
function is known analytically, and one can use combinations of plane waves for the incoming field.
The QNMs are solutions to the sourceless wave equation subject to a suitable radiation condition, such as the Silver-Muller radiation condition in the case of homogeneous media [24]. Therefore, we can calculate the QNMs corresponding to a given geometry defined by \(\Delta\varepsilon(\mathbf{r},\omega)\) as the solutions to the LS equation with no incoming field [49-50],
\[\mathbf{\tilde{f}}_{n}(\mathbf{r})=\frac{\widetilde{\omega}_{n}^{2}}{c^{2}} \int\mathbf{G}_{\mathbf{B}}(\mathbf{r},\mathbf{r}^{\prime},\widetilde{\omega} _{n})\Delta\varepsilon(\mathbf{r}^{\prime},\widetilde{\omega}_{n})\mathbf{ \tilde{f}}_{n}(\mathbf{r}^{\prime})\mathrm{d}\mathbf{V}^{\prime}, \tag{2}\]
where \(\tilde{\mathbf{f}}_{n}(\mathbf{r})\) denotes the \(n\)'th electric field QNM with corresponding complex frequency \(\widetilde{\omega}_{n}=\omega_{n}-\mathrm{i}\gamma_{n}\), in which the imaginary part describes the dissipation, and the associated quality factor can be calculated as \(Q_{n}=\omega_{n}/(2\gamma_{n})\). In particular, the QNM of the bare cavity in Fig. 1 above can be calculated using Eq. (2) by choosing \(\mathbf{G}_{\mathrm{B}}(\mathbf{r},\mathbf{r}^{\prime},\omega)\) to be the Green function of free space and choosing \(\varepsilon_{\mathrm{R}}(\mathbf{r},\omega)\) to be the permittivity distribution defining the optical cavity. Similarly, the QNMs of the coupled QE-optical cavity system can be calculated by applying the same equation but choosing \(\varepsilon_{\mathrm{R}}(\mathbf{r},\omega)\) to be the permittivity of the optical cavity including the embedded QE. A third option for calculating the QNMs of the coupled system - which is the one we shall exploit in this work - is to choose the Green function to be that of the bare cavity and \(\Delta\varepsilon(\mathbf{r},\omega)\) to be the permittivity change due to the QE only. This choice is illustrated in Figure 2.
Following Refs. [15, 16, 19], we model the QE as a small sphere of homogeneous but dispersive permittivity in the form of a single Lorentzian response,
\[\varepsilon_{\mathrm{QE}}\left(\omega\right)=\varepsilon_{\infty}+\frac{f \omega_{\mathrm{QE}}^{2}}{\omega_{\mathrm{QE}}^{2}-\omega^{2}-2\mathrm{i} \gamma_{\mathrm{QE}}\omega}\,, \tag{3}\]
where \(\varepsilon_{\infty}\) is a constant background permittivity, \(f\) is the oscillator strength of the electronic transition in the material, and \(\gamma_{\mathrm{QE}}\) and \(\omega_{\mathrm{QE}}\) denote, respectively, the damping rate and the resonance angular frequency of this transition. Since we will be investigating the case where the QE is placed inside the material at the cavity center, we take the constant background permittivity to be that of the high-index cavity material. The permittivity perturbation that enters the LS equation is then given by: \(\Delta\varepsilon_{\mathrm{QE}}(\omega)=\frac{f\omega_{\mathrm{QE}}^{2}}{ \omega_{\mathrm{QE}}^{2}-\omega^{2}-2\mathrm{i}\gamma_{\mathrm{QE}}\omega}\).
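For numerical experiments it is convenient to encode this material model directly. In the sketch below, the oscillator strength and resonance frequency are those of the application example later in the text, while the damping rate \(\gamma_{\mathrm{QE}}\) is not stated explicitly and the value used here is an assumption.

```python
import numpy as np

# Parameters of the application example; gamma_qe is an assumed value.
f = 7e-3                  # oscillator strength
omega_qe = 1213.66e12     # resonance angular frequency (rad/s)
gamma_qe = 0.137e12       # damping rate (rad/s), assumed

def delta_eps_qe(omega):
    """Permittivity perturbation of the QE: the Lorentzian term of Eq. (3)."""
    return f * omega_qe**2 / (omega_qe**2 - omega**2 - 2j * gamma_qe * omega)
```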
Equation (2) is a Fredholm integral equation of the second kind in which the electric field QNM \(\mathbf{\tilde{f}}_{n}(\mathbf{r})\) appears on both sides of the equality. The general solution calls for numerical calculation schemes, but for the present problem we can turn it into a simple algebraic equation by means of a few simplifying assumptions. First, we make use of the fact that the electromagnetic response of the bare cavity is well represented by a single QNM for frequencies close to the resonance and for positions close to the cavity center, as detailed in Ref. [48]. Therefore, we can approximate the Green tensor by use of this QNM only as
\[\mathbf{G}_{\mathrm{B}}(\mathbf{r},\mathbf{r}^{\prime},\omega)\approx\frac{c^{2}}{2\omega}\,\frac{\tilde{\mathbf{f}}_{\mathrm{c}}(\mathbf{r})\tilde{\mathbf{f}}_{\mathrm{c}}(\mathbf{r}^{\prime})}{\tilde{\omega}_{\mathrm{c}}-\omega} \tag{4}\]
where \(\mathbf{\tilde{f}}_{\mathrm{c}}(\mathbf{r})\) is the normalized electric field of the QNM of interest for the bare cavity at the position \(\mathbf{r}\), and \(\tilde{\omega}_{\mathrm{c}}=\omega_{\mathrm{c}}-\mathrm{i}\gamma_{\mathrm{c}}\) is the corresponding complex resonance frequency. Next, inserting the single-QNM approximation of the Green tensor in Eq. (2) and carrying out the integration assuming the field to be approximately constant across the volume of the QE, we can rewrite Eq. (2) as
\[\tilde{\mathbf{f}}_{n}\big{(}\mathbf{r}_{\mathrm{QE}}\big{)}\left[1-\frac{\tilde{\omega}_{n}}{2}\,\frac{\tilde{\mathbf{f}}_{\mathrm{c}}(\mathbf{r}_{\mathrm{QE}})\tilde{\mathbf{f}}_{\mathrm{c}}(\mathbf{r}_{\mathrm{QE}})}{\tilde{\omega}_{\mathrm{c}}-\tilde{\omega}_{n}}\,\Delta\varepsilon_{\mathrm{QE}}(\tilde{\omega}_{n})V_{\mathrm{QE}}\right]\approx 0, \tag{5}\]
in which \(\mathbf{r}_{\mathrm{QE}}\) and \(V_{\mathrm{QE}}\) denote the position and the volume of the QE, respectively. For non-trivial solutions, we demand that
\[1-\frac{\tilde{\omega}_{n}}{2}\,\frac{\tilde{\mathbf{f}}_{\mathrm{c}}(\mathbf{r}_{\mathrm{QE}})\tilde{\mathbf{f}}_{\mathrm{c}}(\mathbf{r}_{\mathrm{QE}})}{\tilde{\omega}_{\mathrm{c}}-\tilde{\omega}_{n}}\,\frac{f\omega_{\mathrm{QE}}^{2}}{\omega_{\mathrm{QE}}^{2}-\tilde{\omega}_{n}^{2}-2\mathrm{i}\gamma_{\mathrm{QE}}\tilde{\omega}_{n}}\,V_{\mathrm{QE}}=0. \tag{6}\]
The left hand side of this equation defines a complex function for which the QNM frequencies \(\tilde{\omega}_{n}\) appear as zeros. Noting that the denominator in the permittivity function derives from the two approximate resonances at \(\tilde{\omega}_{\mathrm{QE}}=\omega_{\mathrm{QE}}-\mathrm{i}\gamma_{\mathrm{ QE}}\) and \(-\tilde{\omega}_{\mathrm{QE}}^{*}=-\omega_{\mathrm{QE}}-\mathrm{i}\gamma_{ \mathrm{QE}}\), we can rewrite this expression in the approximate form
\[1-\frac{\tilde{\omega}_{n}}{2}\,\frac{\tilde{\mathbf{f}}_{\mathrm{c}}(\mathbf{r}_{\mathrm{QE}})\tilde{\mathbf{f}}_{\mathrm{c}}(\mathbf{r}_{\mathrm{QE}})}{\tilde{\omega}_{\mathrm{c}}-\tilde{\omega}_{n}}\,\frac{f\omega_{\mathrm{QE}}^{2}}{(\tilde{\omega}_{\mathrm{QE}}-\tilde{\omega}_{n})(\tilde{\omega}_{\mathrm{QE}}^{*}+\tilde{\omega}_{n})}\,V_{\mathrm{QE}}=0 \tag{7}\]
from which we expect two resonances in the vicinity of \(\tilde{\omega}_{n}\approx\tilde{\omega}_{\mathrm{c}}\approx\tilde{\omega}_{\mathrm{QE}}\). With this ansatz, we approximate the non-resonant factor as \(\tilde{\omega}_{n}/(\tilde{\omega}_{\mathrm{QE}}^{*}+\tilde{\omega}_{n})\approx 1/2\), and we can then rewrite the equation as
\[(\tilde{\omega}_{\mathrm{c}}-\tilde{\omega}_{n})(\tilde{\omega}_{\mathrm{QE} }-\,\,\tilde{\omega}_{n})\,\,-\,\,g^{2}=0 \tag{8}\]
where we have defined \(g^{2}=f\mathrm{V}_{\mathrm{QE}}\omega_{\mathrm{QE}}^{2}/(4\varepsilon_{ \mathrm{R}}(\mathbf{r}_{\mathrm{QE}})\nu_{\mathrm{c}})\). In this expression, \(\varepsilon_{\mathrm{R}}(\mathbf{r})\) is the dispersionless permittivity distribution of the bare cavity, and \(\nu_{\mathrm{c}}\) is the generalized effective mode volume [49],
\[\nu_{\mathrm{c}}=\frac{\langle\langle\tilde{\mathbf{f}}_{\mathrm{c}}|\tilde{\mathbf{f}}_{\mathrm{c}}\rangle\rangle}{\varepsilon_{\mathrm{R}}(\mathbf{r}_{\mathrm{QE}})\tilde{\mathbf{f}}_{\mathrm{c}}(\mathbf{r}_{\mathrm{QE}})\tilde{\mathbf{f}}_{\mathrm{c}}(\mathbf{r}_{\mathrm{QE}})} \tag{9}\]
in which \(\langle\langle\tilde{\mathbf{f}}_{\mathrm{c}}|\tilde{\mathbf{f}}_{\mathrm{c}}\,\rangle\rangle\) denotes the QNM normalization [51-54]. Solving Eq. (8), we can express the complex eigenfrequencies of the coupled system as
Figure 2: The general scheme of the calculation approach: the geometry of the coupled QE-optical cavity system can be divided into the bare cavity in free space and an added permittivity perturbation \(\Delta\varepsilon(\mathbf{r},\omega)\) due to the QE.
\[\widetilde{\omega}_{1,2}=\frac{\omega_{\mathrm{c}}+\omega_{\mathrm{QE}}}{2}-\mathrm{i}\frac{\gamma_{\mathrm{c}}+\gamma_{\mathrm{QE}}}{2}\pm\sqrt{g^{2}+\frac{1}{4}\left[\left(\omega_{\mathrm{c}}-\omega_{\mathrm{QE}}\right)-\mathrm{i}\left(\gamma_{\mathrm{c}}-\gamma_{\mathrm{QE}}\right)\right]^{2}} \tag{10}\]
### Connection to the Jaynes-Cummings model
The Jaynes-Cummings (JC) model [55] describes the interaction of an optical cavity and a QE without losses in second quantization. The coupled system is described by the Hamiltonian
\[{\rm H}=\hbar\omega_{\rm c}a^{\dagger}a+\hbar\omega_{\rm QE}\sigma^{\dagger} \sigma+\hbar(ga^{\dagger}\sigma+g^{*}\sigma^{\dagger}a) \tag{11}\]
in which \(\hbar\) is the reduced Planck constant, and the operators \(a\) (\(a^{\dagger}\)) and \(\sigma\) (\(\sigma^{\dagger}\)) are lowering (raising) operators of the cavity field and the QE, respectively. The cavity field inherently behaves as a harmonic oscillator [55], whereas the QE can be well represented as a two-level system. We note, however, that for the calculations in this work, in which we focus on the single-excitation subspace, we can take the QE to be a harmonic oscillator as well; this also justifies the alternative analysis in the appendix. The parameter \(g\) denotes the coupling strength, which is connected to the dipole moment of the QE as [56]
\[\hbar g=\sqrt{\frac{\hbar\omega_{\rm c}}{2\varepsilon_{0}}}\mu_{\rm QE}\cdot \tilde{\bf f}_{\rm c}\left({\bf r}_{\rm QE}\right) \tag{12}\]
where \(\varepsilon_{0}\) is the permittivity of free space. The operators evolve in time according to the Heisenberg equations of motion. In order to account for dissipation of energy, we follow the general ideas of Refs. [56-57] and add imaginary parts to the frequencies along with fluctuating source terms. With this approach, we find that the system dynamics can be written in the form
\[\partial_{t}\begin{bmatrix}\alpha\\ \sigma\end{bmatrix}=\begin{bmatrix}-{\rm i}\widetilde{\omega}_{\rm c}&-{\rm i} g\\ -{\rm i}g&-{\rm i}\widetilde{\omega}_{\rm QE}\end{bmatrix}\begin{bmatrix}\alpha \\ \sigma\end{bmatrix}+\begin{bmatrix}F_{\rm c}\\ F_{\rm QE}\end{bmatrix} \tag{13}\]
where \(F_{\rm c}\) and \(F_{\rm QE}\) are fluctuating white noise terms connected with the dissipation, which are important for preserving the commutation relations for the system operators in time [56-57]. From Eq. (13), it is clear that the system dynamics are governed by the eigenvalues of the matrix, and by direct calculation, we find the complex frequencies of the coupled system in the exact form of Eq. (10).
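This statement can be checked directly in a few lines; the sketch below diagonalizes the drift matrix of Eq. (13) for illustrative parameter values (the coupling strength used here is an assumed number of the right magnitude).

```python
import numpy as np

def hybrid_frequencies(omega_c, gamma_c, omega_qe, gamma_qe, g):
    """Eigenvalues of the drift matrix in Eq. (13); multiplying by i recovers
    the complex frequencies omega_tilde_{1,2} of Eq. (10)."""
    wc, wq = omega_c - 1j * gamma_c, omega_qe - 1j * gamma_qe
    M = np.array([[-1j * wc, -1j * g],
                  [-1j * g,  -1j * wq]])
    return 1j * np.linalg.eigvals(M)

# Illustrative values (rad/s); g and gamma_qe are assumed numbers.
print(hybrid_frequencies(1213.66e12, 0.287e12, 1213.66e12, 0.137e12, 0.92e12))
```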
By comparing the results of this fully quantum mechanical approach in the limit \(\widetilde{\omega}_{\rm c}=\widetilde{\omega}_{\rm QE}\) to the solution of the semi-analytical approach based on the LS equation, we can now directly connect the dipole moment in the Jaynes-Cummings model to the oscillator strength and volume of the dispersive material making up the QE. Alternatively, we can follow the approach of Ref. [58] and calculate the solution to the quantum mechanical problem in terms of an LS equation, as we do in the appendix. In both cases, we find that
\[\mu_{\rm QE}=\sqrt{\frac{f\hbar\varepsilon_{0}V_{\rm QE}\omega_{\rm QE}}{2}} \tag{14}\]
### Application example
In this section, we provide an example application of the theory. We consider the optical cavity in Figs. 1 and 2, which is derived from the design in Ref. [48] by slightly modifying it to use a
relative permittivity of \(\varepsilon_{\text{InP}}=10.02\) corresponding to indium phosphide around the target wavelength \(\lambda_{0}=1550\) nm [59]. For the QE, we consider a small sphere with radius 20 nm placed at the center of the cavity. The material of the QE is described by the dispersive permittivity in Eq. (3) with oscillator strength \(f=7\times 10^{-3}\) and \(\widetilde{\omega}_{\text{QE}}=\widetilde{\omega}_{\text{c}}\) to perfectly match the bare cavity resonance. As an experimentally relevant way of probing the resonances in the system, we first calculate the absorption cross section of the system with and without the QE, as detailed in the appendix. The results are shown in the top panel of Fig. 1(b) and show the characteristic splitting of a single resonance into a double-peaked spectrum when the QE is included.
From the discussion above, we expect the single peak in the spectrum of Fig. 1(b) to be attributable to a single QNM frequency. Rather than using the formulation in Eq. (2), we calculate the QNM fields numerically with the finite element method, as detailed in the appendix. We find that the bare optical cavity supports a QNM with an angular resonance frequency of \(\widetilde{\omega}_{\text{c}}=(1213.66-\text{i}0.287)\;10^{12}\text{rad s}^{-1}\) corresponding to a quality factor \(Q=2114\), as shown by the blue open circle in the bottom panel of Fig. 1(b). The mode profile of this QNM is shown in Fig. 3(a), and it has a generalized effective mode volume of \(\nu_{\text{c}}=(0.691+\text{i}0.002)\;(\lambda_{0}/2n)^{3}\).
Next, we consider the effect of including the QE. By direct numerical calculations, we find the two complex frequencies \(\widetilde{\omega}_{1}=(1212.84-\text{i}0.215)\;10^{12}\text{rad s}^{-1}\) and \(\widetilde{\omega}_{2}=(1214.67-\text{i}0.209)\;10^{12}\text{rad s}^{-1}\) as shown by the black open circles in the bottom part of Fig. 1(b). The mode profiles of the two corresponding QNMs are shown in Fig. 3(b). We can now compare the results of the full numerical calculation to the approximate resonance frequencies resulting from the LS equation. To this end, we solve Eq. (6) iteratively by numerical means and without
Figure 3: **(a)** Mode profile showing the magnitude of the electric field QNM of interest in the bare cavity. The field is strongly localized at the cavity center. **(b)** Mode profiles of the two QNMs of interest in the coupled QE-optical cavity hybrid system.
the assumption \(\widetilde{\omega}_{n}\approx\widetilde{\omega}_{c}\approx\widetilde{\omega}_{QE}\), and we find the approximate frequencies \(\widetilde{\omega}_{1}^{LS}=(1212.72-\mathrm{i}0.217)\)\(10^{12}\mathrm{rad}\)\(\mathrm{s}^{-1}\) and \(\widetilde{\omega}_{2}^{LS}=(1214.60-\mathrm{i}0.215)\)\(10^{12}\mathrm{rad}\)\(\mathrm{s}^{-1}\), as shown by black stars in the bottom part of Fig. 1(b). The corresponding relative errors are on the order of one part in ten thousand for the real parts and one percent for the imaginary parts. The semi-analytical approach evidently works remarkably well for predicting the coupling in this QE-optical cavity hybrid system.
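The numbers in this example can be reproduced, at least approximately, from the formulas above. The sketch below evaluates the coupling strength \(g\) and then solves Eq. (6) without the resonant ansatz, which reduces to a cubic equation in the unknown frequency. Here, the QNM normalization \(\langle\langle\tilde{\mathbf{f}}_{\mathrm{c}}|\tilde{\mathbf{f}}_{\mathrm{c}}\rangle\rangle=1\) is assumed, and the QE damping rate, which is not given in the text, is inferred from the imaginary parts of the hybrid modes.

```python
import numpy as np

# Parameters from the application example; gamma_qe is an assumption.
lam0, n, eps_r = 1550e-9, 3.165, 10.02
f, omega_qe, gamma_qe = 7e-3, 1213.66e12, 0.137e12
wc = 1213.66e12 - 0.287e12j                     # bare-cavity QNM frequency
V_qe = 4 / 3 * np.pi * (20e-9)**3
nu_c = (0.691 + 0.002j) * (lam0 / (2 * n))**3   # generalized mode volume

g = np.sqrt(f * V_qe * omega_qe**2 / (4 * eps_r * nu_c))
print(g)   # ~0.92e12 rad/s, consistent with the observed ~1.8e12 rad/s splitting

# Eq. (6) rearranged as a cubic in the unknown frequency w:
# (wc - w)(omega_qe^2 - w^2 - 2i*gamma_qe*w) = (w/2) * S * f * omega_qe^2 * V_qe
S = 1 / (eps_r * nu_c)                          # f_c(r_qe) f_c(r_qe)
A = S * f * omega_qe**2 * V_qe
roots = np.roots([1, 2j * gamma_qe - wc,
                  -2j * gamma_qe * wc - omega_qe**2 - A / 2,
                  wc * omega_qe**2])
physical = sorted((r for r in roots if r.real > 0), key=lambda z: z.real)
print(physical)   # close to the quoted LS frequencies omega_1^LS and omega_2^LS
```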
We note that these calculations were performed with the mesh resulting from two refinements of the original mesh, as discussed in the appendix. Based on the convergence study, we expect the error on the real and imaginary parts of the stated numbers to be on the order of 1 and 0.01, respectively, as calculated by comparing to the best estimate of the true value for the case of the bare cavity. Notably, this accuracy does not justify the stated number of digits. Nevertheless, we include the additional digits to highlight the fact that since all calculations were done on the same mesh, the results are internally consistent to a higher accuracy than the estimate based on the absolute error. Essentially, the numerical error stemming from discretization and the finite size of the calculation domain affects all the calculations in a similar way.
To further examine the behavior of the coupled system, we consider the change in the spectrum when detuning the QE resonance with respect to the bare cavity resonance. The results are summarized in Fig. 4, which shows the real parts of both resonance frequencies in the coupled QE-optical cavity hybrid system as calculated with the full numerical simulations as well as the approximate LS equation. Comparing the two datasets, we find relative errors smaller than one part in ten thousand over the full spectral range of interest. In addition, we show the results from Eq. (10), which are identical to the prediction of the Jaynes-Cummings model with the appropriate scaling of the dipole moment from Eqs. (12) and (14). In this case, we find relative errors close to one part in a thousand.
The quantitative agreement between the LS equation and the reference calculations underscores the usefulness of the semi-analytical approach for precise predictions of the resonance frequencies of the QE-optical cavity hybrid system.
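The anti-crossing itself follows directly from Eq. (10). A short sketch of the detuning sweep is given below; the coupling strength and QE damping rate are assumed values, so the curves are only indicative of Fig. 4(a).

```python
import numpy as np

# Detuning sweep with Eq. (10); g and gamma_qe are assumed values.
g, omega_c, gamma_c, gamma_qe = 0.92e12, 1213.66e12, 0.287e12, 0.137e12
omega_qe = omega_c + np.linspace(-4e12, 4e12, 201)

avg = (omega_c + omega_qe) / 2 - 0.5j * (gamma_c + gamma_qe)
half = np.sqrt(g**2 + 0.25 * ((omega_c - omega_qe) - 1j * (gamma_c - gamma_qe))**2)
w1, w2 = avg + half, avg - half
# Re(w1) and Re(w2) versus omega_qe trace out the two anti-crossing branches
```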
## Summary and Conclusions
We theoretically investigated the interaction between a single QE - as modeled by a small volume of material with a Lorentzian response - and a general electromagnetic resonator - such as an optical cavity, a plasmonic particle, or a combination - and presented a semi-analytical approach to calculate the coupling of the two into a hybrid system. The approach relies on the LS equation to calculate the resonance frequencies of the coupled system without the need for fitting to calculated spectra. Once the complex frequency and generalized effective mode volume of the QNM of interest in the bare cavity is calculated - typically by numerical means - the complex frequencies of the coupled system can be readily calculated to high accuracy based only on the properties of the QE. As a special case, we recover the result of the Jaynes-Cummings model and thereby provide an explicit expression connecting the dipole moment to the oscillator strength and volume of the dispersive material making up the QE in this model. As an example application, we investigated the coupling of a QE to a dielectric nanocavity featuring deep subwavelength confinement. By detuning the QE resonance frequency we found the expected anti-crossing of the QE and cavity resonance frequencies, and by comparing to full numerical reference calculations, we found relative errors smaller than one part in ten thousand over the full frequency range of interest.
Efficient and transparent tools for modeling light-matter interaction in coupled systems at the nanoscale are important from a theoretical point of view as well as for applications in quantum technology. Based on the extraordinary quantitative agreement with the reference calculations, the semi-analytical approach presented in this article provides one such tool, which enables classical calculations to be used for evaluating the coupling strength entering quantum optical theories in a precise and transparent way.
## Associated Content
### Supporting Information
The Supporting Information is available free of charge at [http://pubs.acs.org](http://pubs.acs.org).
Cavity design and numerical calculations, QNM convergence study and explicit expression of the coupling strength by connecting the classical oscillator strength of the emitter to the dipole moment of the corresponding quantum optical model.
## Author Information
### Corresponding Author
Figure 4: **(a)** Eigenfrequencies of the coupled QE-optical cavity hybrid system when detuning the QE resonance with respect to the cavity frequency, showing the expected anti-crossing of the frequencies. Blue circles, gray stars, and red triangles correspond to the full numerical solution (Full), the semi-analytical approach (LS) and the Jaynes-Cummings model, respectively. The dashed horizontal black line shows the resonance frequency of the optical cavity, while the solid black line indicates the angular frequency of the QE. **(b)** Corresponding relative errors of the semi-analytical approach (Full-LS) and the Jaynes-Cummings model (Full-JC) when compared to the full numerical solution.
**Mohammad Abtoama - DTU Electro, Technical University of Denmark, Ørsteds Plads, building 343, 2800 Kgs. Lyngby, Denmark; NanoPhoton - Center for Nanophotonics, Ørsteds Plads, building 345A, 2800 Kgs. Lyngby, Denmark; orcid.org/0000-0002-4286-0434; Email: [email protected]**
**George Kountouris - DTU Electro, Technical University of Denmark, Ørsteds Plads, building 343, 2800 Kgs. Lyngby, Denmark; NanoPhoton - Center for Nanophotonics, Ørsteds Plads, building 345A, 2800 Kgs. Lyngby, Denmark; orcid.org/0000-0003-4750-8701; Email: [email protected]**
**Jesper Mork - DTU Electro, Technical University of Denmark, Ørsteds Plads, building 343, 2800 Kgs. Lyngby, Denmark; NanoPhoton - Center for Nanophotonics, Ørsteds Plads, building 345A, 2800 Kgs. Lyngby, Denmark; Email: [email protected]**
**Philip Trost Kristensen - DTU Electro, Technical University of Denmark, Ørsteds Plads, building 343, 2800 Kgs. Lyngby, Denmark; NanoPhoton - Center for Nanophotonics, Ørsteds Plads, building 345A, 2800 Kgs. Lyngby, Denmark; orcid.org/0000-0001-5804-1989; Email: [email protected]**
**Author Contributions**
The manuscript was written through contributions of all authors. All authors have given approval to the final version of the manuscript.
**Funding/ ACKNOWLEDGMENT**
This work was supported by the Danmarks Grundforskningsfond (DNRF147) and Danmarks Frie Forskningsfond (0164-00014B). M.A. was partially supported by the Planning and Budgeting Committee of the Israeli Council for Higher Education (VATAT).
**Notes**
The authors declare no competing financial interest.
## References
* (1) Wrachtrup, J.; Jelezko, F. Processing Quantum Information in Diamond. J. Phys.: _Condens. Matter_**2006**, _18_, S807-S824.
* (2) Koenderink, A. F. Plasmon nanoparticle array waveguides for single photon and single plasmon sources. _Nano Lett._**2009**, \(9\), 4228-4233.
* (3) Claudon, J.; Bleuse, J.; Malik, N. S.; Bazin, M.; Jaffrennou, P.; Gregersen, N.; Sauvan, C.; Lalanne, P.; Gerard, J.-M. A Highly Efficient Single-Photon Source Based on a Quantum Dot in a Photonic Nanowire. _Nat. Photonics_**2010**, \(4\) (3), 174-177.
* (4) Babinec, T. M.; Hausmann, B. J. M.; Khan, M.; Zhang, Y.; Maze, J. R.; Hemmer, P. R.; Loncar, M. A Diamond Nanowire Single-Photon Source. _Nat. Nanotechnol._**2010**, \(5\) (3), 195-199.
* (5) He, Y.-M.; He, Y.; Wei, Y.-J.; Wu, D.; Atature, M.; Schneider, C.; Hofling, S.; Kamp, M.; Lu, Y.-L.; Pan, J.-W. On-demand semiconductor single-photon source with near-unity indistinguishability. _Nat. Nanotechnol._**2013**, \(8\), 213-217.
* (6) Somaschi, N.; Giesz, V.; De Santis, L.; Loredo, J. C.; Almeida, M. P.; Hornecker, G.; Portalupi, S. L.; Grange, T.; Anton, C.; Demory, J.; Gomez, C.; Sagnes, I.; Lanzillotti-Kimura, N. D.; Lemaitre, A.; Auffeves, A.; White, A. G.; Lanco, L.; Senellart, P. Near-Optimal Single-Photon Sources in the Solid State. _Nat. Photonics_**2016**, _10_ (5), 340-345.
* (7) Tomm, N.; Javadi, A.; Antoniadis, N. O.; Najer, D.; Lobl, M. C.; Korsch, A. R.; Schott, R.; Valentin, S. R.; Wieck, A. D.; Ludwig, A.; Warburton, R. J. A Bright and Fast Source of Coherent Single Photons. _Nat. Nanotechnol._**2021**, _16_ (4), 399-403.
* (8) Lo, H. K.; Chau, H. F. Unconditional security of quantum key distribution over arbitrarily long distances. _Science_**1999**, _283_, 2050-2056.
* (9) Monroe, C. Quantum information processing with atoms and photons. _Nature_**2002**, _416_, 238-246.
* (10) Hennessy, K. et al. Quantum nature of a strongly coupled single quantum dot-cavity system. _Nature_**2007**, _445_, 896-899.
* (11) Kimble, H. J. The quantum internet. _Nature_**2008**, _453_, 1023-1030.
* (12) Purcell, E. M. Proceedings of the American Physical Society, B10. Spontaneous emission probabilities at radio frequencies. _Phys. Rev._**1946**, _69_, 674.
* (13) Zhu, Y.; Gauthier, D. J.; Morin, S. E.; Wu, Q.; Carmichael, H. J.; Mossberg, T. W. Vacuum Rabi splitting as a feature of linear dispersion theory: Analysis and experimental observations. _Phys. Rev. Lett._**1990**, _64_, 2499.
* (14) Rudin, S.; Reinecke, T. L. Oscillator model for vacuum Rabi splitting in microcavities. _Phys. Rev. B_**1999**, _59_, 10227-10233.
* (15) Wu, X.; Gray, S. K.; Pelton, M. Quantum-dot-induced transparency in a nanoscale plasmonic resonator. _Opt. Express_**2010**, _18_, 23633-23645.
* (16) Santhosh, K.; Bitton, O.; Chuntonov, L.; Haran, G. Vacuum Rabi splitting in a plasmonic cavity at the single quantum emitter limit. _Nat. Commun._**2016**, _7_, ncomms11823.
* (17) Autore, M.; Li, P.; Dolado, I.; Alfaro-Mozaz, J. F.; Esteban, R.; Atxabal, A.; Casanova, F.; Hueso, L. E.; Alonso-Gonzalez, P.; Aizpurua, J.; Nikitin, A. Y.; Velez, S.; Hillenbrand, R. Boron nitride nanoresonators for phonon-enhanced molecular vibrational spectroscopy at the strong coupling limit. _Light Sci. Appl._**2018**, _7_, 17172.
* (18) Pelton, M.; Storm, S. D.; Leng, H. Strong coupling of emitters to single plasmonic nanoparticles: exciton-induced transparency and Rabi splitting. _Nanoscale_**2019**, _11_, 14540-14552.
* (19) Bitton, O.; Gupta, S. N.; Houben, L.; Kvapil, M.; Krapek, V.; Sikola, T.; Haran, G. Vacuum Rabi splitting of a dark plasmonic cavity mode revealed by fast electrons. _Nat. Commun._**2020**, _11_, 487.
* (20) Gupta, S. N.; Bitton, O.; Neuman, T.; Esteban, R.; Chuntonov, L.; Aizpurua, J.; Haran, G. Complex plasmon-exciton dynamics revealed through quantum dot light emission in a nanocavity. _Nat. Commun._**2021**, 1310.
* (21) Ching, E. S. C.; Leung, P. T.; van den Brink, A. M.; Suen, W. M.; Tong, S. S.; Young, K. Quasinormal-mode expansion for waves in open systems. _Rev. Mod. Phys._**1998**, _70_, 1545-1554.
* (22) Kristensen, P. T.; Hughes, S. Modes and mode volumes of leaky optical cavities and plasmonic nanoresonators. _ACS Photonics_**2014**, _1_, 2-10.
* (23) Lalanne, P.; Yan, W.; Vynck, K.; Sauvan, C.; Hugonin, J.-P. Light interaction with photonic and plasmonic resonances. _Laser Photon. Rev._**2018**, _12_, 1700113.
* (24) Kristensen, P. T.; Herrmann, K.; Intravaia, F.; Busch, K. Modeling electromagnetic resonators using quasinormal modes. _Adv. Opt. Photonics_**2020**, _12_(3), 612.
* (25) Muljarov, E. A.; Langbein, W.; Zimmermann, R. Brillouin-Wigner perturbation theory in open electromagnetic systems. _Europhys. Lett._**2010**, _92_, 50010.
* (26) Both, S.; Weiss, T. Resonant states and their role in nanophotonics. _Semicond. Sci. Technol._**2022**, _37_, 013002.
* (27) Carlson, C.; Salzwedel, R.; Selig, M.; Knorr, A.; Hughes, S. Strong coupling regime and hybrid quasinormal modes from a single plasmonic resonator coupled to a transition metal dichalcogenide monolayer. _Phys. Rev. B_**2021**, _104_, 125424.
* (28) Denning, E. V.; Wubs, M.; Stenger, N.; Mork, J.; Kristensen, P. T. Quantum theory of two-dimensional materials coupled to electromagnetic resonators. _Phys. Rev. B_**2022**, _105_, 085306.
* (29) Lippmann, B. A.; Schwinger, J. Variational principles for scattering processes. I. _Phys. Rev._**1950**, _79_, 469-480.
* (30) Almeida, V. R.; Xu, Q.; Barrios, C. A.; Lipson, M. Guiding and confining light in void nanostructure. _Opt. Lett._**2004**, _29_(11), 1209.
* (31) Xu, Q.; Almeida, V. R.; Panepucci, R. R.; Lipson, M. Experimental demonstration of guiding and confining light in nanometer-size low-refractive-index material. _Opt. Lett._**2004**, _29_(14), 1626.
* (32) Robinson, J. T.; Manolatou, C.; Chen, L.; Lipson, M. Ultrasmall mode volumes in dielectric optical microcavities. _Phys. Rev. Lett._**2005**, _95_(14), 143901.
* (33) Gondarenko, A.; Preble, S.; Robinson, J.; Chen, L.; Lipson, H.; Lipson, M. Spontaneous emergence of periodic patterns in a biologically inspired simulation of photonic structures. _Phys. Rev. Lett._**2006**, _96_(14), 143904.
* (34) Gondarenko, A.; Lipson, M. Low modal volume dipole-like dielectric slab resonator. _Opt. Express_**2008**, _16_(22), 17689.
* (35) Barrios, C. A. Optical slot-waveguide based biochemical sensors. _Sensors_**2009**, _9_(6), 4751-4765.
* (36) Lu, Q.; Shu, F.-J.; Zou, C.-L. Dielectric bow-tie nanocavity. _Opt. Lett._**2013**, _38_(24), 5311.
* (37) Hu, S.; Weiss, S. M. Design of photonic crystal cavities for extreme light concentration. _ACS Photonics_**2016**, _3_(9), 1647-1653.
* (38) Choi, H.; Heuck, M.; Englund, D. Self-similar nanocavity design with ultrasmall mode volume for single-photon nonlinearities. _Phys. Rev. Lett._**2017**, _118_(22), 223605.
* (39) Wang, F.; Christiansen, R. E.; Yu, Y.; Mork, J.; Sigmund, O. Maximizing the quality factor to mode volume ratio for ultra-small photonic crystal cavities. _Appl. Phys. Lett._**2018**, _113_(24), 241101.
* (40) Yue, W.-C.; Yao, P.-J.; Xu, L.-X.; Ming, H. All-dielectric bowtie waveguide with deep subwavelength mode confinement. _Front. Phys._**2018**, _13_(4), 134207.
* (41) Hu, S.; Khater, M.; Salas-Montiel, R.; Kratschmer, E.; Engelmann, S.; Green, W. M. J.; Weiss, S. M. Experimental realization of deep-subwavelength confinement in dielectric optical resonators. _Sci. Adv._**2018**, _4_(8), 1.
* (42) Sakib, N.; Ryckman, J. D. Theory of extreme optical concentration in all-dielectric waveguides. _arXiv:1909.12878_, **2019**.
* (43) Zhou, J.; Zheng, J.; Fang, Z.; Xu, P.; Majumdar, A. Ultra-low mode volume on-substrate silicon nanobeam cavity. _Opt. Express_**2019**, _27_(21), 30692.
* (44) Mork, J.; Yvind, K. Squeezing of intensity noise in nanolasers and nanoLEDs with extreme dielectric confinement. _Optica_**2020**, _7_, 1641-1644.
* (45) Albrechtsen, M.; Lahijani, B. V.; Stobbe, S. Two regimes of confinement in photonic nanocavities: bulk confinement versus lightning rods. _Opt. Express_**2022**, _30_(9), 15458.
* (46) Saldutti, M.; Yu, Y.; Kristensen, P. T.; Kountouris, G.; Mork, J. Carrier dynamics in nonlinear photonic nanocavities with extreme dielectric confinement. _IEEE Photonics Conference (IPC)_**2022**.
* (47) Albrechtsen, M.; Vosoughi Lahijani, B.; Christiansen, R. E.; Hoang Nguyen, V. T.; Casses, L. N.; Hansen, S. E.; Stenger, N.; Sigmund, O.; Jansen, H.; Stobbe, S. Nanometer-scale photon confinement in topology-optimized dielectric cavities. _Nat. Commun._**2022**, _13_, 6281.
* (48) Kountouris, G.; Mork, J.; Denning, E. V.; Kristensen, P. T. Modal properties of dielectric bowtie cavities with deep sub-wavelength confinement. _Opt. Express_**2022**, _30_, 40367-40378.
* (49) Kristensen, P. T.; Vlack, C. V.; Hughes, S. Generalized effective mode volume for leaky optical cavities. _Opt. Lett._**2012**, _37_(10), 1649-1651.
* (50) de Lasson, J. R.; Mork, J.; Kristensen, P. T. Three-dimensional integral equation approach to light scattering, extinction cross sections, local density of states, and quasi-normal modes. _J. Opt. Soc. Am. B_**2013**, _30_, 1996.
* (51) Lai, H. M.; Leung, P. T.; Young, K.; Barber, P. W.; Hill, S. C. Time-independent perturbation for leaking electromagnetic modes in open systems with application to resonances in microdroplets. _Phys. Rev. A_**1990**, _41_(9), 5187-5198.
* (52) Muljarov, E. A.; Langbein, W.; Zimmermann, R. Brillouin-Wigner perturbation theory in open electromagnetic systems. _Europhys. Lett._**2010**, _92_, 50010.
* (53) Sauvan, C.; Hugonin, J. P.; Maksymov, I. S.; Lalanne, P. Theory of the Spontaneous Optical Emission of Nanosize Photonic and Plasmon Resonators. _Phys. Rev. Lett._**2013**, _110_(23), 237401.
* (54) Kristensen, P. T.; Ge, R.-C.; Hughes, S. Normalization of quasinormal modes in leaky optical cavities and plasmonic resonators. _Phys. Rev. A_**2015**, _92_(5), 053810.
* (55) Jaynes, E. T.; Cummings, F. W. Comparison of quantum and semiclassical radiation theories with application to the beam maser. _Proc. IEEE_**1963**, _51_, 89-109.
* (56) Franke, S.; Hughes, S.; Dezfouli, M. K.; Kristensen, P. T.; Busch, K.; Knorr, A.; Richter, M. Quantization of Quasinormal Modes for Open Cavities and Plasmonic Cavity Quantum Electrodynamics. _Phys. Rev. Lett._**2019**, _122_(21), 213901.
* (57) Gardiner, C.; Zoller, P. Quantum Noise. _Springer Series in Synergetics, Springer, Berlin, Heidelberg_**2004**.
* (58) Wubs, M.; Suttorp, L. G.; Lagendijk, A. Multiple-scattering approach to interatomic interactions and superradiance in inhomogeneous dielectrics. _Phys. Rev. A_**2004**, _70_, 053823.
* (59) Pettit, G. D.; Turner, W. J. Refractive index of InP. _J. Appl. Phys._**1965**, _36_, 2081.
## Supporting Information
### Cavity design and numerical calculations
The specific cavity is designed to be manufactured from a 240 nm thick membrane of material with refractive index n=3.165 embedded in air, and the dimensions of the cavity are shown in Fig. S1 below. As noted in the main text, it supports a cavity mode with a wavelength close to the target wavelength of \(\lambda_{0}=1550\) nm.
All numerical solutions of Maxwell's equations were performed with the finite element method as implemented in Comsol Multiphysics, and in all calculations, the calculation domain was truncated by use of a first-order scattering boundary condition (SBC). For the convergence study, a number of different meshes were created by successive refinement, as detailed below. The stated QNM frequencies as well as the absorption cross section calculations in the main text were calculated with a full domain size with radius \(R\)=1.5\(\lambda_{0}\) and a mesh resulting from two refinements. Although we could go to a finer mesh for the QNM calculations, this was not feasible for the absorption cross sections. The use of the same mesh ensures that the results of the two calculations are consistent, since the residual error from the finite mesh and calculation domain size influence both calculations in a similar way. When calculating the QNMs of the bare cavity, we turned off the QE by setting its oscillator strength to 0 so as to use the exact same mesh.
The absorption cross-sections were calculated through an integral over the QE volume as \(\sigma_{\rm abs}=(1/I_{0})\int_{V_{\rm QE}}P({\bf r},\omega)\,{\rm d}V\), where \(I_{0}\) is the intensity of the incident light, and \(P\) is the power loss density in the QE given by Poynting's theorem as [1]: \(P({\bf r},\omega)=J({\bf r},\omega)\cdot{\bf E}({\bf r},\omega)\), in which \(J\) is the current density and \(\mathbf{E}\) is the local electric field in the emitter. The current density can be expressed in terms of the electric field as
\[J({\bf r},\omega)=\mathrm{i}\omega\varepsilon_{0}\frac{f\omega_{\rm QE}^{2}}{\omega_{\rm QE}^{2}-\omega^{2}-2\mathrm{i}\gamma_{\rm QE}\omega}E({\bf r},\omega)\] (A1)
We consider this measure to be a convenient and experimentally relevant approach for probing the spectral response of the system. With the chosen material parameters, we changed the system between the two cases of interest by changing the oscillator strength between \(f=0\) and \(f=7\times 10^{-3}\). Setting \(f=0\), however, leads to a vanishing absorption cross section, so in practical calculations of the bare cavity, for which the material is assumed to be non-absorbing in the frequency range of interest, we considered instead a very small oscillator strength of \(f=7\times 10^{-9}\). In this way, the relative absorption cross section provides the relevant spectral features of the electromagnetic response of the bare cavity. The results are shown in the top part of Fig. 1(b) in the main text where the blue curve features a single peak at \(\omega=1213.66\ 10^{12}rad\ {\rm s}^{-1}\), which matches the real part of the QNM resonance frequency of the bare cavity. Similarly, the black curve shows two peaks at \(\omega=1212.84\ 10^{12}rad\ {\rm s}^{-1}\) and \(\omega=1214.60\ 10^{12}rad\ {\rm s}^{-1}\), respectively, which also match the two QNM resonance frequencies of the QE-optical cavity hybrid system.
### QNM convergence study
For a coordinate system as shown in Fig. S1, the QNM field of the cavity is symmetric with respect to the \(xy\)- and \(yz\)-planes, while it is antisymmetric with respect to the \(xz\)-plane [2]. Exploiting this symmetry, we used perfect electric conductor (PEC) and perfect magnetic conductor (PMC) boundary conditions to reduce the calculation domain to one-eighth of the original size for the convergence study in this section, which significantly reduced the computational time.
Following [2-3], a convergence study was performed by varying the domain size and mesh discretization. The investigation was carried out for 5 different mesh discretizations and for 12 domain sizes with radii ranging from \(R\)=\(1.5\lambda_{0}\) to \(R\)=\(2.6\lambda_{0}\) in steps of \(0.1\lambda_{0}\). In the case of the finest mesh, the calculations were performed for the range of radii \(R\)=\(1.5\lambda_{0}\) to \(R\)=\(2.1\lambda_{0}\) in steps of \(0.1\lambda_{0}\) due to the extended calculation times. The five curves in Fig. S2. (a) show the variation of the calculated eigenfrequencies in the complex frequency plane for the different discretizations used in the simulations. The black curve corresponds to the initial coarsest mesh, while the purple curve corresponds to the finest mesh. Each refinement of the mesh was performed by splitting the elements of the previous discretization, and we characterize each mesh by the average element size, as calculated by averaging the longest side \(h\) of each mesh element in the entire domain [2]. The individual data points comprising each curve are the complex eigenfrequencies as calculated for different domain sizes but with the same mesh discretization. The curves form an inwards spiral around a central point, which we take to be the nominal correct value for the case of an infinite calculation domain [2]. For relatively coarse meshes, the spirals are distorted due to additional numerical errors stemming from the finite mesh size. The red circle inside each spiral shows the average of the data points forming the spiral, and we use this average as an estimate of the nominal correct value.
To investigate the convergence of the complex eigenfrequency in a more quantitative way, Fig. S2 (b) shows the logarithm of the real (blue) and imaginary (red) parts of the difference in the calculated complex eigenfrequencies between two consecutive discretizations as a function of the logarithm of the average element size [2]. The points fall approximately on straight lines in this double-logarithmic plot, which indicates the expected polynomial convergence with mesh element size. Assuming the error to be polynomial, we can estimate the true value of the complex eigenfrequency of the underlying continuous problem corresponding to the limit of vanishing mesh size and infinite calculation domain [3]. With this approach, we find \(\tilde{\omega}_{c}=(1212.6(2)-\mathrm{i}0.27(1))\)\(10^{12}\mathrm{rad\ s^{-1}}\), as indicated by the red diamond in Fig. S2 (a). As a conservative estimate of the error on this number, we take the difference to the best direct calculation with four refinements. In this way, we find the estimated error to be 0.2 and 0.01 for the real and the imaginary part, respectively.
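A minimal sketch of such an extrapolation, assuming the purely polynomial error model and using hypothetical input data, could look as follows.

```python
import numpy as np

def extrapolate(h, w):
    """Richardson-type estimate of the h -> 0 limit of QNM frequencies w(h)
    computed on meshes with average element sizes h[0] > h[1] > ..., assuming
    a polynomial error w(h) ~ w_true + C*h^p. (Sketch only; the paper fits
    the real and imaginary parts separately, as in Fig. S2(b).)"""
    h, w = np.asarray(h, dtype=float), np.asarray(w, dtype=complex)
    d = np.diff(w)                                          # consecutive differences
    p = np.polyfit(np.log(h[1:]), np.log(np.abs(d)), 1)[0]  # convergence order
    rho = h[-2] / h[-1]                                     # last refinement ratio
    return w[-1] - (w[-2] - w[-1]) / (rho**p - 1)           # Richardson step
```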
For two different discretizations, we now consider the complex eigenfrequencies of the coupled system, as predicted by the LS equation, and compare it to the results of full numerical calculations. The results are shown in Fig. S3. The blue and red solid spirals show the numerical oscillations of the QNM frequency for the bare cavity for the second (R2) as well as the fourth (R4) mesh refinement. For each calculation, we can set up and solve Eq. (6) in order to find the corresponding approximation to the complex frequencies of the QE-optical cavity hybrid system. These results are indicated by the black dashed curves for the R4 mesh refinement. To assess the accuracy of the calculation, we also directly calculated the results for the coupled system using the same mesh, by introducing the permittivity describing the QE. These results are shown by the blue and red dashed curves in Fig. S3.
As is evident from these figures, the QNM frequencies of the coupled QE-optical cavity hybrid system change in a very similar manner to that of the bare cavity when increasing the
calculation domain size. The black circle inside each spiral indicates the average of the complex QNM frequencies forming the spiral.
### Explicit expression of the coupling strength \(g\) by connecting the classical oscillator strength of the emitter to the dipole moment of the corresponding quantum optical model
As an alternative to the derivations presented in the main text, we can also connect the classical electrodynamics model to the quantum optical model by treating both in the framework of a scattering problem, for which the solution can be written by use of the LS equation. Specifically, we consider the problem of an incoming electric field \(\mathbf{E}_{\text{in}}(\mathbf{r},\omega)\), which is assumed to be a solution to the Maxwell wave equation in the background system depicted in Fig. 2 in the main text. Notably, the background system consists of the bare cavity embedded in air.
For the classical electrodynamics problem, the solution is given by Eq. (1) of the main text. At the QE center \(\mathbf{r^{\prime}}\), and assuming the total electric field to be approximately constant throughout the volume of the QE, we find that
\[\mathbf{E}_{\text{tot}}(\mathbf{r},\omega)=\frac{\mathbf{E}_{\text{in}}( \mathbf{r},\omega)}{1-\frac{\omega^{2}}{c^{2}}\mathbf{G}_{\text{B}}(\mathbf{r},\mathbf{r^{\prime}},\omega)\Delta\varepsilon(\omega)\mathbf{V}_{\text{QE}}}\] (A2)
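Equation (A2) also provides a quick way to visualize the split spectrum without a full simulation: inserting the single-QNM Green function of Eq. (4) in the main text gives the field enhancement at the QE in closed form. The sketch below uses the parameters of the main text; the QE damping rate is again an assumed value.

```python
import numpy as np

# Field enhancement at the QE from Eq. (A2), with the background Green function
# approximated by the single-QNM form of Eq. (4); gamma_qe is an assumed value.
lam0, n, eps_r = 1550e-9, 3.165, 10.02
f, omega_qe, gamma_qe = 7e-3, 1213.66e12, 0.137e12
wc = 1213.66e12 - 0.287e12j
V_qe = 4 / 3 * np.pi * (20e-9)**3
S = 1 / (eps_r * (0.691 + 0.002j) * (lam0 / (2 * n))**3)  # f_c(r_qe) f_c(r_qe)

w = np.linspace(1208e12, 1220e12, 4001)
deps = f * omega_qe**2 / (omega_qe**2 - w**2 - 2j * gamma_qe * w)
E_ratio = 1 / (1 - (w / 2) * S * deps * V_qe / (wc - w))
# |E_ratio|**2 is peaked near Re(omega_1) and Re(omega_2) of the hybrid system
```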
For the quantum optical problem, we describe the system using the multiple-scattering approach for atoms in the form of point-like harmonic oscillators developed in Ref. [4], and
consider the case where the single QE is initially in the ground state. In this context, we note that the formulation in Ref. [4] is performed in terms of specialized functions \(\mathbf{F}(\mathbf{r},\omega)\) and \(\mathbf{K}(\mathbf{r},\mathbf{r}^{\prime},\omega)\), which differ from the electric field \(\mathbf{E}(\mathbf{r},\omega)\) and the Green tensor \(\mathbf{G}(\mathbf{r},\mathbf{r}^{\prime},\omega)\) only when the two position arguments coincide at the position of one of the point scatterers. This construction effectively recovers the correct sum rule when integrating the electric field across the point scatterer, despite the fact that the Green tensor is known to diverge in this limit. In the present approach, for which we compare to the result of integration over the finite volume of the QE, we do not make this distinction and can therefore simply set \(\mathbf{F}(\mathbf{r},\omega)=\mathbf{E}(\mathbf{r},\omega)\) and \(\mathbf{K}(\mathbf{r},\mathbf{r}^{\prime},\omega)=\mathbf{G}(\mathbf{r},\mathbf{r}^{\prime},\omega)\). Otherwise following the approach of Ref. [4], we can express the total electric field operator at the position of the QE as
\[\mathbf{\hat{E}}_{\mathrm{tot}}(\mathbf{r},\omega)=\frac{\mathbf{\hat{E}}_{\mathrm{in}}(\mathbf{r},\omega)}{1-\mathbf{G}(\mathbf{r},\mathbf{r}^{\prime},\omega)V(\omega)}\] (A3)
in which \(\mathbf{\hat{E}_{\mathit{in}}}\) is the electric field operator of the incoming field, and \(V(\omega)\) is the scattering potential produced by the QE and is given by [4]:
\[V(\omega)=\left(\frac{\mu_{\mathrm{QE}}^{2}\omega^{2}}{\hbar\varepsilon_{0}c^{2}}\right)\left(\frac{2\omega_{\mathrm{QE}}}{\omega^{2}-\omega_{\mathrm{QE}}^{2}}\right)\] (A4)
Comparing Eqs. (A2) and (A3), we can identify the connection between the permittivity and the dipole moment as
\[\Delta\varepsilon(\omega)V_{\mathrm{QE}}=\frac{\mu_{\mathrm{QE}}^{2}}{\hbar\varepsilon_{0}}\frac{2\omega_{\mathrm{QE}}}{\omega^{2}-\omega_{\mathrm{QE}}^{2}}\] (A5)
and by rewriting slightly and including a non-radiative decay rate for the QE, we can express this in the form of the Lorentz oscillator model for the permittivity as
\[\Delta\varepsilon(\omega)=\frac{\mu_{\mathrm{QE}}^{2}}{\hbar\varepsilon_{0}V_{\mathrm{QE}}}\left(\frac{2\omega_{\mathrm{QE}}}{\omega_{\mathrm{QE}}^{2}-\omega^{2}-2\mathrm{i}\gamma_{\mathrm{QE}}\omega}\right)\] (A6)
Finally, comparing to Eq. (3) in the main text, we find that the dipole moment of the QE can be expressed in terms of the oscillator strength in the exact form of Eq. (14) in the main text.
Using the parameters of the QE in this work, \(f=7\times 10^{-3}\), a radius of 20 nm, and \(\omega_{QE}=1213.66\)\(10^{12}\)rad s\({}^{-1}\), we find \(\mu_{QE}=3.645\times 10^{-28}\,\mathrm{C\,m}\), which is consistent with typical values for colloidal quantum dots [5-7].
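This number is easy to reproduce from Eq. (14) of the main text:

```python
import numpy as np

hbar, eps0 = 1.054571817e-34, 8.8541878128e-12   # SI values
f, omega_qe = 7e-3, 1213.66e12
V_qe = 4 / 3 * np.pi * (20e-9)**3                # sphere of radius 20 nm
mu_qe = np.sqrt(f * hbar * eps0 * V_qe * omega_qe / 2)   # Eq. (14)
print(mu_qe)   # ~3.645e-28 C m
```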
|
2308.03682 | Acoustodynamic mass determination: Accounting for inertial effects in
acoustic levitation of granular materials | Acoustic traps use forces exerted by sound waves to confine and transport
small objects. The dynamics of an object moving in the force landscape of an
acoustic trap can be significantly influenced by the inertia of the surrounding
fluid medium. These inertial effects can be observed by setting a trapped
object in oscillation and tracking it as it relaxes back to mechanical
equilibrium in its trap. Large deviations from Stokesian dynamics during this
process can be explained quantitatively by accounting for boundary-layer
effects in the fluid. The measured oscillations of a perturbed particle then
can be used not only to calibrate the trap but also to characterize the
particle. | Mia C. Morrell, David G. Grier | 2023-08-07T15:56:43Z | http://arxiv.org/abs/2308.03682v1 | Acoustodynamic mass determination: Accounting for inertial effects in acoustic levitation of granular materials
###### Abstract
Acoustic traps use forces exerted by sound waves to confine and transport small objects. The dynamics of an object moving in the force landscape of an acoustic trap can be significantly influenced by the inertia of the surrounding fluid medium. These inertial effects can be observed by setting a trapped object in oscillation and tracking it as it relaxes back to mechanical equilibrium in its trap. Large deviations from Stokesian dynamics during this process can be explained quantitatively by accounting for boundary-layer effects in the fluid. The measured oscillations of a perturbed particle then can be used not only to calibrate the trap but also to characterize the particle.
## I Introduction
Acoustic manipulation of granular media was first demonstrated by Kundt in 1866 as a means to visualize the nodes and antinodes of sound waves [1]. After a century and a half of gestation, acoustic trapping is emerging as a focal area for soft-matter physics [2; 3; 4; 5] and a practical platform for dexterous noncontact materials processing [6; 7] thanks in part to recent advances in the theory of wave-matter interactions [8; 9] and innovations in the techniques for crafting acoustic force landscapes [10; 11]. An object's trajectory through such a landscape encodes information about the wave-matter interaction and therefore can be used not just to calibrate the trap but also to characterize the object. The present study demonstrates how to extract that information through machine-vision measurements of trapped objects' oscillations under the combined influences of gravity, the trap's restoring force and drag due to displacement of the surrounding fluid medium.
Correctly interpreting the measured trajectory of an acoustically trapped particle can be challenging because the drag force deviates substantially from the standard Stokes form, as has been noted in previous studies [12; 13; 14; 15]. We incorporate non-Stokesian drag into a self-consistent measurement framework by invoking Landau's hydrodynamic boundary-layer approximation [16; 17] to account for the fluid's inertia. This approach appears not to have been demonstrated previously and provides a fast and accurate way to measure physical properties of the trapped object without requiring separate calibration of the acoustic trap. The same measurement also yields an absolute calibration of the trap's stiffness for that specific object.
## II Dynamics of an acoustically trapped particle
### Imaging measurements of damped oscillations
Figure 1(a) schematically represents the acoustic trapping system used for this study. Based on the standard TinyLev design [10], this acoustic levitator consists of two banks of piezoelectric ultrasonic transducers (MA40S4S, Murata, Inc.) with a resonance frequency around \(40\,\mathrm{kHz}\). Each bank of 36 transducers is driven sinusoidally by a function generator (DS345, Stanford Research Systems) and projects a traveling wave into a spherical volume of air. Interference between the two waves creates an array of acoustic traps along the instrument's vertical axis. Figure 1(b) presents a video image of a millimeter-scale sphere of expanded polystyrene localized in air within one of the acoustic traps. The camera (Blackfly S USB3, FLIR) records the particle's motions at \(170\,\mathrm{frames/s}\), with an exposure time of \(2\,\mathrm{ms}\) and an effective magnification of \(61\,\mathrm{\SIUnitSymbolMicro m}\)/pixel. Under these imaging conditions, the height of the particle in the trap, \(z_{p}(t)\), can be measured
Figure 1: (a) Schematic diagram of the reference acoustic trap. (b) Typical video frame of a millimeter-scale styrofoam sphere levitated in the acoustic trap together with a schematic diagram of the forces acting on the particle. (c) Measured trajectory (black symbols) of a styrofoam bead returning to mechanical equilibrium in an acoustic trap compared with predictions of the damped oscillator model (red curve). |
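The relaxation measurement lends itself to a simple first analysis before any boundary-layer corrections are introduced. The Python sketch below fits a generic underdamped-oscillator model to a tracked trajectory; the model form, parameter names, and initial guesses are illustrative assumptions, not the inertia-corrected model developed in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_osc(t, z_eq, A, Gamma, omega, phi):
    """Illustrative underdamped-relaxation model for the tracked height z_p(t)."""
    return z_eq + A * np.exp(-Gamma * t) * np.cos(omega * t + phi)

def fit_trajectory(t, z):
    p0 = [z.mean(), np.ptp(z) / 2, 1.0, 2 * np.pi * 10, 0.0]  # rough guesses
    popt, _ = curve_fit(damped_osc, t, z, p0=p0)
    return dict(zip(["z_eq", "A", "Gamma", "omega", "phi"], popt))
```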
2302.09492 | Tiltan and graphs with no infinite paths | We prove the consistency of tiltan with the positive relation
$\omega^*\cdot\omega_1\rightarrow(\omega^*\cdot\omega_1,{\rm infinite\
path})^2$. | Shimon Garti | 2023-02-19T06:31:34Z | http://arxiv.org/abs/2302.09492v2 | # Tiltan and graphs with no infinite paths
###### Abstract.
We prove the consistency of tiltan with the positive relation \(\omega^{*}\cdot\omega_{1}\to(\omega^{*}\cdot\omega_{1},\text{infinite path})^{2}\).
Key words and phrases:Tiltan, infinite path, independent sets, generalized Martin's axiom 2010 Mathematics Subject Classification: 03E02, 03E35, 03E50, 03E75, 05C63
## 0. Introduction
Let \(G=(V,E)\) be a graph. An independent subset of \(V\) is a set of vertices \(W\subseteq V\) such that \([W]^{2}\cap E=\varnothing\). An infinite path in \(G\) is a sequence of vertices \(\langle v_{n}:n\in\omega\rangle\) with no repetitions such that \(\{v_{n},v_{n+1}\}\in E\) for every \(n\in\omega\). Intuitively, these concepts are orthogonal. If one wishes to eliminate large independent sets then one must add edges to many pairs. In such cases it becomes harder to avoid infinite paths. For making this intuition precise we need a definition of _large_ independent sets. The most natural suggestion would be a subset \(W\) of \(V\) with the same order type.
**Definition 0.1**.: The relation \(\tau\to(\tau,\text{infinite path})^{2}\) means that for every graph \(G=(V,E)\) with \(\operatorname{\mathrm{otp}}(V)=\tau\) there exists either an independent set \(W\subseteq V\) so that \(\operatorname{\mathrm{otp}}(W)=\tau\) or an infinite path.
By order type we do not confine ourselves to well-orderings. Rather, we refer to a variety of structures. We consider ordinals \(\alpha\) with their well-ordering, the backward ordering \(\alpha^{*}\) and ordinal products of these types. All graphs in this paper are undirected.
The notation \(\tau\to(\tau,\text{infinite path})^{2}\) comes from partition theorems of infinite combinatorics. Given a graph \(G\) one may think of a coloring of its pairs with two colors. The first color is assigned to every pair of vertices with no edge, and the second color is given to pairs with an edge. The positive relation states that there exists a full sized subset with the first color or an infinite sequence with the second color.
We shall focus on the order type \(\omega^{*}\cdot\omega_{1}\). For a convenient and concrete example, if the ambient set is \(\omega_{1}\times\omega\) then the order defined by \((\alpha,m)<^{*}(\beta,n)\) iff \((\alpha<\beta)\vee(\alpha=\beta\text{ and }m>n)\) is of type \(\omega^{*}\cdot\omega_{1}\). A convenient way to visualize this type is by thinking about \(\omega_{1}\) many columns, each of which is a copy of \(\omega^{*}\). Regarding the above definition one may wonder whether \(\omega^{*}\cdot\omega_{1}\to(\omega^{*}\cdot\omega_{1},\text{infinite path})^{2}\).
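For readers who prefer an operational view, the comparison rule can be written out directly; the following sketch uses integers in place of ordinals, so it only illustrates the rule itself, not the order type.

```python
def less_star(p, q):
    """The order <* on pairs (alpha, m): columns ordered as omega_1,
    each column reversed (a copy of omega*)."""
    (alpha, m), (beta, n) = p, q
    return alpha < beta or (alpha == beta and m > n)

assert less_star((0, 5), (0, 2))   # within a column the order is reversed
assert less_star((0, 0), (1, 9))   # any element of an earlier column comes first
```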
In the parallel abstract situation of infinite combinatorics, \(\lambda\to(\lambda,\omega)^{2}\) for every infinite cardinal \(\lambda\), and this is known as the Erdős-Dushnik-Miller theorem. However, if \(\alpha\) is an ordinal but not a cardinal then \(\alpha\nrightarrow(\alpha,\omega)^{2}\). These facts motivate the investigation of more types like \(\omega^{*}\cdot\omega_{1}\). We indicate that an infinite path in a graph is a weaker notion than an infinite monochromatic set, since the homogeneity is required only at consecutive elements of the path. There is some evidence that the existence of monochromatic paths is strictly weaker than the existence of monochromatic sets, see [10] and [14]. In our context, one may obtain such paths even though the order-type of the graph is neither a cardinal, nor an ordinal. Put another way, a mysterious path may show up, as described in [11, page 10]: Moomintroll was just putting up a swing when Sniff got home. He seemed very interested in the mysterious path, and directly after lunch they set off to have a look at it.
Back to the context of graph theory, the above relation cannot be decided by the axioms of set theory. Namely, one can prove the consistency of
\(\omega^{*}\cdot\omega_{1}\to(\omega^{*}\cdot\omega_{1},\text{infinite path})^{2}\) in some extension of ZFC on the one hand, and one can show that \(\omega^{*}\cdot\omega_{1}\nrightarrow(\omega^{*}\cdot\omega_{1},\text{infinite path})^{2}\) in another extension on the other hand. The negative direction was given by Baumgartner and Larson, [1], and the positive direction by Larson in [1]. It is done in these papers through the classical way of confronting the constructible universe with the universe under Martin's axiom with large continuum.
Actually, the full strength of the constructible universe is not required. Baumgartner and Larson constructed a graph \(G=(V,E)\) of type \(\omega^{*}\cdot\omega_{1}\) with no independent subset of this type and no infinite path merely from the diamond principle at \(\aleph_{1}\). Recall that \(\Diamond_{\aleph_{1}}\) says that there exists a sequence of sets \(\langle A_{\alpha}:\alpha\in\omega_{1}\rangle\) such that \(A_{\alpha}\subseteq\alpha\) for every \(\alpha\in\omega_{1}\) and for every \(A\subseteq\omega_{1}\) the set \(S_{A}=\{\alpha\in\omega_{1}:A\cap\alpha=A_{\alpha}\}\) is a stationary subset of \(\omega_{1}\). The opposite direction employs Martin's axiom with \(2^{\omega}>\omega_{1}\), and then \(\omega^{*}\cdot\omega_{1}\to(\omega^{*}\cdot\omega_{1},\text{infinite path})^{2}\). Both directions are elaborated in another paper of Larson, [1]. In this paper she explains the importance of the type \(\omega^{*}\cdot\omega_{1}\) and poses the problem which stands in the hub of this paper. We let the following definition into the discussion.
**Definition 0.2**.: Tiltan.
Let \(\kappa=\operatorname{cf}(\kappa)>\aleph_{0}\).
The tiltan principle \(\clubsuit_{\kappa}\) says that there exists a sequence \(\langle T_{\alpha}:\alpha\) is a limit ordinal of \(\kappa\rangle\) such that each \(T_{\alpha}\) is a cofinal subset of \(\alpha\) and for every \(A\in[\kappa]^{\kappa}\) the set \(S_{A}=\{\alpha\in\kappa:T_{\alpha}\subseteq A\}\) is a stationary subset of \(\kappa\).
The common name of this statement is the club principle. It has been introduced by Ostaszewski, in [12]. We shall call it tiltan1 since the word _club_ is extensively used as an acronym for closed and unbounded sets.
Footnote 1: Let us indicate that in some good old manuscripts the pronunciation is _taltan_, see the relevant discussion in [13].
The tiltan follows from the diamond, and it is strictly weaker than the diamond. In particular, \(\Diamond_{\aleph_{1}}\Rightarrow 2^{\omega}=\omega_{1}\) while \(\clubsuit_{\aleph_{1}}\) is consistent with \(2^{\omega}>\omega_{1}\). Remark that Martin's axiom with \(2^{\omega}>\omega_{1}\) implies \(\clubsuit_{\aleph_{1}}\). Therefore, the following question of Larson from [1] is natural:
**Question 0.3**.: Is it consistent that tiltan holds at \(\aleph_{1}\) and concomitantly \(\omega^{*}\cdot\omega_{1}\to(\omega^{*}\cdot\omega_{1},\text{infinite path})^{2}\)?
We shall give a positive answer to this question. Let us indicate that having a positive result is a bit surprising. One of the main differences between tiltan and diamond is that the diamond prediction is based on equality \((A\cap\alpha=A_{\alpha})\) while the tiltan prediction gives only inclusion (\(T_{\alpha}\subseteq A\)). In the negative arrow relation proved under diamond in [1], only inclusion is needed for the construction of a graph exemplifying \(\omega^{*}\cdot\omega_{1}\nrightarrow(\omega^{*}\cdot\omega_{1},\text{infinite path})^{2}\). Despite this fact, tiltan is consistent with the positive relation \(\omega^{*}\cdot\omega_{1}\to(\omega^{*}\cdot\omega_{1},\text{infinite path})^{2}\), as we shall see.
The rest of the paper is arranged in two additional sections. In the first one we unfold some background material and we try to explicate the idea
behind the proof. In the second, we prove the main theorem. Our notation is mostly standard. Let us mention the notation \(S_{\kappa}^{\lambda}\) which refers to the set \(\{\delta<\lambda:\operatorname{cf}(\delta)=\kappa\}\) where \(\kappa\) is a regular cardinal. We employ the Jerusalem forcing notation, so \(p\leq q\) reads \(p\) is weaker than \(q\). Consequently we shall speak about a least upper bound of conditions, a downward closed generic set, and so forth. If \(p\) is compatible with \(q\) then we write \(p\parallel q\). If \(p\) and \(q\) are incompatible then we shall write \(p\perp q\). The meaning of the symbol \(\exists^{\infty}\) is that there are infinitely many elements which satisfy the statement which falls under the scope of this quantifier. We employ this notation with respect to sets of natural numbers.
I am deeply indebted to the anonymous referee for many mathematical corrections and a lot of helpful suggestions. The referee pointed out a major flaw in the original version of the manuscript and enabled me to fix the problematic issue. I learned several mathematical things from the work of the referee on my paper, but I learned much more from his/her infinite patience for paths and infinite path of patience.
## 1. Background
Larson proved the consistency of \(\omega^{*}\cdot\omega_{1}\to(\omega^{*}\cdot\omega_{1},\text{infinite path})^{2}\) under Martin's axiom. A central component in our proof is a similar theorem at the level of \(\omega_{2}\). We shall use a generalized form of Martin's axiom, and there are several such theorems in the literature. The most appropriate among which for our proof is Shelah's version from [1].
**Theorem 1.1**.: _Generalized Martin's axiom. One can force \(2^{\aleph_{0}}=\aleph_{1}\), \(2^{\aleph_{1}}>\aleph_{2}\), together with the following statement: if \(\mathbb{P}\) satisfies:_
(a) _If_ \(p,q\in\mathbb{P}\) _and_ \(p\parallel q\) _then there is a least upper bound for_ \(p,q\) _in_ \(\mathbb{P}\)_._
(b) _If_ \(\langle p_{i}:i\in\omega\rangle\) _is an increasing sequence of conditions in_ \(\mathbb{P}\) _then it has a least upper bound in_ \(\mathbb{P}\)_._
(c) _If_ \(\{p_{\alpha}:\alpha\in\omega_{2}\}\subseteq\mathbb{P}\) _then there is a club_ \(C\subseteq\omega_{2}\) _and a regressive function_ \(f:\omega_{2}\to\omega_{2}\) _such that for every_ \(\alpha,\beta\in C\cap S^{\aleph_{2}}_{\aleph_{1}}\) _if_ \(f(\alpha)=f(\beta)\) _then_ \(p_{\alpha}\parallel p_{\beta}\)_._
_then for every \(\kappa<2^{\aleph_{1}}\) such that \(\gamma<\kappa\Rightarrow\gamma^{\aleph_{0}}<\kappa\) and any collection \(\mathcal{D}=\{D_{\eta}:\eta\in\kappa\}\) of dense subsets of \(\mathbb{P}\) there exists a filter \(G\subseteq\mathbb{P}\) so that \(G\cap D_{\eta}\neq\varnothing\) for every \(\eta\in\kappa\)._
\(\square_{1.1}\)
The forcing conditions in Larson's proof are finite independent sets. In our proof the conditions are countable. As a first step we shall use the generalized Martin's axiom in order to force \(\omega^{*}\cdot\omega_{2}\to(\omega^{*}\cdot\omega_{2},\text{infinite path})^{2}\), and requirement \((b)\) above forces us to force with countable conditions. This is one major difference between Martin's axiom and the generalized Martin's axiom which complicates the density argument.
Another problem is the chain condition. Martin's axiom applies to any \(ccc\) forcing notion, but all the generalizations to higher cardinals require more than \(\kappa\)-cc, and it is known that \(\kappa\)-cc is insufficient. In Shelah's version, the strengthening of the chain condition is reflected in requirement \((c)\). Our proof of \((c)\) in the specific forcing notion of this paper is based on the ordinary partition relation \(\omega_{2}\to(\omega_{2}-st,\omega_{1})^{2}\) which says that for every coloring \(d:[\omega_{2}]^{2}\to\{0,1\}\) one can find either a \(1\)-monochromatic sequence of type \(\omega_{1}\) or a stationary \(0\)-monochromatic subset of \(\omega_{2}\). The following is folklore, but we give the proof since we will use both the statement and the argument within our proof.
**Lemma 1.2**.: _Assume \(2^{\omega}=\omega_{1}\). Then \(\omega_{2}\to(\omega_{2}-st,\omega_{1})^{2}\). Moreover, \(T\to(\omega_{2}-st,\omega_{1})^{2}\) whenever \(T\subseteq S^{\omega_{2}}_{\omega_{1}}\) is stationary._
_Proof_.
Let \(d:[\omega_{2}]^{2}\to\{0,1\}\) be a coloring. If there is a \(1\)-monochromatic sequence of length \(\omega_{1}\) then we are done. Suppose that there is no such a sequence. For every \(\delta\in S^{\omega_{2}}_{\omega_{1}}\) choose a sequence \(c_{\delta}\) of ordinals below \(\delta\) such that \(c_{\delta}\cup\{\delta\}\) is \(1\)-monochromatic and \(c_{\delta}\) is maximal with this property.
By our assumption, \(c_{\delta}\) is bounded below \(\delta\) since \(\operatorname{cf}(\delta)=\omega_{1}\). Hence the mapping \(h(\delta)=\sup(c_{\delta})\) is regressive on \(S^{\omega_{2}}_{\omega_{1}}\). Choose \(\eta\in\omega_{2}\) and a stationary \(S\subseteq S^{\omega_{2}}_{\omega_{1}}\) such that \(h(\delta)=\eta\) for every \(\delta\in S\). Notice that \(\eta<\min(S)\). Since \(2^{\omega}=\omega_{1}\), there are only \(\aleph_{1}\)-many sequences of the form \(c_{\delta}\) (recall that \(\eta\) is an ordinal less than \(\omega_{2}\) and each \(c_{\delta}\) is a countable subset of \(\eta\), being of order type less than \(\omega_{1}\)). Hence by shrinking \(S\) if needed we may assume that there is a fixed sequence \(c\) such that \(c_{\delta}=c\) for every \(\delta\in S\).
We claim that \(S\) is \(0\)-monochromatic under \(d\). To see this, suppose that \(\delta,\varepsilon\in S\) and \(\delta<\varepsilon\). If \(d(\delta,\varepsilon)=1\) then \(c\cup\{\delta\}\) is \(1\)-monochromatic with \(\varepsilon\) and then \(h(\varepsilon)\geq\delta>\eta\). This is impossible since \(\varepsilon\in S\). Hence necessarily \(d(\delta,\varepsilon)=0\) whenever \(\{\delta,\varepsilon\}\subseteq S\), so we are done. The additional part of the lemma is proved in the same way, upon replacing \(S^{\omega_{2}}_{\omega_{1}}\) by \(T\).
\(\square_{1.2}\)
The next issue is a special kind of tiltan which we shall need for our proof. Definition 0.2 is phrased with respect to \(\kappa\), but one can replace \(\kappa\) by any stationary subset \(S\subseteq\kappa\). Clearly, if \(S_{0}\subseteq S_{1}\) are stationary then \(\clubsuit_{S_{0}}\Rightarrow\clubsuit_{S_{1}}\), and hence \(\clubsuit_{S}\Rightarrow\clubsuit_{\kappa}\) whenever \(S\) is a stationary subset of \(\kappa\). The following theorem from [10] served for proving the consistency of tiltan at \(\aleph_{1}\) with \(2^{\omega}>\omega_{1}\), and we shall use it with respect to infinite graphs.
**Theorem 1.3**.: _Assume that \(\Diamond_{S}\) holds at every stationary subset \(S\) of \(\aleph_{1}\) and \(\aleph_{2}\). Then one can define a tiltan sequence on \(S^{\aleph_{2}}_{\aleph_{0}}\) which is indestructible upon any further forcing extension with an \(\aleph_{1}\)-complete forcing notion._
\(\square_{1.3}\)
We indicate that the proof of the generalized Martin's axiom employs an \(\aleph_{1}\)-complete forcing notion, hence preserves instances of indestructible tiltan. We shall use this fact in the proof of the main theorem.
We mention three additional classical theorems, to be used within our proof. Firstly, Ramsey's theorem which says that \(\omega\to(\omega)_{\ell}^{2}\) for every \(\ell\in\omega\). Namely, any coloring \(c:[\omega]^{2}\to\ell\) admits a monochromatic infinite set. Secondly, Hajnal's free set theorem which says that if \(\kappa<\lambda,|A|=\lambda,f:A\to\mathcal{P}(A)\) is a set-mapping (i.e. \(a\notin f(a)\) for every \(a\in A\)) and \(|f(a)|<\kappa\) for every \(a\in A\) then there exists an \(f\)-free subset \(B\subseteq A\) of size \(\lambda\). Recall that \(B\) is \(f\)-free iff \(B\cap f(b)=\varnothing\) whenever \(b\in B\). For the third theorem recall that if \(\kappa\) is an infinite cardinal then \(\log_{\kappa}(\kappa^{+})=\min\{\theta:\kappa^{\theta}>\kappa\}\). One can show that if \(\kappa\geq\omega\) then \(\kappa^{+}\to(\kappa^{+},\log_{\kappa}(\kappa^{+})+1)^{2}\), see [1]. In particular, if \(2^{\omega}=\omega_{1}\) then \(\log_{\omega_{1}}(\omega_{2})=\omega_{1}\) and hence \(\omega_{2}\to(\omega_{2},\omega_{1})^{2}\). In fact, one has the stronger relation \(\omega_{2}\to(\omega_{2}-st,\omega_{1})^{2}\) as proved above.
We shall also need a statement concerning path relations in the following weak form. Call a coloring \(d:\kappa\times\kappa\to\omega\times\omega\)_anti-symmetric_ iff \(d(\alpha,\beta)=(i,j)\Leftrightarrow d(\beta,\alpha)=(j,i)\) whenever \(\alpha,\beta\in\kappa\). Let us say that \(\kappa\to_{\mathrm{asp}}(\omega)_{\omega\times\omega}^{2}\) iff for every anti-symmetric coloring \(d:\kappa\times\kappa\to\omega\times\omega\) one can find an infinite path \(\psi=(\alpha_{m}:m\in\omega)\), the elements of \(\psi\) being ordinals of \(\kappa\), such that for every \(m\in\omega\) if \(d(\alpha_{m},\alpha_{m+1})=(i,j)\wedge d(\alpha_{m+1},\alpha_{m+2})=(k,\ell)\) then \(j=k\).
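As a small worked instance of the matching condition (an illustration, matching the configuration produced in the proof below): if a sequence alternates as \(\alpha_{0},\beta_{1},\alpha_{1},\beta_{2},\dots\) with \(d(\alpha_{m},\beta_{m+1})=d(\alpha_{m+1},\beta_{m+1})=(i,j)\) for every \(m\), then anti-symmetry yields

\[d(\alpha_{0},\beta_{1})=(i,j),\quad d(\beta_{1},\alpha_{1})=(j,i),\quad d(\alpha_{1},\beta_{2})=(i,j),\quad d(\beta_{2},\alpha_{2})=(j,i),\ \dots\]

so the second coordinate of each color coincides with the first coordinate of the next one, exactly as the definition requires.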
**Lemma 1.4**.: \(\omega_{1}\rightarrow_{\rm asp}(\omega)_{\omega\times\omega}^{2}\)_._
Proof.: Let \(d:\omega_{1}\times\omega_{1}\rightarrow\omega\times\omega\) be anti-symmetric. Let \(\chi\) be a sufficiently large regular cardinal and choose a countable elementary submodel \(M\prec\mathcal{H}(\chi)\) so that \(d\in M\). Let \(\delta=M\cap\omega_{1}\) be the characteristic ordinal of \(M\) and notice that \(\operatorname{cf}(\delta)=\omega\).
Fix an ordinal \(\alpha_{0}\in\delta\) and assume that \(d(\alpha_{0},\delta)=(i,j)\). Denote the set \(\{\alpha\in\delta:d(\alpha,\delta)=(i,j)\}\) by \(B\) and notice that \(B\) is unbounded in \(\delta\) by elementarity. By definition, \(\alpha_{0}\in B\). Choose \(\alpha_{1}>\alpha_{0}\) so that \(\alpha_{1}\in B\). This means that \(d(\alpha_{0},\delta)=d(\alpha_{1},\delta)=(i,j)\) so by elementarity one can find \(\beta_{1}>\alpha_{1}\) such that \(\beta_{1}<\delta\) and \(d(\alpha_{0},\beta_{1})=d(\alpha_{1},\beta_{1})=(i,j)\). We choose now another element \(\alpha_{2}\in B\) so that \(\alpha_{2}>\beta_{1}\). Since \(d(\alpha_{2},\delta)=(i,j)\) one can choose \(\beta_{2}<\delta\) such that \(\beta_{2}>\alpha_{2}\) and \(d(\alpha_{1},\beta_{2})=d(\alpha_{2},\beta_{2})=(i,j)\). We continue this process in the same way by induction on \(n\in\omega\) and finally define:
\[\psi=(\alpha_{0},\beta_{n},\alpha_{n}:0<n<\omega).\]
One can verify that \(\psi\) forms an infinite path in the sense defined before the statement of the lemma. We conclude, therefore, that \(\omega_{1}\rightarrow_{\rm asp}(\omega)_{\omega\times\omega}^{2}\) as required.
## 2. Graphs with no infinite path
In this section we prove the main result of the paper, namely tiltan is consistent with \(\omega^{*}\cdot\omega_{1}\to(\omega^{*}\cdot\omega_{1},\text{infinite path})^{2}\). Let us describe the architecture of the proof. The first step is to fix an indestructible tiltan sequence at \(S=S_{\aleph_{0}}^{\aleph_{2}}\). The second step is to force the generalized Martin's axiom so that \(2^{\omega}=\omega_{1},2^{\omega_{1}}>\omega_{2}\) and the tiltan from the first step is preserved. The main theorem at this stage is the positive relation \(\omega^{*}\cdot\omega_{2}\to(\omega^{*}\cdot\omega_{2},\text{infinite path})^{2}\). This relation will follow from the generalized Martin's axiom. The final step is to collapse \(\aleph_{1}\) by making it a countable ordinal.
It is easy to show that the tiltan is preserved by this collapse, in the sense that it holds in the generic extension over some stationary subset of \(\aleph_{1}\). Likewise, the above positive relation obtained by the generalized Martin's axiom becomes after the collapse \(\omega^{*}\cdot\omega_{1}\to(\omega^{*}\cdot\omega_{1},\text{infinite path})^{2}\). This general plan has been used by Shelah, [10], in his proof of the consistency of tiltan with \(2^{\omega}>\omega_{1}\). At the end of the paper we shall try to explain what are the features of a statement that one should expect to hold (consistently) with tiltan.
We commence with the concept of clean columns, as defined by Larson. The definitions and claims in our context are adapted to the level of \(\aleph_{2}\). In the definition and lemma below we follow in the footsteps of Larson.
**Definition 2.1**.: Clean columns.
Let \(G=(V,E)\) be a graph where \(V\subseteq\omega_{2}\times\omega\).
1. For every \(\beta\in\omega_{2}\), the \(\beta\)th column of the graph is the set \(V(\beta)=V\cap(\{\beta\}\times\omega)\).
2. \(G\) has clean columns iff the following three properties hold for every \(\beta\in\omega_{2}\): (a) Either \(V(\beta)=\varnothing\) or \(|V(\beta)|=\aleph_{0}\). (b) \([V(\beta)]^{2}\cap E=\varnothing\). (c) For every \((\alpha,n)\in V\) there is at most one pair \((\beta,\ell)\) such that \(\{(\alpha,n),(\beta,\ell)\}\in E\).
Graphs with clean columns considerably simplify the treatment of independent subsets and related notions. Of course, a graph \(G\) may lack this property. However, we focus on graphs of type \(\omega^{*}\cdot\omega_{2}\) with no infinite path. In such graphs one can always pass to a subgraph of the same order type with clean columns. Before proving this assertion, we need a simple lemma.
**Lemma 2.2**.: _Let \(G=(V,E)\) be a graph over \(\omega_{2}\times\omega\) with no infinite path, and let \(\beta\in\omega_{2}\). Assume that \(C\subseteq\{\beta\}\times\omega\) and \(|C|=\aleph_{0}\). There exists a finite set \(A\subseteq\omega_{2}\times\omega\) and an infinite set \(B\subseteq C\) such that:_
(a) _Every element of_ \(A\) _is connected with every element of_ \(B\)_._
(b) _If_ \((\alpha,m)\notin A\) _then there is at most one element_ \((\beta,n)\in B\) _such that_ \(\{(\alpha,m),(\beta,n)\}\in E\)_._
(c) \([B]^{2}\cap E=\varnothing\)_._
Proof.: We try to define by induction on \(i\in\omega\) pairs of the form \((D_{i},a_{i})\) such that \(D_{i}\subseteq C\) is infinite and \(a_{i}\in\omega_{2}\times\omega\). We indicate that this attempt is doomed to failure after finitely many steps.
At the stage of \(i=0\) we choose any infinite independent \(D_{0}\subseteq C\). The existence of such a set follows from Ramsey's theorem upon defining \(d:[C]^{2}\to\{0,1\}\) by \(d(x,y)=0\) iff \(\{x,y\}\notin E\). Ramsey's theorem ensures the existence of an infinite monochromatic set \(D_{0}\subseteq C\). The assumption that \(G\) has no infinite path implies that \(D_{0}\) must be \(0\)-monochromatic, that is an independent set.
Now we ask whether there is an element \(a\in\omega_{2}\times\omega\) so that \(a\) is connected with \(\aleph_{0}\)-many elements from \(D_{0}\). If the answer is positive then let \(a_{0}\) be the \(<_{\text{lex}}\)-first such element. If the answer is negative then the process is terminated.
At the stage of \(i+1\) we let \(D_{i+1}=D_{i}\cap E(a_{i})\). By the induction hypothesis at the \(i\)th stage, \(D_{i+1}\) is infinite. Now we ask if there exists some \(a\in\omega_{2}\times\omega\) connected with \(\aleph_{0}\)-many elements from \(D_{i+1}\) such that \(a\neq a_{j}\) for every \(j\leq i\). If yes, let \(a_{i+1}\) be the \(<_{\text{lex}}\)-first with this property. If not, the process is terminated.
Remark that for some \(\ell\in\omega\) we will be able to define \(D_{\ell}\) but not \(a_{\ell}\). Otherwise, for every \(i\in\omega\) choose an element \(d_{i}\in D_{i+2}-\{d_{j}:j<i\}\) (here we use the infinitude of each \(D_{i}\)) and then \(\langle a_{n},d_{n}:n\in\omega\rangle\) forms an infinite path, a contradiction.
Set \(D=D_{\ell},A=\{a_{i}:i<\ell\}\). Define a coloring \(c:[D]^{2}\to\{0,1\}\) as follows. Let \(c(x,y)=0\) iff there exists \(v\notin A\) such that both \(\{x,v\},\{y,v\}\in E\). By another application of Ramsey's theorem there is an infinite \(B\subseteq D\) which is monochromatic under \(c\). Observe that \(B\) must be \(1\)-monochromatic, since if \(x,y\in B,c(x,y)=0\) then one can produce an infinite path from the elements of \(B\) and the elements \(v\notin A\) which connect them. This argument uses the fact that every such \(v\) is not in \(A\) hence connected with only finitely many elements of \(B\).
Now the sets \(A,B\) are as required. First, \(A\) is finite and \(B\) is infinite. Second, \((a)\) follows from the choice of the elements of \(A\), \((b)\) follows from the fact that \(B\) is \(c\)-monochromatic and \((c)\) from the fact that \(B\subseteq D\) and \(D\) is independent.
Equipped with the above lemma, we can proceed to the following.
**Claim 2.3**.: _Assume that:_
1. \(V\subseteq\omega_{2}\times\omega\) _and_ \(\operatorname{otp}(V)=\omega^{*}\cdot\omega_{2}\)_._
2. \(G=(V,E)\) _is a graph with no infinite path._
_Then there exists \(W\subseteq V,\operatorname{otp}(W)=\omega^{*}\cdot\omega_{2}\) such that the graph \(H=(W,E\cap[W]^{2})\) has clean columns._
Proof.: We may assume that all the columns of \(V\) are infinite, since \(\operatorname{otp}(V)=\omega^{*}\cdot\omega_{2}\)
and hence it will remain with the same order type after removing all the finite columns. We apply Lemma 2.2 to every column of \(V\), and we get some \(U\subseteq V,\operatorname{otp}(U)=\omega^{*}\cdot\omega_{2}\), every nonempty column of \(U\) is infinite and edge-free and for each \(U(\beta)\) there is a finite set \(A(\beta)\) as in the lemma.
Denote the left component \(\{\beta:|U(\beta)|=\aleph_{0}\}\) by \(I\), and define \(f:I\to[I]^{<\omega}\) by \(f(\beta)=\{\alpha\in I:\exists m\in\omega,(\alpha,m)\in A(\beta)\}\). Notice that \(\beta\notin f(\beta)\) for every \(\beta\in I\), since \(U(\beta)\) is edge-free and hence no pair of the form \((\beta,m)\) can be an element of \(A(\beta)\). This means that \(f\) is a set-mapping. Further, for every \(\beta\in I\) one can see that \(f(\beta)\) is a finite set. This is simply because \(A(\beta)\) is finite, due to Lemma 2.2. By Hajnal's free set theorem there exists \(J\subseteq I,|J|=\aleph_{2}\) such that \(J\) is \(f\)-free, that is \(\alpha\notin f(\beta)\) whenever \(\alpha,\beta\in J\).
Define \(W=\bigcup\{U(\beta):\beta\in J\}\) and observe that \(\operatorname{otp}(W)=\omega^{*}\cdot\omega_{2}\). The fact that \(H=(W,[W]^{2}\cap E)\) has clean columns comes from the properties of each \(U(\beta)\) as guaranteed by the lemma, so we are done.
\(\square_{2.3}\)
The ability to clean the columns is helpful in the proof of the main theorem. The proof depends on two additional lemmata. The second lemma will be postponed after the proof of the main theorem. For the first lemma let us define the concept of _a replete ordinal_. Let \(G=(V,E)\) be a graph with \(V\subseteq\omega_{2}\times\omega\), and assume that \(\psi=\{(\alpha_{i},m_{i}):i\in\omega\}\subseteq V\). An ordinal \(\beta\in\omega_{2}\) will be called \(\psi\)-replete iff there exists \(n(\beta)\in\omega\) such that for every \(k\in[n(\beta),\omega)\) there is \(i_{k}\in\omega\) for which \(\{(\alpha_{i_{k}},m_{i_{k}}),(\beta,k)\}\in E\).
**Lemma 2.4**.: _Suppose that:_
(a) \(2^{\omega}=\omega_{1}\)_._
(b) \(V\subseteq\omega_{2}\times\omega\) _and_ \(\operatorname{otp}(V)=\omega^{*}\cdot\omega_{2}\)_._
(c) \(H=(V,E)\) _is a graph with clean columns._
(d) _There is no independent subset of_ \(V\) _of type_ \(\omega^{*}\cdot\omega_{2}\)_._
(e) \(\psi=\{(\alpha_{i},m_{i}):i\in\omega\}\subseteq V\)_._
(f) \(R\) _is an unbounded subset of_ \(\omega_{2}\) _such that every_ \(\beta\in R\) _is_ \(\psi\)_-replete._
_Then there exists an infinite path in \(H\)._
_Proof_.
For every \(\beta\in R\) let \(n(\beta)\in\omega\) be as in the definition of repleteness and let \(A_{\beta}\in[\omega]^{\omega}\) be such that if \(i\in A_{\beta}\) then there is \(k\in\omega\) so that \(i=i_{k}\), that is \(\{(\alpha_{i},m_{i}),(\beta,k)\}\in E\). Since \(2^{\omega}=\omega_{1}\) and \(|R|=\aleph_{2}\) we may assume that \(A_{\beta}=A\) for every \(\beta\in R\), where \(A\) is some fixed element of \([\omega]^{\omega}\). Similarly, we can assume that \(n(\beta)\in\omega\) is the same natural number for every \(\beta\in R\), and without loss of generality \(n(\beta)=0\) for every \(\beta\in R\).
Define \(c:[R]^{2}\to 2\) by \(c(\beta,\gamma)=0\) iff there is no edge from \((\beta,k)\) to \((\gamma,\ell)\) whenever \(k,\ell\in\omega\). Put another way, \(c(\beta,\gamma)=1\) iff there are \(k,\ell\in\omega\) for which \(\{(\beta,k),(\gamma,\ell)\}\in E\). By the Erdős–Dushnik–Miller theorem either some \(S\in[R]^{\omega_{2}}\) is \(0\)-monochromatic or some \(\{\beta_{m}:m\in\omega\}\subseteq R\) is \(1\)-monochromatic. In the first case \(S\times A\) forms an independent subset of type \(\omega^{*}\cdot\omega_{2}\), contradicting \((d)\). We conclude, therefore, that \(\{\beta_{m}:m\in\omega\}\subseteq R\) is \(1\)-monochromatic for some infinite subset of \(R\).
By induction on \(m\in\omega\) we choose an element \(t_{m}\in\psi\) such that \(\{t_{m},(\beta_{m},k_{m})\}\in E\), \(t_{m}\neq t_{n}\) whenever \(m<n<\omega\), and for some \(\ell\) we have \(\{(\beta_{m},k_{m}),(\beta_{m+1},\ell)\}\in E\). This is possible since \(c(\beta_{m},\beta_{m+1})=1\), so we fix \(k_{m}\) and \(\ell\) for which \(\{(\beta_{m},k_{m}),(\beta_{m+1},\ell)\}\in E\), and then we can choose \(t_{m}\) and \(t_{m+1}\) from \(\psi\) using the fact that both \(\beta_{m}\) and \(\beta_{m+1}\) are \(\psi\)-replete.3 Now the sequence \(\langle(\beta_{m},k_{m}),(\beta_{m+1},k_{m+1}),t_{m+1}:m\in\omega\rangle\) forms an infinite path in \(H\), which completes the proof.
Footnote 3: By a careful choice of \(A\) we may assume that \(t_{m}\neq t_{m+1}\).
We can now prove the central result, which reads as follows:
**Theorem 2.5**.: _Assume \(2^{\omega}=\omega_{1},2^{\omega_{1}}>\omega_{2}\) and the generalized Martin's axiom holds. Then \(\omega^{*}\cdot\omega_{2}\to(\omega^{*}\cdot\omega_{2},\text{infinite path})^{2}\)._
Proof.:
Let \(H=(V,E)\) be a graph with no infinite path such that \(\operatorname{otp}(V)=\omega^{*}\cdot\omega_{2}\). We assume toward contradiction that there is no independent subset of \(V\) of type \(\omega^{*}\cdot\omega_{2}\). By Claim 2.3 we may assume that \(H\) has clean columns. As above, let \(I=\{\beta\in\omega_{2}:|V(\beta)|=\aleph_{0}\}\).
We define a forcing notion \(\mathbb{P}\). A condition \(p\in\mathbb{P}\) is a countable independent subset of \(V\). If \(p,q\in\mathbb{P}\) then \(p\leq_{\mathbb{P}}q\) iff \(p\subseteq q\). By Lemma 2.6 below, \(\mathbb{P}\) satisfies \((c)\) of Theorem 1.1. If \(p,q\in\mathbb{P}\) and \(p\parallel q\) then \(p\cup q\in\mathbb{P}\) and it is a least upper bound by the definition of \(\leq_{\mathbb{P}}\). Similarly, if \((p_{j}:j\in\omega)\) is \(\leq_{\mathbb{P}}\)-increasing then \(\bigcup_{j\in\omega}p_{j}\in\mathbb{P}\), being countable and independent, and it is a least upper bound. Hence \(\mathbb{P}\) satisfies the requirements of Theorem 1.1.
For every \(\beta\in I\) let \(D_{\beta}=\{p\in\mathbb{P}:\exists\gamma>\beta,\exists^{\infty}\ell,(\gamma, \ell)\in p\}\). We claim that each \(D_{\beta}\) is a dense open subset of \(\mathbb{P}\). The fact that \(D_{\beta}\) is open follows from the definition, so let us prove density. Suppose that \(p=\{(\alpha_{i},m_{i}):i\in\omega\}\notin D_{\beta}\), but \(p\in\mathbb{P}\). We know, therefore, that \(p\) is independent, and we observe that assumptions \((a)-(e)\) of Lemma 2.4 hold with \(p\) here standing for \(\psi\) there (note that \((d)\) is our assumption toward contradiction). Since the conclusion of the lemma fails we see that necessarily assumption \((f)\) of the lemma fails. That is, the set \(R\) of \(p\)-replete ordinals is bounded in \(\omega_{2}\). Choose \(\gamma\in\omega_{2}\) such that \(\gamma>\beta\) and \(\gamma>\sup(R)\). In particular, \(\gamma\) is not \(p\)-replete and hence \((\gamma,\ell)\) is not connected with any element of \(p\) for the \(\ell\)s in some infinite set \(a\). Define \(q=p\cup\{(\gamma,\ell):\ell\in a\}\). Since \(p\leq q\in D_{\beta}\) we see that \(D_{\beta}\) is dense.
The collection \(\mathcal{D}=\{D_{\beta}:\beta\in I\}\) is of size \(\aleph_{2}\) and \(2^{\omega_{1}}>\omega_{2}\). Further, \(\alpha<\aleph_{2}\Rightarrow\alpha^{\aleph_{0}}<\aleph_{2}\) since \(2^{\omega}=\omega_{1}\). Hence there exists a generic filter \(G\subseteq\mathbb{P}\) such that \(G\cap D_{\beta}\neq\varnothing\) for every \(\beta\in I\). Define \(W=\bigcup\{p:p\in G\}\), and notice that \(\operatorname{otp}(W)=\omega^{*}\cdot\omega_{2}\). Since every \(p\in G\) is independent and \(G\) is a directed set, \(W\) is independent as well, so we arrive at a contradiction and hence we are done.
We complete the above proof with the following lemma.
**Lemma 2.6**.: _Let \(H=(V,E)\) be a graph with no infinite path, and let \(\mathbb{P}\) be the associated forcing whose conditions are countable independent sets. Assume that \(2^{\omega}=\omega_{1}\). Then \(\mathbb{P}\) satisfies requirement \((c)\) of Theorem 1.1._
_Proof_.
Let \(\{p_{\alpha}:\alpha\in\omega_{2}\}\) be a set of conditions in \(\mathbb{P}\). By induction on \(\beta\in\omega_{2}\) we choose a stationary subset \(A_{\beta}\) of \(S^{\omega_{2}}_{\omega_{1}}\) such that:
(a) If \(i,j\in A_{\beta}\) then \(p_{i}\parallel p_{j}\).
(b) The set \(W=S^{\omega_{2}}_{\omega_{1}}-\bigcup_{\beta\in\omega_{2}}A_{\beta}\) is not stationary in \(\omega_{2}\).
We can choose \(A_{\beta}\) by applying Lemma 1.2 inductively over the set \(S^{\omega_{2}}_{\omega_{1}}-\bigcup_{\gamma<\beta}A_{\gamma}\). Let us describe the construction explicitly. For \(\beta=0\) define \(d:[S^{\omega_{2}}_{\omega_{1}}]^{2}\to 2\) as follows. If \(i,j\in S^{\omega_{2}}_{\omega_{1}}\) then let \(d(i,j)=0\) iff \(p_{i}\parallel p_{j}\). By Lemma 1.2, either there is a stationary set \(T\subseteq S^{\omega_{2}}_{\omega_{1}}\) for which \(d^{\prime\prime}[T]^{2}=\{0\}\) or an \(\omega_{1}\)-sequence of elements of \(S^{\omega_{2}}_{\omega_{1}}\) for which the range of \(d\) on its pairs is constantly one. The second option is impossible since by an application of Lemma 1.4 it means that there is an infinite path in \(H\), contrary to our assumptions. Thus, the first option holds. Set \(A_{0}=T\).
For \(\beta>0\), without loss of generality \(S^{\omega_{2}}_{\omega_{1}}-\bigcup_{\gamma<\beta}A_{\gamma}\) is stationary. Apply Lemma 1.2 in the same way to obtain \(A_{\beta}\). We emphasize that the choice of the \(A_{\beta}\)s depends on \(\{p_{\alpha}:\alpha\in\omega_{2}\}\) and, moreover, on the enumeration of its elements. Since we wish to use the argument given in that lemma, for every \(\beta\in\omega_{2}\) there will be an ordinal \(\eta_{\beta}\in\omega_{2}\) and a fixed sequence \(c_{\beta}\subseteq\eta_{\beta}\) as described in the proof of Lemma 1.2. Recall that \(\eta_{\beta}<\min(A_{\beta})\). Notice that if \(c_{\beta}=c_{\gamma}\) then \(\eta_{\beta}=\eta_{\gamma}\) and \(A_{\beta}\cup A_{\gamma}\) is linked.\({}^{4}\) Hence by taking unions of \(A_{\beta}\)s whenever possible we may assume that \(\beta\neq\gamma\Rightarrow c_{\beta}\neq c_{\gamma}\).
Footnote 4: The adjective _linked_ means that every two elements from \(A_{\beta}\cup A_{\gamma}\) are compatible.
Let \(T=\{\min(A_{\beta}):\beta\in\omega_{2}\}\). We claim that \(T\) is not stationary. Suppose not, and define \(g:T\to\omega_{2}\) by \(g(\min(A_{\beta}))=\eta_{\beta}\). This is a regressive function so there is a stationary set \(T^{\prime}\subseteq T\) and a fixed \(\eta\in\omega_{2}\) such that \(\beta\in T^{\prime}\) implies \(\eta_{\beta}=\eta\). Moreover, since each \(c_{\beta}\) is countable we may shrink \(T^{\prime}\) to a stationary set for which all the \(c_{\beta}\)s are the same fixed sequence, say \(c\). However, this is impossible since by the choice of the \(A_{\beta}\)s if \(\beta\neq\gamma\) then \(c_{\beta}\neq c_{\gamma}\). We conclude, therefore, that \(T\) is not stationary.
Re-enumerate the family of sets \((A_{\beta}:\beta\in\omega_{2})\) by \((S_{i}:i\in\omega_{2})\) in such a way that \(i<j\Rightarrow\min(S_{i})<\min(S_{j})\). Notice that this implies \(i\leq\min(S_{i})\) for every \(i\in\omega_{2}\). Define \(f:\omega_{2}\to\omega_{2}\) by letting \(f(\alpha)=i\) iff \(\alpha\in S_{i}\). Let \(C\) be a club of \(\omega_{2}\) disjoint from \(T\cup W\). If \(\alpha\in C\cap S^{\omega_{2}}_{\omega_{1}}\) then \(f(\alpha)<\alpha\) since \(f(\alpha)\leq\alpha\) by the fact that \(i\leq\min(S_{i})\) and \(f(\alpha)=\alpha\) is possible only if \(\alpha\in T\). Therefore, \(f\) is regressive on \(C\cap S^{\omega_{2}}_{\omega_{1}}\). If \(\alpha_{0},\alpha_{1}\in C\cap S^{\omega_{2}}_{\omega_{1}}\) and \(f(\alpha_{0})=f(\alpha_{1})\) then \(\alpha_{0}\) and \(\alpha_{1}\) belong to the same \(S_{i}\), so \(p_{\alpha_{0}}\parallel p_{\alpha_{1}}\). This concludes the proof of the lemma.
\(\square_{2.6}\)
We can prove, finally, the main result of this paper:
**Theorem 2.7**.: _Tiltan at \(\aleph_{1}\) is consistent with the positive relation \(\omega^{*}\cdot\omega_{1}\to(\omega^{*}\cdot\omega_{1},\text{infinite path})^{2}\)._
Proof.: Begin with a universe in which there are enough diamonds so that one can create a tiltan sequence at some stationary set \(S\subseteq S_{\omega}^{\omega_{2}}\), and this instance of tiltan is indestructible upon any \(\aleph_{1}\)-complete forcing notion. The description of this construction and the indestructibility proof appear in [10]. We may assume that \(2^{\omega}=\omega_{1}\) along with this construction (actually, this is the natural situation).
We force now the generalized Martin's axiom of Theorem 1.1 with \(2^{\omega_{1}}>\omega_{2}\). This is done by an \(\aleph_{1}\)-complete forcing notion, hence the tiltan over \(S\) is preserved. Observe that this means also tiltan at \(\aleph_{2}\), since \(S\subseteq\omega_{2}\). By Theorem 2.5 we see that the relation \(\omega^{*}\cdot\omega_{2}\to(\omega^{*}\cdot\omega_{2},\text{infinite path})^{2}\) holds at this stage.
Now we force the collapse which makes \(\aleph_{1}\) a countable ordinal. As explained in the previous section, the tiltan over \(S\) is preserved, in the sense that it holds over some stationary subset of the new \(\aleph_{1}\), while the positive relation of Theorem 2.5 becomes \(\omega^{*}\cdot\omega_{1}\to(\omega^{*}\cdot\omega_{1},\text{infinite path})^{2}\) in the generic extension, so we are done.

\(\square_{2.7}\)

We conclude with some remarks. Graphs over \(\omega_{2}\times\omega\) as considered above admit, after the collapse, a natural
translation to \(\omega_{1}\times\omega\). Larson considered in [10] a stronger version of tiltan in which strongly cofinal sets are predicted by strongly cofinal tiltan elements. Unfortunately, such a prediction principle implies the continuum hypothesis (hence diamond)\({}^{5}\) and Larson concluded that this direction will not settle the problem of tiltan and \(\omega^{*}\cdot\omega_{1}\to(\omega^{*}\cdot\omega_{1},\text{infinite path})^{2}\). On the other hand, this fact motivated our attempt to force the consistency of tiltan with \(\omega^{*}\cdot\omega_{1}\to(\omega^{*}\cdot\omega_{1},\text{infinite path})^{2}\).
Footnote 5: Recall that tiltan with the continuum hypothesis imply diamond.
It would be interesting to ask a similar question with respect to _superclub_, a prediction principle defined by Primavesi in [14]. A superclub sequence \(\langle S_{\alpha}:\alpha\in\text{lim}(\omega_{1})\rangle\) satisfies \(S_{\alpha}\subseteq\alpha\) for every \(\alpha\) and for every \(A\in[\omega_{1}]^{\omega_{1}}\) one can find \(B\subseteq A,|B|=\aleph_{1}\) such that \(\{\delta\in\omega_{1}:B\cap\delta=S_{\delta}\}\) is a stationary subset of \(\omega_{1}\). Superclub holds iff there exists a superclub sequence. One can see that diamond implies superclub and superclub implies tiltan. Both implications are irreversible. In particular, superclub is strictly stronger than tiltan, see [1] and [1].
**Question 2.8**.: Is it consistent that superclub holds at \(\aleph_{1}\) and \(\omega^{*}\cdot\omega_{1}\to(\omega^{*}\cdot\omega_{1},\text{infinite path})^{2}\)?
We indicate that if one begins with superclub (or even diamond) at \(\omega_{2}\) and collapses \(\aleph_{1}\) then superclub fails at \(\aleph_{1}\) in the generic extension, as follows from [1]. In other words, the method of the current paper does not resolve the above problem. |
2308.13491 | Towards Optimal Head-to-head Autonomous Racing with Curriculum
Reinforcement Learning | Head-to-head autonomous racing is a challenging problem, as the vehicle needs
to operate at the friction or handling limits in order to achieve minimum lap
times while also actively looking for strategies to overtake/stay ahead of the
opponent. In this work we propose a head-to-head racing environment for
reinforcement learning which accurately models vehicle dynamics. Some previous
works have tried learning a policy directly in the complex vehicle dynamics
environment but have failed to learn an optimal policy. In this work, we
propose a curriculum learning-based framework by transitioning from a simpler
vehicle model to a more complex real environment to teach the reinforcement
learning agent a policy closer to the optimal policy. We also propose a control
barrier function-based safe reinforcement learning algorithm to enforce the
safety of the agent in a more effective way while not compromising on
optimality. | Dvij Kalaria, Qin Lin, John M. Dolan | 2023-08-25T17:05:41Z | http://arxiv.org/abs/2308.13491v1 | # Towards Optimal Head-to-head Autonomous Racing with Curriculum Reinforcement Learning
###### Abstract
Head-to-head autonomous racing is a challenging problem, as the vehicle needs to operate at the friction or handling limits in order to achieve minimum lap times while also actively looking for strategies to overtake/stay ahead of the opponent. In this work we propose a head-to-head racing environment for reinforcement learning which accurately models vehicle dynamics. Some previous works have tried learning a policy directly in the complex vehicle dynamics environment but have failed to learn an optimal policy. In this work, we propose a curriculum learning-based framework by transitioning from a simpler vehicle model to a more complex real environment to teach the reinforcement learning agent a policy closer to the optimal policy. We also propose a control barrier function-based safe reinforcement learning algorithm to enforce the safety of the agent in a more effective way while not compromising on optimality.
Reinforcement learning-based control, head-to-head autonomous racing, game theory
## I Introduction
There has been a growing interest in autonomous racing research in recent years [1], further accelerated by competitions such as RoboRace [2], F1Tenth [3], and the Indy Autonomous Challenge [4]. Professional human race drivers follow racing lines to achieve optimal performance and try to outperform opponents while adhering to the racing rules. Prior works in autonomous racing tend to ignore the latter and only consider collision avoidance. It is difficult to inculcate these complex rules and design a classical rule-based controller which takes care of all scenarios and tackles a wide range of opponent behaviors. Most previous reinforcement learning (RL)-based works tackling this do not include racing line information in the framework [5][6], due to which it becomes difficult for the learning agent to learn an optimal policy which can be generalized to other tracks. Also, training an agent directly on a racing environment with complex vehicle dynamics makes it difficult for the learning agent to master complex behaviors like skidding. In this work, we propose a complex racing environment which can be used to train head-to-head RL agents to learn an optimal policy for competing with opponents. Then, we propose a curriculum learning-based framework which transitions the vehicle model from a simple to a more complex one to tackle this sub-optimality. Some works [7] also use Control Barrier Functions (CBFs) as a shield while learning, which has been shown to boost safety performance. However, these safety constraints may prevent the agent from learning a high-performing policy, especially in competitive environments like ours where performance is a primary concern alongside safety. In this work, we also propose a curriculum learning-based CBF framework to enforce safety during learning while not compromising on the optimality of the final learned policy. We gradually remove the CBF interference with the policy as the agent progressively learns to act safely, so that it can then focus on improving performance. We test our controllers by performing head-to-head races against baseline methods. We briefly summarize our contributions as follows: 1) We propose a head-to-head racing environment which models complex vehicle dynamics and collisions among agents or with walls; 2) We design an effective hierarchical controller which includes racing line information to train an optimal policy; 3) We design a curriculum learning-based framework which effectively enables learning an optimal policy for the agent.
The rest of the paper is organized as follows: Section II briefly discusses the previous related works. Section III presents the problem formulation. Section IV elaborates on the proposed framework. The simulation results are presented in Section V. The concluding remarks and future work can be found in Section VI.
## II Related Works
Autonomous racing has received a lot of interest from the research community recently at all levels of the stack, including perception, localization, path planning, and control, as discussed in the literature review [1]. We specifically focus on the path planning and control level of the stack. Most recent works focus on optimizing lap times for a single agent, with very recent works addressing multi-agent planning and control.
For the single-agent setting, most works propose computing an optimal racing line offline and tracking it as a reference online with a control algorithm. [8] proposes IPOPT optimization to compute the racing line, while [9] proposes Bayesian optimization to obtain it. [10] calculates a minimum-curvature path which is very close to the optimal line. [11] proposes using an LQR controller to track the racing line. [12][13] propose a discrete MPC controller, while [14] proposes a model predictive contouring controller (MPCC). There have also been some recent works accounting for model and environment uncertainty [15] and some [16][17] accounting for model changes online. Several works proposed to use reinforcement learning [6][18] and imitation
learning [19][20] to control a vehicle around the track with the objective of minimal race times.
For multi-agent racing control, there are works [21][22] on using rule-based strategy selection with a high-level path planner/low-level control to execute strategies like overtaking, blocking, collision avoidance, etc. However, these works rely on a lot of parameters and it is always difficult to find an optimal set of parameters that works on all track maps and environments. Some works use game theoretic planning followed by classical control [23]. Some recent works like [5][24] use reinforcement learning to effectively learn an end-to-end controller that learns to win the race and thus learns certain strategies to do so. The environments used to train the RL agent(s) can be varied with different maps, surfaces, etc. to learn a widely generalizable policy. However, these works still struggle in learning an optimal policy. Some works like [5] use a simple kinematic vehicle model to train and test the RL policy against other classical approaches. In this work, we propose a complex racing environment with a dynamic vehicle model with parameters close to those of an actual racing car to train and test the RL controller. We also propose a curriculum-based course to train the RL agent that helps in learning a better policy.
## III Problem Formulation
We first present our dynamic game formulation. Let there be \(2\) players \(i\) and \(j\) racing against each other over \(T\) time steps. The track is defined by a sequence of \(k\) checkpoints along the center, \(\{c_{m}\}_{m=1}^{k}\). The objective for each player is to minimize the time difference between it and its opponent in reaching the final checkpoint \(c_{k}\). Let \(\boldsymbol{\gamma}_{i}\) be the earliest time step at which player \(i\) reaches this final checkpoint. Let the state of the vehicle be \(x_{i}\in X\subset R^{6}\) and the control action at each time step be \(u_{i}\in U\subset R^{2}\). Let \(r_{i}\in\{1,2,\dots,k\}\) be the index of the last checkpoint passed by the player. Let \(p:X\to C\) be a function mapping a state \(x\) to a checkpoint. Also, we must ensure the state \(x_{t}\) is always within the track boundaries, i.e., \(q(x_{t})\leq w\), where \(q\) returns the distance to the center line and \(w\) is the track width, assumed to be constant. For collision avoidance, let \(d:X\times X\to R\) be a function that returns the shortest distance between the two vehicle bodies given their states, \(d(x_{t}^{i},x_{t}^{j})\). Based on these variables, the objective for agent \(i\) is given as:
\[\begin{split}\min_{u_{0}^{i},u_{1}^{i}\cdots u_{T-1}^{i}}&\quad\gamma_{i}-\gamma_{j}\\ s.t.&\quad x_{t+1}^{k}=f(x_{t}^{k},u_{t}^{k}),\ \forall t\in\{0,1,..,T-1\},\ \forall k\in\{i,j\}\\ &\quad q(x_{t}^{k})\leq w,\ \forall t,\ \forall k\in\{i,j\}\\ &\quad d(x_{t}^{i},x_{t}^{j})\geq 0,\ \forall t\end{split} \tag{1}\]
This formulation is similar to [5] except for the transition model \(f\). For more details, readers are referred to it. The dynamic bicycle model is used to define the model transition \(f\). The dynamic model state \(x_{t}\) is defined with global coordinates \(x\), \(y\), and yaw rotation \(\phi\) in the global frame; longitudinal velocity \(v_{x}\), lateral velocity \(v_{y}\), and yaw angular velocity \(\omega\) in the vehicle's body frame. Throttle \(d\) and steering \(\delta\) define the action space of the model. \(F_{r,x}\) is the longitudinal force on the rear tire in the tire frame assuming a rear-driven vehicle, \(F_{f,y}\) and \(F_{r,y}\) are the lateral forces on the front and rear tires, respectively, and \(\alpha_{f}\) and \(\alpha_{r}\) are the corresponding slip angles. We denote the mass of the vehicle \(m\), the moment of inertia in the vertical direction about the center of mass of the vehicle \(I_{z}\), the length of the vehicle from the COM (center of mass) to the front wheel \(l_{f}\), and the length from the COM to the rear wheel \(l_{r}\). \(B_{f/r}\), \(C_{f/r}\), \(D_{f/r}\) are the Pacejka tire model parameters specific to the tire and track surface. For the longitudinal force, \(C_{m1},C_{m2}\) are known constants obtained from the gear model, and \(C_{r},C_{d}\) are rolling-resistance and aerodynamic drag constants which are learned from vehicle interactions. Mathematically, the vehicle model \(f\) is defined as follows:
\[\begin{bmatrix}\dot{x}\\ \dot{y}\\ \dot{\phi}\\ \dot{v}_{x}\\ \dot{v}_{y}\\ \dot{\omega}\end{bmatrix}=\begin{bmatrix}v_{x}\cos(\phi)-v_{y}\sin(\phi)\\ v_{x}\sin(\phi)+v_{y}\cos(\phi)\\ \omega\\ \frac{1}{m}(F_{r,x}-F_{f,y}\sin(\delta)+mv_{y}\omega-mg\sin(p))\\ \frac{1}{m}(F_{r,y}+F_{f,y}\cos(\delta)-mv_{x}\omega+mg\sin(r))\\ \frac{1}{I_{z}}(F_{f,y}l_{f}\cos(\delta)-F_{r,y}l_{r})\end{bmatrix} \tag{2}\]
where \(F_{r,x}=(C_{m1}-C_{m2}v_{x})d-C_{r}-C_{d}v_{x}^{2}\), \(F_{f,y}=D_{f}\sin(C_{f}\tan^{-1}(B_{f}\alpha_{f})),\alpha_{f}=\delta-\tan^{-1} \left(\frac{\omega l_{f}+v_{y}}{v_{x}}\right)\), and \(F_{r,y}=D_{r}\sin(C_{r}\tan^{-1}(B_{r}\alpha_{r})),\alpha_{r}=\tan^{-1}\left( \frac{\omega l_{r}-v_{y}}{v_{x}}\right)\).
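For concreteness, a minimal sketch of Eq. (2) as a simulation step is given below. It is a sketch under stated assumptions, not the paper's implementation: it assumes a flat track (dropping the gravity terms), uses forward-Euler integration with an assumed time step, and renames the rolling-resistance constant to `Cr0` to avoid clashing with the Pacejka shape factor \(C_{r}\).

```python
import numpy as np

def dynamics(state, u, p):
    """Time derivative of [x, y, phi, vx, vy, omega] under Eq. (2), flat track."""
    x, y, phi, vx, vy, omega = state
    d, delta = u                                   # throttle, steering
    # slip angles and Pacejka lateral tire forces
    alpha_f = delta - np.arctan2(omega * p["lf"] + vy, vx)
    alpha_r = np.arctan2(omega * p["lr"] - vy, vx)
    Ffy = p["Df"] * np.sin(p["Cf"] * np.arctan(p["Bf"] * alpha_f))
    Fry = p["Dr"] * np.sin(p["Cr"] * np.arctan(p["Br"] * alpha_r))
    # longitudinal force on the rear tire (Cr0 = rolling resistance)
    Frx = (p["Cm1"] - p["Cm2"] * vx) * d - p["Cr0"] - p["Cd"] * vx**2
    m, Iz = p["m"], p["Iz"]
    return np.array([
        vx * np.cos(phi) - vy * np.sin(phi),
        vx * np.sin(phi) + vy * np.cos(phi),
        omega,
        (Frx - Ffy * np.sin(delta) + m * vy * omega) / m,
        (Fry + Ffy * np.cos(delta) - m * vx * omega) / m,
        (Ffy * p["lf"] * np.cos(delta) - Fry * p["lr"]) / Iz,
    ])

def step(state, u, p, dt=0.02):
    # simple forward-Euler integration of the continuous model
    return state + dt * dynamics(state, u, p)
```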
## IV Hierarchical control design
Similar to [5], we also propose a hierarchical design with a high-level planner returning a discrete checkpoint plan followed by a low-level controller to track the planned checkpoints. Having a decoupled planning approach helps to achieve long-term plans like overtaking. As discussed in detail in [5], directly executing reinforcement learning strategies as a single controller may not allow reliably meeting all the constraints or is not strategically optimal in the long run.
### _High level planner_
The high-level tactical planner reduces the general game formulation discussed earlier to a simpler discrete form. This discrete game formulation requires \(2\) components.
#### IV-A1 State space model
We first transform the continuous state of the vehicle into a discrete state. We convert the position of the vehicle into a pair of discrete variables, i.e., the lane ID and the last passed checkpoint, while the velocity is converted to a range with a suitable window size. Tire wear is also contained within a range. An example of a continuous-state-to-discrete-state conversion is shown in Fig. 1.
#### IV-A2 Dynamics transition model
We need a transition model between two discrete states. A transition is deemed feasible if every continuous state contained in the current discrete state can reach at least one continuous state contained in the next one; if any boundary condition is not satisfied, we rule out the transition. In our implementation, we use simple one-dimensional equations of motion, estimating the traversal time with the mean of the velocities of the initial and final state [25]. For every state transition, the longitudinal segment is incremented strictly by \(1\). The number of lane changes can be used to formulate a penalty on changing lanes too frequently on straights, which we will discuss later; an illustrative sketch of this feasibility check follows.
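The sketch below uses hypothetical helper and variable names, since the paper does not spell out the exact check: it estimates the traversal time from the 1-D equations of motion using the mean of the window-center speeds, and rejects transitions whose required speed change exceeds an assumed acceleration limit.

```python
# seg_len: distance between checkpoints; v0, v1: window-center speeds of the
# current and candidate next state; a_max: acceleration/braking limit.
def transition_time(seg_len, v0, v1, a_max):
    """Return the traversal time if the transition is feasible, else None."""
    v_mean = 0.5 * (v0 + v1)           # mean speed over the segment
    if v_mean <= 0.0:
        return None
    t = seg_len / v_mean               # 1-D equations of motion estimate
    if abs(v1 - v0) / t > a_max:       # speed change not reachable in time t
        return None
    return t
```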
The game is played with both players starting at an initial checkpoint; at each point, the player with the smallest time state updates its choice next. A lower time state value implies that the player in question reached the particular checkpoint before the other player and hence gets to choose the next action first. This gives the other player a chance to respond with a strategic action, like whether and how to overtake, etc. A collision avoidance rule is incorporated in the high-level planner by restricting actions that result in the same checkpoint and the same lateral lane with a time difference less than \(mT\).
#### IV-A3 Solution
The high-level problem of minimizing time w.r.t. the other agent is solved using Monte Carlo tree search (MCTS). The deviation from optimal performance is scored as follows:
\[C_{X_{a},X_{a+1},...,X_{b-1}}=\sum_{i=a}^{i=b-1}(o_{i}-X_{i,\text{ lane}})^{2} \tag{3}\]
where \(o_{i}\) is the optimal lane and \(X_{i}\) is the high-level state at the \(i^{\text{th}}\) checkpoint. The optimal lane is obtained from the optimal racing line. The optimal racing line is obtained from [8] by computing the time-optimal trajectory with the given vehicle model parameters. The optimal lane at each checkpoint is obtained by finding the (segment, lane) pair through which the racing line passes. The solution from MCTS is a series of discrete states both for the player and the adversarial opponent. Note that we assume that the opponent is optimal here, i.e., it too tries to achieve optimal performance. The formulation for MCTS is similar to [5] except that the optimal solution is defined by staying closer to the racing line rather than minimizing time. We believe this leads to a better optimal solution, as explained in Section V. Figure 2 shows the paths planned by the two approaches. On choosing minimum time difference as the criterion for a solution, the trajectory comes out to be Figure 2(a), which is closer to the inner boundary, as it covers less distance, but would take a longer time in the long run. Figure 2(b), however, shows a new optimal trajectory along the racing line, which yields shorter lap times in the long run.
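To make Eq. (3) concrete, here is a minimal sketch (hypothetical names) of how a candidate high-level plan would be scored against the precomputed optimal lanes:

```python
# optimal_lane[i] is the lane through which the racing line passes at
# checkpoint i; plan_lane[i] is the lane chosen by the high-level planner.
def plan_cost(plan_lane, optimal_lane, a, b):
    """Lane cost of Eq. (3) over checkpoints a..b-1 (lower is better)."""
    return sum((optimal_lane[i] - plan_lane[i]) ** 2 for i in range(a, b))
```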
### _Low level controller_
The aim of the low-level controller is to execute the high-level trajectory plan. The low-level controller is typically an RL controller which takes the current state as an input and outputs the control command.
#### IV-B1 Reward design
The reward for the low-level controller is designed to capture the following objectives (a condensed sketch of the resulting reward function follows the list):
1. Reward for passing through a checkpoint and additional reward for passing through the target lane and at the target speed : \(k_{\text{target}}e^{-d_{tc}}\), where \(d_{tc}\) is the distance from the target checkpoint.
2. Reward for minimizing time between passing \(2\) checkpoints : \(-k_{\text{time}}\Delta t\) where \(\Delta t\) is the time difference between passing \(2\) checkpoints
3. Negative reward for swerving too frequently on straights : \(-k_{\text{swerve}}\mathbb{1}_{(x,y)\in S}\) where \(S\) is the set of straight section checkpoints
4. Negative reward for colliding with the wall. We use an indicator function \(\mathbb{1}_{I_{j}\leq h\,\wedge\,I_{j}\text{ hit wall}}\) that determines if the \(j^{th}\) LIDAR reading is less than \(h\) and if the ray bounced off the wall : \(-\sum_{j=1}^{9}k_{\text{wall-hit}}\ \mathbb{1}_{I_{j}\leq h\,\wedge\,I_{j}\text{ hit wall}}\)
5. Negative reward for collision with other players. We use the indicator function \(\mathbb{1}_{I_{j}\leq h\,\wedge\,I_{j}\text{ opponent}}\) to check if the \(j^{th}\) LIDAR reading reads hitting the opponent, and we have a set \(\phi\) containing all LIDAR rays that point to the front of the car, for which we impose an additional penalty : \(-\sum_{j=1}^{9}\big(k_{\text{opp},1}\mathbb{1}_{I_{j}\leq h\,\wedge\,I_{j}\text{ opponent}}+k_{\text{opp},2}\mathbb{1}_{I_{j}\leq h\,\wedge\,I_{j}\text{ opponent}\,\wedge\,j\in\phi}\big)\)
6. Negative reward for braking unnecessarily, i.e., when the speed is already lower than the target window, and for high lateral slips : \(-(k_{\text{brake}}\mathbb{1}_{v\leq v_{\text{target}}\,\wedge\,d\leq 0}+k_{\text{slip}}(\alpha_{f}^{2}+\alpha_{r}^{2}))\)
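The following condensed sketch illustrates how these objectives combine into a single scalar reward. The observation field names and the gain dictionary `k` are assumptions for illustration; the indicator and LIDAR terms of items 3-5 are summarized as precomputed counts rather than reproduced verbatim.

```python
import numpy as np

def low_level_reward(obs, k):
    """Shaped reward combining items 1-6 above (sketch, assumed field names)."""
    r = 0.0
    r += k["target"] * np.exp(-obs["dist_to_target_checkpoint"])   # item 1
    r -= k["time"] * obs["dt_between_checkpoints"]                 # item 2
    r -= k["swerve"] * float(obs["swerved_on_straight"])           # item 3
    r -= k["wall_hit"] * obs["n_close_wall_rays"]                  # item 4
    r -= (k["opp1"] * obs["n_close_opp_rays"]                      # item 5
          + k["opp2"] * obs["n_close_opp_rays_front"])
    r -= (k["brake"] * float(obs["braking_below_target"])          # item 6
          + k["slip"] * (obs["alpha_f"] ** 2 + obs["alpha_r"] ** 2))
    return r
```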
#### IV-B2 Network architecture
The PPO RL algorithm is used to obtain the optimal policy. The neural network used for estimating the value function and the policy is a simple feedforward neural network with \(8\) layers of \(128\) neurons each, as shown in Fig. 3. There is a Tanh layer at the end to restrict the output steering and throttle to their ranges; both the steering and throttle are obtained from the output by scaling it by their ranges (a minimal sketch of this actor is given after the list). The input consists of the following values: 1. The dynamic state of the vehicle consisting of (\(v_{x}\), \(v_{y}\), \(\omega\)); 2. The Frenet frame state w.r.t. the racing line reference, i.e., signed lateral distance from the racing line \(e_{1}\), relative angle w.r.t. the closest point on the
Fig. 1: Converting a continuous state to discrete state
Fig. 2: High-level plans for (a) min. time cost (b) min. distance to raceline cost
racing line \(e_{2}\); 3. Relative position of the opponent vehicle; 4. Discrete high-level target state, where all ranged values are passed as the average of their lower and upper limits; 5. Raw LIDAR data consisting of distances along \(32\) rays cast from the extreme left of the car to the extreme right.
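A minimal PyTorch sketch of this actor is given below. The observation dimension, the hidden activation (ReLU), and the default steering/throttle ranges are assumptions for illustration, since the text specifies only the depth, the width, and the final Tanh scaling.

```python
import torch
import torch.nn as nn

class Policy(nn.Module):
    # obs_dim and act_ranges are assumed placeholders, not the paper's values
    def __init__(self, obs_dim=42, act_ranges=((-0.4, 0.4), (-1.0, 1.0))):
        super().__init__()
        layers, in_dim = [], obs_dim
        for _ in range(8):                           # 8 hidden layers, 128 units
            layers += [nn.Linear(in_dim, 128), nn.ReLU()]
            in_dim = 128
        layers += [nn.Linear(in_dim, 2), nn.Tanh()]  # Tanh output in [-1, 1]
        self.net = nn.Sequential(*layers)
        lo = torch.tensor([r[0] for r in act_ranges])
        hi = torch.tensor([r[1] for r in act_ranges])
        self.mid, self.half = (lo + hi) / 2, (hi - lo) / 2

    def forward(self, obs):
        # rescale the Tanh output to the steering/throttle ranges
        return self.mid + self.half * self.net(obs)
```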
#### IV-B3 Training environment
Training is conducted on \(16\) parallel tracks (\(8\) clockwise and \(8\) counter-clockwise) so that the agent does not overfit to one side. Also, for each side, \(4\) environments have steep turns and \(4\) have moderate turns, as shown in Fig. 3.
### _Curriculum learning_
With the problem formulation and the hierarchical control design in place, we now define the proposed curriculum learning framework to progressively teach the RL agent an optimal policy. Let us define a parameter \(t_{s}\in[0,1]\) denoting the time scale, where \(t\) is the current training step and \(t_{\text{start}}\), \(t_{\text{end}}\) mark the start and end of the curriculum transition. We vary \(t_{s}\) as:
\[t_{s}=\max\left(0,\min\left(1,\frac{t-t_{\text{start}}}{t_{\text{end}}-t_{ \text{start}}}\right)\right) \tag{4}\]
#### IV-C1 Vehicle model transition
The dynamic model defined in Eq. (2) makes it very difficult for the RL agent to learn an effective policy, as it is very difficult to learn to move at optimal speeds while respecting the friction limits and the skidding caused by high lateral slips at higher speeds. Hence, we define a transition from a relatively simple dynamic model which is close to the kinematic model (much smaller slips at the same speeds and much higher friction limits) to the complex model with the actual parameters. The tire model Pacejka parameter changes are defined as follows:
\[\begin{split}& D_{f,t_{s}}=2^{1-t_{s}}D_{f0}\\ & C_{f,t_{s}}=C_{f0}^{2t_{s}-1}\\ & B_{f,t_{s}}=2^{1-t_{s}}\frac{D_{f0}C_{f0}B_{f0}}{D_{f,t_{s}}C_{f,t_{s}}}\end{split} \tag{5}\]
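A minimal sketch of Eqs. (4)-(5) follows. The default \(t_{\text{start}}\), \(t_{\text{end}}\) values are those reported in Section V, and the function names are illustrative. Note that at \(t_{s}=0\) the peak force \(D_{f}\) is doubled (higher friction limits), while at \(t_{s}=1\) the true parameters \(D_{f0},C_{f0},B_{f0}\) are recovered.

```python
def time_scale(t, t_start=500_000, t_end=1_500_000):
    """Eq. (4): curriculum time scale, clipped to [0, 1]."""
    return max(0.0, min(1.0, (t - t_start) / (t_end - t_start)))

def front_tire_params(t_s, Df0, Cf0, Bf0):
    """Eq. (5): interpolate front-tire Pacejka parameters along the curriculum."""
    Df = 2.0 ** (1.0 - t_s) * Df0        # higher peak force early in training
    Cf = Cf0 ** (2.0 * t_s - 1.0)        # shape factor eased toward its true value
    Bf = 2.0 ** (1.0 - t_s) * (Df0 * Cf0 * Bf0) / (Df * Cf)
    return Df, Cf, Bf
```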
#### IV-C2 Safety CBF transition
We also define a safety Control Barrier Function (CBF) to shield the agent while learning, similar to [26]. We observed that the RL agent struggles a lot at the beginning, hitting the walls as it tries to understand the environment. Hitting the wall once prevents the agent from learning meaningful behavior later in the episode, as it gets stuck at the wall. To remedy this, we define a safety CBF for the wall boundary constraints which overrides the RL controller by the minimum amount required to avoid hitting the wall. For more rigorous details on the CBF, readers are referred to [27]. We define the CBF function \(h\) and the \(2^{\text{nd}}\) order CBF as follows:
\[\begin{split}& h_{\text{right}}(x)=-e_{\text{center}}+w\\ &\dot{h}_{\text{right}}(x)=-v_{x}\sin(\theta-\theta_{\text{ref}})-v_{y}\cos(\theta-\theta_{\text{ref}})\\ &\ddot{h}_{\text{right}}(x,u)=-\dot{v}_{x}\sin(\theta-\theta_{\text{ref}})-\dot{v}_{y}\cos(\theta-\theta_{\text{ref}})-\omega\cdot(v_{x}\cos(\theta-\theta_{\text{ref}})-v_{y}\sin(\theta-\theta_{\text{ref}}))\\ & h_{\text{left}}(x)=e_{\text{center}}+w\\ &\dot{h}_{\text{left}}(x)=v_{x}\sin(\theta-\theta_{\text{ref}})+v_{y}\cos(\theta-\theta_{\text{ref}})\\ &\ddot{h}_{\text{left}}(x,u)=\dot{v}_{x}\sin(\theta-\theta_{\text{ref}})+\dot{v}_{y}\cos(\theta-\theta_{\text{ref}})+\omega\cdot(v_{x}\cos(\theta-\theta_{\text{ref}})-v_{y}\sin(\theta-\theta_{\text{ref}}))\end{split} \tag{6}\]
Finally,
\[\begin{split}& C_{\text{right}}(x,u)=\max(0,-(\lambda_{1}\lambda_{2}\ddot{h}_{\text{right}}+(\lambda_{1}+\lambda_{2})\dot{h}_{\text{right}}+h_{\text{right}}))\\ & C_{\text{left}}(x,u)=\max(0,-(\lambda_{1}\lambda_{2}\ddot{h}_{\text{left}}+(\lambda_{1}+\lambda_{2})\dot{h}_{\text{left}}+h_{\text{left}}))\end{split} \tag{7}\]
The updated command is obtained via the following optimization process where \(K_{\text{viol}}\) is typically set to a very high value and \(u_{\text{ref}}\) is the reference control command before the change:
\[\min_{u}(K_{\text{viol}}(C_{\text{right}}^{2}+C_{\text{left}}^{2})+|u-u_{ \text{ref}}|^{2}) \tag{8}\]
We also add a negative reward for constraint violation as follows:
Fig. 4: Tire curve variation for curriculum learning
Fig. 3: Training environment
\[R_{\text{constraint}}=-k_{\text{constraint}}(C_{\text{right}}^{2}+C_{\text{left}}^{2}) \tag{9}\]
Higher values of \(\lambda_{1}\) and \(\lambda_{2}\) imply higher interference from the CBF, as the constraints get activated even when the agent is far from the wall, while lower values imply less interference. As the RL agent is more prone to collisions at the beginning, higher values of \(\lambda_{1}\) and \(\lambda_{2}\) enable the agent to learn quickly to move along the safer centerline so as to avoid any violations. We vary the parameters of the CBF as follows:
\[\begin{split}\lambda_{1,t_{s}}&=\lambda_{1,0}(1-t_{s})\\ \lambda_{2,t_{s}}&=\lambda_{2,0}(1-t_{s})\end{split} \tag{10}\]
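A minimal sketch of the resulting safety filter of Eq. (8) is given below. The `cbf_terms` helper, which would return \(C_{\text{right}},C_{\text{left}}\) of Eq. (7) by propagating the candidate command through the vehicle model, is an assumption, as is the use of a generic bounded optimizer; the paper does not specify the solver.

```python
import numpy as np
from scipy.optimize import minimize

def safety_filter(x, u_ref, cbf_terms, u_lo, u_hi, K_viol=1e6):
    """Minimally override the policy command u_ref so CBF violations vanish."""
    def cost(u):
        c_right, c_left = cbf_terms(x, u)      # violation terms, both >= 0
        return K_viol * (c_right**2 + c_left**2) + np.sum((u - u_ref)**2)

    # bounded minimization over the actuator ranges, warm-started at u_ref
    res = minimize(cost, x0=np.asarray(u_ref), bounds=list(zip(u_lo, u_hi)))
    return res.x
```

Because \(K_{\text{viol}}\) dominates the quadratic deviation term, the filter leaves the policy output untouched whenever both violation terms are already zero, which matches the vanishing interference as \(\lambda_{1},\lambda_{2}\to 0\).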
## V Results
Our framework is implemented in the Unity Game Engine, with a representative image shown in Fig. 6. We test our controllers on \(2\) track maps, as shown in Figure 6. We conduct test races on these tracks between pairs of agents for comparison. The races are conducted with the initial position set randomly either on the left or the right of the starting line, with both cars starting at the same longitudinal level. One position may be at an advantage if it is closer to the racing line; hence, we randomly choose the positions with an equal chance of getting either position. We first give the training rewards obtained by curriculum learning, to show its advantage, in Fig. 5(a). As can be observed, using curriculum learning with only model changes clearly beats the rewards without using it. \(t_{\text{start}}\) and \(t_{\text{end}}\) are chosen to be \(500000\) and \(1500000\). It is unfair to compare until \(1500000\) steps, as the first controller runs on a simple RL environment, but after \(1500000\) steps both environments are the same and our controller clearly beats the non-curriculum-based RL controller in reward. Also, the number of wins (Fig. 5(b)), with \(4\) races each between \(3\) pairs of agents (so effectively \(12\) races) conducted every \(250000\) steps, clearly shows our controller wins most races at all times. With the CBF-based curriculum added, due to the negative reward for each CBF constraint violation, the reward is lower at the beginning but eventually improves, achieving an even larger final reward. Also, it is slightly better than using only model-based curriculum learning in terms of the number of races won.
Finally, we conduct races against the other baselines for comparison. All races consist of \(3\) laps, with the car that crosses the finish line first after \(3\) laps winning the race. In total, \(20\) races are conducted for each pair of methods.
### _Metrics_
We compare the runs against the following metrics: 1. No. of wins; 2. Average lap time; 3. Average lateral distance from the racing line; 4. No. of collisions with the wall; 5. No. of collisions with opponents from behind.
### _Baseline methods_
We compare against the following baselines: 1. Ours; 2. Ours - CBF; 3. [5]: Ours - raceline, curriculum learning; 4. MCTS + LQR: a classical rule-based controller for comparison; 5. End-to-end: no hierarchical controller.
Fig. 7 contains the win statistics for all methods. As can be observed, our method beats all other methods in most
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline Method & Avg. lap time (in s) & Avg. lateral distance from raceline (in m) & Wall collisions & Opponent collisions \\ \hline Ours & 28.8 & 1.03 & 250 & 85 \\ \hline Ours - CBF & 29.2 & 1.05 & 360 & 92 \\ \hline Ours - raceline, curriculum learning & 29.5 & 1.80 & 589 & 78 \\ \hline MCTS + LQR & 29.6 & 0.53 & 212 & 64 \\ \hline End-to-end & 30.3 & 1.67 & 670 & 56 \\ \hline \end{tabular}
\end{table} TABLE I: Race statistics of different methods
Fig. 5: (a) Training rewards (b) No. of wins across training steps
Fig. 6: The race setup
Fig. 7: The race win statistics
races. In the races against [5] especially, we observe that using racing line information and our proposed curriculum learning approach is beneficial compared to training without them. Table I contains the remaining statistics. As can be seen, we achieve better lap times than the other methods. We also stay much closer to the raceline than [5] after adding the explicit raceline state as input. Classical control such as LQR achieves better raceline tracking, but cannot reach the optimal speeds that the RL controller does. The average lap times also show that the hierarchical controller clearly yields better results than the direct end-to-end approach. In terms of the number of collisions, the CBF-based curriculum appears to help the agent learn more robust safety, as it has fewer collisions with the wall compared to not using the CBF.
## VI Conclusion and Future Work
In this work we propose a more realistic head-to-head racing environment, with dynamics closer to those of an actual vehicle than in [5]. We then propose a hierarchical control design, with a high-level controller that plans a sequence of checkpoints as close to the racing line as possible while avoiding collisions with other agents, and a curriculum-based learning method to effectively learn an optimal policy. We compare the results with other baseline methods. It is important to note that this is work in progress and we admit that the experiments and inferences are incomplete (for example, a comparison with an RL controller trained with constant CBF parameters should be added for a fairer comparison with [5]); we were not able to complete all experiments before the deadline. In future work, we aim to use trajectory prediction for the opponent agent instead of MCTS. We also aim to test in more complex environments, where agents would be allowed to take a pit stop for tire changes due to wear and more agents can be added to the game.
|
2310.09683 | Detection of the 2021 Outburst of RS Ophiuchi with the LST-1 | Novae are luminous explosions in close binaries which host a white dwarf and
a companion donor star. They are triggered by a thermonuclear runaway when the
white dwarf accretes a critical amount of matter from the secondary. Though
novae are established as high-energy gamma-ray emitters through observations by
the Fermi Large Area Telescope (LAT), the origin of the gamma-ray emission,
whether it is hadronic or leptonic, had been under intense debate until very
recently. RS Ophiuchi (RS Oph) is a well-known recurrent symbiotic nova with a
recurrence time scale of 15 years. The most recent outburst of RS Oph in 2021
brought the first detection of very-high-energy (VHE) gamma rays from a nova
ever. The first Large-Sized Telescope prototype (LST-1) of the Cherenkov
Telescope Array observed this historic event along with H.E.S.S. and MAGIC. The
LST-1 observations in the first days after the burst onset show a clear VHE
gamma-ray signal from RS Oph. The low energy threshold of LST-1 allows us to
reconstruct the RS Oph gamma-ray spectrum down to $\sim$30 GeV, providing the
best connection of the VHE gamma-ray data to the Fermi LAT energy range. The
results from the analysis of the LST-1 observations are consistent with those
obtained with H.E.S.S. and MAGIC, and also support a hadronic origin for the
observed gamma-ray fluxes. In this contribution, we will present the analysis
results of the LST-1 observations of the 2021 outburst of RS Oph. | Yukiho Kobayashi, Arnau Aguasca-Cabot, María Isabel Bernardos Martín, David Green, Rubén López-Coto | 2023-10-14T23:43:17Z | http://arxiv.org/abs/2310.09683v1 | # Detection of the 2021 Outburst of RS Ophiuchi with the LST-1
###### Abstract:
Novae are luminous explosions in close binaries which host a white dwarf and a companion donor star. They are triggered by a thermonuclear runaway when the white dwarf accretes a critical amount of matter from the secondary. Though novae are established as high-energy gamma-ray emitters through observations by the Fermi Large Area Telescope (LAT), the origin of the gamma-ray emission, whether it is hadronic or leptonic, had been under intense debate until very recently. RS Ophiuchi (RS Oph) is a well-known recurrent symbiotic nova with a recurrence time scale of 15 years. The most recent outburst of RS Oph in 2021 brought the first detection of very-high-energy (VHE) gamma rays from a nova ever. The first Large-Sized Telescope prototype (LST-1) of the Cherenkov Telescope Array observed this historic event along with H.E.S.S. and MAGIC. The LST-1 observations in the first days after the burst onset show a clear VHE gamma-ray signal from RS Oph. The low energy threshold of LST-1 allows us to reconstruct the RS Oph gamma-ray spectrum down to \(\sim\)30 GeV, providing the best connection of the VHE gamma-ray data to the Fermi LAT energy range. The results from the analysis of the LST-1 observations are consistent with those obtained with H.E.S.S. and MAGIC, and also support a hadronic origin for the observed gamma-ray fluxes. In this contribution, we will present the analysis results of the LST-1 observations of the 2021 outburst of RS Oph.
## 1 Introduction
Novae are luminous eruptions observed in close binaries where a white dwarf (WD) is interacting with its stellar companion. The explosion is powered by a thermonuclear runaway ignited on the surface of the WD when it accretes a critical amount of matter from the companion donor star. Novae have been revealed to be high-energy gamma-ray emitters through observations by the Fermi Large Area Telescope (LAT), starting with the detection of gamma-ray emission from V407 Cyg in 2010 [1]. Though more than a dozen novae have been detected by the Fermi LAT, the mechanism responsible for the production of gamma-ray emission in novae had remained unclear until very recently. In August 2021, RS Ophiuchi (RS Oph), a well-known recurrent nova, erupted after an interval of 15 years since the previous outburst in 2006. Notably, the 2021 outburst of RS Oph was detected in the very-high-energy (VHE, >100 GeV) regime by Imaging Atmospheric Cherenkov Telescopes (IACTs) such as the High Energy Stereoscopic System (H.E.S.S.) and the Major Atmospheric Gamma Imaging Cherenkov (MAGIC) telescopes, making it the first nova explosion in history detected in VHE gamma rays [2, 3].
The Cherenkov Telescope Array (CTA) is the next-generation IACT array consisting of three kinds of telescopes with different sizes. Among them, the Large-Sized Telescopes (LSTs), equipped with a large mirror dish of 23 m in diameter and a sensitive focal-plane camera with fast readout, dominate the lower energy band of CTA down to \(\sim\)20 GeV. The LST prototype (LST-1) for CTA, built on the CTA northern site, La Palma in the Canary Islands of Spain, has been in its commissioning phase and performing astrophysical observations since 2019 [4]. The LST-1 observed the historic nova outburst of RS Oph in 2021. In this contribution, we present the analysis results of the LST-1 observations.
## 2 Observations
RS Oph is a well-known recurrent nova with a relatively short recurrence time-scale of \(\sim\)15 years. RS Oph is also known to be a symbiotic system, where the WD is embedded in the outflow from the red giant companion. Thanks to the short recurrence time-scale, outbursts of RS Oph have been recorded several times in history [5]. The outburst in 2006 was especially well studied at various wavelengths, such as optical, radio, and X-ray, but this was before the launch of Fermi [6].
On August 8th 2021, RS Oph was reported to be in a new burst state from observations in the optical and gamma-ray bands [7, 8, 9, 10]. In response to these alerts, the LST-1 started observations of RS Oph on August 9th, just a day after the burst onset. The observations were performed under proper conditions, i.e., a clear and dark sky, until August 12th. These good-quality data in the first nights amount to 6.4 hours of effective observational time. From August 13th, however, bad weather and enhanced moonlight conditions prevented further data taking. We resumed observations on August 29th and continued until September 2nd under good observational conditions, amounting to about four more hours. The zenith angle range of the observations was between 35\({}^{\circ}\) and 64\({}^{\circ}\). The observations were performed in the so-called _wobble_ mode with a wobble offset of 0.4\({}^{\circ}\)[11]. The LST-1 observations of the 2021 outburst of RS Oph are summarized in Table 1.
## 3 Analysis
The LST-1 observations are reduced following the standard LST analysis procedure [12]. The analysis is performed with cta-lstchain1, a dedicated LST analysis software based on the CTA low-level analysis pipeline ctapipe[13, 14, 15, 16]. The cta-lstchain performs all the analysis steps from calibration and signal extraction to the reconstruction of events. For the reconstruction of the energy and direction of primary particles, random forest (RF) algorithms are adopted. An RF is also used to reject background cosmic-ray events by giving a score called _gammaness_ to each event, which represents how likely the primary particle is a gamma ray. The so-called source-independent approach, a method to reconstruct events without an assumption on the source position, is applied in this work. The LST-1 data are reduced to the DL3 format, which is subsequently fed to gammapy2, the official science analysis tool for CTA, for computing the gamma-ray spectrum of the source and the source flux light-curve [17, 18].
Footnote 1: [https://github.com/cta-observatory/cta-lstchain](https://github.com/cta-observatory/cta-lstchain)
Footnote 2: [https://gammapy.org](https://gammapy.org)
Monte Carlo (MC) simulations are prepared to train the RF algorithms and evaluate the telescope's instrumental response functions (IRFs). The MC simulations that are used in this work are generated according to a standard procedure for the LST analysis, but are tuned to the RS Oph observations [12]. For instance, the amount of night sky background contamination in the simulations is adjusted to the observations at the camera image level. Simulations are produced for different positions in the sky to take into account the dependence of the telescope performance for different pointing directions. As shown in Figure 1, the simulations to train the RF algorithms are generated along a declination path close to that of RS Oph and those for testing the telescope performance are produced in a grid of \(\cos{\rm ZD}\) and \(\sin\delta\), where ZD is the zenith angle and \(\delta\) is an angle between the geomagnetic field and the pointing direction. The IRFs of the LST-1 are evaluated in each testing MC node and the closest node to each observational run is used to compute gamma-ray flux. For processing MC simulations, lstmcpipe, a dedicated package for reduction of LST MC simulations, is adopted [19, 20].
\begin{table}
\begin{tabular}{c c c} \hline date & observation & zenith range \\ & time [h] & [deg] \\ \hline \hline
9 Aug. 2021 & 1.4 h & 35-42 \\
10 Aug. 2021 & 2.7 h & 35-59 \\
12 Aug. 2021 & 2.3 h & 35-55 \\
13 Aug. 2021 & 1.3 h & 36-54 \\
14 Aug. 2021 & 1.5 h & 35-46 \\
15 Aug. 2021 & 1.3 h & 41-57 \\
29 Aug. 2021 & 1.0 h & 46-58 \\
30 Aug. 2021 & 1.5 h & 40-57 \\
1 Sep. 2021 & 0.3 h & 56-64 \\
2 Sep. 2021 & 1.3 h & 41-57 \\ \hline \end{tabular}
\end{table}
Table 1: Summary of the LST-1 observations of RS Oph during its 2021 outburst
For the spectral analysis, the energy-dependent cuts on _gammaness_ are defined from the MC simulations so that gamma rays are kept with an efficiency of 50%. Directional cuts are also applied in an energy-dependent manner with a 70% gamma-ray efficiency. The energy threshold of the LST-1 observations for this work is evaluated from the true energy distribution of the simulated gamma-ray events surviving all the event selection criteria. Though the energy threshold depends on the zenith angle, an average value over the LST-1 observations is found to be \(\sim\)30 GeV, assuming that RS Oph has a power-law spectrum with an index of \(\sim\)4, as indicated from the observations by H.E.S.S. and MAGIC. This is the lowest energy threshold for the observations of VHE gamma rays from RS Oph among the IACTs, and thus the LST-1 provides the best connection of the VHE gamma-ray data to the Fermi LAT energy range. The gamma-ray flux of RS Oph is reconstructed assuming a point-like source with a power-law spectral model, \(\Phi(E)=\Phi_{0}\) (\(E/E_{\rm ref}\))\({}^{-p}\), where the spectrum is normalized at \(\Phi(E_{\rm ref})=\Phi_{0}\) with a reference energy \(E_{\rm ref}=130\) GeV.
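For illustration, the assumed power-law model can be written in a few lines of Python; the normalisation and spectral index below are placeholders rather than the fitted LST-1 values:

```python
# Sketch of the power-law spectral model Phi(E) = Phi0 * (E / E_ref)^(-p)
# used in the fit. phi0 and p below are illustrative placeholders, not the
# fitted LST-1 values.
import numpy as np

E_REF = 130.0  # GeV, reference energy used in the paper

def power_law_flux(E_gev, phi0, p):
    """Differential flux at energy E (GeV) for normalisation phi0 at E_REF."""
    return phi0 * (E_gev / E_REF) ** (-p)

energies = np.logspace(np.log10(30.0), 3.0, 50)    # 30 GeV to 1 TeV
flux = power_law_flux(energies, phi0=1e-9, p=4.0)  # arbitrary units
```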
Figure 1: The pointing directions of the data taken on 9th, 10th, and 12th of August 2021, the closest testing MC nodes to the data and the training MC nodes on the plane of the zenith angle and the azimuth angle.

## 4 Results

Figure 2 shows the distribution of the squared angular distance between the reconstructed arrival direction of the gamma-ray candidate events obtained from the LST-1 observations and the
position of RS Oph. Events with _gammaness_\(>0.6\) and reconstructed energies between 30 GeV and 1 TeV are shown. From the observations between August 9th and August 12th, a gamma-ray signal from the direction of RS Oph is clearly detected at a statistical significance of 9.5 \(\sigma\). The signal is also visible each night at a significance above 5 \(\sigma\). From the observations on and after 29th August, on the other hand, no significant excess is found. The light curve during the first nights after the burst onset is computed at energies above 100 GeV and shown in Figure 3. The light curve that is reconstructed from the LST-1 observations is compatible with a constant flux during the first nights and it is in good agreement with the MAGIC results.
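The quoted significances are presumably computed from the ON/OFF event counts in the \(\theta^{2}\) plot with the standard Li & Ma (1983) prescription used throughout IACT analyses; a minimal sketch is given below, where the event counts and exposure ratio \(\alpha\) are placeholders rather than the actual LST-1 numbers:

```python
# Minimal sketch of the Li & Ma (1983, Eq. 17) significance commonly used
# for IACT signal detection. n_on, n_off and alpha below are placeholders,
# not the actual LST-1 event counts.
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """Significance of an excess given ON/OFF counts and exposure ratio alpha."""
    term_on = n_on * np.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1 + alpha) * n_off / (n_on + n_off))
    return np.sqrt(2.0 * (term_on + term_off))

print(li_ma_significance(n_on=400, n_off=900, alpha=1 / 3))
```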
## 5 Conclusion
The LST-1 performed observations of RS Oph during its 2021 outburst and acquired good-quality data during the first nights after the eruption. The LST-1 data are analyzed with the standard LST analysis tools and dedicated all-sky MC simulations tuned to the RS Oph observations. The gamma-ray emission from RS Oph is firmly detected by the LST-1 during the first nights after the burst onset at a statistical significance of 9.5 \(\sigma\). Notably, the LST-1 achieves the lowest energy threshold among the IACTs for the VHE gamma-ray observations of the 2021 outburst of RS Oph, \(\sim\)30 GeV, which gives the best connection of the VHE gamma-ray data to the Fermi LAT energy band. A detailed interpretation of the LST-1 observations with dedicated modeling is in progress and the results will be presented in a forthcoming publication.

Figure 2: Distribution of events recorded for RS Oph as a function of \(\theta^{2}\), the squared angular distance between the reconstructed arrival direction of the gamma-ray candidate events and the position of RS Oph. Events with _gammaness_\(>0.6\) and reconstructed energies between 30 GeV and 1 TeV are shown. _Left_: Observations between August 9th and August 12th of 2021. _Right_: Observations between August 29th and September 2nd.
Figure 3: Daily gamma-ray flux of RS Oph during the first nights of its 2021 outburst reconstructed with the LST-1 in comparison with the MAGIC results.
## LST Acknowledgements
We gratefully acknowledge financial support from the following agencies and organisations:
Ministry of Education, Youth and Sports, MEYS LM2015046, LM2018105, LTT17006, EU/MEYS
CZ.02.1.01/0.0/0.0/16_013/0001403, CZ.02.1.01/0.0/0.0/18_046/0016007 and
CZ.02.1.01/0.0/0.0/16_019/0000754, Czech Republic; Max Planck Society, German Bundesministerium
für Bildung und Forschung (Verbundforschung / ErUM), Deutsche Forschungsgemeinschaft (SFBs 876 and 1491), Germany; Istituto Nazionale di Astrofisica (INAF), Istituto Nazionale di Fisica Nucleare (INFN),
Italian Ministry for University and Research (MUR); ICRR, University of Tokyo, JSPS, MEXT, Japan; JST SPRING - JPMJSP2108; Narodowe Centrum Nauki, grant number 2019/34/E/ST9/00224, Poland; The Spanish groups acknowledge the Spanish Ministry of Science and Innovation and the Spanish Research State Agency (AEI) through the government budget lines PGE2021/28.06.000X.411.01, PGE2022/28.06.000X.411.01 and PGE2022/28.06.000X.711.04, and grants PGC2018-095512-B-I00,
PID2019-104114RB-C31, PID2019-107847RB-C44, PID2019-104114RB-C32, PID2019-105510GB-C31,
PID2019-104114RB-C33, PID2019-107847RB-C41, PID2019-107847RB-C43, PID2019-107988GB-C22;
the "Centro de Excelencia Severo Ochoa" program through grants no. CEX2021-001131-S,
CEX2019-000920-S; the "Unidad de Excelencia María de Maeztu" program through grants no.
CEX2019-000918-M, CEX2020-001058-M; the "Juan de la Cierva-Incorporación" program through grant no. IJC2019-040315-I. They also acknowledge the "Programa Operativo" FEDER 2014-2020, Consejería de Economía y Conocimiento de la Junta de Andalucía (Ref. 1257737), PAID 2020 (Ref. P18-FR-1580)
and Universidad de Jaén; "Programa Operativo de Crecimiento Inteligente" FEDER 2014-2020 (Ref. ESFRI-2017-IAC-12), Ministerio de Ciencia e Innovación, 15% co-financed by Consejería de Economía, Industria, Comercio y Conocimiento del Gobierno de Canarias; the "CERCA" program of the Generalitat de Catalunya; and the European Union's "Horizon 2020" GA:824064 and NextGenerationEU;
We acknowledge the Ramón y Cajal program through grants RYC-2020-028639-I and RYC-2017-22665;
State Secretariat for Education, Research and Innovation (SERI) and Swiss National Science Foundation (SNSF), Switzerland; The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreements No 262053 and No 317446. This project is receiving funding from the European Union's Horizon 2020 research and innovation programs under agreement No 676134. ESCAPE - The European Science Cluster of Astronomy & Particle Physics
ESFRI Research Infrastructures has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement no. 824064. |
2310.18956 | End-to-End Autoregressive Retrieval via Bootstrapping for Smart Reply
Systems | Reply suggestion systems represent a staple component of many instant
messaging and email systems. However, the requirement to produce sets of
replies, rather than individual replies, makes the task poorly suited for
out-of-the-box retrieval architectures, which only consider individual
message-reply similarity. As a result, these systems often rely on additional
post-processing modules to diversify the outputs. However, these approaches are
ultimately bottlenecked by the performance of the initial retriever, which in
practice struggles to present a sufficiently diverse range of options to the
downstream diversification module, leading to the suggestions being less
relevant to the user. In this paper, we consider a novel approach that
radically simplifies this pipeline through an autoregressive text-to-text
retrieval model, that learns the smart reply task end-to-end from a dataset of
(message, reply set) pairs obtained via bootstrapping. Empirical results show
this method consistently outperforms a range of state-of-the-art baselines
across three datasets, corresponding to a 5.1%-17.9% improvement in relevance,
and a 0.5%-63.1% improvement in diversity compared to the best baseline
approach. We make our code publicly available. | Benjamin Towle, Ke Zhou | 2023-10-29T09:56:17Z | http://arxiv.org/abs/2310.18956v1 | # End-to-End Autoregressive Retrieval via Bootstrapping
###### Abstract
Reply suggestion systems represent a staple component of many instant messaging and email systems. However, the requirement to produce sets of replies, rather than individual replies, makes the task poorly suited for out-of-the-box retrieval architectures, which only consider individual message-reply similarity. As a result, these systems often rely on additional post-processing modules to diversify the outputs. However, these approaches are ultimately bottlenecked by the performance of the initial retriever, which in practice struggles to present a sufficiently diverse range of options to the downstream diversification module, leading to the suggestions being less relevant to the user. In this paper, we consider a novel approach that radically simplifies this pipeline through an autoregressive text-to-text retrieval model, that learns the smart reply task end-to-end from a dataset of (message, reply set) pairs obtained via bootstrapping. Empirical results show this method consistently outperforms a range of state-of-the-art baselines across three datasets, corresponding to a 5.1%-17.9% improvement in relevance, and a 0.5%-63.1% improvement in diversity compared to the best baseline approach. We make our code publicly available.12
Footnote 1: [https://github.com/BenjaminTowle/STAR](https://github.com/BenjaminTowle/STAR)
Footnote 2: Paper accepted to FINDINGS-EMNLP 2023.
## 1 Introduction
Reply suggestion, or smart reply (SR), systems are a staple component of many commercial applications such as Gmail, Skype, Outlook, Microsoft Teams, LinkedIn and Facebook Messenger. They help the user process chats and emails more quickly by offering a set of canned replies which can be clicked without requiring manual typing. However, dialogue is known to be a one-to-many problem (Zhao et al., 2017; Towle and Zhou, 2022) - namely, for any given message, there are multiple possible replies. To reflect this uncertainty, systems should present a diverse set of options to the user. For instance, given the message How are you?, an SR system could suggest: [I'm good; Ok; Not great]. As a result, the quality of a given reply depends not only on the message, but on the other replies in the reply set.
Several prior works explore solutions to this problem such as removing near duplicates, penalising inter-reply similarity (Deb et al., 2019), clustering by intent (Henderson et al., 2017; Weng et al., 2019), learning latent variables (Deb et al., 2019, 2021), or model-based simulation (Towle and Zhou, 2023). However, these methods share a common design choice (Figure 1A): (1) a retrieval-based Matching model, which has learned a shared embedding space between messages and replies, returns a shortlist of top scoring replies; (2) this shortlist is refined through some diversification procedure to obtain the final reply set.
Unfortunately, this assumes that the initial shortlist contains at least one good reply set. In practice, we find Matching models often search myopically, only retrieving candidates that are very similar to one another (Figure 1A). Thus, the chosen reply set often fails to reflect a diverse range of user intents, while latency constraints make more sophisticated diversification techniques or larger shortlists prohibitive (Deb et al., 2019).
An intuitive, but - to the best of our knowledge - unexplored, solution to this problem is to conduct the retrieval autoregressively, with each reply conditioned on _both_ the initial message _and_ the previous replies in the set. Unfortunately, this approach encounters a second problem, namely, the lack of any datasets containing (message, reply set) pairs (Towle and Zhou, 2023). In practice, SR systems are trained on individual (message, reply) pairs obtained from conversation datasets, while the task of presenting multiple diverse replies to the user is outsourced to a separate diversification
module.
To meet this dual need, we present both (i) a bootstrapping method for creating a high-quality dataset of (message, reply sets) and (ii) a novel autoregressive retrieval model which predicts sequences of replies. For solving (i), we observe how model-based planning algorithms have been known to serve as a powerful policy improvement operator (Silver et al., 2017; Schrittwieser et al., 2019), including in several NLP systems (Jang et al., 2020, 2021). Specifically, the outputs of a model-based planning algorithm can be used to bootstrap a SR system. Further, by conducting this planning offline we are able to leverage two key advantages: (1) the system is free of the latency constraints of online inference, and therefore can increase the search space coverage of the planning algorithm; (2) the system can leverage information that would not be available during inference, such as the ground-truth reply, to further guide the search process. For (ii) we unify both steps of the standard SR pipeline into a single end-to-end model, which mitigates the myopic search, and allows the model to learn to diversify its predictions in a principled way through gradient-based learning. To this end, we present **STAR** (**S**uggested replies with **T**5 and **A**utoregressive **R**etrieval) (Figure 1B). At a high level, STAR is a text-to-text model trained to output sequences of replies, where each reply is conditioned _both_ on the initial message _and_ the previous replies in the sequence. Concretely, we instantiate our method with the T5 pretrained model (Raffel et al., 2020). We expand T5's vocabulary by treating each reply in the candidate pool as a novel token, and demonstrate a simple-yet-effective technique for initialising the new token embeddings, which leverages the model's existing semantic priors. Notably, by treating each reply as a token, we limit the number of autoregressive decoding steps required, keeping the model's efficiency comparable to other retrieval-based methods.
Empirically, we evaluate our approach on three benchmarks: Reddit (Zhang et al., 2021), which is the only publicly-available SR benchmark, as well as PersonaChat (Zhang et al., 2018) and DailyDialog (Li et al., 2017) which are both widely-used in dialogue research more broadly (Zhang et al., 2019; Roller et al., 2020, _inter alia_), and share a similar conversational style with SR apps. We demonstrate superior performance over state-of-the-art baselines across all datasets, corresponding to a 5.1%-17.9% improvement in relevance, and a 0.5%-63.1% improvement in diversity compared to the best baseline approach. We further show comparable efficiency to previous methods, and perform a range of ablations to motivate our design choices.
In summary, our key contributions are as follows: (1) an autoregressive retrieval architecture for sequentially predicting suggested replies; (2) a bootstrapping framework for generating high-quality data of (message, reply set) pairs; (3) detailed analysis of model behaviour and performance including a case study and ablation of key components.
Figure 1: Previous methods [A] compared to our approach, STAR [B]. The example displayed is taken from the DailyDialog Test set, and compares the predictions of STAR with SimSR (Towle and Zhou, 2023), the next best method. Our method’s suggestions present a diverse range of topics/intents to drive the conversation.
## 2 Related Work
**Smart reply.** The proprietary nature of data from email and chat applications has led several previous works to use publicly-available dialogue datasets (Zhang et al., 2021; Deb et al., 2021; Towle and Zhou, 2023) to benchmark SR methods, due to their analogous conversational nature. While early SR systems used generative models (Kannan et al., 2016), current production systems favour retrieval methods due to their greater controllability of outputs and superior latency (Deb et al., 2019). Increasing the diversity of reply suggestions is a key focus of previous work, which has been attempted by: (1) mapping replies to discrete intents / topics (Kannan et al., 2016; Chakravarthi and Pasternack, 2017; Weng et al., 2019); (2) re-weighting replies according to their similarity with other replies in the set (Carbonell and Goldstein-Stewart, 1998; Deb et al., 2019); (3) learning continuous latent variables to generate multiple queries (Zhao et al., 2017; Deb et al., 2019); (4) using model-based simulation to iteratively search and evaluate the relevance of candidate reply sets (Towle and Zhou, 2023). Our proposed method differs from all of these approaches in that our model learns to account for the interdependencies between replies through end-to-end backpropagation.
**Autoregressive retrieval.** Integrating neural retrieval into the well-established paradigm of text-to-text models is of growing interest. Earlier work focuses on outputting a document ID given a query (Tay et al., 2022). Further work has extended this by considering alternate ways of representing the document IDs, such as through unique substrings (Bevilacqua et al., 2022). Another line of work has used autoregressive retrieval for the entity linking task (Cao et al., 2021, 20, 20). There, the motivation is to reduce the large number of entities by relying on the text-to-text model's pre-existing vocabulary, rather than having to retrieve embeddings from a memory-intensive dense index. Our proposed method differs considerably from these previous works both in instantiation and motivation. Instantiation-wise, we generate _multiple_ replies - critical to making this possible is the novel bootstrapping technique for creating the dataset of (message, reply set) pairs to train on. Motivation-wise, our goal is to be able to condition each reply on both the input message and previous replies in the set, enabling the model to learn to predict _sequences_ of replies in a differentiable way.
**Bootstrapping.** The idea of bootstrapping training data from limited resources has received significant recent interest in NLP, given the newly demonstrated few / zero-shot capabilities of many large language models (Brown et al., 2020). It has seen usage in few-shot text classification (Schick and Schütze, 2021), semantic similarity (Schick and Schütze, 2021), tool-usage (Schick et al., 2023), retrieval (Izacard and Grave, 2021), sequence generation (He et al., 2020), and instruction-tuning (Honovich et al., 2023; Wang et al., 2023; Taori et al., 2023), amongst others. These techniques can also be seen as a form of knowledge distillation (Hinton et al., 2015), except that the training typically involves predicting the exact token targets, rather than using the soft probabilities of a teacher model. Although sometimes these techniques are used as an addition to supervised learning (He et al., 2020), in our case there are no datasets containing the ideal reply sets to suggest to the user. Instead, we must bootstrap this in a more unsupervised way, by transforming a dataset of (message, reply) pairs into a dataset of (message, reply set) pairs.
## 3 Methodology
In this section, we first describe the model-based planning process used to obtain the bootstrapped dataset of (message, reply set) pairs (Section 3.1). Then, we show how the STAR architecture can be trained on this dataset (Section 3.2).
### Offline Dataset Creation
Our goal is to transform a dialogue dataset \(\mathcal{D}=\{(x,y)\}\) of (message, reply) tuples, into a dataset \(\mathcal{D}^{*}=\{(x,Y)\}\) where \(Y\) is the set of replies \(\{y_{k}\}^{K}\) to be presented to the user. Algorithm 1 summarises this process. While our method is general to any arbitrary planning algorithm, we choose to instantiate our approach with a modified version of SimSR (Towle and Zhou, 2023), a recently released publicly available state-of-the-art SR method, that employs model-based simulation to predict reply sets. As the original algorithm was designed for online inference, we make several changes to benefit the offline nature of our version, and detail the full implementation below.
The initial retrieval is conducted by a Matching model \(\Phi\) that separately encodes messages and replies into a shared latent space. Given an encoded
message \(\mathbf{x}=\Phi(x)\), it retrieves the top \(N\) candidates from a pool of pre-computed reply vectors \(\mathbf{Y_{R}}=\left\{\mathbf{y_{r}}\right\}^{R}\) by combining their dot product similarity with a pre-computed language-model bias - a standard component of SR systems to downweight overly specific replies (Deb et al., 2019).
\[Y_{N}=\underset{r}{N\text{-}\operatorname{argmax}}\,(\mathbf{x}\cdot\mathbf{y_{r}}+\beta\text{LM}(y_{r})) \tag{1}\]
We then output the \(K\)-tuple \(Y_{i}\in\binom{Y_{N}}{K}\) that has the highest expected similarity with the human reply, according to some similarity function \(f(\cdot,\cdot)\).
\[\underset{i}{\operatorname{argmax}}\operatorname{\mathbb{E}}_{y\sim p(\cdot |x)}\left[f(Y_{i},y)\right] \tag{2}\]
Given the objective in SR is for at least one of the replies to be relevant, the similarity function is defined as a maximum over the sampled reply and each of the replies in the reply set, using term-level F1-score: \(\underset{k}{\max}\text{F1}(y_{k},y)\).
We assume \(y\) is sampled from the ground-truth human distribution \(p(\cdot|x)\). As we do not have access to the true human distribution in practice, we instead use the same Matching model \(q\) as a proxy for this, given it is trained on (message, reply) pairs. We then approximate the expectation by marginalising over the top-\(M\) most likely replies:
\[\approx\underset{i}{\operatorname{argmax}}\sum_{m}^{M}f(Y_{i},y_{m})q(y_{m}|x) \tag{3}\]
In practice, it is intractable to evaluate every possible reply tuple, due to their combinatorial scaling. We therefore approximate this by greedily constructing the reply set one reply at a time. Formally, let \(Y_{G}\) be the set of currently selected replies, such that initially \(Y_{G}=\emptyset\). Then, for each of \(y_{n}\in Y_{N}\), we compute the expected similarity for the union of \(Y_{G}\) and \(y_{n}\), termed \(Y_{G}^{n}=Y_{G}\cup y_{n}\) for brevity:
\[\sum_{m}^{M}f(Y_{G}^{n},y_{m})q(y_{m}|x) \tag{4}\]
We repeat this process for \(K\) timesteps, each time appending the highest scoring reply to \(Y_{G}\), i.e. until \(|Y_{G}|=K\). Note that this greedy search process implicitly canonicalises the order of the replies, as selecting replies in this way causes them to be roughly ordered by individual message-reply relevance.
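A minimal sketch of this greedy construction (Eqs. (3) and (4)) is given below; the token-overlap F1 and the way the shortlist and simulated user replies are passed in are simplifications of the Matching-model machinery described above:

```python
# Sketch of the greedy reply-set construction of Eqs. (3)-(4). The Matching
# model is abstracted away: `shortlist` is the top-N candidates of Eq. (1),
# and `sim_replies`/`sim_probs` are the top-M simulated replies y_m and q(y_m|x).
def f1(a, b):
    """Simple token-overlap F1, a stand-in for the term-level F1 in the text."""
    ta, tb = a.lower().split(), b.lower().split()
    overlap = len(set(ta) & set(tb))
    if overlap == 0:
        return 0.0
    p, r = overlap / len(ta), overlap / len(tb)
    return 2 * p * r / (p + r)

def greedy_reply_set(shortlist, sim_replies, sim_probs, K=3):
    chosen = []
    for _ in range(K):
        best, best_score = None, -1.0
        for cand in shortlist:
            if cand in chosen:
                continue
            trial = chosen + [cand]
            # Expected max-F1 against the simulated user replies, Eq. (3)
            score = sum(q * max(f1(y, m) for y in trial)
                        for m, q in zip(sim_replies, sim_probs))
            if score > best_score:
                best, best_score = cand, score
        chosen.append(best)
    return chosen
```

Because each reply is scored by its marginal contribution to the running set, the resulting set is roughly ordered by individual message-reply relevance, as noted above.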
#### 3.1.1 Adjustments
**Scaling \(N\) and \(M\).** The original SimSR algorithm was used only in an online setting (Towle and Zhou, 2023). Therefore, the size of the search parameters \(N\) (number of replies in the shortlist) and \(M\) (number of simulated user replies) is kept low (15 and 25 respectively in the original paper). As we only need to run this model offline to obtain the dataset, however, we find setting \(N\) and \(M\) to much larger values improves relevance (we use 100 for both), enabling both a broader search (i.e. by increasing \(N\)) and a more accurate similarity function (i.e. by increasing \(M\)).
**Redundancy penalty.** Early testing showed that scaling the search parameters reduced diversity. We therefore introduce a redundancy penalty, which penalises the model for selecting replies that are similar to replies already in the set \(Y_{G}\). This is analogous to the inter-document similarity penalty used in the maximum marginal relevance IR (information retrieval) technique (Carbonell and Goldstein-Stewart, 1998).
\[\sum_{m}^{M}f(Y_{G}^{n},y_{m})q(y_{m}|x)-\lambda f(Y_{G},y_{n}) \tag{5}\]
**Query augmentation.** Unlike during online inference, we also have access to the ground-truth reply \(y\) when constructing the dataset. Previous work has found that models obtain greater representational capabilities when given access to posterior information (Paranjape et al., 2022; Towle and Zhou, 2022). We therefore use an augmented query to retrieve with the Matching model. This is obtained by interpolating between the message and ground-truth reply embeddings. This biases the model's predictions towards the observed ground-truth in
the dataset, while still allowing it to benefit from its own learned distribution.
\[\mathbf{\tilde{x}}=\alpha\Phi(x)+(1-\alpha)\Phi(y) \tag{6}\]
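In code, the augmentation is a one-line interpolation; the sketch below assumes \(\Phi\) is exposed as a callable encoder returning a dense vector:

```python
# Sketch of the query augmentation in Eq. (6); phi is assumed to be the
# Matching model encoder, returning a dense embedding vector.
def augmented_query(phi, message, reply, alpha=0.5):
    return alpha * phi(message) + (1 - alpha) * phi(reply)
```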
### Proposed STAR Model
We initialise STAR with a T5-based text-to-text language model, which has previously been shown to be effective in autoregressive retrieval (Tay et al., 2022). While some autoregressive retrieval approaches identify their documents/replies through unique substrings (Bevilacqua et al., 2022) or constrained beam search (Cao et al., 2021), we focus on approaches requiring only a limited number of autoregressive steps, to maintain inference speeds competitive with existing retrieval methods (Section 5.3). There are several alternatives for this, such as treating each reply set as a unique token, or separately training on each (message, reply) pair, but ultimately we opted for autoregressively treating each reply as a unique token in the vocabulary in order to exploit the compositionality of reply sets (see Section 5.2 for a performance comparison). Note that as the types of replies used in smart reply are usually quite short and concise, e.g. 'how are you', 'I'm fine thanks', 'yes, that's right' etc., systems in deployment only need to retrieve from a pool of 30k or so replies (Deb et al., 2019) in order to provide good coverage of possible user intents. As a result, we are able to keep the size of the vocabulary reasonable. Thus, our new vocabulary is defined as: \(W_{tokens}\cup W_{replies}\). An obvious challenge to this approach is that by treating each reply as a previously unseen word, it removes any semantic priors the model might have about their meaning. To mitigate this, we employ a **bag-of-words** initialisation strategy. Hence, we define the embedding of the \(t\)-th reply \(E(y_{t})\) as the average over the embeddings of the individual words \(w_{n}\in y_{t}\).
\[E(y_{t})=\frac{1}{N}\sum_{n}^{N}E(w_{n}) \tag{7}\]
Intuitively, this ensures that the initial embeddings are close to the word embeddings of the original vocabulary, while also capturing some of the underlying semantics of the reply. We allow the weights to update during fine-tuning. Note that for T5 the output and input embedding layers share weights, and therefore this approach is used to initialise both layers. We train the model using cross-entropy loss to predict the next reply given the current sequence of replies and messages:
\[\mathcal{L}_{NLL}=-\sum_{k}^{K}\log p(y_{k}|x,y_{0},...,y_{k-1}) \tag{8}\]
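With HuggingFace Transformers, the bag-of-words initialisation of Eq. (7) can be realised roughly as follows; the toy reply pool, the token naming scheme and the use of t5-small are illustrative assumptions, not the released implementation:

```python
# Sketch of the bag-of-words initialisation of Eq. (7) for the new reply
# tokens, using HuggingFace Transformers. Names are illustrative.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

replies = ["how are you?", "i'm fine thanks", "yes, that's right"]  # toy pool
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Add one new token per candidate reply and resize the embedding matrix.
reply_tokens = [f"<reply_{i}>" for i in range(len(replies))]
tokenizer.add_tokens(reply_tokens)
model.resize_token_embeddings(len(tokenizer))

emb = model.get_input_embeddings().weight  # shared with the output layer in T5
with torch.no_grad():
    for tok, reply in zip(reply_tokens, replies):
        word_ids = tokenizer(reply, add_special_tokens=False).input_ids
        emb[tokenizer.convert_tokens_to_ids(tok)] = emb[word_ids].mean(dim=0)
```

Since T5 ties its input and output embeddings, initialising the input matrix also initialises the output layer, matching the description above.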
## 4 Experimental Setup
### Baselines
Previous work has largely been closed-source and is therefore unavailable for direct comparison Henderson et al. (2017); Weng et al. (2019); Deb et al. (2019). With the exception of SimSR, which has publicly available code 3, we re-implement a variety of methods that cover the broad range of previous techniques. Due to its comparable size, all baselines apart from Seq2Seq are initialised with DistilBERT as the encoder backbone. These are summarised as follows:
Footnote 3: [https://github.com/BenjaminTowle/SimSR](https://github.com/BenjaminTowle/SimSR)
**Seq2Seq** is a generative encoder-decoder. While current production systems and the majority of related works use only retrieval models (Deb et al., 2019; Towle and Zhou, 2023), at least one related work includes a standard generative transformer as a baseline (Zhang et al., 2021), which we follow here. For maximum comparability with our method, we use the same t5-small model as a backbone. For each message, we sample \(K\) responses independently.
**Matching** represents the out-of-the-box encoder with no additional diversification strategy and was used as a baseline method by Zhang et al. (2021). It simply selects the top \(K\) responses according to individual message-reply scores.
**Matching-Topic** uses an out-of-the-box topic classifier to ensure no two replies share the same topic, similar to previous work (Henderson et al., 2017; Weng et al., 2019). The classifier is trained on Twitter (Antypas et al., 2022), due to its comparable short-form open-domain chat conversations.
**Maximum Marginal Relevance (MMR)** (Carbonell and Goldstein-Stewart, 1998) is originally an IR technique, used in several previous SR works (Deb et al., 2019; Towle and Zhou, 2023), which re-weights reply scores as a linear combination of their message-reply and inter-reply similarity.
**Mcvae** (Deb et al., 2019) is a conditional variational autoencoder (Zhao et al., 2017) which learns to generate multiple query vectors from a single message embedding, representing the multiple possible reply intents. Candidates are scored via a voting process, whereby the \(K\) most-selected replies are chosen.
**SimSR** (Towle and Zhou, 2023) uses an iterative search and evaluation process to select possible reply sets and score them according to their expected similarity from a learned world model, which serves as a proxy for the user. To ensure comparability of SimSR with our method and the other baselines, we include the language-model bias in the scoring process (Equation 1), and also deduplicate the candidate pool.4
Footnote 4: Both changes lead to consistently improved accuracy and diversity across all datasets compared to the original paper.
### Datasets
We evaluate our proposed method across three datasets, summarised in Table 1. Below, we describe the datasets in more detail and motivate their inclusion. Note, other than Reddit, there are no publicly available SR datasets, due to their commercial nature (e.g. Henderson et al. (2017); Deb et al. (2019); Weng et al. (2019)). Therefore, we adopt several dialogue datasets, which is the closest alternative to conversations on proprietary chat applications.
**Reddit** (Zhang et al., 2021) was originally introduced for training multilingual SR systems, and is the only publicly available dataset specifically intended for SR purposes. As the original dataset is very large, we follow Towle and Zhou (2023) and use the reduced version of the dataset. Note, this version only contains English, as our aim is limited to the monolingual setting. Due to the organic nature of the dataset, conversations cover a very broad range of topics.
**PersonaChat** (Zhang et al., 2018) is a crowdworker-sourced dataset comprising persona-grounded conversations, in which each speaker is assigned a persona comprising a few short sentences. Following previous methods (Humeau et al., 2020), we concatenate the persona to the beginning of the message. The participants are instructed to chat naturally and to try to get to know one another.
**DailyDialog** (Li et al., 2017) is a dataset created from English language learning websites and consists of a variety of high-quality dialogues in everyday scenarios. The dataset differs from the former two in that the conversations often involve real-life scenarios, such as asking for directions, and therefore captures a different variety of conversational skills.
### Metrics
We evaluate our method on the same weighted ROUGE ensemble as previous methods (Lin, 2004; Deb et al., 2019, 2021), which is known to correlate well with click-through rate (Zhang et al., 2021):
\[\frac{\text{ROUGE-1}}{6}+\frac{\text{ROUGE-2}}{3}+\frac{\text{ROUGE-3}}{2} \tag{9}\]
As the goal of SR systems is to ensure that at least one of the suggested replies is relevant to the user, we only record the maximum ROUGE score across each of the \(K=3\) suggested replies. We also evaluate the model on Self-ROUGE (Celikyilmaz et al., 2020): this is an unreferenced metric that measures the internal dissimilarity (i.e. diversity) within the reply set by treating one reply as the predicted reply and the others as the references. Note that a lower Self-ROUGE score indicates _more_ diversity.
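Concretely, the evaluation reduces to the following computation, where `rouge_n` is an assumed helper returning the ROUGE-n F-score:

```python
# Sketch of the weighted ROUGE ensemble of Eq. (9), taking the maximum over
# the K suggested replies. rouge_n(pred, ref, n) is assumed to return the
# ROUGE-n F-score; it is a placeholder, not a specific library call.
def weighted_rouge(pred, ref, rouge_n):
    return (rouge_n(pred, ref, 1) / 6
            + rouge_n(pred, ref, 2) / 3
            + rouge_n(pred, ref, 3) / 2)

def reply_set_score(reply_set, ref, rouge_n):
    return max(weighted_rouge(y, ref, rouge_n) for y in reply_set)
```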
### Inference
For inference, we use the entire training set as the candidate pool for each respective dataset, with deduplication to remove exact matches. For STAR, we greedily decode the next reply token until \(K\) tokens have been decoded. Note, we only allow the model to output replies represented in the bootstrapped dataset, and also block non-replies, i.e. words from the original vocabulary, from being predicted.
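A sketch of this constrained greedy decoding loop is given below; `reply_token_ids` and the model/tokenizer handles follow the hypothetical setup of the earlier embedding snippet:

```python
# Sketch of constrained greedy decoding: only reply tokens may be emitted,
# and each reply is conditioned on the message and the replies chosen so far.
# reply_token_ids is assumed to hold the vocabulary ids of the reply tokens.
import torch

@torch.no_grad()
def decode_reply_set(model, tokenizer, message, reply_token_ids, K=3):
    enc = tokenizer(message, return_tensors="pt")
    decoder_ids = torch.tensor([[model.config.decoder_start_token_id]])
    chosen = []
    for _ in range(K):
        logits = model(**enc, decoder_input_ids=decoder_ids).logits[0, -1]
        mask = torch.full_like(logits, float("-inf"))
        mask[reply_token_ids] = 0.0  # block all non-reply tokens
        next_id = int(torch.argmax(logits + mask))
        chosen.append(tokenizer.convert_ids_to_tokens(next_id))
        decoder_ids = torch.cat(
            [decoder_ids, torch.tensor([[next_id]])], dim=-1)
    return chosen
```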
## 5 Experimental Results
We focus our efforts on answering the following Research Questions: (\(\mathbf{RQ_{1}}\)) How does STAR compare to existing state-of-the-art methods? (Sections 5.1, 5.4); (\(\mathbf{RQ_{2}}\)) Which components of the data collection algorithm and fine-tuning have the largest impact on STAR's performance? (Section 5.2); (\(\mathbf{RQ_{3}}\)) How efficient is STAR in inference? (Section 5.3)
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & **Train** & **Valid** & **Test** & \(|\mathbf{Y_{R}}|\) \\ \hline Reddit & 50k & 5k & 5k & 48k \\ PersonaChat & 66k & 8k & 8k & 64k \\ DailyDialog & 76k & 7k & 7k & 62k \\ \hline \hline \end{tabular}
\end{table}
Table 1: Number of samples in the Train, Validation, Test sets and Candidate pool in the three datasets for evaluation. The Candidate pool comprises the Train set with duplicate responses removed.
### Main Results
Table 2 compares the performance of different SR systems across the Reddit, PersonaChat and DailyDialog datasets. In terms of relevance (ROUGE), STAR shows an especially large improvement in Reddit (+17.9%) and DailyDialog (+15.8%). We hypothesise the gains in PersonaChat (+5.1%) are more modest because the replies are more easily predicted due to the persona, which is concatenated to each message. This significantly reduces the noise during the initial retrieval for the baselines, as they only need to retrieve the messages relevant to that particular persona.
For diversity (Self-ROUGE), the strongest gains were found in DailyDialog (+63.1%). For PersonaChat, STAR performs much better than the retrieval methods, only falling behind Seq2Seq, due to its altogether noisier outputs as evidenced by having the worst relevance score. The Reddit results were comparatively more modest (+0.5%) - we hypothesise this is because the dataset is altogether more noisy, and so there are relatively few similar replies in the dataset, as shown by the Self-ROUGE scores being lower than the other two datasets. Overall, the consistent outperformance in both relevance and diversity metrics supports the benefits of the STAR approach.
### Ablation
In Table 3, we conduct ablations across two key axes: data collection and STAR training. The data collection ablations serve to investigate the benefits of the novel changes to the SimSR algorithm from Section 3.1.1. The STAR training ablations investigates the degree to which the improvements in performance are caused by the bootstrapped dataset or by STAR's architecture itself; we achieve this by considering several alternative variants of STAR.
Our data collection ablations consider two features: (A) removing the query augmentation prevents the model from leveraging any ground truth information during prediction; (B) removing the redundancy penalty no longer explicitly penalises lack of diversity in predicted reply sets. For STAR training, we consider three alternative configurations: (C) we replace the bag-of-words embeddings with randomly initialised embeddings - this removes any priors about the meaning of replies and forces the model to learn them _tabula rasa_; (D) we treat each reply set as a unique token - this removes the compositional element from the task, constraining the model to only predicting previously seen reply sets, therefore testing whether the model is capable of learning to compose novel reply sets; (E) we remove the ability to account for interdependencies between replies, by restructuring each (message, reply set) data point into \(K\) data points of (message, reply\({}_{k}\)), and then outputting the top-\(K\) replies during inference - this investigates whether the benefit lies simply in the bootstrapped dataset being better suited to the SR task, rather than in STAR's ability to account for interdependencies between replies.
In terms of data collection ablations, we found removing the redundancy penalty significantly reduced the diversity of predictions, although in some cases it offered slightly improved relevance; removing the query augmentation generally led to a worse relevance/diversity trade-off. For the variants of STAR training, we found that random embeddings consistently reduced relevance, while also leading to less diverse predictions; reply sets as tokens led to the most competitive variant of STAR compared to our default setup: diversity was overall better, due to using preconstructed reply sets from the offline planning algorithm, but this came at the trade-off of reduced flexibility from being unable to construct novel reply sets when the context required it - resultantly, we saw a corresponding reduction in relevance. Finally, predicting replies separately expectedly harmed both relevance and diversity, demonstrating the importance of accounting for reply interdependencies.
In Figure 2, we further validated the individual results of our ablation by aggregating the results across datasets (applying an equal weighting to each dataset). This demonstrates the overall trend that the default STAR offers the superior trade-off between relevance and diversity, while treating reply sets as tokens offered the next best alternative. Nevertheless, we believe that keeping individual replies as tokens - thus allowing the model to construct reply sets dynamically - is likely to be an attractive property for deployed systems, enabling the overall vocabulary size to remain modest.
### Run-time Efficiency
Beyond performance gains in relevance and diversity, a major advantage of an autoregressive retrieval model is the ability to leverage the scalability of GPU-based inference. Figure 3 compares the efficiency of STAR with the other baseline methods. We use an NVIDIA GeForce RTX 3060 Ti GPU and AMD Ryzen 7 5700G with Radeon Graphics CPU, with a batch size of 32. The results show that the methods can be broadly clustered into three groups. The slowest group is the generative method Seq2Seq, due to needing to generate each reply word-by-word. The middle group - SimSR, M-CVAE and M-MMR - is characterised by methods that comprise a more involved diversification pipeline. The final and fastest group includes STAR, M-Topic and Matching, where no additional post-hoc diversification is required (for M-Topic the topics can be pre-computed prior to inference).
### Case Study
Table 4 presents a case study on the DailyDialog Test set. We compare our approach, STAR, with the top-performing baseline from Table 2, SimSR. In both examples we consistently find STAR is able to output a broader range of intents. Quantitatively, we consider the rank that each suggestion receives according to the initial retrieval of the Matching model that underlies SimSR. We see that STAR is able to perform a much more global search across the reply space, selecting replies from within the top 100 or so ranks. This would be difficult for the standard retrieve-and-rerank approach to emulate, given 100 is usually too large a number to efficiently rerank Deb et al. (2019). Qualitatively, SimSR's suggestions converge around common phrases, e.g. 'let's go', which would be difficult to deduplicate with a heuristic rule given only a limited number of overlapping words between the replies. Conversely, STAR is able to represent a broader range of intents, such as replying with a question in both examples. Further examples are
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{**Reddit**} & \multicolumn{2}{c}{**PersonaChat**} & \multicolumn{2}{c}{**DailyDialog**} \\ \cline{2-7} & ROUGE \(\uparrow\) & Self-ROUGE \(\downarrow\) & ROUGE \(\uparrow\) & Self-ROUGE \(\downarrow\) & ROUGE \(\uparrow\) & Self-ROUGE \(\downarrow\) \\ \hline _Generative models_ & & & & & & \\ Seq2Seq & 2.41 & 3.43 & 6.83 & **6.88*** & 4.01 & 3.91 \\ \hline _Retrieval models_ & & & & & & \\ Matching & 1.95 & 9.42 & 7.51 & 21.47 & 6.53 & 16.65 \\ M-Topic & 1.81 & 3.94 & 7.16 & 15.43 & 6.14 & 11.11 \\ M-MMR & 2.20 & 4.44 & 7.81 & 14.57 & 6.13 & 8.63 \\ M-CVAE & 2.30 & 5.02 & 7.43 & 12.21 & 6.78 & 10.49 \\ SimSR5 & 2.79 & 2.18 & 9.04 & 10.52 & 6.82 & 4.80 \\ \hline STAR & **3.29*** & **2.17** & **9.50*** & 7.74 & **7.90*** & **1.77*** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance of STAR across Reddit, PersonaChat and DailyDialog Test sets on relevance (ROUGE) and diversity (Self-ROUGE) metrics. **Bold** indicates best result, underline indicates second-best. * = statistically significant versus next best result on t-test with _p_-value < 0.01.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**Reddit**} & \multicolumn{2}{c}{**PersonaChat**} & \multicolumn{2}{c}{**DailyDialog**} \\ \cline{2-7} & ROUGE \(\uparrow\) & Self-ROUGE \(\downarrow\) & ROUGE \(\uparrow\) & Self-ROUGE \(\downarrow\) & ROUGE \(\uparrow\) & Self-ROUGE \(\downarrow\) \\ \hline STAR & **3.35*** & 2.27 & 8.85 & 7.48 & 8.39 & 1.81 \\ \hline _Data Collection Ablations_ & & & & & & \\ A: No Query Augmentation & 2.94 & 2.00 & 8.99 & 6.94 & 7.24 & 2.89 \\ B: No Redundancy Penalty & 3.06 & 4.29 & **9.03** & 17.26 & **8.98*** & 5.90 \\ \hline _STAR Training Variants_ & & & & & & \\ C: Random embeddings & 2.67 & 4.93 & 8.39 & 10.97 & 6.84 & 4.45 \\ D: Reply sets as tokens & 2.85 & **1.59*** & 8.76 & **6.81** & 7.75 & **1.57*** \\ E: Predict replies separately & 2.20 & 26.61 & 8.07 & 30.98 & 6.43 & 20.50 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance of STAR on the Reddit, PersonaChat and DailyDialog Validation sets under different model configurations. Ablations are applied separately. **Bold** indicates best result, underline indicates second-best. * = statistically significant versus next best result on t-test with _p_-value < 0.01.
provided in Appendix C.
## 6 Conclusion
We introduce **STAR**, an autoregressive retrieval system for SR, which is an end-to-end text-to-text model that sequentially predicts replies conditioned on an initial message. To train STAR, we demonstrate an approach to bootstrap a dataset of high-quality (message, reply set) pairs, from regular dialogue datasets containing only (message, reply) pairs. Empirically, our results show significant improvement over existing state-of-the-art SR baselines, across multiple datasets, corresponding to a 5.1%-17.9% improvement in relevance, and a 0.5%-63.1% improvement in diversity compared to the best baseline approach.
Future work could extend these techniques to other set-prediction tasks: e.g., in IR the relevance of each document depends on the quantity of _new_ information it contains compared to other documents in the set. In recommender systems, use cases include: tailoring a user's news feed requires that the news articles presented are not simply duplicates of the same story; designing a bespoke music playlist requires songs to be unified by common themes but also sufficiently distinct from one another to maintain the listener's interest. Other lines of future work include considering alternate strategies for initialising the reply embeddings, beyond the bag-of-words initialisation demonstrated in this paper.
## Acknowledgements
We thank the reviewers for their helpful feedback and suggestions. This work is partly supported by the EPSRC DTP Studentship program. The opinions expressed in this paper are the authors', and are not necessarily shared/endorsed by their employers and/or sponsors.
\begin{table}
\begin{tabular}{l l} \hline \hline
**Message:** & _Hi, Kenny. Let’s go for a drink_. \\ \hline \multirow{3}{*}{SimSR} & - let’s go! **[\#9]** \\ & - ok, let’s go. **[\#3]** \\ & - ok. let’s get something to drink. **[\#1]** \\ \hline \multirow{3}{*}{STAR} & - ok. let’s go **[\#5]** \\ & - you want something to drink? **[\#89]** \\ & - good idea. **[\#105]** \\ \hline **Message:** & _Of course! Let’s go_. \\ \hline \multirow{3}{*}{SimSR} & - let’s go! **[\#1]** \\ & - ok, let’s go. **[\#5]** \\ & - all right. let’s go. **[\#12]** \\ \hline \multirow{3}{*}{STAR} & - let’s go! **[\#1]** \\ & - where are we? **[\#43]** \\ & - good idea! **[\#85]** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Example model outputs from the DailyDialog Test set, comparing STAR (ours) with the top-performing baseline method. Numbers in **bold** indicate the ranking the reply received according to the Matching model.
Figure 3: Comparison of run-time efficiency between STAR and the baseline methods. Results are calculated over the Reddit Validation set.
Figure 2: Comparison of overall relevance and diversity scores across ablations, obtained by averaging across all three datasets with equal weighting.
### Limitations
Although our work shows that STAR is able to absorb sufficient information about the replies in its weights, this may become increasingly challenging when larger numbers of replies need to be embedded. One notable instance of this would be the multilingual setting, as many SR systems are deployed globally. In this case, each language typically has its own candidate pool. A naive implementation which creates separate reply vectors for each language would incur a significant increase in model size. In this case, we hypothesise that techniques around weight-sharing of reply embeddings between languages may be beneficial, e.g. 'how are you' (en) and 'ça va' (fr) sharing the same vector. Further, our techniques are only demonstrated on publicly available datasets, whereas proprietary conversations in chat and email applications may have unique features not accounted for here (e.g. timestamps, cc and bcc information, and file attachments). Our technique also requires a planning algorithm to create the initial dataset. This theoretically creates an upper bound on the overall performance of STAR, as it is limited to cloning the behaviour of the offline planning algorithm.
|
2308.05583 | Generative Diffusion Models for Radio Wireless Channel Modelling and
Sampling | Channel modelling is essential to designing modern wireless communication
systems. The increasing complexity of channel modelling and the cost of
collecting high-quality wireless channel data have become major challenges. In
this paper, we propose a diffusion model based channel sampling approach for
rapidly synthesizing channel realizations from limited data. We use a diffusion
model with a U Net based architecture operating in the frequency space domain.
To evaluate how well the proposed model reproduces the true distribution of
channels in the training dataset, two evaluation metrics are used: $i)$ the
approximate $2$-Wasserstein distance between real and generated distributions
of the normalized power spectrum in the antenna and frequency domains and $ii)$
precision and recall metric for distributions. We show that, compared to
existing GAN based approaches which suffer from mode collapse and unstable
training, our diffusion based approach trains stably and generates diverse and
high-fidelity samples from the true channel distribution. We also show that we
can pretrain the model on a simulated urban macro-cellular channel dataset and
fine-tune it on a smaller, out-of-distribution urban micro-cellular dataset,
therefore showing that it is feasible to model real world channels using
limited data with this approach. | Ushnish Sengupta, Chinkuo Jao, Alberto Bernacchia, Sattar Vakili, Da-shan Shiu | 2023-08-10T13:49:26Z | http://arxiv.org/abs/2308.05583v1 | # Generative Diffusion Models for Radio Wireless Channel Modelling and Sampling
###### Abstract
Channel modelling is essential to designing modern wireless communication systems. The increasing complexity of channel modelling and the cost of collecting high-quality wireless channel data have become major challenges. In this paper, we propose a diffusion model based channel sampling approach for rapidly synthesizing channel realizations from limited data. We use a diffusion model with a _U Net_ based architecture operating in the frequency space domain. To evaluate how well the proposed model reproduces the true distribution of channels in the training dataset, two evaluation metrics are used: _i)_ the approximate \(2\)-Wasserstein distance between real and generated distributions of the normalized power spectrum in the antenna and frequency domains and _ii)_ precision and recall metric for distributions. We show that, compared to existing GAN based approaches which suffer from mode collapse and unstable training, our diffusion based approach trains stably and generates diverse and high-fidelity samples from the true channel distribution. We also show that we can pretrain the model on a simulated urban macro-cellular channel dataset and fine-tune it on a smaller, out-of-distribution urban micro-cellular dataset, therefore showing that it is feasible to model real world channels using limited data with this approach.
diffusion models, machine learning, wireless channel sampling
## I Introduction
Modeling the wireless channel is an essential step in designing and evaluating the performance of communication systems. In the context of 6G, the wireless environment has become increasingly complex, and the extension of frequency bands and the introduction of large-scale multiple-input multiple-output (MIMO) systems have made it challenging for current channel modeling schemes to accurately reproduce the characteristics of real-world radio channels. Therefore, there is a need for more realistic channel modeling approaches. In addition to accurate channel modeling, the development of deep learning-based wireless communication systems requires large amounts of high-quality wireless channel data to support the training of neural networks (NN). However, the collection of wireless channel data is quite costly and time-consuming, which motivates the development of novel channel data generating solutions to support neural network training.
Previous papers have proposed using machine learning techniques as an alternative to traditional deterministic or stochastic channel models. Markov models have long been used to model the time-varying behaviour of channels [1]. Zander [2] showed that a variational autoencoder can be used to generate coefficients of the channel matrix. In Smith _et al._[3], a generative adversarial network (GAN) based channel modeling strategy was introduced and it was shown that the GAN can reproduce a simple additive white Gaussian noise channel. Xiao _et al._[4] proposed ChannelGAN, a Wasserstein GAN (WGAN) that can generate the multiple-input multiple-output (MIMO) channel matrix. The MIMO-GAN approach [5] also uses a Wasserstein GAN to model the distribution of channels, however unlike the ChannelGAN paper, they use pairs of input-output signals as their data and the channel impulse response is modeled implicitly by the neural network.
Despite multiple studies that have explored GANs as a way of modeling the channel from data, they have several significant drawbacks that have been highlighted in recent research. One of the most significant challenges with GANs, identified first in the original GAN paper itself [6], is mode collapse, where the generator learns to generate only a limited set of samples, ignoring the rest of the data distribution. This results in a lack of diversity in the generated samples, which can be a major problem for a channel simulator. GANs are also notoriously difficult to train, with many models failing to converge or producing low-quality samples. This instability is due to the adversarial nature of the training process, where the generator and discriminator are constantly trying to outwit each other. The Wasserstein GAN used in the ChannelGAN paper [4] employs a different loss function with a gradient penalty to address this issue and has been shown to be more stable in practice, but does not resolve it entirely.
Denoising diffusion probabilistic models are a class of generative models that have recently gained popularity due to their ability to generate high-quality images, video and audio. Diffusion models are based on the principle of diffusing noise through a sequence of invertible transformations, where the noise is initialized as a standard Gaussian distribution and is gradually transformed into the target data distribution. The process can be reversed to generate new samples from the target distribution. One advantage of diffusion models over
GANs is that they do not require a discriminator network to guide the generator network during training. Instead, diffusion models directly optimize the likelihood of the target data distribution. This is why their training is also much more stable. Ho _et al._[7] compared diffusion models and GANs on several image generation tasks and found that diffusion models outperformed GANs in terms of image quality and diversity, especially for high-resolution images. The authors attributed the superior performance of diffusion models to their ability to capture the full distribution of the data, while GANs only learn a single mode or a subset of the modes of the distribution. In this paper, we propose using diffusion models to learn samples from the distribution of the channel impulse response matrices from data.
The existing literature on this topic also pays little attention to measuring how well the distribution of generated channels matches that of the original data. To compare the distribution of generated channel matrices with the real ones, we first compute the approximate Wasserstein distance between the distributions of real and generated channel power spectra. We also consider precision and recall, which separately capture the fidelity and diversity of the generated samples.
Simulations can generate copious amounts of channel data but the datasets collected from real-world experimental campaigns are limited in size. This is a problem for data-hungry machine learning approaches to channel modelling. Our proposed solution involves transfer learning: we show that we can pre-train the diffusion model on a simulated dataset and fine-tune it on a smaller, out-of-distribution dataset, therefore showing that it is feasible to model real world channels using limited data with this approach.
## II Methods
### _Comparison with Wasserstein GAN_
As discussed in the introduction, WGANs have so far shown promising results in being able to model the distribution characteristics of real channel matrices, so we compare our diffusion models with a WGAN. We borrow the ChannelGAN architecture [4] with minor modifications. For the generator \(G\), it maps a noise vector \(z\) to a generated channel \(H\in\mathbb{R}^{N_{a}\times N_{f^{\prime}}\times 2}\), where \(N_{a}=N_{t}\times N_{r}\) denotes the number of antenna pairs, \(N_{f^{\prime}}\) denotes the number of frequency bins retained (the high frequency portion of the response is cropped off). Firstly, in order to convert from input noise vector to the feature maps, the noise vector \(z\) is processed by a dense layer along with a reshape. Next, five up-sampling blocks are stacked up. Specifically, each up-sampling block is composed of a \(2\times 2\) up-sampling layer (nearest neighbor interpolation), a \(3\times 3\) convolutional layer (Conv) with \(M\) filters and a batch normalization layer. Two optional activation functions are also attached at the tail of each block with the opting flag \(V\in{0,1}\), where \(V=0\) and \(V=1\) indicate utilization of leaky rectified linear unit (LeakyReLU) and hyperbolic tangent function (tanh), respectively. Here five up-sampling blocks are configured with parameters \(M=\{1024,512,256,128,2\}\) and \(V=\{0,0,0,0,1\}\) sequentially. Note that the tanh in the last Up-Sampling Block limits the amplitude of the elements in fake channel in \([-1,1]\) which matches the real channel, and the LeakyReLU in remaining blocks brings the non-linear transformation to the network. Finally, the two-dimensional cropping layer (Cropping2D) is used to maintain the shape of generated fake channel \(H\) consistent with the real channel \(H\).
For the discriminator \(D\), the input is first padded by a two-dimensional zero padding layer (ZeroPadding2D). Then, six down-sampling blocks are stacked, where a \(5\times 5\) Conv with strides of \(2\times 2\) and \(M=\{32,64,128,256,512,1024\}\) filters, a LeakyReLU, and a dropout layer are deployed sequentially. Finally, a flatten operation and a dense layer with a single output are used.
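For reference, the gradient-penalty term mentioned in Section I can be sketched in a few lines of PyTorch. This is a generic WGAN-GP sketch rather than ChannelGAN's exact training code; the penalty weight \(\lambda=10\) and the 4D (batch, channel, height, width) tensor layout are our assumptions.

```python
import torch

def gradient_penalty(D, real, fake, lam=10.0):
    # Penalize deviations of the critic's gradient norm from 1 on random
    # interpolates between real and generated channels (the WGAN-GP term).
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mix = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    d_mix = D(mix)
    grads = torch.autograd.grad(outputs=d_mix, inputs=mix,
                                grad_outputs=torch.ones_like(d_mix),
                                create_graph=True)[0]
    return lam * ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()

# critic loss:    D(fake).mean() - D(real).mean() + gradient_penalty(D, real, fake)
# generator loss: -D(fake).mean()
```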
### _Denoising diffusion probabilistic models_
Given data \(\textbf{x}_{0}\) drawn from a distribution \(q\), one can define a forward diffusion process by adding noise stepwise. The forward diffusion process is performed for \(T\) steps:
\[\textbf{x}_{t+1}=\sqrt{1-\beta_{t}}\textbf{x}_{t}+\sqrt{\beta_{t}}\epsilon_{t}\]
where \(\textbf{x}_{t}\) is the noisy datapoint at step \(t\), \(\textbf{x}_{t+1}\) is the noisy datapoint at step \(t+1\), \(\beta_{t}\) is the noise level at step \(t\) which is determined according to a cosine noising schedule [13] and \(\epsilon_{t}\) is a Gaussian noise with zero mean and unit variance.
As \(T\) becomes large, the latent \(\textbf{x}_{T}\) approaches an isotropic Gaussian distribution. Therefore, if we manage to learn the reverse distribution \(q(\textbf{x}_{t-1}|\textbf{x}_{t})\) using a neural network, we can sample \(\textbf{x}_{T}\) from the unit normal distribution, run the reverse process and acquire a sample from \(q(\textbf{x}_{0})\), generating novel data points from the original data distribution.
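To make the forward process concrete, the following NumPy sketch draws \(\textbf{x}_{t}\) directly from the closed-form marginal \(q(\textbf{x}_{t}|\textbf{x}_{0})\) implied by the recursion above, with \(\bar{\alpha}_{t}=\prod_{s\leq t}(1-\beta_{s})\) induced by the cosine schedule of [13]. The tensor shape and \(T=1000\) are illustrative values, not necessarily the paper's settings.

```python
import numpy as np

def cosine_alpha_bar(T, s=0.008):
    # Cumulative signal level of the cosine schedule [13]:
    # alpha_bar(t) = f(t)/f(0) with f(t) = cos^2(((t/T + s)/(1 + s)) * pi/2);
    # the per-step beta_t follows as 1 - alpha_bar(t)/alpha_bar(t-1).
    t = np.arange(T + 1)
    f = np.cos(((t / T + s) / (1 + s)) * np.pi / 2) ** 2
    return f / f[0]

def q_sample(x0, t, alpha_bar, rng):
    # Closed-form marginal q(x_t | x_0): composing the Gaussian steps gives
    # x_t = sqrt(alpha_bar_t) x_0 + sqrt(1 - alpha_bar_t) eps.
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps, eps

rng = np.random.default_rng(0)
abar = cosine_alpha_bar(T=1000)
x0 = rng.standard_normal((64, 64, 2))   # one (N_a, N_f', 2) channel tensor
xt, eps = q_sample(x0, t=500, alpha_bar=abar, rng=rng)
# eps is the regression target the denoising network is trained to predict
```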
A neural network with a U-Net architecture (Figure 2) is used to predict the added noise \(\epsilon_{t}\) given a noisy input. The U-Net is trained to minimize the mean squared error between the predicted noise and the true noise. The architecture consists of a contracting path and an expanding path. The contracting path consists of a series of convolutional layers with max-pooling. The expanding path consists of a series of convolutional layers with up-sampling. The contracting and expanding paths are connected by skip connections. The skip connections enable the network to learn both low-level and high-level features.
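A heavily reduced sketch of such a denoiser is shown below. The paper does not specify its exact layer counts or timestep conditioning, so the sizes here are illustrative and the timestep is injected as an extra constant input channel, which is our simplification.

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    # two 3x3 convs with ReLU, the shared building block of both U-Net paths
    def __init__(self, cin, cout):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
            nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

    def forward(self, x):
        return self.net(x)

class TinyUNet(nn.Module):
    def __init__(self, ch=2, base=32):
        super().__init__()
        self.down1 = Block(ch + 1, base)          # +1 for the timestep channel
        self.down2 = Block(base, 2 * base)
        self.pool = nn.MaxPool2d(2)
        self.mid = Block(2 * base, 2 * base)
        self.up2 = nn.ConvTranspose2d(2 * base, 2 * base, 2, stride=2)
        self.dec2 = Block(4 * base, base)         # concat with skip from down2
        self.up1 = nn.ConvTranspose2d(base, base, 2, stride=2)
        self.dec1 = Block(2 * base, base)         # concat with skip from down1
        self.out = nn.Conv2d(base, ch, 1)         # predicted noise epsilon

    def forward(self, x, t):
        # inject the (normalized) timestep as a constant extra channel
        tmap = t.view(-1, 1, 1, 1).expand(-1, 1, x.shape[2], x.shape[3])
        d1 = self.down1(torch.cat([x, tmap], 1))
        d2 = self.down2(self.pool(d1))
        m = self.mid(self.pool(d2))
        u2 = self.dec2(torch.cat([self.up2(m), d2], 1))
        u1 = self.dec1(torch.cat([self.up1(u2), d1], 1))
        return self.out(u1)

net = TinyUNet()
x, t = torch.randn(4, 2, 64, 64), torch.rand(4)
assert net(x, t).shape == x.shape   # predicts noise of the same shape
```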
### _Fine-tuning the pretrained model_
Few-shot generation seeks to generate more data of a given domain, with only few available training examples. As it is unreasonable to expect to fully infer the distribution of real channels from just a few observations, we seek to leverage a large, related source domain (simulated urban macro data) for pre-training. Thus, we wish to preserve the diversity of the source domain, while adapting to the appearance of the target. We adapt our pre-trained model, without introducing any additional parameters, to the fewer examples of the target domain, by training on the new data using a lower learning rate. Crucially, we regularize the changes of the weights during this adaptation, in order to best preserve the information of the source dataset, while fitting the target. We demonstrate the effectiveness of this approach by generating high-quality samples from the urban microcellular scenario.
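One simple way to realize this regularized adaptation, sketched below, is an L2 anchor to the pretrained weights combined with a reduced learning rate; the quadratic form and the coefficient are our illustrative choices, since the paper does not pin down the exact regularizer.

```python
import torch

def finetune_loss(model, theta_pre, task_loss, reg=1e-4):
    # Anchor the weights to the pretrained solution while fitting the target
    # domain; together with a lower learning rate this preserves the source
    # information. (The L2 form and reg value are illustrative choices.)
    penalty = sum(((p - theta_pre[name]) ** 2).sum()
                  for name, p in model.named_parameters())
    return task_loss + reg * penalty

# theta_pre = {k: v.detach().clone() for k, v in model.named_parameters()}
# captured once, before fine-tuning begins.
```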
### _Performance metrics_
For generative models, it is important to quantify both the fidelity and the diversity of the generated data individually. Let \(P_{g}\) denote the generated channels and \(P_{r}\) denote the test dataset. We compute the normalized power spectra of the channels in the frequency domain and the antenna domain representations. We fit a multivariate Gaussian to the generated and real power spectra. Let (\(\mu_{g}\), \(\Sigma_{g}\)) and (\(\mu_{r}\), \(\Sigma_{r}\)) denote the mean and covariances of the two Gaussians. The metric we will use to measure how close the real and generated distributions are will be the Wasserstein-2 distance between these two distributions:
\[W_{2}(P_{g},P_{r})=||\mu_{g}-\mu_{r}||^{2}+\text{Tr}(\Sigma_{g}+\Sigma_{r}-2( \Sigma_{r}^{1/2}\Sigma_{g}\Sigma_{r}^{1/2})^{1/2}) \tag{1}\]
However, this approximate Wasserstein distance is an imperfect measure: it uses a Gaussian approximation to a non-Gaussian distribution, and it ignores the phase of the complex channel response. It also fails to disentangle the accuracy and variety of the generated channels. A metric that better captures the fidelity and diversity of the generated channels is the improved precision and recall metric proposed by Kynkaanniemi _et al._[14]. The key idea is to draw an equal number of samples from the real and generated distributions and embed them into a lower-dimensional feature space using a pre-trained autoencoder network. We then calculate the pairwise Euclidean distances between all feature vectors in the set and, for each feature vector, form a hypersphere with radius equal to the distance to its nearest neighbor. This hypersphere defines a volume in the feature space that serves as an estimate of the true manifold. To determine precision, we query for each generated channel whether it lies within the estimated manifold of real channels. For recall, we query for each real channel whether it lies within the estimated manifold of generated channels.
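Both metrics are straightforward to compute; a NumPy/SciPy sketch follows (the brute-force \(O(n^{2})\) distance computation and the \(k=1\) neighbourhood are our simplifications of [14]).

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2(mu_g, cov_g, mu_r, cov_r):
    # Squared 2-Wasserstein distance between the two Gaussian fits, eq. (1).
    sr = sqrtm(cov_r)
    cross = sqrtm(sr @ cov_g @ sr)
    return float(np.sum((mu_g - mu_r) ** 2)
                 + np.trace(cov_g + cov_r - 2.0 * np.real(cross)))

def precision_recall(feat_real, feat_gen):
    # 1-NN hypersphere manifold estimate of [14]; returns (precision, recall).
    def radii(X):
        d = np.linalg.norm(X[:, None] - X[None], axis=-1)
        np.fill_diagonal(d, np.inf)
        return d.min(axis=1)        # distance to nearest neighbour in own set
    def covered(X, Y):              # fraction of Y inside X's estimated manifold
        d = np.linalg.norm(Y[:, None] - X[None], axis=-1)
        return float((d <= radii(X)[None, :]).any(axis=1).mean())
    return covered(feat_real, feat_gen), covered(feat_gen, feat_real)
```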
## III Dataset description
The channel dataset is generated from a system-level simulator (SLS) with a 3D stochastic channel model. The channel generation procedure follows the steps described in TR 38.901 [15]. A total of 120 User Equipments (UEs) are dropped in a hexagonal network layout and associated with the serving Base Station (BS) with maximum reference signal received power (RSRP). For each UE-BS link, the small-scale parameters of the channel angle (AoD, AoA, ZoD, ZoA) and delay information are created independently. Based on these parameters,
Fig. 3: Diagram illustrating precision and recall for distributions [14] We denote the distribution of real data with \(P_{r}\) (blue) and the distribution of generated data with \(P_{g}\) (red). Precision is the probability that a random sample from \(P_{g}\) falls within the support of \(P_{r}\). Recall is the probability that a random sample from \(P_{r}\) falls within the support of \(P_{g}\).
Fig. 2: The U-Net architecture used for denoising in our diffusion models
channel impulse responses are generated depending on BS and UE antenna settings. We also considered the spatial consistency procedure defined in TR 38.901. Therefore, the channel data of adjacent UEs are correlated. Two scenarios UMa and UMi channels are used for different training purposes. 32000 samples of channel impulse response matrices from 96 UEs in each scenario are used as training data and 8000 channel matrices from the remaining 24 UEs are used for testing and validation.
To demonstrate that our model can be pre-trained on a corpus of simulated data and fine-tuned on a limited real-world dataset, we fine-tune the diffusion model trained on the urban macro-cell scenario on a dataset of 20000 channel impulse responses from an urban micro-cell simulation (different proportions of this dataset are used in the fine-tuning experiments). Compared to the simulated urban macro channels, the urban micro channels have a different distribution of arrival/departure angles as well as delay spreads, and act as a stand-in for real data.
As shown in Table 1, the inter-site distance (ISD) and BS height of UMa are greater than those of UMi. With wider coverage, UMa is a richly scattered environment with larger channel delay and angle spread characteristics. Both LOS and NLOS conditions are considered in our datasets. At the BS side, we consider a 2D antenna array, \((M,N,P,M_{p},N_{p})=(4,4,2,1,4)\), where \(N\) is the number of columns and \(M\) is the number of antenna elements with the same polarization in each column. \(M_{p}\) and \(N_{p}\) are the numbers of antenna ports per row and column, respectively. \(P=2\) means that dual-polarized antenna elements are used. Therefore, four vertical antenna elements are virtualized to one antenna port in each column. A total of eight antenna ports are used in the channel model.
The MIMO channel impulse response in the frequency domain can be denoted as a complex tensor \(\mathbf{H}\in\mathbb{C}^{N_{t}\times N_{r}\times N_{f}}\), where \(N_{t}\) is the number of transmitting antennae, \(N_{r}\) is the number of receiving antennae and \(N_{f}\) represents the number of frequency bins. For training our generative models, we reshape and crop the tensor so that it becomes a real-valued tensor with two channels \(\in\mathbb{R}^{N_{a}\times N_{f^{\prime}}\times 2}\), where \(N_{a}=N_{t}\times N_{r}\) denotes the number of antenna pairs, \(N_{f^{\prime}}\) denotes the number of frequency bins retained (the high frequency portion of the response is cropped off) and the last dimension with a value of 2 represents the real and imaginary parts of original complex channel, respectively. Our objective is to generate samples of channel matrices \(\mathbf{H}\sim q(\mathbf{H})\) from the true distribution of \(\mathbf{H}\), given a limited number of channel samples as our training data.
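The preprocessing just described amounts to a small reshaping routine; a sketch is given below, where the max-normalization into \([-1,1]\) is our assumption, chosen to match the tanh output range of the generator.

```python
import numpy as np

def to_real_tensor(H, n_keep):
    # H: complex (N_t, N_r, N_f) frequency response.  Crop the high-frequency
    # tail, merge the antenna axes, and split real/imag into two channels,
    # yielding a real (N_a, N_f', 2) array scaled into [-1, 1].
    Nt, Nr, _ = H.shape
    Hc = H[:, :, :n_keep].reshape(Nt * Nr, n_keep)
    out = np.stack([Hc.real, Hc.imag], axis=-1)
    return out / np.abs(out).max()
```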
## IV Results
### _Performance and comparison with WGAN_
In this section, we walk through the evaluation of our diffusion model approach and the Wasserstein GAN. Both models were trained and evaluated using the same urban macrocell channel dataset described in Section 3. In Fig. 5, we compare the training progress of the diffusion model and the Wasserstein GAN by plotting the 2-Wasserstein distance of the antenna-domain power spectra between generated channels and real channels from the test dataset against the number of epochs the model has trained for. As discussed in the methods section, this metric is an approximate measure of how well the distribution of generated channels matches the distribution of real channels. We observe that the diffusion model converges stably to a fairly low Wasserstein distance, but the GAN experiences episodes of periodic instability throughout the training process due to the adversarial nature of the training. As a result, even the GAN with the best set of hyperparameters cannot outperform the diffusion model in terms of 2-Wasserstein distance (refer to Table II).
In Table II, we also compare the performance of the WGAN with the diffusion models in terms of precision and recall, to understand how faithfully each model captures real channels and whether the variety of the generated samples matches that of the data. Unsurprisingly, we find that the WGAN is fairly competitive with the diffusion model when it comes to precision, which measures the fidelity of generated samples. However, the WGAN is much worse when it comes to recall, which corresponds to the diversity of the generated data. This indicates that the Wasserstein GAN has suffered from at least partial mode collapse. This is also apparent from the channel samples generated by each model (Figure 4); the diffusion model generates channels that are visibly more diverse.
### _Performance of pretrained model after fine-tuning_
For the fine-tuning experiments, we use different proportions of urban microcellular data for fine-tuning (\(5\%\), \(10\%\), \(25\%\) and \(100\%\) of the dataset) and compare its performance with a model trained from scratch with the same amount of urban microcellular data. We find that the fine-tuned models achieve considerably better precision and recall, and this effect
is particularly visible when using a small fraction (\(5\%-10\%\)) of the data. Even when using all of the out-of-distribution data, the fine-tuned model maintains a small advantage over the model trained from scratch, which indicates that there is no significant negative transfer for this problem. We also observe that fine-tuning has a more pronounced effect on recall (diversity) than precision (fidelity). The plots of precision and recall in Fig. 6 summarize the results of the fine-tuning experiments. These results are very encouraging and indicate that a model pre-trained on simulated channel data can learn to generate channels from a related but different distribution, such as real-world channels, using much less data.
## V Conclusions
In this paper, a diffusion model based wireless channel modeling framework has been proposed and analyzed. Unlike traditional methods, which involve detailed theoretical analysis and data processing to derive key channel parameters from real measurement data, our new method does not require any domain-specific knowledge or technical expertise, and can obtain the target channel model by directly learning from raw MIMO channel data with a diffusion model, which iteratively denoises isotropic Gaussian noise to generate samples from the true data distribution. Using synthetic channel data from a 3D stochastic model as our proving ground, the distribution of generated channel samples has been compared with that of the real channel data using the approximate Wasserstein distance between power spectra distributions as well as precision and recall metrics from the generative machine learning literature. These metrics help us measure the fidelity and diversity of the generated channels separately. We find that the diffusion models not only generate high-fidelity channels, but also manage to capture the diversity of the data very well, unlike the WGAN used in previous studies for channel modelling. This is an important property for a channel emulator to have, since it needs to be able to emulate all possible channel conditions. We also learned that diffusion models are much less temperamental to train compared to GANs, which suffer from frequent instabilities during training.
Using a simulated out-of-distribution dataset (the Urban Micro scenario) as our stand-in for real channel data, we demonstrated that we can use fine-tuning to help a model pre-trained on simulated data generalize to real world channel data, which have a different distribution. We showed that reaching the same level of precision and recall is possible with the pretrained model using 2-3 times less data compared
Fig. 4: Normalized power spectra of urban macrocellular channel impulse response samples generated by the diffusion model (top row) and the GAN (bottom row). The diffusion model clearly generates more diverse channel samples and this is confirmed by our metrics.
Fig. 5: The evolution of the 2-Wasserstein distance in antenna domain between the real and generated power spectra in the antenna domain for the diffusion model (top) and the WGAN (bottom) during training. The large spikes in the training curve of the WGAN even in later epochs indicate that it suffers from instabilities throughout the training process.
to training from scratch. This is promising as it is much more expensive to collect real channel data, compared to simulations.
Our future work will include more experiments with real MIMO channel datasets and validation of our synthetic channel model on downstream tasks such as CSI feedback, position estimation from channel data and channel equalization. Another interesting challenge is modelling the time-variation of channels. Advances in video diffusion models [16] might hold the key to modelling time-variations in statistical channels.
|
2301.06581 | Report of the 2021 U.S. Community Study on the Future of Particle
Physics (Snowmass 2021) Summary Chapter | The 2021-22 High-Energy Physics Community Planning Exercise (a.k.a.
``Snowmass 2021'') was organized by the Division of Particles and Fields of the
American Physical Society. Snowmass 2021 was a scientific study that provided
an opportunity for the entire U.S. particle physics community, along with its
international partners, to identify the most important scientific questions in
High Energy Physics for the following decade, with an eye to the decade after
that, and the experiments, facilities, infrastructure, and R&D needed to pursue
them. This Snowmass summary report synthesizes the lessons learned and the main
conclusions of the Community Planning Exercise as a whole and presents a
community-informed synopsis of U.S. particle physics at the beginning of 2023.
This document, along with the Snowmass reports from the various subfields, will
provide input to the 2023 Particle Physics Project Prioritization Panel (P5)
subpanel of the U.S. High-Energy Physics Advisory Panel (HEPAP), and will help
to guide and inform the activity of the U.S. particle physics community during
the next decade and beyond. | Joel N. Butler, R. Sekhar Chivukula, André de Gouvêa, Tao Han, Young-Kee Kim, Priscilla Cushman, Glennys R. Farrar, Yury G. Kolomensky, Sergei Nagaitsev, Nicolás Yunes, Stephen Gourlay, Tor Raubenheimer, Vladimir Shiltsev, Kétévi A. Assamagan, Breese Quinn, V. Daniel Elvira, Steven Gottlieb, Benjamin Nachman, Aaron S. Chou, Marcelle Soares-Santos, Tim M. P. Tait, Meenakshi Narain, Laura Reina, Alessandro Tricoli, Phillip S. Barbeau, Petra Merkel, Jinlong Zhang, Patrick Huber, Kate Scholberg, Elizabeth Worcester, Marina Artuso, Robert H. Bernstein, Alexey A. Petrov, Nathaniel Craig, Csaba Csáki, Aida X. El-Khadra, Laura Baudis, Jeter Hall, Kevin T. Lesko, John L. Orrell, Julia Gonski, Fernanda Psihas, Sara M. Simon | 2023-01-16T19:38:52Z | http://arxiv.org/abs/2301.06581v3 | # Report of the 2021 U.S. Community Study
###### Abstract
The 2021-22 High-Energy Physics Community Planning Exercise (a.k.a. "Snowmass 2021") was organized by the Division of Particles and Fields of the American Physical Society. Snowmass 2021 was a scientific study that provided an opportunity for the entire U.S. particle physics community, along with its international partners, to identify the most important scientific questions in High Energy Physics for the following decade, with an eye to the decade after that, and the experiments, facilities, infrastructure, and R&D needed to pursue them. This Snowmass summary report synthesizes the lessons learned and the main conclusions of the Community Planning Exercise as a whole and presents a community-informed synopsis of U.S. particle physics at the beginning of 2023. This document, along with the Snowmass reports from the various subfields, will provide input to the 2023 Particle Physics Project Prioritization Panel (P5) subpanel of the U.S. High-Energy Physics Advisory Panel (HEPAP), and will help to guide and inform the activity of the U.S. particle physics community during the next decade and beyond.
FERMILAB-CONF-23-008
SLAC-PUB-17717
January 2023 |
2308.04845 | Interaction-induced directional transport on periodically driven chains | We study a driven system in which interaction between particles causes their
directional, coupled movement. In that model system, two particles move
alternatingly in time on two coupled chains. Without interaction, both
particles diffuse along their respective chains, independent from one another.
Interaction between them, no matter if attractive or repellent, leads to an
energetic separation of configurations where the particles are close to each
other and those where they are farther separated. The energy difference causes
close-by particles to remain bound together, forming a doublon. Their relative
position in the starting configuration determines whether the doublon moves to
the left or right or remains stationary due to the periodic driving. | Helena Drüeke, Dieter Bauer | 2023-08-09T10:12:51Z | http://arxiv.org/abs/2308.04845v1 | # Interaction-induced directional transport on periodically driven chains
###### Abstract
We study a driven system in which interaction between particles causes their directional, coupled movement. In that model system, two particles move alternatingly in time on two coupled chains. Without interaction, both particles diffuse along their respective chains, independent from one another. Interaction between them, no matter if attractive or repellent, leads to an energetic separation of configurations where the particles are close to each other and those where they are farther separated. The energy difference causes close-by particles to remain bound together, forming a doublon. Their relative position in the starting configuration determines whether the doublon moves to the left or right or remains stationary due to the periodic driving.
## I Introduction
Directional transport in physical systems can be achieved in various ways. The most obvious one is applying an external field, e.g., an electric field that accelerates a charged particle in a particular direction. An alternating electric field can also lead to directional transport. A simple example is an electron emitted at, say, \(t=0\) into a linearly polarized laser field, e.g., by ionization. Depending on the emission time, the electron may drift in opposite directions, parallel to the polarization of the incident laser field. Other ways to achieve directional transport are by topologically protected edge currents through the breaking of time-reversal symmetry, e.g., by a magnetic field or spin-orbit coupling (Hall effect(s) [1; 2; 3; 4]), or by periodic driving and asymmetric potentials ((semi)classical [5; 6] and quantum ratchets [7; 8]). Interactions between the particles will affect the particle dynamics, but as long as the particle interaction is symmetric under particle exchange, one would not expect directional transport to arise. However, in this work, we present a minimal model of a driven two-particle system that shows directional transport due to interaction, even though this interaction is symmetric under particle exchange. Moreover, the drive is spatially symmetric (unlike the laser example above), and no asymmetric potentials are involved (in contrast to the ratchet systems). Instead, the key to directional transport in our system is the alternating driving of the two particles.
While the interaction is always on in our model system, the hopping of each particle is only allowed for half of the driving period. In this case, the initial configuration determines in which direction the bound pair of particles (i.e., doublon) moves. The doublon does not exist without interaction, and the two particles simply diffuse without preferred directionality. The alternating drive where only one of the two particles is allowed to move per half period implies that the two particles are distinguishable and should be independently addressable by external fields. While such quantum systems probably cannot be found in nature, synthetic models exist, such as ultracold atoms in optical lattices [9; 10; 11; 12] or photonic waveguides [13; 14; 15; 16; 17; 18; 19].
The paper consists of the following parts: We introduce the model in Sec. II and explore the behavior of one particle during half its driving period in Sec. III. The doublon dynamics can be conveniently analyzed by mapping onto a 2D system, as discussed in Sec. IV. Finally, we conclude and give an outlook in Sec. V.
Throughout the paper, we use units in which \(\hbar=1\).
## II System
We consider the lattice shown in Fig. 1, consisting of two chains \(a\) and \(b\) of length \(N\) with one particle on each chain (also labeled \(a\) and \(b\)). Each particle may hop along its respective chain; hoppings to the other chain are prohibited. The particles move alternatingly, starting with particle \(a\). The interaction between particles is between nearest neighbors, i.e., across the chains.
The Hamiltonian reads
\[\begin{split}\hat{H}(t)=&\sum_{\langle i,j\rangle} \left(J_{a}(t)\hat{a}_{i}^{\dagger}\hat{a}_{j}+J_{b}(t)\hat{b}_{i}^{\dagger }\hat{b}_{j}\right)\\ &+V\sum_{\langle\langle i,j\rangle\rangle}\hat{n}_{i}^{(a)}\hat{ n}_{j}^{(b)},\end{split} \tag{1}\]
where \(\hat{a}\) and \(\hat{b}\) are annihilation operators on chains \(a\) and \(b\), respectively, \(\hat{a}^{\dagger}\) and \(\hat{b}^{\dagger}\) are the corresponding creation
Figure 1: Chains \(a\) and \(b\) of identical length \(N=4\) (we chose this small \(N\) for illustration purposes, but performed all calculations with much longer chains). The red and black lines indicate the hopping \(J\) of particles \(a\) and \(b\) on their respective chains. Dashed gray lines indicate the interaction \(V\) between nearest-neighbor sites on different chains.
operators, and \(\hat{n}_{i}^{(a)}=\hat{a}_{i}^{\dagger}\hat{a}_{i}\) and \(\hat{n}_{j}^{(b)}=\hat{b}_{j}^{\dagger}\hat{b}_{j}\) are the occupation number operators. \(\langle i,j\rangle\) indicates nearest neighbors within a chain, \(\langle\langle i,j\rangle\rangle\) nearest neighbors across the chains.
The hoppings \(J_{a,b}(t)\) are assumed to be periodic with a period \(T\) and piece-wise constant,
\[J_{a}(t) =\begin{cases}J&0\leq t<T/2\\ 0&T/2\leq t<T\end{cases} \tag{2a}\] \[J_{b}(t) =\begin{cases}0&0\leq t<T/2\\ J&T/2\leq t<T.\end{cases} \tag{2b}\]
We set \(J=1\) in all plots throughout this publication. With the labelling in Fig. 1, we can write
\[\hat{H}(t)= \sum_{i=1}^{N-1}\bigg{(}\left(J_{a}(t)\hat{a}_{i}^{\dagger}\hat{ a}_{i+1}+J_{b}(t)\hat{b}_{i}^{\dagger}\hat{b}_{i+1}\right)+\text{h.c.} \tag{3}\] \[\qquad\quad+V\left(\hat{n}_{i}^{(a)}\hat{n}_{i+1}^{(b)}+\hat{n}_ {i+1}^{(a)}\hat{n}_{i}^{(b)}\right)\bigg{)}\] \[+V\sum_{i=1}^{N}\hat{n}_{i}^{(a)}\hat{n}_{i}^{(b)}\]
## III Movement of one particle during a half period
We investigate particle \(a\)'s movement on its chain \(a\) during the first half-period (\(0\leq t<T/2\)). Particle \(a\) starts in site \(i\) and propagates. Particle \(b\) is located in site \(j\) and remains stationary during this time.
The Hamiltonian during this phase
\[\hat{H}=\hat{H}_{J}+\hat{H}_{V} \tag{4}\]
consists of two parts, one describing the hopping
\[\hat{H}_{J}=\text{tridiag}(J,0,J) \tag{5}\]
and one describing the interaction
\[\hat{H}_{V}=(v_{k,l}) \tag{6}\]
\[v_{k,l}=\begin{cases}V&k=l\in\{j-1,j,j+1\}\\ 0&\text{else}\end{cases} \tag{7}\]
on sites neighboring the position \(j\) of particle \(b\).
### \(V\gg J\)
Assuming \(|i-j|\leq 1\) and a strong potential \(V\gg J\), particle \(a\) is confined to the three sites \(j-1\), \(j\), and \(j+1\) due to the energetic separation of these states from the others. The \(N\times N\) Hamiltonian (4) then reduces to a \(3\times 3\) matrix on these three states,
\[\hat{H}=\begin{pmatrix}V&J&0\\ J&V&J\\ 0&J&V\end{pmatrix} \tag{8}\]
with eigenenergies
\[E_{0}=V,\quad E_{1,2}=V\pm\sqrt{2}J \tag{9}\]
and eigenstates
\[\varphi_{0}=\begin{pmatrix}1\\ 0\\ -1\end{pmatrix},\quad\varphi_{1,2}=\begin{pmatrix}1\\ \pm\sqrt{2}\\ 1\end{pmatrix}. \tag{10}\]
We can now write any time-dependent state as
\[\psi(t)=\sum_{k=0}^{2}c_{k}\exp(-\mathrm{i}E_{k}t)\varphi_{k}. \tag{11}\]
Figure 2: Probabilities of particle \(a\) as a function of time with \(V\gg J\) (a) for starting position \(i=j\) and (b) for starting position \(i=j-1\). The crosses mark the probabilities at the end of the driving phase \(t_{a}=\frac{\pi}{\sqrt{2}J}\).
#### ii.1.1 \(i=j\)
If particle \(a\) starts at \(i=j\), \(\psi_{i=j}(0)=(0,1,0)^{\mathsf{T}}\), the coefficients are \(c_{0}=0\) and \(c_{1,2}=\pm\frac{1}{2\sqrt{2}}\), resulting in
\[\psi_{i=j}(t)=\frac{\exp(-\mathrm{i}Vt)}{\sqrt{2}\mathrm{i}}\begin{pmatrix} \sin\left(\sqrt{2}Jt\right)\\ \sqrt{2}\mathrm{i}\cos\left(\sqrt{2}Jt\right)\\ \sin\left(\sqrt{2}Jt\right)\end{pmatrix}. \tag{12}\]
The probability is
\[p_{i=j}(t)=|\psi_{i=j}(t)|^{2}=\frac{1}{2}\begin{pmatrix}\sin^{2}\left(\sqrt{2} Jt\right)\\ 2\cos^{2}\left(\sqrt{2}Jt\right)\\ \sin^{2}\left(\sqrt{2}Jt\right)\end{pmatrix}, \tag{13}\]
shown in Fig. 2(a). The particle moves symmetrically from the starting site \(j\) to the left and right neighbors \(j\pm 1\), where it reaches a maximum probability of \(0.5\) at time \(t=\frac{\pi}{2\sqrt{2}J}\) before completely returning to site \(j\) at \(t=\frac{\pi}{\sqrt{2}J}\).
#### ii.1.2 \(i=j-1\)
If particle \(a\) starts at \(i=j-1\), \(\psi_{i=j-1}(0)=(1,0,0)^{\mathsf{T}}\), the coefficients are \(c_{0}=\frac{1}{2}\) and \(c_{1,2}=\frac{1}{4}\), resulting in
\[\psi_{i=j-1}(t)=\frac{\exp(-\mathrm{i}Vt)}{2}\begin{pmatrix}1+\cos\left(\sqrt{ 2}Jt\right)\\ -\sqrt{2}\mathrm{i}\sin\left(\sqrt{2}Jt\right)\\ -1+\cos\left(\sqrt{2}Jt\right)\end{pmatrix}. \tag{14}\]
The probability is
\[p_{i=j-1}(t)=\begin{pmatrix}\cos^{4}\left(Jt/\sqrt{2}\right)\\ \sin^{2}\left(\sqrt{2}Jt\right)/2\\ \sin^{4}\left(Jt/\sqrt{2}\right)\end{pmatrix}, \tag{15}\]
shown in Fig. 2(b).
We choose \(t_{a}=\frac{\pi}{\sqrt{2}J}\) to achieve a complete transfer of particle \(a\) from site \(j-1\) to site \(j+1\). Particle \(a\) leapfrogs over particle \(b\) from its left to right neighbor. If we choose the timing of the second phase of the driving cycle as \(t_{b}=\frac{\pi}{\sqrt{2}J}\), particle \(b\) will leapfrog over particle \(a\), leading to directional transport. Effectively, both particles move two sites to the right without spreading. Fig. 3 shows the probabilities \(p_{a}(t)\) and \(p_{b}(t)\) for the complete cycle. Fig. 4 shows a sketch of the particles' movement.
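These closed-form probabilities at \(t_{a}\) are easy to verify against the full \(3\times 3\) propagator of eq. (8); a minimal numerical check:

```python
import numpy as np
from scipy.linalg import expm

J, V = 1.0, 1.0        # V only adds a global phase in the reduced 3x3 model
H3 = np.array([[V, J, 0.0], [J, V, J], [0.0, J, V]])
U = expm(-1j * H3 * np.pi / (np.sqrt(2.0) * J))        # propagator over t_a
print(np.abs(U @ np.array([0.0, 1.0, 0.0])) ** 2)      # ~ [0, 1, 0], eq. (13)
print(np.abs(U @ np.array([1.0, 0.0, 0.0])) ** 2)      # ~ [0, 0, 1], eq. (15)
```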
#### ii.1.3 \(i=j+1\)
If particle \(a\) starts at \(i=j+1\), \(\psi_{i=j+1}(0)=(0,0,1)^{\mathsf{T}}\), it will analogously leapfrog over particle \(b\) to site \(j-1\), resulting in directional transport to the left.
### \(V=0\)
For potential \(V=0\), the position \(j\) of particle \(b\) does not influence particle \(a\)'s movement. The Hamiltonian (4) simplifies to
\[\hat{H}=\hat{H}_{J}=\mathrm{tridiag}(J,0,J) \tag{16}\]
As shown in Fig. 5, particle \(a\) spreads symmetrically to the left and the right.
Figure 4: Sketch of the leapfrogging movement of particles \(a\) and \(b\) during a complete driving cycle. During the first phase (\(0\leq t<T/2\)), particle \(a\) jumps over particle \(b\) and two sites to the right. Then, during the second phase (\(T/2\leq t<T\)), particle \(b\) jumps over particle \(a\) and two sites to the right.
Figure 3: Probabilities of particles \(a\) and \(b\) as a function of time. Particle \(a\) moves from site \(i\) via site \(i+1\) to site \(i+2\) during the first phase, then particle \(b\) moves from site \(i+1\) via site \(i+2\) to site \(i+3\) during the second phase.
### \(V\neq 0\)
For potential \(V\neq 0\) but not \(V\gg J\), we must use the whole Hamiltonian (4) to describe the system.
A video in the supplemental material shows the evolution of the probabilities for increasing interaction \(V\), going from the spreading at \(V=0\) shown in Fig. 5 to the periodic returns at \(V\gg J\) shown in Fig. 2. We are mainly interested in the probabilities at the end of phase \(a\), \(t_{a}=\frac{\pi}{\sqrt{2}J}\). These are marked by crosses in Figs. 2 and 5. Fig. 6 shows the probabilities \(p(t_{a})\) as a function of the interaction \(V\). Even at relatively small interactions \(V\gtrapprox 6\), the initial configuration \(i=j\) remains stationary, \(p_{i=j}(t_{a})\approx 1\). The leapfrogging state (starting at \(i=j\pm 1\)) needs higher interaction strengths \(V\gtrapprox 20\) to remain localized (\(p_{j\mp 1}(t_{a})\approx 1\)) while jumping from site \(j\pm 1\) to site \(j\mp 1\).
## IV Mapping to 2D
We map the two chains to a square grid, as shown in Fig. 7. The positions of particles \(a\) and \(b\) are plotted along the horizontal and vertical directions, respectively.
### Interacting subsystem for \(V\gg J\)
For strong interactions \(V\gg J\), interacting states (located on sites marked by crosses in Fig. 7) are energetically separated from non-interacting states (located on sites marked by dots). If the initial state is interacting, it will remain an interacting state. Hence, for \(V\gg J\), we only need to consider a subset of the 2D system, as
Figure 5: Probabilities of particle \(a\) in different sites as a function of time with \(V=0\). The crosses mark the probabilities at the end of the driving phase \(t_{a}=\frac{\pi}{\sqrt{2}J}\).
Figure 6: Probabilities of particle \(a\) at time \(t_{a}=\frac{\pi}{\sqrt{2}J}\) as a function of interaction \(V\) (a) for starting position \(i=j\) and (b) for starting position \(i=j-1\).
Figure 7: Mapping of the Hamiltonian to a 2D lattice with the two particles’ indices along the two axes (\(a\)-chain index \(s\) at the \(x\) axis, \(b\)-chain index \(t\) at the \(y\) axis). Red and black lines connecting sites indicate the hoppings \(J_{a}(t)\) and \(J_{b}(t)\). Gray crosses indicate the combinations of lattice sites for which the interaction potential is non-vanishing, i.e., \((s,t)=(1,1),(1,2),(2,1),(2,2),(2,3),(3,2),(3,3),\dots\).
shown in Fig. 8. The unit cell \(m\) contains three sites, labeled by the difference of positions \(a\) and \(b\): \(1\), \(0\), and \(-1\). This reduced system is quasi-1D, effectively a three-site wide ribbon.
#### iii.2.1 Stationary states
A state initially located at site \((m,0)\) will split towards sites \((m-1,-1)\) and \((m,1)\) during the first phase, returning to \((m,0)\) at the end of the phase, \(t_{a}=\frac{\pi}{\sqrt{2}J}\). During the second phase, it will equivalently split towards sites \((m,-1)\) and \((m-1,1)\) before returning to \((m,0)\) at the end of the driving cycle \(T=t_{a}+t_{b}=\frac{\sqrt{2}\pi}{J}\). The state appears to be stationary when looking stroboscopically after complete driving cycles.
#### iii.2.2 Leapfrogging states
A state starting in site \((m,\pm 1)\) moves to site \((m\mp 1,\mp 1)\) during the first phase and then to site \((m\mp 2,\pm 1)\) during the second phase. The states move two unit cells in each cycle.
#### iii.2.3 Reflection at the corner
The two preceding paragraphs described the evolution of states in an infinite system or the bulk of finite chains. Now, we will investigate the effects of borders. Fig. 8 shows the bottom left corner, with the complete unit cell \(m=1\). The upper right corner is a partial unit cell \(m=N\), containing only the site \((N,0)\) with sites \((N,\pm 1)\) absent.
For the Hamiltonian at the edge, one needs to consider only two sites during each driving phase (instead of three for the bulk),
\[\hat{H}=\begin{pmatrix}V&J\\ J&V\end{pmatrix}. \tag{17}\]
The eigenenergies are
\[E_{1,2}=V\pm J, \tag{18}\]
and the eigenstates are
\[\varphi_{1,2}=\begin{pmatrix}1\\ \pm 1\end{pmatrix}. \tag{19}\]
We can now write any time-dependent state during that driving phase as
\[\psi(t)=\sum_{k=1}^{2}c_{k}\exp(-\mathrm{i}E_{k}t)\varphi_{k}. \tag{20}\]
Without loss of generality, we initialize the state as \(\psi(0)=(1,0)^{\mathsf{T}}\). The coefficients become \(c_{1}=c_{2}=1/2\), resulting in
\[\psi(t)=\exp(-\mathrm{i}Vt)\begin{pmatrix}\cos(Jt)\\ -\mathrm{i}\sin(Jt)\end{pmatrix} \tag{21}\]
and the probability
\[p(t)=\left|\psi(t)\right|^{2}=\begin{pmatrix}\cos^{2}(Jt)\\ \sin^{2}(Jt)\end{pmatrix}. \tag{22}\]
Compared to the three-site Hamiltonian in section III, the oscillation frequency of the two-site Hamiltonian is decreased from \(\sqrt{2}J\) to \(J\). Therefore, at the end of the phase \(t_{a}=\frac{\pi}{\sqrt{2}J}\), the state is incompletely transferred from one site to the next.
\[p\left(t_{a}\right)=\begin{pmatrix}\cos^{2}\left(\pi/\sqrt{2}\right)\\ \sin^{2}\left(\pi/\sqrt{2}\right)\end{pmatrix}\approx\begin{pmatrix}0.3669\\ 0.6331\end{pmatrix}. \tag{23}\]
The corner influences the stationary state starting at site \((1,0)\). It leaks into \((1,1)\) in the first phase, from where it continues to \((2,-1)\) in the second phase. It also leaks into \((1,-1)\) in the second phase. The stationary state sends out leapfrogging states until it vanishes. Here, we described the edge at \(m=1\), but the behavior at the other edge is equivalent.
The leapfrogging states split up when they run into an edge, similar to the stationary states.
#### iii.2.4 Interpretation as a spin-1 system
Labeling sites in the unit cell as \(-1\), \(0\), and \(1\) already suggests an analogy to a spin-1 system. The leapfrogging states undergo a spin-flip operation from \(\pm 1\) to \(\mp 1\)
Figure 8: Lattice on which the doublon dynamics takes place if \(V\gg J_{a,b}\), with new labeling and the unit cell indicated in green.
in each phase, accompanied by a spatial movement. The spin-0 states are unaffected by the spin flip and remain in the same location. Although there is a similarity to the quantum spin Hall effect in the sense that the transport direction depends on spin, there are essential differences: besides the presence of a third spin degree of freedom, 0, which is not transported, the spin flips during transport in our model system.
### Band structure
To calculate a band structure, we use a unit cell (shown in Fig. 9) which contains non-diagonal sites in addition to the three diagonal sites. The sites are numbered \(\alpha=1,2,3,\ldots,S\) with even \(S\). The unit cell is repeated infinitely in one direction and numbered by an index \(m\). We employ periodic boundary conditions in the other, finite direction, connecting the left and right edges of the unit cell. While this periodicity does not exist in the complete 2D system, the alternative would create diagonal edges, which do not exist in the 2D square system since there are only horizontal and vertical edges. The edge states at these diagonal edges would obfuscate the bands we are interested in.
We can write the Hamiltonians for the two phases of the driving cycle in real space as
\[\begin{split}\hat{H}_{i}=\sum_{m}&\left(J\sum_{ \alpha\text{ odd}}\left(\hat{h}_{i}(m,\alpha)+\text{h.c.}\right)\right.\\ &\left.\qquad\qquad+V\sum_{\alpha=1}^{3}|m,\alpha\rangle\langle m,\alpha|\right)\end{split} \tag{24}\]
with
\[\begin{split}\hat{h}_{a}(m,\alpha)=&\ |m,\alpha\rangle \langle m,(\alpha-1)\mod S|\\ &+|m,\alpha\rangle\langle m+1,(\alpha+1)\mod S|\\ &\hat{h}_{b}(m,\alpha)=&\ |m,\alpha\rangle\langle m,(\alpha+1)\mod S|\\ &+|m,\alpha\rangle\langle m+1,(\alpha-1)\mod S|.\end{split} \tag{25}\]
We transform the Hamiltonians to \(k\)-space by making the Bloch ansatz [20]
\[|m,\alpha\rangle=\frac{a}{2\pi}\int_{\text{BZ}}\mathrm{d}k\,\exp(-\mathrm{i} kma)\,|k,\alpha\rangle, \tag{26}\]
where \(a\) is the lattice constant in the vertical direction in Fig. 9. We obtain
\[\begin{split}\hat{H}_{i}=\frac{a}{2\pi}\int_{\text{BZ}}\mathrm{d }k\,|k\rangle\langle k|&\left(J\sum_{\alpha\text{ odd}}\left(\hat{h}_{i}(k,\alpha)+\text{h.c.}\right)\right.\\ &+V\sum_{\alpha=1}^{3}|\alpha\rangle\langle\alpha|\right)\end{split} \tag{27}\]
with
\[\begin{split}\hat{h}_{a}(k)=&\ |\alpha\rangle \langle\alpha-1|+\exp(\mathrm{i}ka)|\alpha\rangle\langle\alpha+1|\\ \hat{h}_{b}(k)=&\ |\alpha\rangle\langle\alpha+1|+\exp( \mathrm{i}ka)|\alpha\rangle\langle\alpha-1|.\end{split} \tag{28}\]
The time evolution operator is (in units where \(\hbar=1\))
\[\hat{U}(T)=\exp\left(\frac{T}{2\mathrm{i}}\hat{H}_{b}\right)\exp\left(\frac{T}{2\mathrm{i}}\hat{H}_{a}\right). \tag{29}\]
Solving the equation
\[\hat{U}(T)\psi_{\text{F}}=\lambda_{\text{F}}\psi_{\text{F}} \tag{30}\]
gives the Floquet [21] eigenstates \(\psi_{\text{F}}\), and the Floquet energies \(\varepsilon_{\text{F}}\) are calculated from the eigenvalues \(\lambda_{\text{F}}=\exp(-\mathrm{i}\varepsilon_{\text{F}}T)\).
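A compact numerical sketch of this band-structure computation, implementing eqs. (28)-(30) directly, is given below; \(S=20\), \(V=10\), and the \(k\)-grid resolution are illustrative choices matching Fig. 10, and we set the lattice constant \(a=1\).

```python
import numpy as np
from scipy.linalg import expm

def floquet_bands(S=20, V=10.0, J=1.0, nk=101):
    # Quasi-energies of U(T) = exp(-i H_b T/2) exp(-i H_a T/2), eqs. (28)-(30),
    # for an S-site unit cell with periodic boundaries; lattice constant a = 1.
    T = np.sqrt(2.0) * np.pi / J
    ks = np.linspace(-np.pi, np.pi, nk)
    bands = np.empty((nk, S))
    for n, k in enumerate(ks):
        Ha = np.zeros((S, S), dtype=complex)
        Hb = np.zeros((S, S), dtype=complex)
        for al in range(0, S, 2):           # alpha = 1, 3, ... of eq. (24)
            Ha[al, (al - 1) % S] += J       # intra-cell hop
            Ha[al, (al + 1) % S] += J * np.exp(1j * k)   # inter-cell, eq. (28)
            Hb[al, (al + 1) % S] += J
            Hb[al, (al - 1) % S] += J * np.exp(1j * k)
        Ha += Ha.conj().T
        Hb += Hb.conj().T
        for al in range(3):                 # interaction on the diagonal sites
            Ha[al, al] += V
            Hb[al, al] += V
        U = expm(-1j * Hb * T / 2) @ expm(-1j * Ha * T / 2)
        # lambda_F = exp(-i eps_F T)  =>  eps_F = -arg(lambda_F) / T
        bands[n] = np.sort(-np.angle(np.linalg.eigvals(U)) / T)
    return ks, bands

ks, bands = floquet_bands()   # reproduces the layout of Fig. 10
```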
Figure 10: Band structure of a 20-site wide strip with \(V=10\). The red bands reside on the three sites with the modified potential, and the black bands on other sites.
Figure 9: Unit cell of the \(45^{\circ}\)-rotated system. The sites within it are numbered by \(\alpha=1,2,3,\ldots,S\) with even \(S\). The crosses mark the diagonal sites, and the dots mark the non-diagonal sites. The unit cell is infinitely repeated in the vertical direction and numbered by the index \(m\). The height of the unit cell, \(a\), is marked. Periodic boundaries are employed horizontally, connecting sites \(S\) and \(1\) within the same unit cell.
The resulting band structure in Fig. 10 confirms our previous observations on the behavior of the doublons. They are located on the three sites \(\alpha=1,2,3\), and Floquet eigenstates where this is the case are drawn in red in Fig. 10. One of these doublon bands is quite flat, corresponding to the stationary doublons. The two sloped bands correspond to doublons moving in opposite directions along the diagonal. The other bands are shown in black and form a continuum for \(N\rightarrow\infty\). These bands are the diffusing states.
Depending on the potential \(V\), some diffusing bands have non-zero energy at the center of the Brillouin zone, \(\varepsilon_{\mathrm{F}}\left(k=\frac{\pi}{a}\right)\neq 0\). These are edge states, localized at the boundary between \(\alpha=3\) and \(4\), and between \(\alpha=S\) and \(1\).
Fig. 11 shows the Floquet energies \(\varepsilon_{\mathrm{F}}\left(k=\frac{\pi}{a}\right)\) as a function of potential \(V\). The bulk states are at constant \(\varepsilon_{\mathrm{F}}\left(k=\frac{\pi}{a}\right)=0\). The energies of the doublons increase linearly with \(V\), as indicated by the reddish shadow \(\varepsilon_{\mathrm{F}}=V\). The energies of the edge states show an interesting behavior: They have a tilted pole at \(V\approx 3\), where they approach the doublon energies. At higher potentials, they approach the energy of the bulk states, \(\lim_{V\rightarrow\infty}\varepsilon_{\mathrm{F}}\left(k=\frac{\pi}{a}\right)=0\). There are crossings between the doublon and edge state energies. We have checked that they are avoided crossings by following the Floquet eigenstates.
## V Conclusion
We investigated two particles on two linear chains in a periodic driving scheme and showed how their interaction influences their temporal evolution. Without interaction, both particles diffuse. With sufficiently strong interaction, they can form a stationary bound state which remains localized without diffusing. They can also form non-stationary non-diffusing states, which propagate in a leapfrogging manner. The relative position of the two particles in the starting configuration determines their behavior. Observing the evolution of the two particles could allow us to measure the strength of the interaction between them and their initial locations.
A possible extension of the system would be going from linear chains to two-dimensional grids on which the particles move. The added dimension would enable vertical and diagonal movement of the particles in addition to the horizontal one on the chains.
|
2307.05306 | All two-dimensional expanding Ricci solitons | The second author and H. Yin have developed a Ricci flow existence theory
that gives a complete Ricci flow starting with a surface equipped with a
conformal structure and a nonatomic Radon measure as a conformal factor. This
led to the discovery of a large array of new expanding Ricci solitons. In this
paper we use the recent uniqueness theory in this context, also developed by
the second author and H. Yin, to give a complete classification of all
expanding Ricci solitons on surfaces. Along the way, we prove a converse to the
existence theory that is not constrained to solitons: every complete Ricci flow
on a surface over a time interval $(0,\varepsilon)$ admits a $t\downarrow 0$
limit within the class of admissible initial data. This makes surfaces the
first nontrivial setting for Ricci flow in which a bijection can be given
between the entire set of complete Ricci flows over maximal time intervals
$(0,T)$, and a class of initial data that induces them. | Luke T. Peachey, Peter M. Topping | 2023-07-11T14:55:44Z | http://arxiv.org/abs/2307.05306v3 | # All Two-Dimensional Expanding Ricci Solitons
###### Abstract
The second author and H. Yin [19] have developed a Ricci flow existence theory that gives a complete Ricci flow starting with a surface equipped with a conformal structure and a nonatomic Radon measure as a conformal factor. This led to the discovery of a large array of new expanding Ricci solitons [19]. In this paper we use the recent uniqueness theory in this context, also developed by the second author and H. Yin [20], to give a complete classification of all expanding Ricci solitons on surfaces. Along the way, we prove a converse to the existence theory: every complete Ricci flow on a surface over a time interval \((0,\varepsilon)\) admits a \(t\downarrow 0\) limit within the class of admissible initial data.
## 1 Introduction
Amongst all Ricci flows, the ones that evolve in a self-similar manner, modulo scaling and reparametrisation, are distinguished. These so-called Ricci soliton flows can be classified as shrinking, steady or expanding. Shrinking solitons arise in the study of finite-time singularities of Ricci flow (see e.g. [7], [16, SS11], [1]), while expanding solitons are central to our understanding of both the large-time behaviour of the flow (see e.g. [11, Conjecture 16.8], [3], [14]) and the small-time asymptotics of the flow as it desingularises rough initial data (see e.g. [10], [17], [6], [9], [13]).
Given a complete smooth Riemannian manifold\({}^{1}\)\((M,g)\) and a complete vector field \(X\) on \(M\), let \(\{\phi_{t}:M\to M\}_{t>0}\) be the family of diffeomorphisms generated by the time-dependent vector field \(-\frac{X}{t}\), with \(\phi_{1}=\mathrm{id}_{M}\). Defining \(g(t):=t\phi_{t}^{*}(g)\) for \(t>0\), we can compute
Footnote 1: All manifolds are assumed implicitly to be connected.
\[\frac{\partial}{\partial t}g(t)=\phi_{t}^{*}(g)-\phi_{t}^{*}\mathcal{L}_{X}(g)=\phi_{t}^{*}\big{[}g-\mathcal{L}_{X}(g)+2\mathrm{Ric}(g)\big{]}-2\mathrm{Ric}(g(t)).\]
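(For the first equality, note that \(\phi_{t}\) is generated by \(-\frac{X}{t}\), so that

\[\frac{d}{dt}\phi_{t}^{*}(g)=\phi_{t}^{*}\big{(}\mathcal{L}_{-X/t}\,g\big{)}=-\frac{1}{t}\phi_{t}^{*}\mathcal{L}_{X}(g);\]

for the second, add and subtract \(2\mathrm{Ric}(g)\) inside the pullback and use the diffeomorphism and scale invariance of the Ricci tensor, \(\phi_{t}^{*}\mathrm{Ric}(g)=\mathrm{Ric}(\phi_{t}^{*}g)=\mathrm{Ric}(t\phi_{t}^{*}g)=\mathrm{Ric}(g(t))\).)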
We thus see that such a \(g(t)\) is a Ricci flow if and only if \((M,g,X)\) is an expanding Ricci soliton in the following sense.
**Definition 1.1**.: _An expanding Ricci soliton is a triple \((M,g,X)\) where \((M,g)\) is a complete smooth Riemannian manifold, and \(X\) is a complete smooth vector field on \(M\), such that_
\[2\mathrm{Ric}(g)-\mathcal{L}_{X}(g)+g=0.\]
_We call the soliton trivial if \(X\) is a Killing field._
We will refer to the corresponding \(g(t)=t\phi_{t}^{*}(g)\) as an expanding Ricci soliton flow.
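As a quick sanity check of Definition 1.1, the following minimal sympy sketch (our illustration, not taken from the paper) verifies the soliton identity on the flat plane with the radial field \(X=\frac{r}{2}\frac{\partial}{\partial r}\), the Gaussian example that reappears in Theorem 5.1 below:

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)
coords = [x, y]

g = sp.eye(2)        # flat metric on R^2, so Ric(g) = 0
X = [x / 2, y / 2]   # X = (r/2) d/dr in Cartesian components

def lie_derivative(g, X, coords):
    # (L_X g)_ij = X^k d_k g_ij + g_kj d_i X^k + g_ik d_j X^k
    n = len(coords)
    L = sp.zeros(n, n)
    for i in range(n):
        for j in range(n):
            val = sum(X[k] * sp.diff(g[i, j], coords[k]) for k in range(n))
            val += sum(g[k, j] * sp.diff(X[k], coords[i]) for k in range(n))
            val += sum(g[i, k] * sp.diff(X[k], coords[j]) for k in range(n))
            L[i, j] = sp.simplify(val)
    return L

Ric = sp.zeros(2, 2)  # Ricci tensor of the flat metric
residual = 2 * Ric - lie_derivative(g, X, coords) + g
print(residual)       # Matrix([[0, 0], [0, 0]]): the soliton equation holds
```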
The trivial expanding solitons are clearly just hyperbolic surfaces, scaled to have curvature \(-\frac{1}{2}\). Such solitons account for all expanding solitons on _closed_ surfaces [12].
Special examples of nontrivial solitons can be found by imposing a symmetry ansatz and reducing the soliton equation of Definition 1.1 to an ODE. For example, one can consider the \(O(2)\)-symmetric Ricci soliton flow starting with a two-dimensional cone (see e.g. [5, Section 4, Chapter 2] or [10]). In fact, this example is a so-called _gradient_ Ricci soliton, which means that \(X\) is the gradient of some function, possibly modulo a Killing field. When considering solitons on surfaces, the condition of being a gradient soliton imposes a significant simplification because it turns out to _force_ the soliton to satisfy a symmetry ansatz and means that a complete classification can be obtained by solving ODEs (see e.g. [4, Chapter 3]). In the case of solitons in two dimensions that are both gradient and expanding, one can therefore read off all possible examples. This classification will also come for free from our analysis in Section 5.
More sophisticated results about expanding solitons can be derived using PDE methods. We highlight the theorem of Deruelle [6], who showed that any Riemannian cone whose link is a smooth, simply connected, compact Riemannian manifold with curvature operator \(\mathrm{Rm}\geq 1\) of general dimension can be evolved under Ricci flow as an expanding gradient Ricci soliton flow with non-negative curvature operator.
Given earlier results such as Deruelle's theorem, one might develop intuition that expanding solitons tend to arise by running the Ricci flow starting with a cone. One aspect of our work, following earlier work of the second author and H. Yin [19] is that this is very far from the full story.
The objective of this paper is to classify all expanding Ricci solitons, up to isometry, when the underlying manifold \(M\) is two-dimensional. For the purposes of this paper, we say that two expanding Ricci solitons \((M_{1},g_{1},X_{1})\) and \((M_{2},g_{2},X_{2})\) are isometric if there exists an isometry \(\varphi:(M_{1},g_{1})\to(M_{2},g_{2})\) with \(\varphi_{*}X_{1}=X_{2}\). One could instead replace this final condition on \(X_{1},X_{2}\) by the requirement that \(\varphi_{*}X_{1}-X_{2}\) is a Killing field, in which case further identifications of the solitons we find would be required.
The solitons we construct in two dimensions induce solitons in higher dimensions by taking products of finitely many of these examples and finitely many previously-known expanding solitons (including Euclidean space of arbitrary dimension).
In order to classify expanding solitons, we will construct a correspondence between nontrivial expanding Ricci solitons and another geometric structure that is easier to classify. Given a smooth surface \(M\) with a conformal structure \(c\) and a complete conformal vector field \(X\), the correspondence will be between conformal Riemannian metrics \(g\) that make \((M,g,X)\) into a nontrivial expanding Ricci soliton and nontrivial nonatomic Radon measures \(\mu\) on \(M\) that are expanding under the action of \(X\) in the sense that if we integrate the vector field \(X\) to give diffeomorphisms \(\psi_{s}:M\to M\), \(s\in\mathbb{R}\), with \(\psi_{0}\) the identity, then
\[X\cdot\mu=\mu,\text{ where }X\cdot\mu:=\frac{d}{ds}\psi_{s}^{*}(\mu)\bigg{|}_{s=0}, \tag{1.1}\]
or equivalently \(\psi_{s}^{*}(\mu)=e^{s}\mu\). We will refer to the combination of \(M\) (with its conformal structure), \(\mu\) and \(X\) as a _measure expander_.
In order to make this correspondence precise, we need to survey the theory of Ricci flow on surfaces equipped with a conformal structure and a nonatomic Radon measure developed recently by the second author and Yin [19, 20]. We use the notation \(\tilde{M}\) for the universal cover of \(M\), and \(\tilde{\mu}\) for the corresponding lift of \(\mu\) to \(\tilde{M}\).
**Theorem 1.2** (Well-posedness of Ricci flow from measures. From [19, 20]).: _Let \(M\) be a two-dimensional smooth manifold equipped with a conformal structure, and let \(\mu\) be a Radon measure on \(M\) that is nonatomic in the sense that_
\[\mu(\{x\})=0\quad\text{ for all }x\in M.\]
_Define \(T\in[0,\infty]\) by_
* \(T=\infty\) _if_ \(\tilde{M}=D\)_, the unit disc in the plane;_
* \(T=\frac{1}{4\pi}\tilde{\mu}(\tilde{M})\) _if_ \(\tilde{M}=\mathbb{C}\)_;_
* \(T=\frac{1}{8\pi}\tilde{\mu}(\tilde{M})\) _if_ \(\tilde{M}=S^{2}\)_._
_Then there exists a smooth complete conformal Ricci flow \(g(t)\) on \(M\), for \(t\in(0,T)\), attaining \(\mu\) as initial data in the sense that_
\[\mu_{g(t)}\rightharpoonup\mu\text{ as }t\downarrow 0\]
_and so that if \(\tilde{g}(t)\), \(t\in(0,\tilde{T})\), is any other smooth complete conformal Ricci flow on \(M\) that attains \(\mu\) as initial data in the same sense, then \(\tilde{T}\leq T\) and_
\[g(t)\equiv\tilde{g}(t)\qquad\text{ for all }t\in(0,\tilde{T}).\]
_If \(T\in(0,\infty)\) then \(\mu_{g(t)}(M)=(1-\frac{t}{T})\mu(M)\) for all \(t\in(0,T)\)._
This theory allows us to characterise expanding solitons in terms of measures using the following theorem, the first part of which is a refinement of the ideas in [19], based on [20].
**Theorem 1.3** (Expanding solitons correspond to measure expanders).: _Suppose \(M\) is a smooth surface equipped with a conformal structure and a complete conformal vector field \(X\). Suppose that \(\mu\) is a nontrivial nonatomic Radon measure that is expanding under the action of \(X\) in the sense of (1.1). Then the unique complete conformal Ricci flow \(g(t)\) on \(M\) given by Theorem 1.2 with \(\mu\) as initial data exists for all \(t>0\). Moreover, \((M,g(1),X)\) is a nontrivial expanding Ricci soliton in the sense of Definition 1.1 and \(g(t)\) is an expanding Ricci soliton flow._
_Conversely, given any nontrivial two-dimensional expanding Ricci soliton \((M,g,X)\) (which automatically induces a conformal structure) there exists a nontrivial nonatomic Radon measure \(\mu\) on \(M\) that is expanding under the action of \(X\) and induces \((M,g,X)\) in the sense above when we run the Ricci flow until time \(t=1\)._
Note that although \(X\) is not explicitly constrained to be nontrivial above, it will necessarily have to be so in order for \(\mu\) to be expanding with respect to it.
Theorem 1.3 reduces the nontrivial expanding soliton classification problem in two dimensions to the problem of finding all surfaces \(M\) with a conformal structure, nontrivial complete conformal vector field \(X\) and nontrivial nonatomic measure \(\mu\) that is expanding with respect to \(X\). The task of finding such \((M,\mu,X)\) was initiated in [19], but we now give the full story. We carry this out up to the notion of equivalence corresponding to the notion of equivalence of solitons that we adopted earlier. That is, we say \((M_{1},\mu_{1},X_{1})\) and \((M_{2},\mu_{2},X_{2})\) are isomorphic if there exists a conformal diffeomorphism \(\varphi:M_{1}\to M_{2}\) such that \(\varphi_{*}\mu_{1}=\mu_{2}\) and \(\varphi_{*}X_{1}=X_{2}\).
Before we give the complete list of measure expanders, some examples are in order. Probably the simplest example is to work on the plane and consider the Lebesgue measure scaled by \(e^{x}\), with \(X=\frac{\partial}{\partial x}\). We can equally well take the quotient by a group of vertical translations generated by \((x,y)\mapsto(x,y+2\pi\alpha)\). This latter example is identifiable as the cone of cone angle \(\pi\alpha\), without the vertex at \(x=-\infty\). That is, if we view this cone conformally on the cylinder then its volume measure is the measure we are considering, and the soliton that flows out of it is the well-known soliton that has the topology of a cylinder, with a hyperbolic cusp at one end (scaled to have curvature \(-\frac{1}{2}\)) and with the other end being conical, of cone angle \(\pi\alpha\). See, for example, [2, Theorem 1.1, 3c]. These examples are extremely special cases of families (Bi) and (Bii) in Theorem 1.4 below.
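Indeed, with the pullback convention \((\psi_{s}^{*}\mu)(A)=\mu(\psi_{s}(A))\) used implicitly above, the expanding property (1.1) for this simplest example is immediate: \(\psi_{s}(x,y)=(x+s,y)\), so for every Borel set \(A\subset\mathbb{R}^{2}\),

\[(\psi_{s}^{*}\mu)(A)=\int_{\psi_{s}(A)}e^{x}\,dx\,dy=\int_{A}e^{x+s}\,dx\,dy=e^{s}\mu(A).\]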
Meanwhile, we could take one of these cones and add in the vertex to generate a different type of soliton. More precisely, given the volume measure of a cone, viewed on a cylinder as above, we can first change viewpoint by seeing the cylinder conformally as a punctured plane, and then add in the origin to change the underlying space. In this case the measure expander on the punctured plane is immediately a measure expander on the full plane, and the new soliton is the familiar soliton that smooths out the vertex of the cone (see, for example, [2, Theorem 1.1, 3a and 3b] and the Gaussian soliton [18, §1.2.2]). This example is a very special case of family (Ci) in Theorem 1.4 below.
In the following theorem, we classify measure expanders into families (A), (B) and (C). Elements of family (A) will have the disc as their universal cover. Elements of families (B) and (C) will have the plane as their universal cover, and are distinguished by whether the vector field \(X\) has a stationary point or not.
Throughout this paper, we take \(S^{1}=\mathbb{R}/2\pi\mathbb{Z}\), equipped with the orientation it inherits from \(\mathbb{R}\). For any manifold \(N\), we let \(\mathcal{R}(N)\) denote the collection of nontrivial Radon measures on \(N\).
**Theorem 1.4** (Classification of measure expanders).: _Suppose that \(M\) is a smooth surface equipped with a conformal structure, \(X\) is a complete conformal vector field on \(M\), and \(\mu\) is a nontrivial nonatomic Radon measure on \(M\) satisfying (1.1). Then \((M,\mu,X)\) is isomorphic to an element of one of the following families of measure expanders:_
_(Ai)_
\[\left\{\left(\mathbb{R}\times(0,\infty),e^{x}dx\otimes\nu,\frac{\partial}{ \partial x}\right)\ |\ \nu\in\mathcal{R}((0,\infty))\right\},\]
\(N=(0,\infty)\) _has trivial isometry group._
_(Aii)_
\[\left\{\left(\mathbb{R}\times(0,\pi),e^{\alpha x}dx\otimes\nu,\frac{1}{ \alpha}\frac{\partial}{\partial x}\right)\ |\ \alpha>0,\nu\in\mathcal{R}((0,\pi))\right\},\]
\(N=(0,\pi)\) _has isometry group_ \(\mathbb{Z}_{2}\)_._
_(Bi)_
\[\left\{\left(\mathbb{R}\times\mathbb{R},e^{x}dx\otimes\nu,\frac{\partial}{ \partial x}\right)\ |\ \nu\in\mathcal{R}(\mathbb{R})\right\},\]
\(N=\mathbb{R}\) _has isometry group_ \(\mathbb{R}\rtimes\mathbb{Z}_{2}\)_._
_(Bii)_
\[\left\{\left(\mathbb{R}\times S^{1},e^{\alpha x}dx\otimes\nu,\frac{1}{\alpha} \frac{\partial}{\partial x}\right)\ |\ \alpha>0,\nu\in\mathcal{R}(S^{1})\right\},\]
\(N=S^{1}\) _has isometry group_ \(O(2)\)_._
_(Biii) The same as (Bii), but with the conformal structure on \(\mathbb{R}\times S^{1}\) being the pull-back of the standard product conformal structure by a twisting diffeomorphism \(F_{\beta}\) of \(\mathbb{R}\times S^{1}\) defined by \(F_{\beta}(x,\theta)=(x,\theta+\beta x)\), for some \(\beta>0\), or equivalently_
\[\left\{\left(\mathbb{R}\times S^{1},(F_{\beta})_{*}(e^{\alpha x}dx\otimes\nu),\frac{1}{\alpha}\frac{\partial}{\partial x}+\frac{\beta}{\alpha}\frac{ \partial}{\partial\theta}\right)\ |\ \alpha>0,\beta>0,\nu\in\mathcal{R}(S^{1})\right\}.\]
\(N=S^{1}\) _has orientation-preserving isometry group_ \(SO(2)\)_._

_(Ci) The same as (Bii) with the puncture at_ \(x=-\infty\) _removed. In other words, the pushforward of any of the measure expanders from (Bii) under the injective conformal map_

\[\mathbb{R}\times S^{1}\to\mathbb{C},\quad(x,y)\mapsto e^{x+iy}.\tag{1.2}\]

_Alternatively, if we consider a complex coordinate_ \(z=e^{x+i\theta}\) _on_ \(\mathbb{C}\)_, then we can write these as_

\[\left\{\left(\mathbb{C},e^{\alpha x}dx\otimes\nu,\frac{1}{\alpha}\frac{\partial}{\partial x}\right)\ |\ \alpha>0,\nu\in\mathcal{R}(S^{1})\right\}.\]

_(Cii) The same as (Biii) with the puncture at_ \(x=-\infty\) _removed. In other words, the pushforward of any of the measure expanders from (Biii) under the injective conformal map (1.2). Alternatively, if we consider a complex coordinate_ \(z=e^{x+i\theta}\) _on_ \(\mathbb{C}\)_, then we can write these as_

\[\left\{\left(\mathbb{C},(F_{\beta})_{*}(e^{\alpha x}dx\otimes\nu),\frac{1}{\alpha}\frac{\partial}{\partial x}+\frac{\beta}{\alpha}\frac{\partial}{\partial\theta}\right)\ |\ \alpha>0,\beta>0,\nu\in\mathcal{R}(S^{1})\right\}.\]
_Moreover, any two measure expanders \((M_{1},\mu_{1},X_{1})\), \((M_{2},\mu_{2},X_{2})\) in the above list are isomorphic if and only if they both belong to the same family in the list above (so \(M_{1}=M_{2}\)), \(X_{1}=X_{2}\), and the measures on the fibres \(\nu_{1},\nu_{2}\in\mathcal{R}(N)\) satisfy_
\[\varphi_{*}(\nu_{1})=\lambda\cdot\nu_{2},\]
_for some \(\lambda>0\) and an isometry of the fibre \(\varphi\in\mathrm{Isom}(N)\), where \(\varphi\) is assumed to be orientation preserving in cases (Biii) and (Cii)._
**Remark 1.5**.: _To clarify, in parts (Ci) and (Cii), we push forward both the measure and the vector field to \(\mathbb{C}\). In both cases the pushed forward vector field extends across the origin as a smooth complete vector field on the whole of \(\mathbb{C}\) and we are left with new measure expanders on the larger space._
Implicit in Theorem 1.4 is that every element of every family is indeed a measure expander. This will follow instantly from Lemma 4.1, keeping in mind Remark 4.2. It seems that no expanding soliton in the twisted family (Biii) has ever been considered before.
The second part of Theorem 1.3 is proved by showing that after using a Ricci soliton \((M,g,X)\) to generate a Ricci soliton flow \(g(t)\), we can take a limit of the flow as \(t\downarrow 0\) to obtain a measure that encodes the soliton flow. In fact, we prove something much stronger in this paper that is nothing to do with solitons.
**Theorem 1.6** (Time zero limits of complete Ricci flows).: _Suppose \(M\) is a smooth surface and \(g(t)\) is any smooth complete Ricci flow on \(M\) for \(t\in(0,\varepsilon)\), for some \(\varepsilon>0\). Then there exists a nonatomic Radon measure \(\mu\) on \(M\) such that_
\[\mu_{g(t)}\rightharpoonup\mu \tag{1.3}\]
_as \(t\downarrow 0\). The measure \(\mu\) is nontrivial unless the universal cover of \(M\) is the disc and \(g(t)=2th\) for \(h\) a complete hyperbolic metric on \(M\)._
We emphasise that for _any_ complete Ricci flow \(g(t)\) on a surface, for \(t\in(0,\varepsilon)\), we are obtaining a \(t\downarrow 0\) limit that is of the generality that the existence and uniqueness Theorem 1.2 can handle. For each smooth surface \(M\) equipped with a conformal structure, this implies a one-to-one correspondence between complete conformal Ricci flows on \(M\) over some time interval \((0,\varepsilon)\) (where we equate two flows on different such time intervals if they agree while they are both defined) and the initial data that generates them via Theorem 1.2. More precisely, if \(M\) has \(D\) as its universal cover then this space of flows is in one-to-one correspondence with the space of nonatomic Radon measures, while for every other \(M\) this space of flows is in one-to-one correspondence with the space of _nontrivial_ nonatomic Radon measures. This raises the question of what the analogue of our correspondence will be in higher dimensions.
_Acknowledgements:_ This work was supported by EPSRC grant EP/T019824/1. The second author thanks Hao Yin for conversations on this topic. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any author accepted manuscript version arising.
## 2 Time zero limits of complete Ricci flows
In this section we prove Theorem 1.6. Many of the ingredients are refinements of estimates taken from [19]. The existence of some limiting Radon measure (not necessarily nonatomic) can be found in [15].
We begin by noting that if we can prove the existence of \(\mu\) then the final claim, that \(\mu\) is nontrivial except possibly if the universal cover of \(M\) is the disc, follows immediately from Theorem 1.2.
We next prove the existence of some Radon measure \(\mu\) satisfying (1.3), without worrying whether or not it is nonatomic. It suffices to prove that each point \(p\in M\) admits a neighbourhood in which we can find such a limit. Note that we are not fully localising the problem here because we will appeal later to the global completeness of \(g(t)\).
By lifting to the universal cover, we may as well assume that \(M\) is either \(\mathbb{R}^{2}\), \(B_{2}\) or \(S^{2}\simeq\mathbb{C}\cup\{\infty\}\). By adjusting by a biholomorphic transformation, we may as well assume that the point \(p\) corresponds to the origin. We then task ourselves with proving (1.3) on the unit disc \(D=B_{1}\).
The first step to achieve this will be to prove volume estimates. We need to verify that the volume \(\mu_{g(t)}(D)\) is controlled from above for small time \(t\):
**Claim 2.1**.: \[\limsup_{t\downarrow 0}\mu_{g(t)}(D)<\infty.\]
This claim will follow if we can show that extremely large volume for very small time \(t>0\) will imply very large volume for later times. This type of control was proved in [19] when working on \(\mathbb{R}^{2}\); we adjust the argument to work on \(S^{2}\simeq\mathbb{C}\cup\{\infty\}\).
**Lemma 2.2** (cf. [19, Lemma 3.1]).: _Let \(\mu\) be a nonatomic Radon measure on \(S^{2}\simeq\mathbb{C}\cup\{\infty\}\) and \(g(t)\) for \(t\in(0,T)\) be the complete Ricci flow on the sphere starting from \(\mu\) as given by Theorem 1.2. Then for \(0<r<R<\infty\) and \(0<t<\frac{\mu(B_{r})}{8\pi}\), we have_
\[\mu(B_{r})\leq\mu_{g(t)}(B_{R})+\frac{8\pi t}{1-(\frac{r}{R})^{2}}.\]
We will repeatedly need the following theorem of the second author and Yin [20] that establishes a maximally stretched property that substitutes for a maximum principle.
**Theorem 2.3** ([20, Theorem 1.2]).: _Let \(M\), \(\mu\) and \(T\) be as in Theorem 1.2, and let \(g(t)\) be the unique Ricci flow constructed in that theorem. Suppose now that \(\nu\) is any other Radon measure on \(M\) with \(\nu\leq\mu\), and \(\tilde{g}(t)\), \(t\in(0,\tilde{T})\), is any smooth conformal Ricci flow on \(M\) attaining \(\nu\) as initial data. Then \(\tilde{T}\leq T\) and_
\[g(t)\geq\tilde{g}(t)\]
_for all \(t\in(0,\tilde{T})\)._
Proof of Lemma 2.2.: We may as well assume that \(\mu(B_{r})>0\) since otherwise the lemma is vacuous. Since the restriction of \(\mu\) to \(B_{r}\) is also a nontrivial nonatomic Radon measure on the sphere, we can start the Ricci flow \(G(t)\) on \(S^{2}\) from this measure, for \(t\in(0,\frac{\mu(B_{r})}{8\pi})\), using Theorem 1.2. During this time we have
\[\mu_{G(t)}(S^{2})=\mu(B_{r})-8\pi t.\]
By Theorem 2.3, we have that \(G(t)\leq g(t)\) and \(G(t)\leq 2tH_{r}\), where \(H_{r}\) denotes the complete conformal hyperbolic metric on \(S^{2}\setminus B_{r}\). Together these give the inequality
\[\mu_{g(t)}(B_{R}) \geq\mu_{G(t)}(B_{R})=\mu_{G(t)}(S^{2})-\mu_{G(t)}(S^{2}\setminus B _{R})\] \[\geq\mu(B_{r})-8\pi t-2t\mu_{H_{r}}(S^{2}\setminus B_{R}).\]
The result follows from the direct calculation
\[\mu_{H_{r}}(S^{2}\setminus B_{R})=\frac{4\pi r^{2}}{R^{2}-r^{2}}.\qed\]
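To spell out this calculation: the Möbius map \(w=r/z\) takes \(S^{2}\setminus\bar{B}_{r}\) conformally onto the unit disc, with \(S^{2}\setminus B_{R}\) corresponding to \(\{|w|<r/R\}\). Writing \(w=\rho e^{i\theta}\) and taking \(H_{r}\) to be the curvature \(-1\) metric \(\frac{4|dw|^{2}}{(1-|w|^{2})^{2}}\) (the normalisation consistent with the comparison \(G(t)\leq 2tH_{r}\)), we find

\[\mu_{H_{r}}(S^{2}\setminus B_{R})=\int_{0}^{2\pi}\!\!\int_{0}^{r/R}\frac{4\rho}{(1-\rho^{2})^{2}}\,d\rho\,d\theta=2\pi\left[\frac{2}{1-\rho^{2}}\right]_{0}^{r/R}=\frac{4\pi r^{2}}{R^{2}-r^{2}}.\]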
Proof of Claim 2.1.: If not, then there exists a decreasing and null sequence \(t_{n}\) such that
\[\lim_{n\to\infty}\mu_{g(t_{n})}(D)=\infty.\]
Consider \(n\) large enough so that \(\mu_{g(t_{n})}(D)>8\pi\). By appealing to Theorem 1.2, we can extend \(g(t)\), originally defined for \(t\in(0,\varepsilon)\), to at least the time interval \((0,t_{n}+1)\), for each \(n\). Let \(\mu_{n}\) denote the restriction of the measure \(\mu_{g(t_{n})}\) to \(D\), viewed as a measure on \(S^{2}\simeq\mathbb{C}\cup\{\infty\}\). Let \(G_{n}(t)\) be the Ricci flow on \(S^{2}\) for \(t\in(0,1]\) starting from \(\mu_{n}\). By Theorem 2.3 we have that \(G_{n}(t)\leq g(t_{n}+t)\) for \(t\in(0,1]\), and so by Lemma 2.2 applied to \(G_{n}(t)\), we have
\[\mu_{g(t_{n})}(D)=\mu_{n}(D) \leq\mu_{G_{n}(1)}(B_{\sqrt{2}})+16\pi\] \[\leq\mu_{g(1+t_{n})}(B_{\sqrt{2}})+16\pi.\]
This yields a contradiction, as taking \(n\to\infty\) makes the left-hand side diverge to infinity, whereas the right-hand side converges to \(\mu_{g(1)}(B_{\sqrt{2}})+16\pi<\infty\). _This completes the proof of Claim 2.1._
By Claim 2.1, combined with weak compactness for measures [8, Theorem 1.41], every sequence \(t_{n}\downarrow 0\) has a subsequence so that
\[\mu_{g(t_{n})}\rightharpoonup\mu\]
for some Radon measure \(\mu\) on \(D\), as \(n\to\infty\). We would like to establish that the sequence \(t_{n}\) is not significant in that \(\mu_{g(t)}\rightharpoonup\mu\) as \(t\downarrow 0\), i.e.
\[\int_{D}\psi d\mu_{g(t)}\to\int_{D}\psi d\mu\quad\text{ as }t\downarrow 0 \tag{2.1}\]
for all \(\psi\in C^{0}_{c}(D)\).
**Claim 2.4**.: _The limit (2.1) holds for all \(\psi\in C^{\infty}_{c}(D)\)._
If Claim 2.4 holds then for all \(\psi\in C^{0}_{c}(D)\) we can consider the mollification \(\psi_{h}\) and estimate
\[\left|\int\psi d\mu_{g(t)}-\int\psi d\mu\right|\leq\left|\int\psi_{h}d\mu_{g(t )}-\int\psi_{h}d\mu\right|+\|\psi-\psi_{h}\|_{C^{0}}\big{(}\mu_{g(t)}(D)+\mu(D )\big{)}. \tag{2.2}\]
Invoking Claims 2.4 and 2.1 gives
\[\limsup_{t\downarrow 0}\left|\int\psi d\mu_{g(t)}-\int\psi d\mu\right|\leq 0 +C\|\psi-\psi_{h}\|_{C^{0}},\]
for \(C\) independent of \(h\) and \(\psi\). Sending \(h\downarrow 0\) then gives (2.1) for all \(\psi\in C^{0}_{c}(D)\) as desired.
Proof of Claim 2.4.: We already know that
\[\int_{D}\psi d\mu_{g(t_{n})}\to\int_{D}\psi d\mu\quad\text{ as }n\to\infty\]
for our null sequence \(t_{n}\), so it suffices to establish the integrability of the function
\[t\mapsto\frac{\partial}{\partial t}\int\psi d\mu_{g(t)}\]
over some small time interval \((0,\delta)\). Writing \(u(t)\) for the conformal factor of \(g(t)\), with respect to the standard flat metric on \(D\), we know that the function \(t\mapsto\frac{u(\cdot,t)}{t}\) is monotonic (see, e.g. [20, Remark 2.5]) and so there exists some \(\eta>0\) such that \(u(t)\geq\eta t\) on \(D\times(0,\frac{\delta}{2}]\). Using the same argument as in [19, Lemma 4.5], we compute for our \(\psi\in C_{c}^{\infty}(D)\) that
\[\left|\partial_{t}\int\psi d\mu_{g(t)}\right|=\left|\partial_{t} \int\psi udx\right|=\left|\int\psi\Delta\log u\,dx\right| =\left|\int\Delta\psi\log u\,dx\right|\] \[\leq\left\|\Delta\psi\right\|_{C^{0}}\int_{D}|\log u(t)|dx.\]
We can then estimate for \(\eta t\in(0,1]\) that
\[\int_{D}|\log u(t)|dx =\int_{D\cap\{u\geq 1\}}\log u(t)dx+\int_{D\cap\{u<1\}}(-\log u(t))dx\] \[\leq\int_{D}u(t)\,dx+\pi(-\log\eta t)\] \[\leq C+\pi(-\log\eta t)\]
by Claim 2.1, which is integrable. _This completes the proof of Claim 2.4._
We have shown that there exists a Radon measure \(\mu\) on \(D\) such that \(\mu_{g(t)}\rightharpoonup\mu\) as \(t\downarrow 0\). It remains to show that \(\mu\) is nonatomic, which will also require Lemma 2.2. Suppose instead that \(\mu\) has a point of positive mass. After modifying by an automorphism of the disc \(D\), we may assume that \(\mu(\{0\})=\delta>0\). Choose \(t_{0}:=\min\{\frac{\delta}{64\pi},\frac{\varepsilon}{2}\}\). We shall show that the volume measure at this positive time \(t_{0}\) also has a point of positive mass, which will be a contradiction. For any \(r\in(0,\frac{1}{2})\) we have
\[\liminf_{s\downarrow 0}\mu_{g(s)}(B_{r})\geq\delta,\]
and hence for \(0<s_{0}<t_{0}\) sufficiently small
\[\mu_{g(s_{0})}(B_{r})\geq\frac{\delta}{2}.\]
If \(\mu_{r}\) denotes the restriction of \(\mu_{g(s_{0})}\) to \(B_{r}\), then by Theorem 1.2, there exists a Ricci flow \(G_{r}(t)\) for \(t\in(s_{0},s_{0}+\frac{\delta}{16\pi})\) on \(S^{2}\) starting weakly from \(\mu_{r}\) at time \(s_{0}\). Moreover, by Theorem 2.3, we have that \(G_{r}(t)\leq g(t)\). Applying Lemma 2.2 with \(R=\sqrt{2}r\) gives
\[\mu_{g(t_{0})}(B_{R})\geq\mu_{G_{r}(t_{0})}(B_{R})\geq\mu_{g(s_{0})}(B_{r})-1 6\pi t_{0}\geq\frac{\delta}{4},\]
and therefore
\[\mu_{g(t_{0})}(\{0\}):=\lim_{R\downarrow 0}\mu_{g(t_{0})}(B_{R})\geq\frac{ \delta}{4},\]
which gives a contradiction and completes the proof that \(\mu\) is nonatomic. _This completes the proof of Theorem 1.6._
## 3 Expanders versus measures
In this section we prove Theorem 1.3.
Suppose \(M\) is a surface equipped with a conformal structure, \(X\) is a complete conformal vector field on \(M\), and \(\mu\) is a nontrivial nonatomic Radon measure on \(M\) that is expanding under the action of \(X\). Let \(g(t)\) for \(t\in(0,T)\) be the unique complete conformal Ricci flow on \(M\) starting weakly from \(\mu\), given by Theorem 1.2.
Fixing \(\lambda>0\), we first note that the rescaled Ricci flow \(\lambda^{-1}g(t\lambda)\) for \(t\in(0,\lambda^{-1}T)\) starts weakly from the rescaled measure \(\lambda^{-1}\mu\).
Alternatively, let \(\phi_{t}:M\to M\) be again the flow of the vector field \(-\frac{X}{t}\) for all \(t>0\), with \(\phi_{1}=id_{M}\), and let \(\psi_{s}:M\to M\) be again the flow of the vector field \(X\) for all \(s\in\mathbb{R}\), with \(\psi_{0}=id_{M}\), so
\[\psi_{s}=\phi_{e^{-s}}.\]
The fact that \(\mu\) is expanding with respect to \(X\), i.e. (1.1), can be equivalently written as \(\phi_{\lambda}^{*}(\mu)=\psi_{-\log\lambda}^{*}(\mu)=\lambda^{-1}\mu\) for all \(\lambda>0\). Thus, the Ricci flow \(\phi_{\lambda}^{*}(g(t))\) for \(t\in(0,T)\) also starts weakly from \(\phi_{\lambda}^{*}(\mu)=\lambda^{-1}\mu\), and therefore, by Theorem 1.2, these two flows must agree. In particular, we can deduce that not only is our flow immortal, but it must also satisfy the relation
\[g(t)=t\phi_{t}^{*}(g(1)),\quad\forall t>0,\]
so \((M,g(1),X)\) is an expanding Ricci soliton and \(g(t)\) is an expanding Ricci soliton flow. Moreover, the soliton is nontrivial because otherwise \(\mu\) would have to be trivial.
Conversely, suppose we start with a nontrivial expanding Ricci soliton \((M,g,X)\) with \(g(t)=t\phi_{t}^{*}(g)\) the associated Ricci soliton flow. By Theorem 1.6, there exists a nonatomic Radon measure \(\mu\) such that \(g(t)\) starts weakly from \(\mu\). As above, we note that for any \(s\in\mathbb{R}\)
\[\psi_{s}^{*}(g(t))=\psi_{s}^{*}(t\psi_{-\log t}^{*}(g))=t\psi_{s-\log t}^{*}( g)=t\phi_{te^{-s}}^{*}(g)=e^{s}g(te^{-s}),\quad\forall t>0,\]
and hence taking \(t\downarrow 0\), we have that \(\psi_{s}^{*}(\mu)=e^{s}\mu\), for any \(s\in\mathbb{R}\), that is, \(X\cdot\mu=\mu\), which means that \(\mu\) is expanding with respect to \(X\). We see that \(\mu\) is nontrivial because, for example, Theorem 1.6 tells us that this is only possible when \((M,g,X)\) is trivial.
## 4 Classification of measure expanders
In this section we prove the classification of measure expanders claimed in Theorem 1.4. In light of Theorem 1.3, and the discussion in the introduction of trivial solitons, this will complete the classification of expanding Ricci solitons in two dimensions.
### Simply connected measure expanders
We begin with the case that \(M\) is a simply connected smooth surface equipped with a conformal structure, which can only be \(\mathbb{C}\), \(D\) or \(S^{2}\). The final case \(S^{2}\) can instantly be discounted since it is obvious that it cannot admit a measure expander (or, equivalently, an expanding soliton) because the total volume is finite (or because all expanding Ricci solitons on closed surfaces have constant curvature \(-\frac{1}{2}\) as mentioned in the introduction). This leaves us with the cases that \(M=\mathbb{C}\) or \(M\) is the disc, or equivalently a half-space.
The next ingredient for a simply connected measure expander is a nontrivial complete conformal vector field \(X\). Such vector fields are highly constrained. On \(\mathbb{C}\), the first possibility is a constant vector field, which after modification by a biholomorphism \(z\mapsto az\) can be considered to be \(X=\frac{\partial}{\partial x}\), or it could be an infinitesimal dilation/rotation, which after modification by a biholomorphism \(z\mapsto z+b\) could be considered to be a vector field \(X=\rho r\frac{\partial}{\partial r}+\sigma\frac{\partial}{\partial\theta}\), for \(\rho,\sigma\in\mathbb{R}\). To be part of a measure expander, we must have \(\rho>0\): otherwise we would have \(\psi_{s}(D)\subset D\) for all \(s\geq 0\), implying that \(\psi_{s}^{*}\mu(D)\leq\mu(D)\) for \(s\geq 0\), whereas for \(\mu\) to be expanding with respect to \(X\) we need that \(\psi_{s}^{*}\mu(D)=e^{s}\mu(D)\). We can then define \(\alpha=\frac{1}{\rho}>0\). Moreover, after modification by a reflection, we may assume that \(\sigma\geq 0\).
Similarly, on the upper half-plane \(\mathbb{H}\), after modification by a biholomorphism we can reduce to the cases that \(X=\frac{\partial}{\partial x}\) or \(X=\frac{r}{\alpha}\frac{\partial}{\partial r}\) for \(\alpha>0\).
The final ingredient for a simply connected measure expander is a nontrivial nonatomic Radon measure expanding under the action of \(X\).
**Lemma 4.1**.: _Let \(N\) be either \(S^{1}\) or a connected open subset of \(\mathbb{R}\), and let \(\mu\) be a nontrivial nonatomic Radon measure on \(\mathbb{R}\times N\). Fix \(\alpha>0\). Then \(\mu\) is expanding under the action of the translating vector field \(X=\frac{1}{\alpha}\frac{\partial}{\partial x}\) in the sense of (1.1) if and only if \(\mu\) is a product measure_
\[\mu=e^{\alpha x}dx\otimes\nu, \tag{4.1}\]
_where \(dx\) denotes the Lebesgue measure on \(\mathbb{R}\), and \(\nu\) is a nontrivial (not necessarily nonatomic) Radon measure on \(N\)._
In practice, \(N\) will be either \(S^{1}\), \(\mathbb{R}\), \((0,\infty)\) or \((0,\pi)\).
**Remark 4.2**.: _It will be useful to keep in mind that Lemma 4.1 does not mind with which conformal structure that \(\mathbb{R}\times N\) is endowed, although in practice we will want the vector field to be conformal._
Proof of Lemma 4.1.: Given any nonatomic nontrivial Radon measure \(\mu\) on \(\mathbb{R}\times N\), define a new nonatomic nontrivial Radon measure \(\tilde{\mu}\) via the relation \(\mu=e^{\alpha x}\tilde{\mu}\). Since \(\psi_{s}^{*}(e^{\alpha x}\tilde{\mu})=e^{\alpha x+s}\psi_{s}^{*}(\tilde{\mu})\), we have that
\[X\cdot\mu=X\cdot(e^{\alpha x}\tilde{\mu})=\frac{d}{ds}\psi_{s}^{*}(e^{\alpha x }\tilde{\mu})|_{s=0}=\mu+e^{\alpha x}(X\cdot\tilde{\mu}).\]
In particular, \(\mu\) is expanding (i.e, \(X\cdot\mu=\mu\)) if and only if \(\tilde{\mu}\) is invariant under \(X\) (i.e, \(X\cdot\tilde{\mu}=0\)). Furthermore, since a measure \(\tilde{\mu}\) is invariant under horizontal translations if and only if it decomposes as a product measure
\[\tilde{\mu}=dx\otimes\nu,\]
where \(\nu\) is the Borel measure on \(N\) defined by
\[\nu(A)=\tilde{\mu}(A\times[0,1)),\]
the result follows.
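As a numerical sanity check of the product structure in Lemma 4.1, the following scipy sketch (ours, not the paper's; comparing measures on a single test box is of course only a necessary condition) contrasts a product density, for which \((\psi_{s}^{*}\mu)(A)/\mu(A)=e^{s}\), with a non-product perturbation, for which it is not:

```python
import math
from scipy.integrate import dblquad

alpha, s = 1.3, 0.4
shift = s / alpha   # psi_s translates x by s/alpha when X = (1/alpha) d/dx

def pullback_ratio(density):
    # (psi_s^* mu)(A) / mu(A) on the test box A = [0,1] x [0,1]
    num = dblquad(lambda y, x: density(x + shift, y), 0, 1, 0, 1)[0]
    den = dblquad(lambda y, x: density(x, y), 0, 1, 0, 1)[0]
    return num / den

product = lambda x, y: math.exp(alpha * x) * (1 + y * y)         # e^{ax} dx (x) nu
twisted = lambda x, y: math.exp(alpha * x) * (1 + (x + y) ** 2)  # not a product

print(pullback_ratio(product), math.exp(s))  # these two agree: expanding
print(pullback_ratio(twisted))               # differs from e^s: not expanding
```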
**Lemma 4.3**.: _Any simply connected measure expander is isomorphic to one of the measure expanders from cases (A), (Bi) or (C) in Theorem 1.4._
Proof of Lemma 4.3.: If \(M=\mathbb{H}\), as discussed earlier we may assume that \(X\) is \(\frac{\partial}{\partial x}\) or \(\frac{r}{\alpha}\frac{\partial}{\partial r}\). In the first case, applying Lemma 4.1 with \(N=(0,\infty)\), we must have a measure expander from (Ai). In the second case, we push forward by the conformal diffeomorphism
\[\mathbb{H}\to\mathbb{R}\times(0,\pi),\quad z=re^{i\theta}\mapsto(\log r, \theta),\]
to give a measure expander \((\mathbb{R}\times(0,\pi),\mu,\frac{1}{\alpha}\frac{\partial}{\partial x})\). Applying Lemma 4.1 with \(N=(0,\pi)\), we must have a measure expander from (Aii). These are all possible measure expanders on \(\mathbb{H}\) up to isomorphism.
If \(M=\mathbb{C}\) and \(X=\frac{\partial}{\partial x}\), applying Lemma 4.1 with \(N=\mathbb{R}\), we have a measure expander from (Bi).
Finally, we need to consider the case \(M=\mathbb{C}\) and \(X=\frac{r}{\alpha}\frac{\partial}{\partial r}+\frac{\beta}{\alpha}\frac{ \partial}{\partial\theta}\), for some \(\alpha>0\) and \(\beta\geq 0\). Note that if we remove the origin, then we still have a measure expander, and this new measure expander uniquely determines the original. Pulling back under the conformal transformation
\[\mathbb{R}\times S^{1}\to\mathbb{C}\setminus\{0\},\quad(x,\theta)\mapsto e^{x +i\theta},\]
we have a measure expander \((\mathbb{R}\times S^{1},\mu,\frac{1}{\alpha}\frac{\partial}{\partial x}+\frac {\beta}{\alpha}\frac{\partial}{\partial\theta})\). If \(\beta=0\), we can immediately apply Lemma 4.1 with \(N=S^{1}\) to conclude that our punctured measure expander is in case (Bii), and hence our original measure expander is in case (Ci). Otherwise \(\beta>0\), so that when we pull back by the twisting diffeomorphism \(F_{\beta}\) as defined in Theorem 1.4, we have a measure expander
\[(F_{\beta}^{*}(\mathbb{R}\times S^{1}),F_{\beta}^{*}\mu,\frac{1}{\alpha}\frac {\partial}{\partial x}),\]
to which we can apply Lemma 4.1, giving \(F_{\beta}^{*}\mu=e^{\alpha x}dx\otimes\nu\), for some \(\nu\in\mathcal{R}(S^{1})\). Here we use the notation \(F_{\beta}^{*}(\mathbb{R}\times S^{1})\) to indicate that we are pulling back the conformal structure of \(\mathbb{R}\times S^{1}\).
Pushing forward by \(F_{\beta}\), our punctured measure expander
\[\left(\mathbb{R}\times S^{1},(F_{\beta})_{*}(e^{\alpha x}dx\otimes\nu),\frac{ 1}{\alpha}\frac{\partial}{\partial x}+\frac{\beta}{\alpha}\frac{\partial}{ \partial\theta}\right)\]
is in (Biii), and so our original measure expander is in (Cii).
At this point we have found all possible simply connected measure expanders.
### Quotients of measure expanders
For any measure expander \((M,\mu,X)\), its universal cover \((\tilde{M},\tilde{\mu},\tilde{X})\) is also a measure expander. Moreover, the deck transformations of the covering \(G\) form a discrete subgroup of the group of conformal automorphisms of \(\tilde{M}\), such that all nontrivial elements of \(G\) are fixed point free, and each element of \(G\) preserves both \(\tilde{\mu}\) and \(\tilde{X}\):
\[g_{*}\tilde{\mu}=\tilde{\mu},\quad g_{*}\tilde{X}=\tilde{X},\quad\forall g\in G.\]
In this section we identify all possible ways in which we can go the other way and take a quotient of one of the simply connected measure expanders found in Section 4.1 to give a new measure expander.
The only nontrivial conformal automorphisms of \(\mathbb{H}\) preserving \(\frac{\partial}{\partial x}\) are horizontal translations, but these would scale any measure from (Ai). Moreover, the only nontrivial conformal automorphisms of \(\mathbb{H}\) preserving \(\frac{r}{\alpha}\frac{\partial}{\partial r}\) are dilations, which after pushing forward to \(\mathbb{R}\times(0,\pi)\) via the biholomorphism \(z=re^{i\theta}\mapsto(\log r,\theta)\), correspond to horizontal translations preserving the vector field \(\frac{1}{\alpha}\frac{\partial}{\partial x}\). But any horizontal translation would scale a measure from (Aii). We also note that the only fixed point free automorphisms of \(\mathbb{C}\) are translations. Therefore, up to isomorphism, we may assume that any measure expander that is not simply connected has universal cover \(\tilde{M}=\mathbb{C}\), with \(\tilde{X}=\frac{\partial}{\partial x}\), putting the universal cover in case (Bi).
In order for a translation of \(\mathbb{C}\) to preserve a measure \(\tilde{\mu}\) from case (Bi), it must have a nontrivial vertical component. We first consider the situation that such a translation is purely vertical, so that after dilating, we may assume our measure expander is of the form
\[(\mathbb{C},\tilde{\mu},\frac{1}{\alpha}\frac{\partial}{\partial x}),\]
for some \(\alpha>0\), and is invariant under the normalised translations
\[(x,y)\mapsto(x,y+2\pi n),\quad n\in\mathbb{Z}.\]
Passing to the quotient we have a measure expander
\[(\mathbb{R}\times S^{1},\mu,\frac{1}{\alpha}\frac{\partial}{\partial x}).\]
By either looking at the structure of \(\tilde{\mu}\) or applying Lemma 4.1 directly, we see that \(\mu\) is of the form
\[\mu=e^{\alpha x}dx\otimes\nu,\]
for some \(\nu\in\mathcal{R}(S^{1})\), and therefore the quotiented measure expander is in case (Bii).
In the more general case that our translations are not orthogonal to our vector field, we first dilate and rotate to put our measure expander in the form
\[(\mathbb{C},\tilde{\mu},\frac{1}{\alpha}\frac{\partial}{\partial x}+\frac{ \beta}{\alpha}\frac{\partial}{\partial y}),\]
for some \(\alpha>0\) and \(\beta\in\mathbb{R}\setminus\{0\}\), and so it is invariant under the normalised translations
\[(x,y)\mapsto(x,y+2\pi n),\quad n\in\mathbb{Z}.\]
After possibly reflecting, we may also assume that \(\beta>0\).
The quotient is then a measure expander \((\mathbb{R}\times S^{1},\mu,\frac{1}{\alpha}\frac{\partial}{\partial x}+\frac{ \beta}{\alpha}\frac{\partial}{\partial\theta})\). Pulling back by the twisting diffeomorphism \(F_{\beta}\) as defined in Theorem 1.4 gives a measure expander
\[((F_{\beta})^{*}(\mathbb{R}\times S^{1}),F_{\beta}^{*}\mu,\frac{1}{\alpha} \frac{\partial}{\partial x}).\]
Applying Lemma 4.1, we find that
\[F_{\beta}^{*}\mu=e^{\alpha x}dx\otimes\nu,\]
for some \(\nu\in\mathcal{R}(S^{1})\), and therefore the quotiented measure expander is in (Biii). This completes the proof of the first part of Theorem 1.4.
### Isomorphic measure expanders
We now know that every measure expander is isomorphic to a measure expander appearing in the list from Theorem 1.4. It remains to establish when two measure expanders \((M_{1},\mu_{1},X_{1})\), \((M_{2},\mu_{2},X_{2})\) from the list are isomorphic.
We would like to show first that they belong to the same family (in particular we have \(M_{1}=M_{2}\), not just modulo conformal automorphism) and the vector fields are identical.
It is clear that \(M_{1}\) and \(M_{2}\) have to have the same underlying conformal type, which immediately separates families (Ai) and (Aii) from the others.
To distinguish between (Ai) and (Aii), which are both conformal to the disc, observe that in case (Ai), the automorphisms generated by the vector field are parabolic, whereas in (Aii) they are not. If the two measure expanders are in case (Ai) then they automatically have the same vector field \(X=\frac{\partial}{\partial x}\) by definition of that family. If the two measure expanders are in case (Aii), with \(X_{1}=\frac{1}{\alpha_{1}}\frac{\partial}{\partial x}\) and \(X_{2}=\frac{1}{\alpha_{2}}\frac{\partial}{\partial x}\), for \(\alpha_{1},\alpha_{2}>0\), then the only conformal automorphisms of the strip \(\mathbb{R}\times(0,\pi)\) that could push forward \(X_{1}\) to \(X_{2}\) are a combination of a horizontal translation and possibly the reflection about the line \(\mathbb{R}\times\{\frac{\pi}{2}\}\), in which case we are forced to have \(X_{1}=X_{2}\) as desired.
Even more basic is that \(M_{1}\) and \(M_{2}\) have to have the same topology. This further separates (Bii) and (Biii) from the others.
To distinguish (Bi) from (Ci) and (Cii), observe that in the latter cases \(X\) has a fixed point, but in case (Bi) it does not. To distinguish between (Ci) and (Cii), we note that a conformal isomorphism must fix the origin, and so is a combination of a rotation and homothety, but any of the vector fields in each of these cases is invariant under the push-forward by such a transformation. As a by-product, we find also that \(X_{1}=X_{2}\).
Finally, in cases (Bii) and (Biii), automorphisms of \(\mathbb{R}\times S^{1}\) are generated by translations and reflections in both \(x\) and \(\theta\). Since we always have \(\alpha>0\), reflections reversing the orientation of the line are not permitted, and any automorphism must be a combination of a translation (in \(x\) and \(\theta\)) and possibly a reflection of the form \((x,\theta)\mapsto(x,-\theta)\).
In case (Bii), any vector field \(\frac{1}{\alpha}\frac{\partial}{\partial x}\) is invariant under the push forward by such an automorphism. Finally, since \(\beta>0\) in case (Biii), the only permitted automorphisms are translations. But any vector field \(\frac{1}{\alpha}\frac{\partial}{\partial x}+\frac{\beta}{\alpha}\frac{\partial} {\partial\theta}\) is invariant under the push-forward by a translation.
We have thus shown that the two isomorphic measure expanders are in the same family, and have the same vector field \(X\). It remains to show that the measures of the measure expanders are related in the way claimed in the theorem.
**Lemma 4.4**.: _Let \(N\) be either \(S^{1}\) or a connected open subset of \(\mathbb{R}\). Suppose that \(\nu_{1},\nu_{2}\in\mathcal{R}(N)\), \(\alpha>0\), and that the corresponding nontrivial Radon measures_
\[\mu_{i}=e^{\alpha x}dx\otimes\nu_{i},\quad i\in\{1,2\},\]
_on \(\mathbb{R}\times N\) are expanding under the action of the translating vector field \(\frac{1}{\alpha}\frac{\partial}{\partial x}\). If \(\mathbb{R}\times N\) has the conformal structure of the Cartesian product, then the measure expanders \((\mathbb{R}\times N,\mu_{i},\frac{1}{\alpha}\frac{\partial}{\partial x})\) for \(i=1,2\) are isomorphic if and only if there exists an isometry \(\varphi:N\to N\), and some constant \(\lambda>0\), such that_
\[\varphi_{*}(\nu_{1})=\lambda\cdot\nu_{2}. \tag{4.2}\]
Proof.: If \(\Phi:\mathbb{R}\times N\to\mathbb{R}\times N\) is a conformal diffeomorphism such that \(\Phi_{*}(\frac{1}{\alpha}\frac{\partial}{\partial x})=\frac{1}{\alpha}\frac{ \partial}{\partial x}\), then \(\Phi_{*}(\frac{\partial}{\partial y})=\pm\frac{\partial}{\partial y}\). Therefore,
\[\Phi(x,y):=(x+\alpha^{-1}\log(\lambda),\varphi(y)),\quad\forall(x,y)\in \mathbb{R}\times N,\]
for some \(\lambda>0\), and \(\varphi:N\to N\) an isometry. If \(\Phi\) is additionally an isomorphism between the measure expanders \((\mathbb{R}\times N,\mu_{i},\frac{1}{\alpha}\frac{\partial}{\partial x})\), then
\[e^{\alpha x}dx\otimes\nu_{2}=\mu_{2}=\Phi_{*}(\mu_{1})=\Phi_{*} (e^{\alpha x}dx\otimes\nu_{1})\\ =e^{\alpha x-\log\lambda}dx\otimes\varphi_{*}(\nu_{1})=e^{\alpha x }dx\otimes\lambda^{-1}\varphi_{*}(\nu_{1}),\]
and so \(\nu_{1}\) and \(\nu_{2}\) must satisfy (4.2).
The last step in the proof of Theorem 1.4 is to show that the measures are related in the way described in the theorem. For cases (A), (Bi), and (Bii), it follows immediately from Lemma 4.4.
In case (Biii), recall that any conformal automorphism of \(\mathbb{R}\times S^{1}\) fixing a vector field of the form
\[\frac{1}{\alpha}\frac{\partial}{\partial x}+\frac{\beta}{\alpha}\frac{ \partial}{\partial\theta},\quad\alpha,\beta>0,\]
must be a translation. Since the conjugation of a translation by a twisting diffeomorphism \(F_{\beta}\) is still a translation, any two isomorphic measure expanders \((F_{\beta}^{*}(\mathbb{R}\times S^{1}),e^{\alpha x}dx\otimes\nu_{i},\frac{1}{\alpha}\frac{\partial}{\partial x})\) must be isomorphic via a translation. Repeating the calculation in the proof of Lemma 4.4, we deduce that there is some \(\lambda>0\) and an _orientation preserving_ isometry \(\varphi:S^{1}\to S^{1}\) such that \(\nu_{1}\) and \(\nu_{2}\) satisfy (4.2).
For cases (Ci) and (Cii), any conformal automorphism fixing the vector field must also fix the origin, and hence will restrict to a conformal automorphism on the punctured plane. These cases therefore follow from (Bii) and (Biii).
Combining the entirety of Section 4, this completes the proof of Theorem 1.4.
## 5 Gradient expanding solitons
Suppose \((M,g,\nabla f)\) is an expanding gradient Ricci soliton. If we rotate the vector field by \(90\) degrees then we get a Killing field; see e.g. [4, Lemma 3.1]. Therefore, any measure expander that corresponds to a gradient soliton must have the property that this rotated vector field leaves the measure invariant. Since Killing fields on complete manifolds are necessarily complete themselves, this immediately rules out any of the measure expanders from (A) of Theorem 1.4.
In cases (Biii) and (Cii), for the corresponding soliton to be gradient, the rotated vector field \(-\frac{\beta}{\alpha}\frac{\partial}{\partial x}+\frac{1}{\alpha}\frac{ \partial}{\partial\theta}\) must be a Killing field, which has flow \(f_{s}(x,\theta):=(x-\frac{\beta s}{\alpha},\theta+\frac{s}{\alpha})\) for \(s\in\mathbb{R}\). Since the measure \(\mu\) must be invariant under this flow, we can deduce that
\[\mu\left((0,1)\times S^{1}\right)=(f_{\alpha})_{*}\mu\left((0,1)\times S^{1} \right)=\mu\left((\beta,1+\beta)\times S^{1}\right). \tag{5.1}\]
Furthermore, using that \(\mu\) has the form
\[\mu=(F_{\beta})_{*}\left(e^{\alpha x}dx\otimes\nu\right),\]
for some \(\nu\in\mathcal{R}(S^{1})\), and that \(F_{\beta}^{-1}\left(I\times S^{1}\right)=I\times S^{1}\) for any interval \(I\), when substituted into (5.1) we deduce that
\[\frac{1}{\alpha}(e^{\alpha}-1)\nu(S^{1})=\mu\left((0,1)\times S^{1}\right)= \mu\left((\beta,1+\beta)\times S^{1}\right)=\frac{1}{\alpha}e^{\alpha\beta}(e ^{\alpha}-1)\nu(S^{1}),\]
and hence \(\nu=0\), which is a contradiction. Therefore, the only cases in Theorem 1.4 that could correspond to gradient solitons are (Bi), (Bii) and (Ci).
In case (Bi), the rotated vector field \(\frac{\partial}{\partial y}\) must be a Killing field, and so the fibre measure \(\nu\) must be invariant under translation. Therefore, up to isomorphism of the measure expander, \(\nu\) is just the Lebesgue measure \(dy\) on \(\mathbb{R}\).
Similarly, in cases (Bii) and (Ci) the fibre measure \(\nu\) must be invariant under rotation, and so up to isomorphism of the measure expander is just the quotient of the Lebesgue measure \(d\theta\) on \(S^{1}\).
We have proved the following.
**Theorem 5.1**.: _Suppose that \((M,g,\nabla f)\) is a nontrivial expanding gradient Ricci soliton, and \((M,\mu,\nabla f)\) the corresponding measure expander as in Theorem 1.3. Then \((M,\mu,\nabla f)\) is isomorphic to one of the following distinct measure expanders:_
1. \(\left(\mathbb{C},e^{x}dx\otimes dy,\frac{\partial}{\partial x}\right)\)_, inducing the universal cover of the soliton emanating from the punctured plane._
2. \(\left(\mathbb{R}\times S^{1},e^{\alpha x}dx\otimes d\theta,\frac{1}{\alpha} \frac{\partial}{\partial x}\right)\)_, for each_ \(\alpha>0\)_, inducing solitons with one cusp end and one conical end._
3. \(\left(\mathbb{C},e^{\alpha x}dx\otimes d\theta,\frac{1}{\alpha}\frac{\partial }{\partial x}\right)\)_, for each_ \(\alpha>0\)_, inducing solitons having one conical end, with_ \(\alpha=2\) _giving the Gaussian soliton._
The argument above shows that any nontrivial expanding gradient soliton is induced by one of the examples (1), (2) or (3), but it is worth verifying that every measure expander in this list really does induce a _gradient_ soliton. This follows because the solitons will inherit the invariance under vertical translations from the measures \(\mu\), and this allows us to define a soliton potential function \(f\), depending on \(x\) but not on \(y\) (or \(\theta\)), simply by integrating the ODE \(\nabla f=X\) along a horizontal line.
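For instance, in case (3) with \(\alpha=2\) the metric \(g\) is flat, so \(\mathrm{Ric}(g)=0\) and \(\mathcal{L}_{\nabla f}(g)=2\,\mathrm{Hess}f\), and the soliton equation of Definition 1.1 reduces to

\[2\,\mathrm{Hess}f=g\quad\Longrightarrow\quad f(x,y)=\tfrac{1}{4}(x^{2}+y^{2})+\mathrm{const},\qquad\nabla f=\tfrac{r}{2}\tfrac{\partial}{\partial r}=X,\]

recovering the standard potential of the Gaussian soliton.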
This recovers the known classification of expanding _gradient_ solitons in two dimensions (e.g. [4, Chapter 3]).
|
2310.11964 | AMR Parsing with Causal Hierarchical Attention and Pointers | Translation-based AMR parsers have recently gained popularity due to their
simplicity and effectiveness. They predict linearized graphs as free texts,
avoiding explicit structure modeling. However, this simplicity neglects
structural locality in AMR graphs and introduces unnecessary tokens to
represent coreferences. In this paper, we introduce new target forms of AMR
parsing and a novel model, CHAP, which is equipped with causal hierarchical
attention and the pointer mechanism, enabling the integration of structures
into the Transformer decoder. We empirically explore various alternative
modeling options. Experiments show that our model outperforms baseline models
on four out of five benchmarks in the setting of no additional data. | Chao Lou, Kewei Tu | 2023-10-18T13:44:26Z | http://arxiv.org/abs/2310.11964v1 | # AMR Parsing with Causal Hierarchical Attention and Pointers
###### Abstract
Translation-based AMR parsers have recently gained popularity due to their simplicity and effectiveness. They predict linearized graphs as free texts, avoiding explicit structure modeling. However, this simplicity neglects structural locality in AMR graphs and introduces unnecessary tokens to represent coreferences. In this paper, we introduce new target forms of AMR parsing and a novel model, CHAP, which is equipped with causal hierarchical attention and the pointer mechanism, enabling the integration of structures into the Transformer decoder. We empirically explore various alternative modeling options. Experiments show that our model outperforms baseline models on four out of five benchmarks in the setting of no additional data.
## 1 Introduction
Abstract Meaning Representation Banarescu et al. (2013) is a semantic representation of natural language sentences typically depicted as directed acyclic graphs, as illustrated in Fig. 1(a). This representation is both readable and broad-coverage, attracting considerable research attention across various domains, including information extraction Zhang and Ji (2021); Xu et al. (2022), summarization Hardy and Vlachos (2018); Liao et al. (2018), and vision-language understanding Schuster et al. (2015); Choi et al. (2022). However, the inherent flexibility of graph structures makes AMR parsing, i.e., translating natural language sentences into AMR graphs, a challenging task.
The development of AMR parsers has been boosted by recent research on pretrained sequence-to-sequence (seq2seq) models. Several studies, categorized as translation-based models, show that fine-tuning pretrained seq2seq models to predict linearized graphs as if they are free texts (e.g., examples in Tab. 1.a-b) can achieve competitive or even superior performance Konstas et al. (2017); Xu et al. (2020); Bevilacqua et al. (2021); Lee et al. (2023). This finding has spurred a wave of subsequent efforts to design more effective training strategies that maximize the potential of pretrained decoders Bai et al. (2022); Cheng et al. (2022); Wang et al. (2022); Chen et al. (2022), thereby sidelining the exploration of more suitable decoders for graph generation. Contrary to preceding translation-based models, we contend that explicit structure modeling within pretrained decoders remains beneficial in AMR parsing. To our knowledge, the Ancestor parser Yu and Gildea (2022) is the only translation-based model contributing to explicit structure modeling, which introduces shortcuts to access ancestors in the graph. However, AMR graphs contain more information than just ancestors, such as siblings and coreferences, resulting in suboptimal modeling.
In this paper, we propose CHAP, a novel translation-based AMR parser distinguished by three innovations. Firstly, we introduce new target forms of AMR parsing. As demonstrated in Tab. 1.c-e, we use multiple layers to capture different semantics, such that each layer is simple and concise. Particularly, the base layer, which encapsulates all meanings except for coreferences (or reentrancies), is a tree-structured representation, enabling more convenient structure modeling than the graph structure of AMR. Meanwhile, coreferences are presented through pointers, circumventing several shortcomings associated with the variable-based coreference representation (see Sec. 3 for more details) used in all previous translation-based models. Secondly, we propose Causal Hierarchical Attention (CHA), the core mechanism of our incremental structure modeling, inspired by Transformer Grammars Sartran et al. (2022). CHA describes a procedure of continuously composing child nodes into their parent nodes and encoding new nodes with all uncomposed nodes, as illustrated in Fig. 2. Unlike the causal attention in translation-based models, which allows a token to interact with all its preceding tokens, CHA incorporates a strong inductive bias of recursion, composition, and graph topology. Thirdly, deriving from transition-based AMR parsers Zhou et al. (2021a, 2021b), we introduce a pointer encoder for encoding histories and a pointer net for predicting coreferences, which is proven to be an effective solution for generalizing to a variable-size output space Vinyals et al. (2015); See et al. (2017).
We propose various alternative modeling options of CHA and strategies for integrating CHA with existing pretrained seq2seq models and investigate them via extensive experiments. Ultimately, our model CHAP achieves superior performance on two in-distribution and three out-of-distribution benchmarks. Our code is available at [https://github.com/LouChao98/chap_amr_parser](https://github.com/LouChao98/chap_amr_parser).
## 2 Related Work
### AMR Parsing
Most recent AMR parsing models generate AMR graphs via a series of local decisions. Transition-based models Ballesteros and Al-Onaizan (2017); Naseem et al. (2019); Fernandez Astudillo et al. (2020); Zhou et al. (2021a, 2021b) and translation-based models Konstas et al. (2017); Xu et al. (2020); Bevilacqua et al. (2021); Lee et al. (2023) epitomize local models as they are trained with teacher forcing, optimizing only next-step predictions, and rely on greedy decoding algorithms, such as greedy search and beam search. Particularly, transition-based models predict actions permitted by a transition system, while translation-based models predict AMR graph tokens as free texts. Some factorization-based models are also local Cai and Lam (2019, 2020), sequentially composing subgraphs into bigger ones. We discern differences in four properties among previous local models and our model in Tab. 2:
**Trainability** Whether additional information is required for training. Transition-based models rely on word-node alignment to define the gold action sequence.
**Structure modeling** Whether structures are modeled explicitly in the decoder. Transition-based models encode action histories like texts without considering graph structures, with a few exceptions Zhou et al. (2021). Besides, translation-based models opt for compatibility with pretrained decoders, prioritizing this over explicit structure modeling.
| **ID** | **Name** | **Representation** |
| --- | --- | --- |
| a | PM | ( a / alpha :arg0 ( b / beta ) :arg1 ( g / gamma :arg2 b ) ) |
| b | S-DFS | ( <R0> alpha :arg0 ( <R1> beta ) :arg1 ( <R2> gamma :arg2 <R1> ) ) |
| c-e | Ours | base layer plus coref pointer layer(s); _original rows not recoverable_ |
**Pretrained decoder** Whether pretrained decoders can be leveraged.
**Variable-free** Whether there are variable tokens in the target representation. Transition-based models, factorization-based models and ours generate coreference pointers, obviating the need to introduce variables.
### Transformer Grammar
Transformer Grammars (TGs; Sartran et al., 2022) are a novel class of language models that simultaneously generate sentences and constituency parse trees, in the fashion of transition-based parsing. The base layer of Tab. 0(c) can be viewed as an example action sequence. There are three types of actions in TG: (1) the token "(" represents the action \(\mathsf{ONT}\), opening a nonterminal; (2) the token ")" represents the action \(\mathsf{CNT}\), closing the nearest open nonterminal; and (3) all other tokens (e.g., a and :arg0) represent the action \(\mathsf{T}\), generating a terminal. TG carries out top-down generation, where a nonterminal is allocated before its children. We will also explore a bottom-up variant in Sec. 3.4. Several studies have already attempted to generate syntax-augmented sequences (Aharoni and Goldberg, 2017; Qian et al., 2021). However, TG differentiates itself from prior research through its unique simulation of stack operations in transition-based parsing, which is implemented by enforcing a specific instance of CHA. A TG-like CHA is referred to as \(\Downarrow\)double in this paper and we will present technical details in Sec. 3.3 along with other variants.
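To make the token-action correspondence concrete, here is a minimal sketch (our illustration; only the action names ONT, CNT and T come from the paper):

```python
def tokens_to_actions(tokens):
    """Map a linearized tree to TG actions, tracking the stack of open
    nonterminals as a transition system would."""
    depth, actions = 0, []
    for tok in tokens:
        if tok == "(":
            actions.append("ONT")   # open a nonterminal
            depth += 1
        elif tok == ")":
            assert depth > 0, "CNT with no open nonterminal"
            actions.append("CNT")   # close the nearest open nonterminal
            depth -= 1
        else:
            actions.append("T")     # generate a terminal
    assert depth == 0, "unbalanced linearization"
    return actions

print(tokens_to_actions("( alpha :arg0 ( beta ) )".split()))
# ['ONT', 'T', 'T', 'ONT', 'T', 'CNT', 'CNT']
```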
## 3 Structure Modeling
We primarily highlight two advantages of incorporating structured modeling into the decoder. Firstly, the sequential order and adjacency of previous linearized forms mismatch the locality of real graph structures, making it hard for the Transformer decoder to understand graph data. Specifically, adjacent nodes in an AMR graph exhibit strong semantic relationships, but they could be distant in the linearized form (e.g., person and tour-01 in Fig. 1(a)). Conversely, tokens closely positioned in the linearized form may be far apart in the AMR graph (e.g., employ-01 and tour-01 in Fig. 1(a)). Secondly, previous models embed variables into the linearized form (e.g., b in Tab. 1.a and <R1> in Tab. 1.b) and represent coreferences (or reentrancies) by reusing the same variables. However, the literal value of variables is inconsequential. For example, in the PENMAN form, (a / alpha :arg0 (b / beta)) conveys the same meaning as (n1 / alpha :arg0 (n2 / beta)). Furthermore, the usage of variables brings up problems regarding generalization (Wong and Mooney, 2007; Poelman et al., 2022). For instance, in the SPRING\({}_{\mathsf{DFS}}\) form, <R0> invariably comes first and appears in all training samples, while <R100> is considerably less frequent.
Figure 2: Demonstration of Causal Hierarchical Attention. We draw the aggregation on graphs performed at four steps in (a)-(d) and highlight the corresponding token for each step in (e) with green boxes. The generation order is depth-first and left-to-right: alpha\(\rightarrow\)beta\(\rightarrow\)delta\(\rightarrow\)epsilon\(\rightarrow\)gamma\(\rightarrow\)zeta\(\rightarrow\)eta. The node of interest at each step is highlighted in blue, gathering information from all solid nodes. Gray dashed nodes, on the other hand, are invisible.
### Multi-layer Target Form
As shown in Tab. 1.c, we incorporate a new layer, named the coreference (coref) layer, on top of the conventional one produced by a DFS linearization, named the base layer1. The coref layer serves to represent coreferences, in which a pointer points from a mention to its nearest preceding mention, and the base layer encapsulates all other meanings. From a graph perspective, a referent is replicated into as many new nodes as it has references, and these copies are linked by newly introduced coref pointers, as illustrated in Fig. 1(b). We argue that our forms are more promising because they keep meaningless tokens (i.e., variables) from cluttering up the base layer, yielding several beneficial byproducts: (1) it shortens the representation length; (2) it aligns the representation more closely with natural language; and (3) it allows the base layer to be interpreted as trees, a vital characteristic for our structure modeling. Tab. 1.d and e are two variants of Tab. 1.c. These three forms are designed to support different variants of CHA, which will be introduced in Sec. 3.3 and 3.4.
Footnote 1: We use the DFS order provided in the AMR datasets.
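To make the two-layer form concrete, below is a minimal sketch of the conversion from a variable-bearing DFS linearization to a base layer plus coref pointers (our own illustration, not the authors' code; the function name to_two_layer and the token-level heuristics are assumptions):

```
def to_two_layer(tokens):
    """Strip variables from a tokenized DFS linearization and record a coref
    pointer from each re-used variable to its nearest preceding mention."""
    base, pointers, var_pos = [], [], {}
    i = 0
    while i < len(tokens):
        if i + 1 < len(tokens) and tokens[i + 1] == "/":  # "var / concept"
            var_pos[tokens[i]] = len(base)                # first mention of var
            base.append(tokens[i + 2]); pointers.append(-1)
            i += 3
        elif tokens[i] in var_pos:                        # re-used variable
            src = var_pos[tokens[i]]
            base.append(base[src])                        # repeat the concept
            pointers.append(src)                          # coref pointer
            var_pos[tokens[i]] = len(base) - 1            # keep nearest mention
            i += 1
        else:                                             # brackets, relations
            base.append(tokens[i]); pointers.append(-1)
            i += 1
    return base, pointers

base, p = to_two_layer("( a / alpha :arg0 ( b / beta ) :arg1 ( g / gamma :arg2 b ) )".split())
# base: ( alpha :arg0 ( beta ) :arg1 ( gamma :arg2 beta ) )  -- variable-free
# p[10] == 4: the second beta points back to the first one
```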
### Causal Hierarchical Attention
Causal Hierarchical Attention (CHA) is situated in the decoder and maintains structures during generation. For each token, CHA performs one of the two actions, namely _compose_ and _expand_, as demonstrated in Fig. 2. The _compose_ operation is performed once all children of a parent node have been generated. It aggregates these children to obtain a comprehensive representation of the subtree under the parent node, subsequently setting the children invisible in future attention. On the other hand, the _expand_ operation aggregates all visible tokens to derive the representation of the subsequent token.
We note the subtle distinction between the term _parent_ in the DAG representation (e.g., Fig. 1(a)) and in target forms (e.g., Tab. 1 and Fig. 3(a)).2 Recall that TG uses "(" (ONT) to indicate a new nonterminal, which is the parent node of all following tokens before a matched ")" (CNT). This implies that alpha in Tab. 1.c is considered as a child node, being a sibling of :arg0, rather than a parent node governing them. This discrepancy does not impact our modeling because we can treat labels of nonterminals as a particular type of child nodes, which are absorbed into the parent node when drawing the DAG representation.
Footnote 2: More complicated examples can be found in Appx. A.1, which are \(\Downarrow\)single and \(\Uparrow\) trees of the base layer in Fig. 2.
The two actions can be implemented by modifying attention masks conveniently. Specifically, the _compose_ operation masks out attention to tokens that are not destined to be composed, as depicted in the fourth row of Fig. 3(c). Moreover, the _expand_ operation masks out attention to tokens that have been composed in previous steps, as depicted in the top three rows and the fifth row of Fig. 3(c).
In subsequent sections, we will explore two classes of generation procedures that utilize different target forms and variants of CHA, akin to top-down (Dyer et al., 2016; Nguyen et al., 2021; Sartran et al., 2022) and bottom-up (Yang and Tu, 2022) parsing. Note that the top-down and bottom-up directions are defined with respect to the syntax tree of linearized AMR, instead of the AMR graph. In a syntax tree, all tokens in an AMR graph (e.g., alpha and :arg0) are leaves and their compositions are non-leaf nodes.
Figure 3: The target forms and the attention mask of the three variants of CHA for the tree in (a). Orange cells represent the _compose_ operation, while blue cells with plaid represent the _expand_ operation. White cells are masked out. The vertical and horizontal axes represent attending and attended tokens, respectively.
### Top-down generation
Most prior studies utilize paired brackets to denote nested structures, as shown in Tab. 1.a-d. Section 2.2 outlines that, in left-to-right decoding, due to the prefix "(", this type of representation results in top-down tree generation.
We consider two modeling options of top-down generation, \(\Downarrow\)single (Fig. 3(c)) and \(\Downarrow\)double (Fig. 3(b)), varying on actions triggered by ")". More precisely, upon seeing a ")", \(\Downarrow\)single executes a _compose_, whereas \(\Downarrow\)double executes an additional _expand_ after the _compose_. For other tokens, both \(\Downarrow\)single and \(\Downarrow\)double execute an _expand_ operation. Because the decoder performs one attention for each token, in \(\Downarrow\)double, each ")" is duplicated to represent _compose_ and _expand_ respectively, i.e., ")" becomes ")\({}_{1}\) )\({}_{2}\)". We detail the procedure of generating \(M_{\text{CHA}}\) for \(\Downarrow\)single in Alg. 1. The procedure for \(\Downarrow\)double can be found in Sartran et al.'s (2022) Alg. 1.
The motivation for the two variants is as follows. In a strict leaf-to-root information aggregation procedure, which is adopted in many studies on tree encoding (Tai et al., 2015; Drozdov et al., 2019; Hu et al., 2021; Zhou et al., 2022), a parent node only aggregates information from its children, remaining unaware of other generated structures (e.g., beta is unaware of alpha in Fig. 3(b)). However, when new nodes are being expanded, utilizing all available information could be a more reasonable approach (e.g., gamma in Fig. 3(c)). The situation with CHA is more flexible. Recall that all child nodes are encoded with the _expand_ action, which aggregates information from all visible nodes, such that information of non-child nodes is leaked to the parent node during composition. \(\Downarrow\)single relies on the neural network's capability to encode all necessary information through this leakage, while \(\Downarrow\)double employs an explicit _expand_ to allow models to directly revisit their histories.
```
Data: sequence of tokens \(t\) with length \(N\)
Result: attention mask \(M_{\text{CHA}}\in\mathbb{R}^{N\times N}\)
\(S\leftarrow[\:]\)  \(\rhd\) empty stack
\(M_{\text{CHA}}\leftarrow-\infty\)
for \(i\gets 1\) to \(N\) do
    if \(t[i]\) = ')' then  \(\rhd\) compose
        \(j\gets i\)
        while \(t[j]\neq\) '(' do
            \(M_{\text{CHA}}[i,j]\gets 0\)
            \(j\gets S.pop()\)
        end while
        \(M_{\text{CHA}}[i,j]\gets 0\)
        \(S.push(i)\)
    else
        \(S.push(i)\)
        for \(j\in S\) do  \(\rhd\) expand
            \(M_{\text{CHA}}[i,j]\gets 0\)
        end for
    end if
end for
return \(M_{\text{CHA}}\)
```
**Algorithm 1**\(M_{\text{CHA}}\) for \(\Downarrow\)single.
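For concreteness, here is a runnable NumPy sketch of Alg. 1 (the function name is ours; tokens are assumed to be given as a list of strings):

```
import numpy as np

def cha_mask_single(tokens):
    """Attention mask of the (top-down, single) variant: 0 where attention
    is allowed and -inf where it is masked out."""
    n = len(tokens)
    mask = np.full((n, n), -np.inf)
    stack = []  # indices of tokens that are still visible
    for i, tok in enumerate(tokens):
        if tok == ")":  # compose: attend to the children of the closed subtree
            j = i
            while tokens[j] != "(":
                mask[i, j] = 0.0
                j = stack.pop()
            mask[i, j] = 0.0   # the matching "(" is attended as well
            stack.append(i)    # ")" now stands for the composed subtree
        else:                  # expand: attend to every still-visible token
            stack.append(i)
            for j in stack:
                mask[i, j] = 0.0
    return mask

mask = cha_mask_single("( alpha :arg0 ( beta ) :arg1 gamma )".split())
```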
### Bottom-up generation
In the bottom-up generation, the parent node is allocated after all child nodes have been generated. This process enables the model to review all yet-to-be-composed tokens before deciding which ones should be composed into a subtree, in contrast to the top-down generation, where the model is required to predict the existence of a parent node without seeing its children. The corresponding target form, as illustrated in Tab. 1.e, contains no brackets. Instead, a special token \(\blacksquare\) is placed after the rightmost child node of each parent node, with a pointer pointing to the leftmost child node. We execute the _compose_ operation for \(\blacksquare\) and the _expand_ operation for other tokens. The generation of the attention mask (Fig. 3(d)) is analogous to \(\Downarrow\)single, but we utilize pointers in place of left brackets to determine the left boundaries of subtrees. The exact procedure can be found in Appx. B.
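Analogously, a sketch of the bottom-up mask, where left-boundary pointers replace left brackets (again our own illustration; left_ptr[i] is assumed to give the leftmost child index of the subtree closed by a ■ token):

```
import numpy as np

def cha_mask_bottom_up(tokens, left_ptr):
    """tokens[i] == "■" closes a subtree whose leftmost child is left_ptr[i]."""
    n = len(tokens)
    mask = np.full((n, n), -np.inf)
    stack = []
    for i, tok in enumerate(tokens):
        if tok == "■":  # compose back to the pointed left boundary
            while stack and stack[-1] >= left_ptr[i]:
                mask[i, stack.pop()] = 0.0
            mask[i, i] = 0.0
            stack.append(i)  # "■" represents the composed subtree
        else:            # expand: attend to every still-visible token
            stack.append(i)
            for j in stack:
                mask[i, j] = 0.0
    return mask
```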
## 4 Parsing Model
Our parser is based on BART (Lewis et al., 2020), a pretrained seq2seq model. We make three modifications to BART: (1) we add a new module in the decoder to encode generated pointers, (2) we enhance decoder layers with CHA, and (3) we use the pointer net to predict pointers.
### Encoding Pointers
The target form can be represented as a tuple of \((t,p)\)3, where \(t\) and \(p\) are the sequence of the base layer and the coref layer, respectively, such that each \(p_{i}\) is the index of the pointed token. We define \(p_{i}=-1\) if there is no pointer at index \(i\).
In the BART model, \(t\) is encoded using the token embedding. However, no suitable module exists for encoding \(p\). To address this issue, we introduce a multi-layer perceptron, denoted as \(\text{MLP}_{p}\), which takes in the token and position embeddings of the pointed tokens and then outputs the embedding of \(p\). Notably, if \(p_{i}=-1\), the embedding is set to \(0\). All embeddings, including those of \(t\), \(p\) and positions, are added together before being fed into subsequent modules.
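A minimal PyTorch sketch of this module (tensor shapes, the clamping trick, and the two-layer MLP are our assumptions; only the zero embedding for \(p_{i}=-1\) is taken from the text):

```
import torch
import torch.nn as nn

class PointerEmbedding(nn.Module):
    """MLP_p: embed the coref layer p from the pointed tokens' embeddings."""
    def __init__(self, d_model):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                 nn.Linear(d_model, d_model))

    def forward(self, p, tok_emb, pos_emb):
        # p: (B, N) pointed indices, -1 for "no pointer"
        # tok_emb, pos_emb: (B, N, d_model)
        idx = p.clamp(min=0).unsqueeze(-1).expand(-1, -1, tok_emb.size(-1))
        pointed = torch.gather(tok_emb + pos_emb, dim=1, index=idx)
        out = self.mlp(pointed)
        return out.masked_fill((p < 0).unsqueeze(-1), 0.0)  # zeros where p_i = -1
```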
### Augmenting Decoder with CHA
We explore three ways to integrate CHA in the decoder layer, as shown in Fig. 4. The _inplace_ architecture replaces the attention mask of some attention heads with \(M_{\text{CHA}}\) in the original self-attention module without introducing new parameters. However, this affects the normal functioning of the replaced heads such that the pretrained model is disrupted.
Alternatively, we can introduce adapters into decoder layers (Houlsby et al., 2019). In the _parallel_ architecture, an adapter is introduced in parallel to the original self-attention module. In contrast, an adapter is positioned subsequent to the original module in the _pipeline_ architecture. Our adapter is defined as follows:
\[x_{1} =\text{FFN}_{1}(h_{i}),\] \[x_{2} =\text{Attention}(W^{Q}x_{1},W^{K}x_{1},W^{V}x_{1},M_{\text{CHA}}),\] \[h_{o} =\text{FFN}_{2}(\text{LayerNorm}(x_{1}+x_{2})),\]
where \(W^{Q},W^{K},W^{V}\) are query/key/value projection matrices, \(\text{FFN}_{1}/\text{FFN}_{2}\) are down/up projections, \(h_{i}\) is the input hidden states and \(h_{o}\) is the output hidden states.
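A sketch of the adapter in PyTorch (the head count, hidden size, and zero initialization of \(\text{FFN}_{2}\) follow Sec. 5.1; using nn.MultiheadAttention and omitting dropout are our assumptions):

```
import torch.nn as nn

class CHAAdapter(nn.Module):
    def __init__(self, d_model, d_hidden=512, n_heads=4):
        super().__init__()
        self.ffn1 = nn.Linear(d_model, d_hidden)   # FFN_1: down projection
        self.attn = nn.MultiheadAttention(d_hidden, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_hidden)
        self.ffn2 = nn.Linear(d_hidden, d_model)   # FFN_2: up projection
        nn.init.zeros_(self.ffn2.weight)           # zero init (Sec. 5.1)
        nn.init.zeros_(self.ffn2.bias)

    def forward(self, h, cha_mask):
        # h: (B, N, d_model); cha_mask: (N, N) additive mask with 0/-inf entries
        x1 = self.ffn1(h)
        x2, _ = self.attn(x1, x1, x1, attn_mask=cha_mask)
        return self.ffn2(self.norm(x1 + x2))
```

In the parallel architecture, this adapter runs alongside the original self-attention module and its output is added back; in the pipeline architecture, it instead consumes the original module's output.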
### Predicting Pointer
Following previous work (Vinyals et al., 2015; Zhou et al., 2021), we reinterpret decoder self-attention heads as a pointer net. However, unlike the previous work, we use the average attention probabilities from multiple heads as the pointer probabilities instead of relying on a single head. Our preliminary experiments indicate that this modification results in a slight improvement.
A cross-entropy loss between the predicted pointer probabilities and the ground truth pointers is used for training. We disregard the associated loss at positions that do not have pointers and exclude their probabilities when calculating the entire pointer sequence's probability.
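A sketch of the averaged-head pointer probabilities and the masked loss (tensor shapes, the probability clamping, and the function name are our assumptions):

```
import torch
import torch.nn.functional as F

def pointer_loss(attn_weights, head_ids, gold_p):
    """attn_weights: (B, H, N, N) decoder self-attention probabilities;
    gold_p: (B, N) gold pointer indices, -1 where no pointer exists."""
    probs = attn_weights[:, head_ids].mean(dim=1)   # average the selected heads
    logp = probs.clamp_min(1e-9).log()
    valid = (gold_p >= 0).float()                   # skip positions w/o pointers
    nll = F.nll_loss(logp.transpose(1, 2), gold_p.clamp(min=0),
                     reduction="none")              # (B, N)
    return (nll * valid).sum() / valid.sum().clamp(min=1.0)
```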
### Training and Inference
We optimize the sum of the standard sequence generation loss and the pointer loss:
\[L=L_{\text{seq2seq}}+\alpha L_{\text{pointer}},\]
where \(\alpha\) is a scalar hyperparameter.
For decoding, the probability of a hypothesis is the product of the probabilities of the base layer, the coref sequence, and the optional struct layer. We enforce a constraint during decoding to ensure the validity of \(M_{\text{CHA}}\): the number of ) should not surpass the number of (, and two constraints to ensure the well-formedness of pointers: (1) coreference pointers can only point to positions with the same token, and (2) left boundary pointers in bottom-up generation cannot point to AMR relations (e.g., :ARG0).
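As an illustration, the bracket constraint amounts to a simple prefix check during beam search (a sketch, not the authors' decoding code):

```
def valid_bracket_prefix(tokens):
    """A decoded prefix is valid only if ")" never outnumbers "("."""
    depth = 0
    for t in tokens:
        depth += (t == "(") - (t == ")")
        if depth < 0:
            return False
    return True
```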
## 5 Experiment
### Setup
**Datasets** We conduct experiments on two in-distribution benchmarks: (1) AMR 2.0 (Knight et al., 2017), which contains 36521, 1368 and 1371 samples in the training, development and test sets, and (2) AMR 3.0 (Knight et al., 2020), which has 55635, 1722 and 1898 samples in the training, development and test sets, as well as three out-of-distribution benchmarks: (1) _The Little Prince_ (TLP), (2) BioAMR and (3) New3. Besides, we also explore the effects of using silver training data following previous work. To obtain silver data, we sample 200k sentences from the One Billion Word Benchmark data (Chelba et al., 2014) and use a trained CHAP parser to annotate AMR graphs.
**Metrics** We report the Smatch score (Cai and Knight, 2013) and other fine-grained metrics (Damonte et al., 2017) averaged over three runs with different random seeds4. All these metrics are invariant to different graph linearizations and indicate better performance when they are higher. Additionally, to provide a more accurate comparison, we include the standard deviation (std dev) if Smatch scores are close.
Footnote 4: We use the amr-evaluation-enhanced software to compute scores, which is available at [https://github.com/Chunchuan/amr-evaluation-tool-enhanced](https://github.com/Chunchuan/amr-evaluation-tool-enhanced).
**Pre-/post-processing** Owing to the sparsity of wiki tags5 in the training set, we follow previous work to remove wiki tags from AMR graphs
in the pre-processing, and use the BLINK entity linker Wu et al. (2020) to add wiki tags in the post-processing6. In the post-processing, we also use the amrlib software7 to ensure graph validity.
Footnote 6: We do not add wiki tags in analytical experiments.
Footnote 7: [https://github.com/bjascob/amrlib](https://github.com/bjascob/amrlib)
**Implementation details** We use the BART-base model in analytical experiments and the BART-large model in comparison with baselines. We modify all decoder layers when using the BART-base model, while only modifying the top two layers when using the BART-large model8. For the parallel and pipeline architectures, attention modules in adapters have four heads and a hidden size of 512. For the inplace architecture, four attention heads are set to perform CHA. We reinterpret four self-attention heads of the top decoder layer as a pointer net. The weight for the pointer loss \(\alpha\) is set to \(0.075\). We use a zero initialization for \(\text{FFN}_{2}\) and \(\text{MLP}_{p}\), such that the modified models are equivalent to the original BART model at the beginning of training. More details are available in Appx. D.
Footnote 8: The training becomes unstable if we modify all decoder layers of the BART-large model.
**Baselines** SPRING Bevilacqua et al. (2021) is a BART model fine-tuned with an augmented vocabulary and improved graph representations (as shown in Tab. 1.b). Ancestor Yu and Gildea (2022) enhances the decoder of SPRING by incorporating ancestral information of graph nodes. BiBL Cheng et al. (2022) and AMRBART Bai et al. (2022) augment SPRING with supplementary training losses. LeakDistill Vasylenko et al. (2023)9 trains a SPRING using leaked information and then distills it into a standard SPRING.
Footnote 9: Contemporary work.
All these baselines are translation-based models. Transition-based and factorization-based models are not included due to their inferior performance.
### Results on Alternative Modeling Options
**Structural modeling** We report the results of different CHA options in Tab. 3. \(\Downarrow\)double exhibits a slightly better performance than \(\Uparrow\) and \(\Downarrow\)single. Besides, we find that breaking structural localities, i.e., (1) allowing parent nodes to attend to nodes other than their immediate children (row 3, \(-0.13\)) and (2) allowing non-parent nodes to attend to nodes that have been composed (row 2, \(-0.07\)), negatively impacts the performance. We present the attention masks of these two cases in Appx. A.2.
**Architecture** In Tab. 4, we can see that the inplace architecture brings little improvement over the baseline, w/o CHA. This suggests that changing the functions of pretrained heads can be harmful. We also observe that the parallel architecture performs slightly better than the pipeline architecture.
Based on the above results, we present CHAP,
\begin{table}
\begin{tabular}{l|c c} \hline \hline
**Architecture** & **Smatch** & **std dev** \\ \hline Parallel & 82.63 & 0.02 \\ Pipeline & 82.59 & 0.05 \\ Inplace & 82.43 & 0.04 \\ w/o CHA & 82.38 & 0.12 \\ \hline \hline \end{tabular}
\end{table}
Table 4: The influence of different architectures.
Figure 4: Three architectures for applying CHA to pretrained decoder layers. Residual connections and layernorms are omitted.
\begin{table}
\begin{tabular}{l|c} \hline \hline
**CHA** & **Smatch** \\ \hline \(\Downarrow\)single & 82.60 \\ expand \(\rightarrow\) causal & 82.53 \\ compose \(\rightarrow\) expand & 82.47 \\ \(\Downarrow\)double & 82.63 \\ \(\Uparrow\) & 82.57 \\ \hline \hline \end{tabular}
\end{table}
Table 3: The influence of different CHA.
which adopts the parallel adapter and uses \(\Downarrow\)double.
### Main Results
Tab. 5 shows results on in-distribution benchmarks. In the setting of no additional data (such that LeakDistill is excluded), CHAP outperforms all previous models by a 0.3 Smatch score on AMR 2.0 and 0.5 on AMR 3.0. Regarding fine-grained metrics, CHAP performs best on five metrics for AMR 3.0 and three for AMR 2.0. Compared to previous work, which uses alignment, CHAP matches LeakDistill on AMR 3.0 but falls behind it on AMR 2.0. One possible reason is that alignment as additional data is particularly valuable for a relatively small training set of AMR 2.0. We note that the contribution of LeakDistill is orthogonal to ours, and we can expect an enhanced performance by integrating their method with our parser. When using silver data, the performance of CHAP on AMR 2.0 can be significantly improved, achieving similar performance to LeakDistill. This result supports the above conjecture. However, on AMR 3.0, the gain from silver data is marginal as in previous work, possibly because AMR 3.0 is sufficiently large to train a model based on BART-large.
In out-of-distribution evaluation, CHAP is competitive with all baselines on both TLP and Bio, as shown in Tab. 6, indicating CHAP's strong generalization ability thanks to the explicit structure modeling.
### Ablation Study
An ablation study is presented in Table 7. The first four rows demonstrate that, when we exclude the
\begin{table}
\begin{tabular}{l c|c c c} \hline \hline
**Model** & **Extra Data** & **TLP** & **Bio** & **New3** \\ \hline SPRING & \(-\) & 77.3 & 59.7 & 73.7 \\ BiBL & \(-\) & 78.6 & 61.0 & 75.4 \\ BiBL & 200K & 78.3 & 61.1 & 75.4 \\ AMRBART & 200K & 76.9 & 63.2 & 76.9 \\ LeakDistill & A, 140K & 82.6 & 64.5 & \(-\) \\ \hline CHAP (ours) & \(-\) & 79.0 & 62.7 & 74.8 \\ CHAP (ours) & 200K & 79.8 & 63.5 & 75.1 \\ CHAP (ours) & \(-\) & 81.8 & 65.1 & \(-\) \\ CHAP (ours) & 200K & 82.7\({}^{\alpha}\) & 66.1\({}^{\beta}\) & \(-\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Test results on out-of-distribution benchmarks. The scores represented in grey cells derive from a model trained on AMR 2.0, whereas the remaining scores come from a model trained on AMR 3.0. \(-\): New3 is part of AMR 3.0, so these settings are excluded from OOD evaluation. \({}^{\alpha}\)Std dev on TLP is 0.14. \({}^{\beta}\)Std dev on Bio is 0.46.
\begin{table}
\begin{tabular}{l c|c|c c c c c c c} \hline \hline
**Model** & **Extra Data** & **Smatch** & **NoWSD** & **Wiki.** & **Conc.** & **NER** & **Neg.** & **Unlab** & **Recent.** & **SRL** \\ \hline \hline _AMR 2.0_ & & & & & & & & & & \\ SPRING & \(-\) & 83.8 & 84.4 & 84.3 & 90.2 & 90.6 & 74.4 & 86.1 & 70.8 & 79.6 \\ Ancestor & \(-\) & 84.8 & 85.3 & 84.1 & 90.5 & 91.8 & 74.0 & 88.1 & **75.1** & **83.4** \\ BiBL & \(-\) & 84.6 & 85.1 & 83.6 & 90.3 & **92.5** & 73.9 & 87.8 & 74.4 & 83.1 \\ LeakDistill & A & **85.7** & **86.2** & 83.9 & **91.0** & 91.1 & **76.8** & **88.6** & 74.2 & 81.8 \\ CHAP (ours) & \(-\) & 85.1 & 85.6 & **86.4** & 90.9 & 90.4 & 73.4 & 88.0 & 73.0 & 81.0 \\ \hline AMRBART & 200K & 85.4 & 85.8 & 81.4 & 91.2 & 91.5 & 74.0 & 88.3 & 73.5 & 81.5 \\ LeakDistill & A, 140K & **86.1** & **86.5** & 83.9 & **91.4** & **91.6** & 76.6 & **88.8** & **75.1** & **82.4** \\ CHAP (ours) & 200K & 85.8 & 86.1 & **86.3** & **91.4** & 80.4 & **78.3** & 88.6 & 73.9 & 81.8 \\ \hline _AMR 3.0_ & & & & & & & & & \\ SPRING & \(-\) & 83.0 & 83.5 & 82.7 & 89.8 & 87.2 & 73.0 & 85.4 & 70.4 & 78.9 \\ Ancestor & \(-\) & 83.5 & 84.0 & 81.5 & 89.5 & 88.9 & 72.6 & 86.6 & **74.2** & **82.2** \\ BiBL & \(-\) & 83.9 & 84.3 & 83.7 & 89.8 & **93.2** & 68.1 & 87.2 & 73.8 & 81.9 \\ LeakDistill & A & **84.5** & **84.9** & 80.7 & **90.5** & 88.5 & **73.7** & **87.5** & 73.1 & 80.7 \\ CHAP (ours) & \(-\) & 84.4\({}^{*}\) & 84.8 & **84.7** & **90.5** & 87.9 & 73.5 & 87.3 & 72.6 & 80.1 \\ \hline AMRBART & 200K & 84.2 & 84.6 & 78.9 & 90.2 & **88.5** & 72.1 & 87.1 & 72.4 & 80.3 \\ LeakDistill & A, 140K & **84.6** & 84.9 & 81.3 & **90.7** & 87.8 & 73.0 & **87.5** & **73.4** & **80.9** \\ CHAP (ours) & 200K & **84.6** & **85.0** & **84.5** & **90.7** & 88.4 & **75.2** & **87.5** & 73.1 & 80.7 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Fine-grained Smatch scores on in-domain benchmarks. Bold and underlined numbers represent the best and the second-best results, respectively. 'A' in the Extra Data column denotes alignment. *Std dev is 0.04. |
2305.04661 | Unleashing 3D Connectivity in Beyond 5G Networks with Reconfigurable
Intelligent Surfaces | Reconfigurable intelligent surfaces (RISs) bring various benefits to the
current and upcoming wireless networks, including enhanced spectrum and energy
efficiency, soft handover, transmission reliability, and even localization
accuracy. These remarkable improvements result from the reconfigurability,
programmability, and adaptation capabilities of RISs for fine-tuning radio
propagation environments, which can be realized in a cost- and energy-efficient
manner. In this paper, we focus on the upgrade of the existing fifth-generation
(5G) cellular network with the introduction of an RIS owning a full-dimensional
uniform planar array structure for unleashing advanced three-dimensional
connectivity. The deployed RIS is exploited for serving unmanned aerial
vehicles (UAVs) flying in the sky with ultra-high data rate, a challenging task
to be achieved with conventional base stations (BSs) that are designed mainly
to serve ground users. By taking into account the line-of-sight probability for
the RIS-UAV and BS-UAV links, we formulate the average achievable rate, analyze
the effect of environmental parameters, and make insightful performance
comparisons. Simulation results show that the deployment of RISs can bring
impressive gains and significantly outperform conventional RIS-free 5G
networks. | Jiguang He, Aymen Fakhreddine, Arthur S. de Sena, Yu Tian, Merouane Debbah | 2023-05-08T12:28:23Z | http://arxiv.org/abs/2305.04661v2 | # Unleashing 3D Connectivity in Beyond 5G Networks with Reconfigurable Intelligent Surfaces
###### Abstract
Reconfigurable intelligent surfaces (RISs) bring various benefits to the current and upcoming wireless networks, including enhanced spectrum and energy efficiency, soft handover, transmission reliability, and even localization accuracy. These remarkable improvements result from the reconfigurability, programmability, and adaptation capabilities of RISs for fine-tuning radio propagation environments, which can be realized in a cost- and energy-efficient manner. In this paper, we focus on the upgrade of the existing fifth-generation (5G) cellular network with the introduction of an RIS owning a full-dimensional uniform planar array structure for unleashing advanced three-dimensional connectivity. The deployed RIS is exploited for serving unmanned aerial vehicles (UAVs) flying in the sky with ultra-high data rate, a challenging task to be achieved with conventional base stations (BSs) that are designed mainly to serve ground users. By taking into account the line-of-sight probability for the RIS-UAV and BS-UAV links, we formulate the average achievable rate, analyze the effect of environmental parameters, and make insightful performance comparisons. Simulation results show that the deployment of RISs can bring impressive gains and significantly outperform conventional RIS-free 5G networks.
Beyond 5G, 3D connectivity, achievable rate, UAV, RIS
## I Introduction
Drones, also known as unmanned aerial vehicles (UAVs), have recently undergone a tremendous expansion, generating a wide range of emerging applications, such as goods delivery, urban air taxis, remote surveillance, border control, agricultural or industrial monitoring, and disaster relief [1]. Even though the aforementioned applications span distinct domains, the commonality among them is the demanding need for three-dimensional (3D) wireless connectivity for the transfer of real-time sensor data and control commands. Such connectivity must be reliable, secure, and supports high data rates, up to several hundred megabits per second (Mbps).
The current commercial fifth generation (5G) base stations (BSs) are customized with the primary purpose of providing two-dimensional (2D) coverage to users on the ground. Providing ubiquitous three-dimensional (3D) coverage and even full earth coverage is a hot topic for the next generations of wireless networks, i.e., beyond 5G and sixth generation (6G) [2]. To mitigate the lack of a dedicated infrastructure to serve flying UAVs, approaches that do not require substantial hardware upgrades or additional deployments to be borne by mobile network operators should be adopted. In other words, one should heavily rely on the existing 5G cellular networks and focus on their upgrade to meet the quality-of-service (QoS) requirements of UAV communications. To the best of our knowledge, the adoption of reconfigurable intelligent surfaces (RISs) to extend the 3D coverage of cellular networks for serving aerial users has not yet been fully explored.
In the recent literature, the works in [3, 4, 5, 6] focus on how the RIS can be applied to enhance UAV-enabled wireless networks. However, their purpose is also to serve ground users. To be specific, a terrestrial BS is able to communicate with distant ground users in the absence of line-of-sight (LoS) links by mounting an RIS on a UAV, which enables intelligent reflection from the sky [3]. In [4, 5, 6], UAVs act as aerial BSs with flexible deployment. The authors in [7] dedicated a whole section to reviewing UAVs and RISs, and listed relevant publications that explored UAV-mounted RISs.
In this paper, we leverage the reconfigurability capabilities of RISs to enhance the existing cellular networks by beamforming toward the sky to provide 3D connectivity to UAVs. This offers an "easy to implement" upgrade to the existing 5G cellular infrastructure, rendering it a beyond-5G (B5G) network, and enables an improved cellular architecture that incorporates large RISs in key locations at altitudes slightly lower than those of the serving BS antennas of currently deployed cellular networks. In this sense, unlike the existing works that consider RIS-mounted UAVs as part of the network either as relays or aerial BSs, our study considers UAVs that are in fact cellular user equipment (UE) served by the network. Moreover, we take into account the LoS probability and study the average achievable rate under different propagation environments. Simulation results illustrate how RIS-assisted B5G networks outperform their RIS-free counterparts in terms of average achievable rate and sky coverage.
_Notations_: A bold lowercase letter \(\mathbf{a}\) denotes a vector, and a bold capital letter \(\mathbf{A}\) denotes a matrix. \((\cdot)^{\mathsf{T}}\) and \((\cdot)^{\mathsf{H}}\) denote the matrix or vector transpose and Hermitian transpose, respectively. \(\mathrm{Tr}(\cdot)\) denotes the trace operator, \(\mathrm{diag}(\mathbf{a})\) and \(\mathrm{det}(\cdot)\) denote a diagonal matrix with the entries of \(\mathbf{a}\) on its diagonal and the determinant of a matrix, \(\angle a\) returns the phase of the complex scalar \(a\), \(\mathbf{a}\otimes\mathbf{b}\) denotes the Kronecker product of \(\mathbf{a}\) and \(\mathbf{b}\), \(\mathbf{I}_{M}\) denotes the \(M\times M\) identity matrix, \(\mathbf{0}\) is an all-zero matrix, \(j=\sqrt{-1}\), and \(\|\cdot\|_{2}\) denotes the Euclidean norm of a vector. \([\mathbf{a}]_{i}\) and \([\mathbf{A}]_{ij}\) denote the \(i\)th element of \(\mathbf{a}\) and the \((i,j)\)th element of \(\mathbf{A}\), respectively. Finally, \(|\cdot|\) returns the absolute value of a complex number.
## II System Model
We focus on the 3D connectivity for UAV communications with the aid of a static RIS along with a legacy 5G BS, depicted in Fig. 1. The BS is equipped with a uniform linear array (ULA) structure, consisting of \(N_{\text{B}}\) vertical antenna elements, down-tilted by a clockwise rotation of \(\beta\) radians. To offer 3D connectivity for the flying UAV without additional deployment of costly next-generation BSs, we rely on the cost-efficient deployment of an RIS, which is capable of performing 3D beamforming thanks to its massive number of sub-wavelength meta-atoms. The RIS can generate a group of candidate beams to cover the whole sky, where the UAV is supposed to be located. Specifically, one RIS composed of \(N_{\text{R}}=N_{\text{R},x}N_{\text{R},y}\) meta-atoms is deployed in the close proximity of the BS with LoS availability [8], owning a uniform planar array (UPA) structure, which is parallel to the \(x\)-\(y\) plane; \(N_{\text{R},x}\) and \(N_{\text{R},y}\) denote the number of RIS meta-atoms across the \(x\) and \(y\) axes, respectively. The flying UAV is also supposed to employ a UPA with \(N_{\text{U}}=N_{\text{U},x}N_{\text{U},y}\) antennas, where \(N_{\text{U},x}\) and \(N_{\text{U},y}\) are the number of antennas across the \(x\) and \(y\) axes, respectively. We assume that the antenna planes of the RIS and the UAV are parallel to each other. The coordinates of the BS, the RIS, and the UAV are \(\mathbf{p_{\text{B}}}=(x_{\text{B}},y_{\text{B}},z_{\text{B}})^{\mathsf{T}}\in\mathbb{R}^{3}\), \(\mathbf{p_{\text{R}}}=(x_{\text{R}},y_{\text{R}},z_{\text{R}})^{\mathsf{T}}\in\mathbb{R}^{3}\), and \(\mathbf{p_{\text{U}}}=(x_{\text{U}},y_{\text{U}},z_{\text{U}})^{\mathsf{T}}\in\mathbb{R}^{3}\), respectively.
### _Channel Model_
We consider the far-field propagation for all the channels, i.e., the BS-RIS, RIS-UAV, and BS-UAV channels, which are modeled by taking into consideration the channel parameters calculated from the geometrical relationship between any pair of network nodes. For the purpose of simplicity, we only consider the LoS path component for each channel, which is well justified for UAV communications [9]. Even though multi-path components may exist, their sum power is not comparable to the power in the LoS path, especially in millimeter wave (mmWave) frequency bands [10]. Characterized by the extended Saleh-Valenzuela channel model [11], the BS-RIS channel \(\mathbf{H_{\text{B,R}}}\in\mathbb{C}^{N_{\text{R}}\times N_{\text{B}}}\) is modeled as
\[\mathbf{H_{\text{B,R}}}\mathbf{=}\sqrt{\frac{N_{\text{B}}N_{\text{R}}}{\rho_{ \text{B,R}}}}\exp(-j2\pi\tau_{\text{B,R}})\boldsymbol{\alpha_{\text{R}}}( \theta_{\text{B,R}}^{r},\phi_{\text{B,R}}^{r})\boldsymbol{\alpha_{\text{B}}} ^{\text{H}}(\phi_{\text{B,R}}^{t}-\beta), \tag{1}\]
where \(\rho_{\text{B,R}}\in\mathbb{R}\) is the path loss, dependent on the BS-RIS distance, carrier frequency, and shadowing effect, \(\tau_{\text{B,R}}\) is the propagation delay, \(\theta_{\text{B,R}}^{r}\in\mathbb{R}\), \(\phi_{\text{B,R}}^{r}\in\mathbb{R}\), and \(\phi_{\text{B,R}}^{t}\in\mathbb{R}\) are the azimuth, the elevation angle of arrival (AoA), and the elevation angle of departure (AoD), respectively. The channel parameters are calculated based on the centroids of the BS antenna array and the RIS plane, as [12]
\[x_{\text{R}} =x_{\text{B}}+d_{\text{B,R}}\cos(\theta_{\text{B,R}}^{r})\cos( \phi_{\text{B,R}}^{r}), \tag{2}\] \[y_{\text{R}} =y_{\text{B}}+d_{\text{B,R}}\sin(\theta_{\text{B,R}}^{r})\cos( \phi_{\text{B,R}}^{r}),\] (3) \[z_{\text{R}} =z_{\text{B}}+d_{\text{B,R}}\sin(\phi_{\text{B,R}}^{r}),\] (4) \[\phi_{\text{B,R}}^{t} =\phi_{\text{B,R}}^{r}, \tag{5}\]
where \(d_{\text{B,R}}=\|\mathbf{p_{\text{B}}}-\mathbf{p_{\text{R}}}\|_{2}\in\mathbb{R}\) is the distance between the BS and the RIS. The normalized array response vectors \(\boldsymbol{\alpha_{\text{R}}}(\theta_{\text{B,R}}^{r},\phi_{\text{B,R}}^{r})\in\mathbb{C}^{N_{\text{R}}}\), at the RIS, and \(\boldsymbol{\alpha_{\text{B}}}(\phi_{\text{B,R}}^{t}-\beta)\in\mathbb{C}^{N_{\text{B}}}\), at the BS, i.e., \(\|\boldsymbol{\alpha_{\text{R}}}(\theta_{\text{B,R}}^{r},\phi_{\text{B,R}}^{r})\|_{2}=\|\boldsymbol{\alpha_{\text{B}}}(\phi_{\text{B,R}}^{t}-\beta)\|_{2}=1\), are
\[\boldsymbol{\alpha_{\text{R}}}(\theta_{\text{B,R}}^{r},\phi_{ \text{B,R}}^{r})\triangleq\frac{1}{\sqrt{N_{\text{R}}}}\Big{[}1,\exp\big{(}j \frac{2\pi d_{x}}{\lambda}\cos(\theta_{\text{B,R}}^{r})\sin(\phi_{\text{B,R}}^ {r})\big{)},\] \[\quad\cdots,\exp\big{(}j\frac{2\pi d_{x}}{\lambda}(N_{\text{R},x} -1)\cos(\theta_{\text{B,R}}^{r})\sin(\phi_{\text{B,R}}^{r})\big{)}\Big{]}^{ \mathsf{T}}\] \[\otimes\Big{[}1,\exp\big{(}j\frac{2\pi d_{y}}{\lambda}\sin(\theta_ {\text{B,R}}^{r})\sin(\phi_{\text{B,R}}^{r})\big{)},\] \[\quad\cdots,\exp\big{(}j\frac{2\pi d_{y}}{\lambda}(N_{\text{R},y} -1)\sin(\theta_{\text{B,R}}^{r})\sin(\phi_{\text{B,R}}^{r})\big{)}\Big{]}^{ \mathsf{T}}, \tag{6}\] \[\boldsymbol{\alpha_{\text{B}}}(\phi_{\text{B,R}}^{t}-\beta) \triangleq\frac{1}{\sqrt{N_{\text{B}}}}\Big{[}1,\exp\big{(}j\frac{2\pi d_{z} }{\lambda}\cos(\phi_{\text{B,R}}^{t}-\beta)\big{)},\] \[\quad\cdots,\exp\big{(}j\frac{2\pi d_{z}}{\lambda}(N_{\text{B}} -1)\cos(\phi_{\text{B,R}}^{t}-\beta)\big{)}\Big{]}^{\mathsf{T}}, \tag{7}\]
where \(d_{x}\in\mathbb{R}\), \(d_{y}\in\mathbb{R}\), and \(d_{z}\in\mathbb{R}\) denote the inter-element spacing across the \(x\), \(y\), and \(z\) axes, respectively, and \(\lambda\) is the wavelength. The effect of the orientation angle \(\beta\in\mathbb{R}\) is integrated into the array response vector \(\boldsymbol{\alpha_{\text{B}}}(\cdot)\). The RIS-UAV channel \(\mathbf{H_{\text{R,U}}}\in\mathbb{C}^{N_{\text{U}}\times N_{\text{R}}}\) and the BS-UAV channel \(\mathbf{H_{\text{B,U}}}\in\mathbb{C}^{N_{\text{U}}\times N_{\text{B}}}\) can be modeled in the same manner, as
\[\mathbf{H_{\text{B,U}}} =\sqrt{\frac{N_{\text{B}}N_{\text{U}}}{\rho_{\text{B,U}}}}\exp(-j2 \pi\tau_{\text{B,U}})\boldsymbol{\alpha_{\text{U}}}(\theta_{\text{B,U}}^{r}, \phi_{\text{B,U}}^{r})\] \[\times\boldsymbol{\alpha_{\text{B}}}^{\text{H}}(\phi_{\text{B,U}}^ {t}-\beta), \tag{8}\] \[\mathbf{H_{\text{R,U}}} =\sqrt{\frac{N_{\text{R}}N_{\text{U}}}{\rho_{\text{R,U}}}}\exp(-j2 \pi\tau_{\text{R,U}})\boldsymbol{\alpha_{\text{U}}}(\theta_{\text{R,U}}^{r}, \phi_{\text{R,U}}^{r})\] \[\times\boldsymbol{\alpha_{\text{R}}}^{\text{H}}(\theta_{\text{R,U}}^ {t},\phi_{\text{R,U}}^{t}), \tag{9}\]
where \(\boldsymbol{\alpha_{\text{U}}}(\cdot)\in\mathbb{C}^{N_{\text{U}}}\) is the array response vector at the UAV, \(\rho_{\text{B,U}}\), \(\rho_{\text{R,U}}\), \(\tau_{\text{B,U}}\), \(\tau_{\text{R,U}}\), \(\theta_{\text{B,U}}^{r}\), \(\phi_{\text{B,U}}^{r}\), \(\phi_{\text{B,U}}^{t}\), \(\theta_{\text{R,U}}^{r}\), \(\phi_{\text{R,U}}^{r}\), \(\theta_{\text{R,U}}^{t}\), and \(\phi_{\text{R,U}}^{t}\) are defined in the same way as those in (1). Since we assume that the antenna plane of the UAV is parallel to that of the RIS, we have \(\theta_{\text{R,U}}^{r}=\theta_{\text{R,U}}^{t}\) and \(\phi_{\text{R,U}}^{r}=\phi_{\text{R,U}}^{t}\).
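As a sanity check, the array responses in (6) and (7) can be evaluated numerically as follows (a sketch; angles in radians, element spacings given in wavelengths, i.e., already normalized by \(\lambda\)):

```
import numpy as np

def upa_response(n_x, n_y, theta, phi, dx=0.5, dy=0.5):
    """Normalized UPA response of (6) via the Kronecker product."""
    ax = np.exp(1j * 2 * np.pi * dx * np.arange(n_x) * np.cos(theta) * np.sin(phi))
    ay = np.exp(1j * 2 * np.pi * dy * np.arange(n_y) * np.sin(theta) * np.sin(phi))
    return np.kron(ax, ay) / np.sqrt(n_x * n_y)

def ula_response(n_b, phi_minus_beta, dz=0.5):
    """Normalized ULA response of (7) for the down-tilted BS array."""
    return np.exp(1j * 2 * np.pi * dz * np.arange(n_b) * np.cos(phi_minus_beta)) / np.sqrt(n_b)
```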
### _LoS Probability_
The LoS probability of the channels relies on the complex environmental propagation conditions, e.g., the height of the buildings, density of the buildings, and spatial distribution of the scatterers. Therefore, different LoS probabilities should be applied for different scenarios, such as urban, suburban, and rural. We, in this work, resort to the International Telecommunication Union Radiocommunication (ITU-R) for the modeling of LoS availability, introduced in [13, 14] as
\[\mathsf{p}_{\text{LoS}}^{\text{ITU}}(h_{\text{T}},h_{\text{R}},\gamma)=\prod_{m=0}^{M}\left(1-\exp\left(-\frac{\left(h_{\text{T}}-\frac{(m+0.5)(h_{\text{T}}-h_{\text{R}})}{M+1}\right)^{2}}{2\gamma^{2}}\right)\right), \tag{10}\]
where \(M=\lfloor r\sqrt{\alpha\kappa}\rfloor-1\) denotes the number of buildings between the pair of network nodes. The variables \(r\), \(\alpha\), and \(\kappa\) denote the ground distance in kilometers between the pair of nodes, the fraction of the area covered by buildings to the total area, and the average number of buildings per unit area, respectively, \(\gamma\) is a height distribution parameter, and \(h_{\text{T}}\) and \(h_{\text{R}}\) are the heights for the transmitter and receiver, respectively. By tuning the parameters \(\alpha\), \(\kappa\), and \(\gamma\), we can model the LoS probabilities for all the aforementioned scenarios. Thus, we apply the expression (10) to both of the channels \(\mathbf{H}_{\text{B},\text{U}}\) and \(\mathbf{H}_{\text{R},\text{U}}\). In addition, we assume that the RIS is placed in the proximity of the BS, so the LoS availability is always guaranteed. Note that for characterizing \(\mathbf{H}_{\text{R},\text{U}}\), the RIS is deemed as a virtual transmitter.
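A direct implementation of (10) (our own sketch; heights in meters and ground distance in kilometers, matching the units stated above):

```
import numpy as np

def p_los_itu(h_t, h_r, r_km, alpha, kappa, gamma):
    """ITU-R LoS probability of (10)."""
    m_max = int(np.floor(r_km * np.sqrt(alpha * kappa))) - 1  # M
    if m_max < 0:
        return 1.0  # no buildings in between
    m = np.arange(m_max + 1)
    h = h_t - (m + 0.5) * (h_t - h_r) / (m_max + 1)
    return float(np.prod(1.0 - np.exp(-h**2 / (2.0 * gamma**2))))

# e.g., urban setting (alpha, kappa, gamma) = (0.3, 500, 15) from Sec. IV
p = p_los_itu(h_t=10, h_r=100, r_km=0.5, alpha=0.3, kappa=500, gamma=15)
```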
## III Achievable Rate and Its Average
### _Achievable Rate_
The benefits of introducing RIS for 3D connectivity are multi-fold: i) When the direct BS-UAV channel exists, tremendous multiplexing gains can be obtained aiming for extremely high data-rate transmission. ii) When the BS-UAV channel is temporally unavailable, the BS can maintain the connectivity via the RIS. We treat the two cases separately in this subsection. The entire end-to-end channel between the BS and the UAV is summarized as
\[\mathbf{H}=\mathbb{I}(\mathbf{H}_{\text{B},\text{U}})\mathbf{H}_{\text{B}, \text{U}}+\mathbb{I}(\mathbf{H}_{\text{R},\text{U}})\mathbf{H}_{\text{R}, \text{U}}\mathbf{\Omega}\mathbf{H}_{\text{B},\text{R}}, \tag{11}\]
where \(\mathbb{I}(\cdot)\in\{0,1\}\) is the indicator function; if \(\mathbf{H}_{\text{B},\text{U}}\neq\mathbf{0}\), \(\mathbb{I}(\mathbf{H}_{\text{B},\text{U}})=1\); otherwise, \(\mathbb{I}(\mathbf{H}_{\text{B},\text{U}})=0\). The same principle is applied to \(\mathbb{I}(\mathbf{H}_{\text{R},\text{U}})\). The diagonal matrix \(\mathbf{\Omega}\in\mathbb{C}^{N_{\text{R}}\times N_{\text{R}}}\) is the RIS phase control matrix. Unlike the conventional amplify-and-forward (AF) relays, the RIS is supposed to change only the phase shifts of the impinging signals, realizing analog/passive beamforming [15]. Namely, strict constraints are imposed for the diagonal entries of \(\mathbf{\Omega}\), i.e., \(|[\mathbf{\Omega}]_{kk}|=1\), \(\forall k\in\{1,2,\cdots,N_{\text{R}}\}\). By referring to (10), the LoS probabilities of \(\mathbf{H}_{\text{B},\text{U}}\) and \(\mathbf{H}_{\text{R},\text{U}}\) are
\[\text{Pr}(\mathbb{I}(\mathbf{H}_{\text{B,U}})=1)=\prod_{m=0}^{M}\left(1-\exp\left(-\frac{\left(z_{\text{B}}-\frac{(m+0.5)(z_{\text{B}}-z_{\text{U}})}{M+1}\right)^{2}}{2\gamma^{2}}\right)\right), \tag{12}\]
\[\text{Pr}(\mathbb{I}(\mathbf{H}_{\text{R,U}})=1)=\prod_{m=0}^{M}\left(1-\exp\left(-\frac{\left(z_{\text{R}}-\frac{(m+0.5)(z_{\text{R}}-z_{\text{U}})}{M+1}\right)^{2}}{2\gamma^{2}}\right)\right). \tag{13}\]
The achievable rate between the BS and the UAV depends on the LoS availability of \(\mathbf{H}_{\text{B,U}}\) and \(\mathbf{H}_{\text{R,U}}\). Thus, a segmented function is considered for the achievable rate in bps/Hz, as shown in (14) below. For case i) \(\mathbb{I}(\mathbf{H}_{\text{B,U}})=\mathbb{I}(\mathbf{H}_{\text{R,U}})=1\), it also depends on the design of the combining matrix \(\mathbf{W}\in\mathbb{C}^{N_{\text{U}}\times 2}\), the precoder \(\mathbf{F}\in\mathbb{C}^{N_{\text{B}}\times 2}\), and \(\mathbf{\Omega}\). For case ii) \(\mathbb{I}(\mathbf{H}_{\text{B,U}})=1,\mathbb{I}(\mathbf{H}_{\text{R,U}})=0\), it also depends on the design of the combining vector \(\mathbf{w}\in\mathbb{C}^{N_{\text{U}}}\) and the beamforming vector \(\mathbf{f}\in\mathbb{C}^{N_{\text{B}}}\). For case iii) \(\mathbb{I}(\mathbf{H}_{\text{B,U}})=0,\mathbb{I}(\mathbf{H}_{\text{R,U}})=1\), it also depends on the design of \(\mathbf{w}\), \(\mathbf{f}\), and \(\mathbf{\Omega}\). The notations \(P\) and \(\sigma^{2}\) denote the transmit power at the BS and the noise variance at the UAV.
III-A1 Case i) \(\mathbb{I}(\mathbf{H}_{\text{B,U}})=\mathbb{I}(\mathbf{H}_{\text{R,U}})=1\)
In this case, spatial multiplexing gains can be achieved, since two different routes from the BS to the UAV can be concurrently available with the end-to-end channel \(\mathbf{H}=\mathbf{H}_{\text{B},\text{U}}+\mathbf{H}_{\text{R},\text{U}} \mathbf{\Omega}\mathbf{H}_{\text{B},\text{R}}\), where \(\mathbf{H}_{\text{B},\text{U}}\neq\mathbf{0}\) and \(\mathbf{H}_{\text{R},\text{U}}\neq\mathbf{0}\). We focus on the design of \(\mathbf{W}\), \(\mathbf{F}\), and \(\mathbf{\Omega}\) as to maximize the achievable rate. The optimization problem is formulated as [16]
\[\small\begin{split}\mathcal{P}_{1}:\max_{\mathbf{W},\mathbf{F}, \mathbf{\Omega}}&\log_{2}\Big{(}\det\left(\mathbf{I}_{2}+\frac{P}{ \sigma^{2}}\mathbf{W}^{\text{H}}\mathbf{H}\mathbf{F}\mathbf{F}^{\text{H}} \mathbf{H}^{\text{H}}\mathbf{W}\right)\!\Big{)},\end{split} \tag{15a}\] \[\small\begin{split}\text{s.t.}&\operatorname{Tr}( \mathbf{W}^{\text{H}}\mathbf{W})=1,\end{split}\] (15b) \[\small\begin{split}\text{Tr}(\mathbf{F}^{\text{H}} \mathbf{F})=1,\end{split}\] (15c) \[\small\begin{split}|[\mathbf{\Omega}]_{kk}|=1,\forall k, \end{split} \tag{15d}\]
which is non-convex due to the non-convex constraint in (15d). Therefore, it is challenging to find the optimal solution. In order to ease the analysis that follows, we use a simplified yet intuitive two-stage approach to sequentially optimize \(\mathbf{\Omega}\) and \(\{\mathbf{W},\mathbf{F}\}\). We first focus on the term \(\mathbf{H}_{\text{R,U}}\mathbf{\Omega}\mathbf{H}_{\text{B,R}}\), and define \(\eta=\sqrt{\frac{N_{\text{R}}N_{\text{U}}}{\rho_{\text{R,U}}}}\sqrt{\frac{N_{\text{B}}N_{\text{R}}}{\rho_{\text{B,R}}}}\exp(-j2\pi\tau_{\text{R,U}})\exp(-j2\pi\tau_{\text{B,R}})\boldsymbol{\alpha}_{\text{R}}^{\mathsf{H}}(\theta_{\text{R,U}}^{t},\phi_{\text{R,U}}^{t})\mathbf{\Omega}\boldsymbol{\alpha}_{\text{R}}(\theta_{\text{B,R}}^{r},\phi_{\text{B,R}}^{r})\). In this regard, \(\mathbf{H}\) in (11) can be reformulated as
\[\mathbf{H}(\eta)=\underbrace{\mathbf{H}_{\text{B,U}}}_{\text{Rank One}}+\underbrace{\eta\boldsymbol{\alpha}_{\text{U}}(\theta_{\text{R,U}}^{r},\phi_{\text{R,U}}^{r})\boldsymbol{\alpha}_{\text{B}}^{\mathsf{H}}(\phi_{\text{B,R}}^{t}-\beta)}_{\text{Rank One}}, \tag{16}\]
which is a summation of two rank-one matrices. Here, we replace \(\mathbf{H}\) as \(\mathbf{H}(\eta)\) to show its dependence on \(\eta\). Based on the following facts:
\[\lim_{N_{\text{B}}\to\infty}\boldsymbol{\alpha}_{\text{B}}^{\mathsf{H}}(\phi_{\text{B,U}}^{t}-\beta)\boldsymbol{\alpha}_{\text{B}}(\phi_{\text{B,R}}^{t}-\beta)=0, \tag{17}\]
\[\lim_{N_{\text{U}}\to\infty}\boldsymbol{\alpha}_{\text{U}}^{\mathsf{H}}(\theta_{\text{B,U}}^{r},\phi_{\text{B,U}}^{r})\boldsymbol{\alpha}_{\text{U}}(\theta_{\text{R,U}}^{r},\phi_{\text{R,U}}^{r})=0, \tag{18}\]
\[R=\begin{cases}\log_{2}\Big{(}\det\big{(}\mathbf{I}_{2}+\frac{P}{\sigma^{2}}\mathbf{W}^{\mathsf{H}}\mathbf{H}\mathbf{F}\mathbf{F}^{\mathsf{H}}\mathbf{H}^{\mathsf{H}}\mathbf{W}\big{)}\Big{)},&\text{if }\mathbb{I}(\mathbf{H}_{\text{B,U}})=1,\mathbb{I}(\mathbf{H}_{\text{R,U}})=1,\\ \log_{2}\big{(}1+\frac{P}{\sigma^{2}}\mathbf{w}^{\mathsf{H}}\mathbf{H}_{\text{B,U}}\mathbf{f}\mathbf{f}^{\mathsf{H}}\mathbf{H}_{\text{B,U}}^{\mathsf{H}}\mathbf{w}\big{)},&\text{if }\mathbb{I}(\mathbf{H}_{\text{B,U}})=1,\mathbb{I}(\mathbf{H}_{\text{R,U}})=0,\\ \log_{2}\big{(}1+\frac{P}{\sigma^{2}}\mathbf{w}^{\mathsf{H}}\mathbf{H}_{\text{R,U}}\mathbf{\Omega}\mathbf{H}_{\text{B,R}}\mathbf{f}\mathbf{f}^{\mathsf{H}}\mathbf{H}_{\text{B,R}}^{\mathsf{H}}\mathbf{\Omega}^{\mathsf{H}}\mathbf{H}_{\text{R,U}}^{\mathsf{H}}\mathbf{w}\big{)},&\text{if }\mathbb{I}(\mathbf{H}_{\text{B,U}})=0,\mathbb{I}(\mathbf{H}_{\text{R,U}})=1,\\ 0,&\text{if }\mathbb{I}(\mathbf{H}_{\text{B,U}})=0,\mathbb{I}(\mathbf{H}_{\text{R,U}})=0.\end{cases} \tag{14}\]
which can be readily proven. Thus, the singular value decomposition (SVD) of \(\mathbf{H}(\eta)\) is approximated by
\[\mathbf{H}(\eta)\approx\mathbf{U}\mathrm{diag}([\sqrt{N_{\mathsf{B}}N_{ \mathsf{U}}/\rho_{\mathsf{B},\mathsf{U}}}\ |\ \eta|])\mathbf{V}^{\mathsf{H}}, \tag{19}\]
where \(\mathbf{U}=[\exp(-j2\pi\tau_{\text{B,U}})\boldsymbol{\alpha}_{\text{U}}(\theta_{\text{B,U}}^{r},\phi_{\text{B,U}}^{r});\boldsymbol{\alpha}_{\text{U}}(\theta_{\text{R,U}}^{r},\phi_{\text{R,U}}^{r})]\) and \(\mathbf{V}=[\boldsymbol{\alpha}_{\text{B}}(\phi_{\text{B,U}}^{t}-\beta);\boldsymbol{\alpha}_{\text{B}}(\phi_{\text{B,R}}^{t}-\beta)]\). According to (17) and (18), \(\mathbf{U}\) and \(\mathbf{V}\) are nearly semi-unitary matrices, i.e., \(\mathbf{U}^{\mathsf{H}}\mathbf{U}\approx\mathbf{I}_{2}\) and \(\mathbf{V}^{\mathsf{H}}\mathbf{V}\approx\mathbf{I}_{2}\).
To this end, we first maximize \(|\eta|\) in the first stage, whose maximum equals \(\sqrt{\frac{N_{\text{R}}N_{\text{U}}}{\rho_{\text{R,U}}}}\sqrt{\frac{N_{\text{B}}N_{\text{R}}}{\rho_{\text{B,R}}}}\). Afterwards, we perform the SVD of \(\mathbf{H}\big{(}\eta=\sqrt{\frac{N_{\text{R}}N_{\text{U}}}{\rho_{\text{R,U}}}}\sqrt{\frac{N_{\text{B}}N_{\text{R}}}{\rho_{\text{B,R}}}}\big{)}\) to design the optimal \(\mathbf{W}\) and \(\mathbf{F}\) in the second stage. The optimal closed-form solutions for \(\mathbf{\Omega}\), \(\mathbf{W}\), and \(\mathbf{F}\) are summarized as
\[[\mathbf{\Omega}]_{kk} =\exp\Big{\{}j\big{(}\angle[\boldsymbol{\alpha}_{\text{R}}(\theta_{\text{R,U}}^{t},\phi_{\text{R,U}}^{t})]_{k}-\angle[\boldsymbol{\alpha}_{\text{R}}(\theta_{\text{B,R}}^{r},\phi_{\text{B,R}}^{r})]_{k}\big{)}\Big{\}}\] \[\times\exp\big{(}j2\pi(\tau_{\text{R,U}}+\tau_{\text{B,R}})\big{)}, \tag{20}\] \[\mathbf{W} =\frac{\sqrt{2}}{2}\mathbf{U},\quad\mathbf{F}=\frac{\sqrt{2}}{2}\mathbf{V}, \tag{21}\]
where \(\mathbf{W}\) and \(\mathbf{F}\) meet the constraints specified in (15b) and (15c).
The achievable rate, rewritten as \(R_{1}\), for this case is
\[R_{1}\approx\log_{2}\Big{(}1+\frac{P}{4\sigma^{2}}\frac{N_{\mathsf{B}}N_{ \mathsf{U}}}{\rho_{\mathsf{B},\mathsf{U}}}\Big{)}+\log_{2}\Big{(}1+\frac{P}{4 \sigma^{2}}\frac{N_{\mathsf{R}}N_{\mathsf{U}}}{\rho_{\mathsf{R},\mathsf{U}}} \frac{N_{\mathsf{B}}N_{\mathsf{R}}}{\rho_{\mathsf{B},\mathsf{R}}}\Big{)}. \tag{22}\]
III-A2 Case ii) \(\mathbb{I}(\mathbf{H}_{\text{B,U}})=1,\mathbb{I}(\mathbf{H}_{\text{R,U}})=0\)
For this case, we only need to perform SVD of \(\mathbf{H}_{\mathsf{B},\mathsf{U}}\) to design the optimal beamforming vectors \(\mathbf{w}\) and \(\mathbf{f}\). The calculation of the achievable rate is straightforward. The achievable rate, redefined as \(R_{2}\), for this case is \(R_{2}=\log_{2}\Big{(}1+\frac{P}{\sigma^{2}}\frac{N_{\mathsf{B}}N_{\mathsf{U}}}{ \rho_{\mathsf{B},\mathsf{U}}}\Big{)}\).
III-A3 Case iii) \(\mathbb{I}(\mathbf{H}_{\text{B,U}})=0,\mathbb{I}(\mathbf{H}_{\text{R,U}})=1\)
For this case, we can follow the two-stage approach already applied for case i). The achievable rate, defined as \(R_{3}\), for this case is \(R_{3}=\log_{2}\Big{(}1+\frac{P}{\sigma^{2}}\frac{N_{\text{R}}N_{\text{U}}}{\rho_{\text{R,U}}}\frac{N_{\text{B}}N_{\text{R}}}{\rho_{\text{B,R}}}\Big{)}\).
### _Average Achievable Rate_
We take the LoS probabilities and their associated achievable rates into consideration and calculate the average achievable rate as
\[\bar{R} =R_{1}\text{Pr}(\mathbb{I}(\mathbf{H}_{\mathsf{B},\mathsf{U}})=1) \text{Pr}(\mathbb{I}(\mathbf{H}_{\mathsf{R},\mathsf{U}})=1)\] \[+R_{2}\text{Pr}(\mathbb{I}(\mathbf{H}_{\mathsf{B},\mathsf{U}})=1) \text{Pr}(\mathbb{I}(\mathbf{H}_{\mathsf{R},\mathsf{U}})=0)\] \[+R_{3}\text{Pr}(\mathbb{I}(\mathbf{H}_{\mathsf{B},\mathsf{U}})=0) \text{Pr}(\mathbb{I}(\mathbf{H}_{\mathsf{R},\mathsf{U}})=1). \tag{23}\]
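Combining the closed-form rates with the LoS probabilities, the average rate of (23) can be evaluated numerically as follows (a sketch under the same assumptions as above; p_bu and p_ru denote \(\text{Pr}(\mathbb{I}(\mathbf{H}_{\text{B,U}})=1)\) and \(\text{Pr}(\mathbb{I}(\mathbf{H}_{\text{R,U}})=1)\)):

```
import numpy as np

def avg_rate(P, sigma2, Nb, Nr, Nu, rho_bu, rho_ru, rho_br, p_bu, p_ru):
    """Average achievable rate (23) from R1 in (22), R2, and R3."""
    g_direct = Nb * Nu / rho_bu                       # direct-link gain
    g_ris = (Nr * Nu / rho_ru) * (Nb * Nr / rho_br)   # cascaded RIS-link gain
    r1 = (np.log2(1 + P / (4 * sigma2) * g_direct)
          + np.log2(1 + P / (4 * sigma2) * g_ris))
    r2 = np.log2(1 + P / sigma2 * g_direct)
    r3 = np.log2(1 + P / sigma2 * g_ris)
    return r1 * p_bu * p_ru + r2 * p_bu * (1 - p_ru) + r3 * (1 - p_bu) * p_ru
```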
## IV Simulation Results
In this section, we study the average achievable rate for an RIS-assisted B5G network and compare it with its RIS-free counterpart. For simplicity, we only consider the free-space path loss, which is modeled as \(\rho=d^{2}f_{c}^{2}/10^{8.755}\), where \(f_{c}\) (in \(\mathrm{kHz}\)) is the carrier frequency, defined as \(f_{c}=\frac{c}{\lambda}\) with \(c\) being the speed of light (\(3\times 10^{8}\) m/s). We omit the subscripts of \(\rho\) and \(d\) to make them general and applicable to all the path losses and distances. The system parameters are set as: \(f_{c}\in\{28,3.5\}\) GHz, \(\beta=\pi/3\), \(N_{\text{B}}=8\times 8\), \(N_{\text{R}}=20\times 20\), \(N_{\text{U}}=8\times 8\), \(\mathbf{p}_{\text{B}}=(0,0,10)\), \(\mathbf{p}_{\text{R}}=(0.5,0.5,9.5)\), and bandwidth \(B=20\) MHz. The UAV is supposed to be located at any possible position with \(x_{\text{U}}>0,y_{\text{U}}>0,z_{\text{U}}>0\). To model the LoS probability, we follow [13] for the setup of \((\alpha,\kappa,\gamma)\): suburban (0.1, 750, 8), urban (0.3, 500, 15), dense urban (0.5, 300, 20), and highrise urban (0.5, 300, 50). We fix the height of the UAV as \(100\) meters, i.e., \(z_{\text{U}}=100\), while its \(x\) and \(y\) coordinates are uniformly distributed within \([100,1000]\) meters when \(f_{c}=28\) GHz and \([100,2000]\) meters when \(f_{c}=3.5\) GHz.
Comparisons of the average achievable rates of the RIS-assisted and RIS-free UAV communication networks are shown in Figs. 3 and 4 for both carrier frequencies, i.e., the mmWave and C bands. As we can see, significant rate gains can be obtained by introducing the RIS to the existing 5G cellular networks, confirming the 3D connectivity enhancements that RISs can unleash for UAVs. It also becomes clear that higher frequencies offer a shorter communication range due to more severe path loss.
## V Conclusions
In this paper, we have studied the 3D connectivity enabled by exploiting an existing 5G BS together with an RIS. We have investigated the average achievable rate while taking into consideration the LoS probabilities under different propagation environments. Simulation results have shown that with the introduction of the RIS, substantially higher average achievable rates can be obtained compared to the RIS-free counterpart.
## Acknowledgement
The contribution of Aymen Fakhreddine has been partly funded by FWF - Der Wissenschaftsfonds (Austrian Science Fund) ESPRIT program under grant number ESP-54.
|
2307.04018 | Revisiting Cross-Lingual Summarization: A Corpus-based Study and A New
Benchmark with Improved Annotation | Most existing cross-lingual summarization (CLS) work constructs CLS corpora
by simply and directly translating pre-annotated summaries from one language to
another, which can contain errors from both summarization and translation
processes. To address this issue, we propose ConvSumX, a cross-lingual
conversation summarization benchmark, through a new annotation schema that
explicitly considers source input context. ConvSumX consists of 2 sub-tasks
under different real-world scenarios, with each covering 3 language directions.
We conduct thorough analysis on ConvSumX and 3 widely-used manually annotated
CLS corpora and empirically find that ConvSumX is more faithful towards input
text. Additionally, based on the same intuition, we propose a 2-Step method,
which takes both conversation and summary as input to simulate human annotation
process. Experimental results show that 2-Step method surpasses strong
baselines on ConvSumX under both automatic and human evaluation. Analysis shows
that both source input text and summary are crucial for modeling cross-lingual
summaries. | Yulong Chen, Huajian Zhang, Yijie Zhou, Xuefeng Bai, Yueguan Wang, Ming Zhong, Jianhao Yan, Yafu Li, Judy Li, Michael Zhu, Yue Zhang | 2023-07-08T17:20:56Z | http://arxiv.org/abs/2307.04018v1 | Revisiting Cross-Lingual Summarization: A Corpus-based Study and A New Benchmark with Improved Annotation
###### Abstract
Most existing cross-lingual summarization (CLS) work constructs CLS corpora by simply and directly translating pre-annotated summaries from one language to another, which can contain errors from both summarization and translation processes. To address this issue, we propose ConvSumX, a cross-lingual conversation summarization benchmark, through a new annotation schema that explicitly considers source input context. ConvSumX consists of 2 sub-tasks under different real-world scenarios, with each covering 3 language directions. We conduct thorough analysis on ConvSumX and 3 widely-used manually annotated CLS corpora and empirically find that ConvSumX is more faithful towards input text. Additionally, based on the same intuition, we propose a 2-Step method, which takes both conversation and summary as input to simulate human annotation process. Experimental results show that 2-Step method surpasses strong baselines on ConvSumX under both automatic and human evaluation. Analysis shows that both source input text and summary are crucial for modeling cross-lingual summaries.
## 1 Introduction
With the advance in deep learning and pre-trained language models (PLMs) Devlin et al. (2019); Lewis et al. (2020); Raffel et al. (2020), much recent progress has been made in text summarization Liu and Lapata (2019); Zhong et al. (2022); Chen et al. (2022). However, most work focuses on English (En) data Zhong et al. (2021); Gliwa et al. (2019); Chen et al. (2021), which does not consider cross-lingual sources for summarization Wang et al. (2022). To address this limitation, cross-lingual summarization (CLS) aims to generate summaries in a target language given texts from a source language Zhu et al. (2019), which has shown values to both academic and industrial communities Bai et al. (2021); Perez-Beltrachini and Lapata (2021).
Most existing work Zhu et al. (2019); Bai et al. (2021); Feng et al. (2022) constructs CLS corpora by translating summaries from existing mono-lingual summarization datasets into other languages, which is de facto a "_pipeline_" annotation protocol (first _summarize_, then _translate_) as shown in Figure 1. However, such an annotation method can suffer from two major problems: First, summaries from mono-lingual summarization corpora (summarization process) can contain errors Liu et al. (2022), which are likely to be preserved in translated summaries. For example, the English summary in Figure 1-(a)
Figure 1: An En-Zh summary from Wang et al. (2022) (best viewed in color). We compare the "_pipeline_" annotation protocol ((a)\(\rightarrow\)(b)) and our annotation protocol (c). Pipeline annotation results in errors from both summarization (red: unmentioned content/hallucination) and translation (cyan: incorrect translation) processes. To address this issue, we explicitly annotate target-language summaries with faithfulness rectification (green) based on input context, with the guidance of mono-lingual summaries.
contains unmentioned content/hallucination (red text), which leads to the same discrepancy in the translated summary (Figure 1-(b), red text). Second, the translation process can further introduce errors, in particular for polysemous words. For example, in Figure 1-(b), the term "_Ex-Viking_" (which refers to previous members of the Minnesota Vikings team) is mistakenly translated into a Zh term that means "_ex-pirate/buccaneer_". Determining the proper translation requires information beyond the scope of short summaries.
To qualitatively understand the above problems, we conduct human evaluation and error analysis on existing popular CLS corpora. Empirical results show that existing corpora suffer from the two aforementioned problems, containing a significant number of hallucinations and factual errors.1 In particular, we find that overall \(20\sim 67\%\) of summaries in CLS datasets contain errors, where \(7\sim 46\%\) and \(13\sim 47\%\) of summaries suffer from the summarization and translation processes, respectively. This suggests that the pipeline protocol, which is widely used in CLS research, can result in low-quality data and negatively impact the validity of modeling research. In addition, fine-grained error analysis shows that \(55.6\sim 89.1\%\) of translation errors can be resolved with the help of input context.
Footnote 1: The term _error_ later in this paper refers to errors that are hallucinations or can cause factual misunderstandings, except when otherwise specified.
Motivated by the above findings and to address this issue, we propose the protocol that cross-lingual summaries should be sourced from the original input text, where mono-lingual summaries can serve as a quick review for salient information. With this concept, we annotate cross-lingual summaries (\(S^{tgt}\)) by relying on source text (\(D^{src}\)) and source-language summaries (\(S^{src}\)) as shown in Figure 1-(c). Such an annotation protocol brings three advantages: First, compared with translation only given \(S^{src}\), rich context information from \(D^{src}\) helps annotators to disambiguate word senses and comprehend \(S^{src}\) accurately, e.g., the correct rendering of "_Ex-Viking_" as a term meaning "_ex-Viking team player_" in Figure 1-(c); Second, \(D^{src}\) is more reliable and can provide ground-truth information to correct potential errors in \(S^{src}\), e.g., red text in Figure 1-(a); Third, compared with writing \(S^{tgt}\) only given \(D^{src}\), \(S^{src}\) can serve as supplementary guidance to help annotators be aware of what should be involved in the summaries, ensuring that salient information in \(S^{src}\) and \(S^{tgt}\) is aligned.
Using the CLS protocol, we build ConvSumX, a new benchmark to facilitate future CLS research. ConvSumX focuses on conversational text in a few-shot setting. Compared with monologue (e.g., news), conversational text is less explored yet is also practically useful in real-world scenarios (Chen et al., 2022). ConvSumX contains two sub-tasks, namely DialogSumX and QMSumX, based on two English conversation summarization datasets, DialogSum (Chen et al., 2021) and QMSum (Zhong et al., 2021), respectively. Each covers three language directions, taking En as the source, and Mandarin (Zh), French (Fr) and Ukrainian (Ukr) as target languages. We empirically compare different annotations using the pipeline protocol and our CLS protocol with human evaluation. Analysis shows that by considering input context, our protocol can significantly reduce annotation errors, suggesting ConvSumX is a high-quality benchmark in terms of cross-lingual faithfulness.
Based on the same intuition that \(D^{src}\) and \(S^{src}\) can serve as a critical complement to each other, we propose a 2-Step framework for CLS, which fine-tunes a multi-lingual PLM using concatenated \(S^{src}\) and \(D^{src}\) as input, and \(S^{tgt}\) as output. Experimental results show that our conceptual framework yields surprisingly better performance over strong baselines on ConvSumX. Analysis and human evaluation show that our method can effectively generate more faithful cross-lingual summaries in a low-resource setting, and verify that source input text and summaries are supplementary to each other in modeling cross-lingual summaries.
To summarize, our contributions are the following:
1. We systematically review the pipeline annotation protocol and show that such a protocol can result in low-quality data (§ 2);
2. We propose the concept that CLS should be sourced from both source input text and source-language summaries; under our protocol, we present the ConvSumX benchmark (§ 3), where QMSumX is the first query-focused CLS dataset.
3. Under the same concept, we propose a simple yet effective 2-Step framework for CLS (§ 4), which demonstrates the necessity of both source input text and mono-lingual summary for CLS modeling.
We release ConvSumX at [https://github.com/cylnlp/ConvSumX](https://github.com/cylnlp/ConvSumX).
## 2 Analyzing Existing CLS Corpora
We conduct a corpus-based study on existing popular human-annotated CLS corpora, namely NCLS, XSAMSum and XMediaSum, covering both monologue and dialogue texts.
**NCLS** (Zhu et al., 2019) is the first large cross-lingual news summarization corpus, which is constructed by automatically translating existing mono-lingual summarization datasets using a round-trip strategy, with human post-editing on test sets.
**XSAMSum** and **XMediaSum** are both from ClidSum (Wang et al., 2022), where the authors manually translate summaries from two English dialogue summarization datasets, namely SAMSum Gliwa et al. (2019) and MediaSum Zhu et al. (2021), into Mandarin and German.
### Error Analysis on _Pipeline_ Annotation
Since all 3 corpora have the task of summarizing English (En) documents into Mandarin (Zh) summaries, we perform human evaluation on this language direction. For each corpus, we randomly extract \(100\) instances from its training and testing sets, respectively, resulting in a total of \(600\) instances to evaluate. Each instance consists of English document (\(D^{en}\)) and summary (\(S^{en}\)), and Mandarin summary (\(S^{zh}\)).
We invite two expert translators, who are native in Mandarin and professional in English, as our judges and ask them to first evaluate whether \(S^{zh}\) contains errors, by checking \(S^{zh}\) against \(D^{en}\) (IAA2: \(0.67\), substantial agreement). If errors are found in \(S^{zh}\), the judges are asked to identify where such errors come from (IAA: \(0.80\), substantial agreement). Specifically, if an error is also found in \(S^{en}\), we regard it as caused by the mono-lingual summarization process; if it is only found in \(S^{zh}\) but not in \(S^{en}\), we regard it as caused by the translation process. In this process, we only focus on factual errors; minor syntax errors are ignored.
Footnote 2: We measure Inter-Annotator Agreement (IAA) by calculating their Pair-wise Cohen kappa score on \(60\) quiz instances.
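As a side note for reproducibility, pairwise Cohen's kappa of the kind reported above can be computed with scikit-learn; the sketch below is our own illustration (the judgment lists are invented placeholders, not the actual quiz annotations):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary judgments on the same quiz instances:
# 1 = "summary contains an error", 0 = "no error".
judge_a = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
judge_b = [1, 0, 1, 1, 1, 0, 0, 0, 1, 0]

kappa = cohen_kappa_score(judge_a, judge_b)
print(f"Pairwise Cohen's kappa: {kappa:.2f}")
# On the usual Landis-Koch scale, 0.61-0.80 counts as substantial agreement.
```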
Table 1 shows the evaluation result. Overall, we see that all CLS corpora show high error frequencies (\(20\sim 67\%\)), indicating that existing CLS data can be inaccurate. In particular, all mono-lingual summarization annotations contain errors (\(7\sim 46\%\)), which are preserved in the CLS corpora. Moreover, the cross-lingual annotation process can introduce more errors (\(13\sim 47\%\)). This verifies our assumption that the pipeline annotation protocol, which ignores valuable input context, can lead to poor data quality.
In particular, NCLS contains the most errors. Beyond the varying quality of its original mono-lingual summaries, this can be because \(S^{zh}\) in NCLS are automatically translated by MT systems. Although human post-editing is conducted on the test set, factual errors remain frequent there compared with the training set. This can be because the post-editing focuses on poor fluency and translationese, while correcting factual errors or hallucinations requires information from the source text, which is not presented to human editors. In addition, the average number of words in NCLS is much larger than in XMediaSum and XSAMSum,3 making translation more difficult.
Footnote 3: Avg. token length in English summaries: NCLS (\(55.2\)), XMediaSum (\(14.4\)), XSAMSum (\(20.3\)).
The major contradiction between the frequent errors found in our analysis and the high data quality reported by Zhu et al. (2019) and Wang et al. (2022) can be explained by the different reference sources: our results show that these datasets have limitations in the choice of reference. For example, when only given \(S^{en}\) ("_Fifty Five Percent... Ex-Viking..._") as reference, an \(S^{zh}\) that renders "_Ex-Viking_" as "_ex-pirate_" can still be judged an adequate translation, although it is inconsistent with \(D^{en}\).
### In-depth Analysis on Translation Errors
To further understand why directly translating English summaries can invite so many errors, we perform an error analysis on summaries containing translation errors and categorize them. In particular, the two judges first identify whether each translation error can be resolved by considering the input context, distinguishing errors caused by lacking input context (e.g., polyseme translation) from other translation errors (e.g., inconsistent translation). We categorize the former error types based on their linguistic typologies (avg. IAA: \(0.62\), substantial agreement):
**Word Sense (W.S.)**: the translation of a word/phrase is incorrect under source input context.
**Terminology (Ter.)**: the translation of a word/phrase can be semantically correct but is improper in source input domains.
**Coreference (C.)**: the translation of coreference expressions refer to incorrect objectives.
**Sentence Relation (S.R.)**: The relation between two sentences/clauses is induced incorrectly or the translation of a sentence is incorrect because of misunderstanding the interrelation/structure of a sentence.
**Others (Oth.)**: simple errors such as typos or less accurate translation.
Table 2 presents the error types and their error counts. First, we see that errors caused by lacking input context (W.S., Ter., C. and S.R. together: \(8\sim 41\)) outnumber other translation errors (Oth.: \(5\sim 12\)). This further suggests the necessity of considering input text when annotating CLS corpora. In addition, word sense accounts for the most errors overall (\(26.32\sim 51.02\%\), avg. \(41.81\%\)), which is in line with the intuition that lacking context mostly leads to word sense ambiguity. Moreover, every category contains error instances, suggesting that such problematic summaries can confuse humans at multiple levels of language understanding.
Appendix A shows detailed information about our judges and Appendix B shows cases of different translation error types and their analysis.
## 3 ConvSumX
To address the aforementioned issues in pipeline annotation, we propose ConvSumX with a new annotation protocol, focusing on _few-shot_ CLS. ConvSumX contains two cross-lingual summarization scenarios, namely daily dialogue summarization and query-based summarization, covering 3 language directions: En2Zh, En2Fr and En2Ukr.
### Data Source
We choose DialogSum(Chen et al., 2021) and QMSum(Zhong et al., 2021) for ConvSumX by considering their potential to build real-world applications, and annotating their test and dev sets.
**DialogSum** (Chen et al., 2021) is a real-life scenario dialogue summarization dataset, including various types of task-oriented dialogues.
**QMSum** (Zhong et al., 2021) is a query-based meeting summarization dataset, covering the academic, product and committee domains. We select data from the academic and product domains for annotation.
### Annotation
As discussed in § 2, the final quality of CLS corpora can be influenced by both the summarization and the translation processes, and most of the resulting errors can be resolved with information from input documents. Therefore, instead of merely focusing on summaries in source languages, we ask annotators to write summaries in target languages (\(S^{tgt}\)) directly by considering both input documents (\(D^{src}\)) and pre-annotated summaries (\(S^{src}\)). We refer to our protocol as the CLS protocol.
We take English as the source language and choose Mandarin, French and Ukrainian as target languages because they are from different language families, and have different morphological variations and syntactic structures, with the potential to benefit other languages in their families. We invite expert translators, who are native in target
\begin{table}
\begin{tabular}{c c|c c c c c c} \hline \hline \multicolumn{2}{c|}{**Corpora**} & \multicolumn{6}{c}{**Translation Errors**} \\ & & **W.S.** & **Ter.** & **C.** & **S.R.** & **Oth.** & **All** \\ \hline \multirow{2}{*}{NCLS} & Train & \(25\) & \(6\) & \(2\) & \(4\) & \(12\) & \(49\) \\ & Test & \(23\) & \(5\) & \(5\) & \(8\) & \(5\) & \(46\) \\ \hline \multirow{2}{*}{XMS} & Train & \(8\) & \(3\) & \(1\) & \(3\) & \(8\) & \(23\) \\ & Test & \(5\) & \(3\) & \(0\) & \(3\) & \(8\) & \(19\) \\ \hline \multirow{2}{*}{XSS} & Train & \(9\) & \(5\) & \(4\) & \(4\) & \(5\) & \(27\) \\ & Test & \(4\) & \(1\) & \(1\) & \(2\) & \(5\) & \(13\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Fine-grained categorization of translation errors. Here we report the error count of each type. W.S, Ter., C., S.R., and Oth. stand for Word Sense, Terminology, Coreference, Sentence Relation, and Others. Note that one summary can have multiple errors.
languages and professional in English, as our annotators (Appendix A). We ask annotators to first comprehend \(D^{src}\), and then write \(S^{tgt}\) with the help of \(S^{src}\). In addition to the standard annotation criteria of DialogSum and QMSum, we ask our annotators to pay particular attention to the following aspects specific to CLS:
* Cross-lingual Consistency: Although being in different languages, the core semantic information of \(S^{tgt}\) should be consistent with \(D^{src}\), in particular for polysemous words or phrases.
* Language Style and Terminology: Annotators should write \(S^{tgt}\) in the same language style of \(S^{src}\), and use proper terminologies in some certain domains, such as academic meetings.
* Translationese: The annotated summaries should be natural in the target languages.
For QMSum, annotators are additionally asked to write a query in target languages (\(Q^{tgt}\)) with the help of the query in source language (\(Q^{src}\)), where \(Q^{tgt}\) and \(S^{tgt}\) form a QA pair.
Before annotation, we ask each annotator to label training samples (\(10\%\) of each dataset) until all annotated instances meet our requirements. After annotation, each instance is reviewed by an editor, who is also an expert translator. Editors are asked to first read the annotated summary to identify whether it is natural and readable in the target languages, and then evaluate it against the source input document to identify whether there are any factual errors. If any errors are found, we ask the corresponding annotator to re-annotate the whole batch and repeat this checking and re-annotation process until all summaries are correct. As the mono-lingual summarization process can also contain errors (§ 2.1), we additionally require annotators to modify English summaries/queries if any errors are found. Table 3 presents the number of summaries that contain errors in the original datasets.
Finally, we split the original dev sets into our new training and dev sets and keep the test set unchanged (DialogSumX: \(400/100/500\) and QMSumX: \(157/40/209\)).
### Comparison between ConvSumX with _Pipeline_ Annotation Data
To qualitatively compare CLS and pipeline annotation protocols in a fair setting (e.g., to remove the influence of different data sources), we additionally annotate instances using the pipeline approach, i.e., directly translating English summaries into Mandarin. We randomly sample \(100\) instances from dev/test sets of DialogSum and QMSum, referring to them as DialogSum-P and QMSum-P, respectively. Overall, we have \(400\) instances to annotate and \(800\) instances to evaluate.
These data are annotated by the same annotators, using the same quality control process as ConvSumX. To avoid prior knowledge of input context influencing the pipeline annotation, this process is conducted _before_ the ConvSumX annotation. Then, we perform human evaluation on those translated data and the corresponding data in ConvSumX, anonymously, using the same method as described in § 2.1. For ConvSumX, we take the corrected English summaries as _pseudo_ translations for evaluation. Table 4 shows the human evaluation results.
Consistent with our findings (§ 2.1), DialogSum-P and QMSum-P contain errors (\(11\sim 31\)) from both the summarization and translation processes. In contrast, ConvSumX contains fewer errors (\(0\sim 2\)),4 indicating the necessity of our CLS annotation protocol.
\begin{table}
\begin{tabular}{l l|c c c} \hline \hline \multicolumn{2}{c|}{**Corpora**} & **Overall** & **Summ.** & **Trans.** \\ \hline \multirow{2}{*}{DialogSumX} & T+D & \(2\) & \(0\) & \(2\) \\ & Test & \(0\) & \(0\) & \(0\) \\ \hline \multirow{2}{*}{QMSumX} & T+D & \(2\) & \(0\) & \(2\) \\ & Test & \(1\) & \(0\) & \(1\) \\ \hline \multirow{2}{*}{DialogSum-P} & T+D & \(16\) & \(9\) & \(9\) \\ & Test & \(11\) & \(5\) & \(7\) \\ \hline \multirow{2}{*}{QMSum-P} & T+D & \(31\) & \(19\) & \(18\) \\ & Test & \(19\) & \(9\) & \(13\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison between CLS and pipeline annotation protocols. We count the number of different errors on \(100\) instances, respectively. T+D: Training and Dev sets, which are the original dev set.
\begin{table}
\begin{tabular}{l l|c c} \hline \hline \multicolumn{2}{c|}{**Corpora**} & **Summ.** & **Query** \\ \hline \multirow{2}{*}{DialogSum} & Dev & \(34/500\) & \(-\) \\ & Test & \(21/500\) & \(-\) \\ \hline \multirow{2}{*}{QMSum} & Dev & \(33/199\) & \(7/199\) \\ & Test & \(11/209\) & \(0/209\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Error analysis on QMSum and DialogSum. We show the number of error summaries/data size.
### Characteristics of ConvSumX
Table 5 presents a comparison between ConvSumX and other CLS corpora, highlighting the unique features of ConvSumX. Firstly, ConvSumX is designed for spoken conversation summarization and encompasses two real-world scenarios. Notably, QMSumX is the first corpus addressing query-based CLS. Secondly, ConvSumX includes multiple languages from diverse families (French: Romance; Mandarin: Chinese; Ukrainian: Slavic; English: Germanic), positioning it as a valuable resource for studying cross-lingual generalization and language transfer. Furthermore, ConvSumX is the pioneering benchmark for CLS research involving the low-resource language, Ukrainian. Lastly, ConvSumX is the first CLS benchmark that forsakes the pipeline annotation protocol, which is essentially different from all existing human-crafted corpora. The low error frequencies demonstrate its cross-lingual faithfulness.
## 4 Method
### Setting
Generally, the task of _few-shot CLS_ is defined as: given a source input text \(D^{src}\), few-shot CLS is to generate a summary in a target language, \(S^{tgt}\), by learning from a limited number of gold-annotated \(\langle D^{src},S^{tgt}\rangle\) pairs, with the help of external knowledge, which can come from mono-lingual summarization data, machine translation data and PLMs.
Specifically, for _query-focused CLS_, the system is asked to generate \(S^{tgt}\) given \(D^{src}\) with a query in the target language \(Q^{tgt}\).
### Models
We evaluate two standard CLS baselines, namely the pipeline method and the End2End method, and propose a novel 2-Step framework; the three differ in the way the cross-lingual summary is generated. Figure 2 summarizes the main difference between their workflows.
**Pipeline Method.** Previous work decomposes CLS into mono-lingual summarization and machine translation Zhu et al. (2019), by deploying _first-summarize, then-translate_ (_S-T_) or _first-translate, then-summarize_ (_T-S_) strategies.
We compare with _S-T_ as it can benefit from large mono-lingual summarization and monologue translation data, while _T-S_ has been proven much worse Feng et al. (2022) as both dialogue translation and non-English summarization data are very limited. For QMSumX, we additionally translate \(Q^{tgt}\) into \(Q^{src}\) before mono-lingual summarization and translation, to which we refer as _T-S-T_.
**End2End Method.** Previous work models the CLS task end-to-end and has shown better performance on previous datasets compared with pipeline methods Zhu et al. (2019); Xu et al. (2019).
We compare two End2End methods: First, we directly fine-tune a multi-lingual model on \(\langle D^{src},S^{tgt}\rangle\) (DialogSumX) and \(\langle\{Q^{tgt},D^{src}\},S^{tgt}\rangle\) (QMSumX), marked as E2E; Second, inspired by Bai et al. (2021), where an End2End model first generates mono-lingual summary and then cross-lingual summary in an auto-regressive way and shows good performance in few-shot setting, we fine-tune a multi-lingual model on \(\langle D^{src},\{S^{src};S^{tgt}\}\rangle\) (DialogSumX) and \(\langle\{Q^{tgt},D^{src}\},\{S^{src};S^{tgt}\}\rangle\) (QMSumX), marked as E2M (M means mixed).
**2-Step Method.** Inspired by our data analysis (§ 2), showing that a mono-lingual summary can help guide salient information for the cross-lingual summary,
\begin{table}
\begin{tabular}{l|c c c c c c c} \hline \hline
**Corpora** & **Domain** & **Lan. Direct** & **Annotation** & \(D^{src}\) & \(S^{src}\) & \(S^{tgt}\) & **\% E.** \\ \hline En2ZhSum & News & En2Zh & \(D^{src}\sim S^{src}\sim S^{tgt}\) & 755.0 & 55.2 & 96.0 & 33.5 \\ Zh2EnSum & News & Zh2En & \(D^{src}\sim S^{src}\sim S^{tgt}\) & 103.7 & 17.9 & 13.7 & - \\ En2DeSum & News & En2De & \(D^{src}\sim S^{src}\sim S^{tgt}\) & 31.0 & 8.5 & 7.5 & - \\ \hline XSAMSum & Written chit-chat & En2Zh/De & \(D^{src}\sim S^{src}\sim S^{tgt}\) & 83.9 & 20.3 & 33.0/19.9 & 27.5/- \\ XMediaSum & Interview & En2Zh/De & \(D^{src}\sim S^{src}\sim S^{tgt}\) & 1553.4 & 14.4 & 30.0/14.8 & 27.0/- \\ \hline DialogSumX & Real-life dialog & En2Zh/Fr/Ukr & \(\{D^{src},S^{src}\}\to S^{tgt}\) & 131.9 & 19.9 & 53.0/22.0/17.3 & 1.0/- \\ QMSumX & Q-F meeting & En2Zh/Fr/Ukr & \(\{D^{src},S^{src}\}\to S^{tgt}\) & 1916.2 & 63.5 & 114.4/72.1/49.9 & 1.5/- \\ \hline \hline \end{tabular}
\end{table}
Table 5: Statistics of ConvSumX and other human-crafted CLS datasets. Lan. Direct: language direction. #: average length. \(D^{src}\), \(S^{src}\) and \(S^{tgt}\) are text lengths. We calculate character length for Mandarin and token length for others. Q-f: Query-focused. % E.: average sampled error rate. Both Zh2EnSum Zhu et al. (2019) and En2DeSum Bai et al. (2021) are constructed using the same method as En2ZhSum Zhu et al. (2019). “\(\rightarrow\)”: human annotation. “\(\sim\)”: automatic generation with human post-editing.
and that generating a proper translation requires information from the source input text, we propose a 2-Step method. Conceptually, 2-Step is designed to simulate human annotation: we ask an End2End model to generate \(S^{tgt}\) given concatenated \(S^{src}\) and \(D^{src}\). Compared with pipeline methods, the 2-Step method can explicitly make use of information from the source input. Compared with End2End methods, 2-Step can focus on relevant information with the help of mono-lingual summaries.
Similarly, for QMSumX, we obtain the source-language summaries by first translating \(Q^{tgt}\) into \(Q^{src}\) and then using mono-lingual summarizers. During inference, we use model-generated source summaries as \(S^{src}\), obtained in the same way as in the pipeline methods.
Note that all individual models are seq2seq models. The terms "_pipeline_", "End2End" and "2-Step" describe the relation between the source input text and the output cross-lingual summaries.
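To make the contrast concrete, the sketch below shows how training pairs could be assembled for each paradigm; the separator token, function name and field names are our own illustrative choices, not taken from the released code:

```python
def build_example(d_src, s_src, s_tgt, q_tgt=None, method="2-step", sep=" </s> "):
    """Assemble one seq2seq training pair (input_text, target_text) for CLS.

    d_src: source-language document   s_src: mono-lingual summary
    s_tgt: cross-lingual summary      q_tgt: target-language query (QMSumX only)
    """
    prefix = q_tgt + sep if q_tgt is not None else ""
    if method == "e2e":      # D^src -> S^tgt
        return prefix + d_src, s_tgt
    if method == "e2m":      # D^src -> {S^src ; S^tgt}, generated auto-regressively
        return prefix + d_src, s_src + sep + s_tgt
    if method == "2-step":   # {S^src ; D^src} -> S^tgt
        # At inference time, s_src is the summary produced by the
        # mono-lingual summarizer, as in the pipeline's first step.
        return prefix + s_src + sep + d_src, s_tgt
    raise ValueError(f"unknown method: {method}")
```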
## 5 Experiments
**Metrics.** For automatic evaluation, we use Rouge Lin (2004)5 and BERTScore Zhang et al. (2020)6. Rouge measures the \(n\)-gram overlap between generated and reference summaries. BERTScore calculates the pairwise cosine similarity between BERT Devlin et al. (2019) token embeddings of generated and reference summaries. We report the \(F\)-1 scores of Rouge-1 (R1), Rouge-2 (R2), Rouge-L (RL) and BERTScore (BS).
Footnote 5: [https://github.com/csebuentlp/xl-sum](https://github.com/csebuentlp/xl-sum)
Footnote 6: [https://github.com/Tiiiger/bert_score](https://github.com/Tiiiger/bert_score)
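For reference, both metrics are available as open-source packages; the snippet below is a minimal sketch using Google's rouge-score package in place of the multilingual ROUGE fork linked above (for non-space-delimited languages such as Mandarin, texts should be pre-tokenized before ROUGE scoring):

```python
from rouge_score import rouge_scorer
from bert_score import score as bert_score

candidates = ["the generated summary"]   # placeholder system outputs
references = ["the reference summary"]   # placeholder gold summaries

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"])
rouge = scorer.score(references[0], candidates[0])
print({k: v.fmeasure for k, v in rouge.items()})

# BERTScore: pairwise cosine similarity between contextual token embeddings.
P, R, F1 = bert_score(candidates, references, lang="en")
print(f"BERTScore F1: {F1.mean().item():.4f}")
```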
**Implementation Details.** For mono-lingual generation, we use UniSumm7 for model initialization, further pre-training it on the original training sets of DialogSum and QMSum, and then prefix-tuning it on our few-shot training data. For cross-lingual generation (MT or CLS), we use mBART-large-50-many-to-many-mmt8 for model initialization and then fine-tune it on our cross-lingual data. All experiments are conducted on an NVIDIA A100 GPU. We conduct a hyper-parameter search for learning rate and batch size over [1.5e-4, 1e-4, 5e-5, 3e-5, 1e-5] and [8, 16, 32, 64], and choose the best checkpoint based on the R2 score on our few-shot dev sets.
Footnote 7: [https://github.com/microsoft/UniSumm](https://github.com/microsoft/UniSumm)
Footnote 8: [https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt)
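A minimal sketch of the cross-lingual fine-tuning setup described above, assuming a recent Hugging Face transformers version; the text fields and generation settings are placeholders, not the searched values:

```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

name = "facebook/mbart-large-50-many-to-many-mmt"
tok = MBart50TokenizerFast.from_pretrained(name)
model = MBartForConditionalGeneration.from_pretrained(name)

tok.src_lang, tok.tgt_lang = "en_XX", "zh_CN"   # En2Zh direction
batch = tok(["<mono-lingual summary> </s> <source document>"],
            text_target=["<cross-lingual summary>"],
            return_tensors="pt", padding=True, truncation=True, max_length=1024)

loss = model(**batch).loss   # one fine-tuning step would backprop this
loss.backward()

# Generation: force the target-language BOS token.
out = model.generate(batch["input_ids"],
                     attention_mask=batch["attention_mask"],
                     forced_bos_token_id=tok.lang_code_to_id["zh_CN"],
                     num_beams=4, max_length=256)
print(tok.batch_decode(out, skip_special_tokens=True))
```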
### Main Results
The main results on DialogSumX (_DX_) and QMSumX (_QX_) are shown in Table 6. In general, we find that our 2-Step system achieves the best results in most languages and the best average results on both tasks. In particular, the 2-Step system outperforms the pipeline method (_S-T_) (avg. improvement: \(0.19\) R2 and \(0.24\) BS scores on _DX_; \(0.61\) R2 and \(1.39\) BS scores on _QX_). It also outperforms End2End models by a large margin (avg. improvement: \(4.73\sim 5.78\) R2 and \(2.36\sim 2.79\) BS scores on _DX_; \(1.65\) R2 and \(2.69\) BS scores on _QX_). Note that the 2-Step system is additionally presented with source summary and input text information compared with E2E and _S-T_ systems. Thus, the superiority of 2-Step demonstrates that the source document and source summary are
Figure 2: Illustration of pipeline method, end2end method, and our 2-Step method. MLS: mono-lingual summarizer; CLS: cross-lingual summarizer; Tans: translator.
crucial in modeling cross-lingual summaries, and are complementary to each other.
Moreover, _S-T_ outperforms End2End models. The contradiction between our results and previous findings (Bai et al., 2021; Chen et al., 2022) can be explained by the fact that the summarizer and translator we use are much stronger and the error propagation problem is less severe. Also, _S-T_ can benefit from our high-quality parallel cross-lingual summary pairs (\(S^{src}\) and \(S^{tgt}\)) as few-shot translation data, while previous work ignores such valuable data and only uses a fixed MT system without fine-tuning (Zhu et al., 2019).
All CLS systems perform better at En2Zh and En2Fr than En2Ukr. The high performance on En2Zh and En2Fr can be explained by that both Zh and Fr are highly-rich resource data on which mBART-50 is pre-trained (Tang et al., 2021), and mBART-50 can easily bridge the alignment between texts in Zh/Fr and En. In contrast, Ukr is a low-resource language, on which the mBART-50 performs poorly. All systems have higher performance on _DX_ compared with _QX_, which is because _QX_ is more challenging w.r.t the task of query-based summarization for long text and more extreme few-shot setting, and its domain is very different from mBART-50's pre-training data.
We notice that all models perform better on _QX_ En2Fr than on En2Zh and En2Ukr. A possible reason is that _QX_ contains many professional in-domain words whose senses can be multiple and very different from their general ones. These senses can correspond to different lexical items, in particular for Zh or Ukr, which are typologically different from En (Chen and Ng, 1989; Budzhak-Jones, 1998). In contrast, Fr and En both use Latin script and are more similar in terms of morphology and lexicon rules (Kirsner et al., 1984; Pacton and Deacon, 2008; Fan et al., 2021) compared with Zh and Ukr. For example, "_discourse_" can be mapped to several distinct lexical items in Zh (meaning, e.g., "_academic paper_" or "_talk_"), whereas Fr offers the single close cognate "_discours_".
### Human Evaluation

We further conduct human evaluation on four dimensions: _Fluency_ evaluates the quality of generated sentences, including grammar and whether it is natural; _Coherence_ evaluates the collective quality of generated summaries; _Relevance_ evaluates the importance of information in generated summaries; _Consistency_ evaluates factual alignment between generated summaries and source input texts. We randomly extract \(50\) summaries from _S-T_ and 2-Step outputs on ConvSumX for each language, and ask native speakers to give scores from \(1\) to \(5\). Higher scores indicate higher qualities.
The result is shown in Table 7. Generally, all metrics see low scores, suggesting the challenge of few-shot CLS. Both models see higher scores on _DX_ compared with _QX_, which is consistent with our automatic evaluation. Compared with _S-T_, 2-Step achieves similar Relevance scores on all tasks. This is because the input source summary for both models is identical, thus the information in it is the same. However, 2-Step achieves higher Fluency, Coherence, and Consistency scores, which justifies our assumption that source input text information is critical, in particular for consistency.
We present a case study of model outputs in Appendix D.
## 6 Related Work
**CLS Corpora.** Existing CLS corpus construction can be categorized into two main protocols: 1) pipeline annotation, translating summaries from MLS corpora into other languages; and 2) automatic alignment, aligning summaries and input texts of different language versions.
Zhu et al. (2019) construct the first large-scale CLS dataset by automatically translating monolingual summaries using MT systems with a round-trip strategy and manual post-editing on test sets. Bai et al. (2021) construct an En2De dataset using the same method. Feng et al. (2022) automatically translate summaries from SAMSum Gliwa et al. (2019) into Russian, De and Zh. Wang et al. (2022) manually translate summaries from SAMSum Gliwa et al. (2019) and MediaSum Zhu et al. (2021) into De and Zh. Different from them, we propose a new annotation protocol, which helps annotators to comprehend documents quickly and accurately. To our knowledge, we are the first to address such human annotation issues for CLS research and present a new benchmark, ConvSumX.
A different line of work constructs CLS datasets by linking different language versions of online articles, such as Wikipedia Perez-Beltrachini and Lapata (2021) and WikiHow Ladhak et al. (2020). Despite the cheap cost and large scale, there can be misalignment and hallucination problems. For example, Wikipedia articles and their leading paragraphs (pseudo summaries) of the same person in different languages can contain different contents. Also, such a method is limited to resources that contain multi-lingual data, which may not be available for all domains of interest, for example, the conversational text.
**CLS Models.** Early work on CLS focuses on a pipeline paradigm by first summarizing, then translating, or vice versa. However, due to the poor performance of early MT and summarization systems, such methods can often suffer from error propagation. With the advance of deep learning and PLM technologies, recent work deploys end-to-end methods. Zhu et al. (2019), Xu et al. (2020), Bai et al. (2021) and Wang et al. (2022) propose multi-task learning or pre-training on large in-domain CLS, mono-lingual summarization and translation data. Different from them, we propose a 2-Step method under the same concept of sourcing from the source input text with the guidance of the source summary, which is free of pre-training on large data and thus can be easily adapted to other tasks and languages.
## 7 Conclusion
We conducted data analysis on 3 typical corpora and showed that the pipeline annotation protocol suffers from errors from both the summarization and translation processes. To address these issues, we proposed that cross-lingual summaries should be sourced from source input text. Based on this principle, we annotated a more faithful CLS benchmark, ConvSumX, by relying on both source-language texts and summaries. Based on the same intuition, we proposed a 2-Step method that takes both source text and source summaries as input.
\begin{table}
\begin{tabular}{c c|c c c c|c c c c} \hline \hline \multicolumn{2}{c|}{**Model**} & \multicolumn{4}{c|}{_DX_} & \multicolumn{4}{c}{_QX_} \\ & & _F._ & _Coh._ & _Con._ & _R._ & _F._ & _Coh._ & _Con._ & _R._ \\ \hline \multirow{3}{*}{_S-T_} & En2Zh & 2.60 & 2.87 & 2.27 & 3.30 & 2.10 & 2.15 & 1.95 & 2.25 \\ & En2Fr & 3.23 & 4.43 & 3.37 & 2.50 & 2.85 & 3.65 & 1.60 & 1.35 \\ & En2Ukr & 3.90 & 3.57 & 3.20 & 3.20 & 3.30 & 3.25 & 2.90 & 3.00 \\ \hline \multirow{3}{*}{_2-S_} & En2Zh & 2.90 & 3.00 & 2.50 & 3.30 & 2.40 & 2.45 & 2.20 & 2.45 \\ & En2Fr & 3.30 & 4.47 & 3.47 & 2.50 & 3.00 & 3.65 & 1.90 & 1.50 \\ \cline{1-1} & En2Ukr & 3.83 & 3.70 & 3.57 & 3.30 & 3.35 & 3.25 & 3.00 & 3.05 \\ \hline \hline \end{tabular}
\end{table}
Table 7: _F., Coh., Con. and R. are Fluency, Coherence, Consistency and Relevance. 2-S: 2-Step. Please note that the scores are not comparable between languages._
Experimental results showed that the 2-Step method outperforms strong baselines on ConvSumX, demonstrating that both source-language texts and summaries are crucial in modeling cross-lingual summaries and are complementary to each other. To our knowledge, we are the first to show that summary translation has limitations for CLS, giving a more faithful solution.
### Limitations
The limitations of this paper can be stated from three perspectives. First, although our CLS annotation protocol yields more faithful data, the annotation cost is higher because annotators need to comprehend the full source text instead of only the source summary. Second, ConvSumX only covers 3 typical languages; languages from other language families, with different morphology and lexical/syntactic rules, require further investigation. Third, although the proposed 2-Step method is effective, we simply concatenate the source input text and mono-lingual summary at the token level as the model input, without further exploration. We believe that more sophisticated designs to integrate features from the source input text and mono-lingual summary can further improve CLS performance, which we leave for future work.
### Ethics Statement
**Data Usage and License.** ConvSumX is based on two public English conversation summarization datasets, namely DialogSum and QMSum. Both datasets are freely available online under the MIT license, which places no constraints on academic use, modification, and further distribution. We will follow the MIT license to make our data (annotated target summaries/queries and corrected English summaries/queries) freely available online.
**Human Annotation.** The construction of ConvSumX involves human annotation. We hire \(4\) expert translators as our annotators and editors for each target language. The total cost is around \(6,500\) USD, which applies to our annotation (including quiz annotation) and review. The hourly salary is the same for all annotators. The total annotation time (including training annotation and editing) for Zh, Fr and Ukr is around \(96\), \(96\), and \(120\) hours (according to our annotation cost/hourly salary). Detailed information about our annotators/judges/editors can be found in Appendix A.
**Content Safety.** During our annotation, annotators are explicitly asked not to include any personal/violent information and to write summaries strictly limited to the scope of the source input text. Also, if any violent or uncomfortable information is found in the source input text, annotators are asked to report such issues. All data are further reviewed by editors. With careful checking and evaluation, ConvSumX (including source input text) contains no personal/violent content, and is safe to use.
## Acknowledgement
We thank reviewers from ACL2023 for their suggestions. We extend our sincere and special thanks to our meta-reviewers for their indispensable and exceptional contributions. We also appreciate Ruochen Xu for insightful discussion and expert translators from Lan-bridge who have played a crucial role in the development of ConvSumX. This work is funded by the Ministry of Science and Technology of China (grant No. 2022YFE0204900) and National Natural Science Foundation of China (grant NSFC No. 62161160339).
|
2304.14983 | Homotopy truncations of homotopically stratified spaces | Intersection homology of Goresky and MacPherson can be defined from the
Deligne sheaf, obtained from truncations of complexes of sheaves. As
intersection homology is not the homology of a particular space, the search for
a family of spaces whose homologies have properties analogous to intersection
homology has developed. For some stratified spaces, M. Banagl has introduced
such a family by using a topological truncation: the original link is replaced
by a truncation of its homological Moore resolution.
In this work, we study the dual approach in the Eckmann-Hilton sense: we
consider the stratified space obtained by replacing the original link by a
Postnikov approximation. The main result is that our construction recovers the
space constructed by Gajer to establish an intersection Dold-Thom theorem.
We are conducting this study within the general framework of Quinn's
homotopically stratified spaces. | David Chataur, Martintxo Saralegi-Aranguren, Daniel Tanré | 2023-04-28T17:07:36Z | http://arxiv.org/abs/2304.14983v3 | # Homotopy truncations of Homotopically stratified spaces
###### Abstract.
Intersection homology of Goresky and MacPherson can be defined from the Deligne sheaf, obtained from truncations of complexes of sheaves. As intersection homology is not the homology of a particular space, the search for a family of spaces whose homologies have properties analogous to intersection homology has developed. For some stratified spaces, M. Banagl has introduced such a family by using a topological truncation: the original link is replaced by a truncation of its homological Moore resolution.
In this work, we study the dual approach in the Eckmann-Hilton sense: we consider the stratified space obtained by replacing the original link by a Postnikov approximation. The main result is that our construction recovers the space constructed by Gajer to establish an intersection Dold-Thom theorem. We conduct this study within the general framework of Quinn's homotopically stratified spaces.
Key words and phrases: Intersection homology; Quinn spaces; Gajer spaces; Linkwise localization. 2020 Mathematics Subject Classification: 57N80, 55P60, 58A35, 32S60. The first author was supported by the research project ANR-18-CE93-0002 "OCHOTO". The third author was partially supported by the Proyecto PID2020-114474GB-100 and the ANR-11-LABX-0007-01 "CEMPI".
## Introduction
In [14], M. Goresky and R. MacPherson re-establish Poincaré duality with rational coefficients for some stratified spaces, called pseudomanifolds. For that, they introduce perversity functions, \(\overline{p}\), and intersection homology defined from the Deligne sheaf, obtained from a succession of ad hoc truncations of complexes of sheaves ([15]). This process is done at an algebraic level and not at a topological one; intersection homology is not the homology of a space.
M. Banagl has developed ([1]) the idea of constructing a family of spaces, \(I_{\overline{p}}X\), from a stratified pseudomanifold \(X\), whose homologies have properties analogous to intersection homology. In particular, if \(D\overline{p}\) is the complementary perversity of \(\overline{p}\), a Poincaré duality, similar to that of [14, 15], is required between the (co)homologies of \(I_{\overline{p}}X\) and \(I_{D\overline{p}}X\). In [2, 3], Banagl achieves this goal for 2-strata pseudomanifolds with certain requirements, such as the existence of a flat link bundle. The construction of \(I_{\overline{p}}X\) uses a topological truncation: the original link is replaced by a truncation of its homological Moore resolution. For pseudomanifolds with only isolated singularities, let us also mention the work of M. Spiegel ([28]), where a truncation of the link is performed with respect to any homology theory given by a connective ring spectrum. In the case of the ordinary Eilenberg-MacLane spectrum, Spiegel recovers exactly the intersection homology.
Here, we develop an Eckmann-Hilton dual version of Banagl's construction, \(I_{\overline{p}}X\), replacing the truncation of the Moore homological decomposition by a truncation of the Postnikov tower of the links. Using homotopical tools, the process fits well with the homotopically stratified spaces introduced by F. Quinn and we place this work in this setting, see Definition 2.3.
A second type of space also appears. To present it, let us recall that intersection homology is based on a choice of singular chains. This choice is made according to a perversity \(\overline{p}\) and the chosen chains are called \(\overline{p}\)-allowable. The tricky point is that the boundary of a \(\overline{p}\)-allowable chain is not necessarily \(\overline{p}\)-allowable. Thus, to obtain a chain complex, it is necessary to require the allowability of the chain and of its boundary. Here we do not work with chain complexes but with topological spaces or simplicial sets. Therefore, the allowability property must be required on the simplexes and on all their faces. This construction has been introduced by P. Gajer in [12, 13]. From a stratified space \(X\) and a perversity \(\overline{p}\), Gajer gets a simplicial set \(\mathscr{G}_{\overline{p}}X\).
Let us detail our main result for a Quinn homotopically stratified space \(X\) of depth \(1\). Denote by \(S\) the singular stratum of \(X\) and by \(\operatorname{holink}_{\mathfrak{s}}(X,S)\) the stratified homotopy link introduced by Quinn, formed of paths beginning in \(S\) which never return to \(S\). The space \(X\) is the homotopy pushout of
\[S\xleftarrow{\ \operatorname{\mathsf{eval}}_{0}\ }\operatorname{holink}_{\mathfrak{s}}(X,S)\xrightarrow{\ \operatorname{\mathsf{eval}}_{1}\ }X\backslash S,\] where \(\operatorname{\mathsf{eval}}_{0}\) and \(\operatorname{\mathsf{eval}}_{1}\) are the evaluation maps of paths at \(0\) and \(1\). Let \(\overline{p}\) be a perversity on \(X\), with complementary perversity \(D\overline{p}\). As recalled in Section 4, a Postnikov stage \(P_{\ell}Y\) of a topological space \(Y\) is a particular case of a localization. The process of localization also exists for maps and is called fibrewise localization. We denote it \(\widetilde{P}_{\ell}\). Our main result reads as follows for spaces of depth \(1\).
**Theorem A**.: _Let \((X,\overline{p})\) be a perverse manifold homotopically stratified space with connected links and two strata, \(S\) and \(X\backslash S\). Then there is a homotopy equivalence between the realisation of \(\mathscr{G}_{\overline{p}}X\) and the fibrewise Postnikov \(D\overline{p}(S)\)-localization of \(\operatorname{\mathsf{eval}}_{0}\colon\operatorname{holink}(X,S)\to S\)._
For instance, if \(X=\mathring{c}Y\) is the cone of apex \(\mathsf{v}\) on a topological space \(Y\), the previous construction gives the \(D\overline{p}(\mathsf{v})\)-stage of the Postnikov tower of \(Y\) which is known to be a Gajer space.
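Recall, for the reader's convenience (this is the standard characterization, not a statement specific to this text), that the \(\ell\)-th Postnikov stage of a space \(Y\) comes with a map \(Y\to P_{\ell}Y\) inducing
\[\pi_{i}(P_{\ell}Y)\cong\begin{cases}\pi_{i}(Y)&\text{if }i\leq\ell,\\ 0&\text{if }i>\ell,\end{cases}\]
so that the cone example above truncates the homotopy groups of the link \(Y\) above degree \(D\overline{p}(\mathsf{v})\).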
In the general case, we need a linkwise localization, obtained by induction on the depth and described in Subsection 4.1. The main result is stated in Theorem 4.2 and, as announced, means that the Eckmann-Hilton dual of Banagl's construction gives the Gajer space.
**Outline of the paper.** Sections 1 and 2 contain basic recalls on stratified objects and tools: Quinn's homotopically stratified spaces, holinks, perversities, etc. In Section 3, we present the Gajer spaces ([12]), already studied in [6, 7], and complete their properties concerning fibrations and homotopy pushouts. In Section 4, we specify the concept of linkwise localization and prove the main theorem. An example is also given.
**Notation and convention.** Let \(\mathbf{Sset}\) be the category of simplicial sets and \(\mathbf{Top}\) be the category of weak-Hausdorff compactly generated spaces ([23]). We denote by \(\operatorname{Sing}\colon\mathbf{Top}\to\mathbf{Sset}\) the functor given by the singular chains and by \(|-|\colon\mathbf{Sset}\to\mathbf{Top}\) the realisation functor. We use the notation \(\Delta[n]\), \(\partial\Delta[n]\), \(\Lambda[n,j]\) in the simplicial case and \(\Delta^{n}\), \(\partial\Delta^{n}\), \(\Lambda^{n}_{j}\) for their associated polyhedron. A manifold is supposed to be a connected, separable metric space. All path spaces are given the compact-open topology.
We are grateful to the referee for her/his comments and suggestions which helped us to improve the manuscript.
## 1. Perversity on stratified spaces
Among the first structures adapted to singularities, there is the complex of manifolds of Whitney ([32]): an amalgamation of manifolds of different dimensions. This concept led to the notions of Whitney and Thom-Mather stratified spaces. In their pioneering works ([14, 15]), Goresky and MacPherson use _pseudomanifolds_. Here we are mainly concerned with the homotopically stratified spaces of Quinn ([27]), recalled in Section 2.
We denote by **Top** the category of weak-Hausdorff compactly generated spaces (henceforth called "space") with morphisms the continuous maps (henceforth called "map"), see [23, Section 2]. This category verifies the conditions required by Hirschhorn in [16, Section 1.1.1]. Let us now enter in the stratified world.
**Definition 1.1**.: A _filtered space_ is a space, \(X\), endowed with a filtration
\[X^{-1}=\emptyset\subseteq X^{0}\subseteq X^{1}\subseteq\cdots\subseteq X^{n-2 }\subseteq X^{n-1}\subseteq X^{n}=X,\]
by closed subsets. Such a filtration gives a partition of \(X\) by path-connected, locally closed subspaces defined as the non-empty path-components of \(X_{i}=X^{i}\backslash X^{i-1}\) and called _strata_. The set of strata is denoted by \(\mathcal{S}_{X}\) (or \(\mathcal{S}\) if there is no ambiguity). A _stratified space_ is a filtered space whose set of strata satisfies the _Frontier condition:_
\[S_{i}\cap\overline{S_{j}}\neq\emptyset\quad\text{implies}\quad S_{i}\subset \overline{S_{j}}.\]
Each stratum of a filtered space has a _formal dimension_ given by \(\dim S=i\) if \(S\subset X^{i}\). As well, the formal codimension is \(\operatorname{codim}S=n-i\). In a stratified space, the set \(\mathcal{S}\) of strata is a poset for the relation \(S_{i}\preceq S_{j}\) if \(S_{i}\subset\overline{S_{j}}\). The maximal elements of \(\mathcal{S}\) are called _regular_ and we say _bottom stratum_ for a minimal one.
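For a first illustration (a standard example, added here for concreteness), consider the suspension \(X=\Sigma T^{2}\) of the \(2\)-torus, filtered by
\[\emptyset\subset X^{0}=X^{1}=X^{2}=\{\mathsf{n},\mathsf{s}\}\subset X^{3}=\Sigma T^{2}.\]
There are two singular strata, the suspension points \(\{\mathsf{n}\}\) and \(\{\mathsf{s}\}\), of formal dimension \(0\) and formal codimension \(3\), and one regular stratum \(\Sigma T^{2}\backslash\{\mathsf{n},\mathsf{s}\}\cong T^{2}\times\,]0,1[\) of formal dimension \(3\); the Frontier condition clearly holds, so \(X\) is a stratified space.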
**Definition 1.2**.: A _stratified map_\(f\colon X\to Y\) between two stratified spaces is a continuous map such that for each stratum \(S\) of \(X\) there is a stratum \(S^{f}\) of \(Y\) verifying \(f(S)\subset S^{f}\).
Let \(X\) be a stratified space. A map \(F\colon Z\times A\to X\) is _stratum-preserving along_\(A\) if \(F(\{z\}\times A)\) lies in a single stratum of \(X\), for any \(z\in Z\). If \(A=I\), we say that \(F\) is a _stratum-preserving homotopy_ and denote by \(\sim\) the associated equivalence relation. A map \(F\colon Z\times I\to X\) whose restriction to \(Z\times[0,1[\) is stratum-preserving along \([0,1[\) is called a _nearly stratum-preserving homotopy._
**Definition 1.3**.: Two stratified spaces, \(X\) and \(Y\), are _stratified homotopy equivalent_ if there exist stratified maps \(f\colon X\to Y\) and \(g\colon Y\to X\) and stratum-preserving homotopies: \(f\circ g\sim\operatorname{id}_{Y}\) and \(g\circ f\sim\operatorname{id}_{X}\). We denote this relation \(X\simeq_{s}Y\).
Let us recall a notion of fibration adapted to stratified spaces, see [18, Definition 5.1].
**Definition 1.4**.: Let \(X\) and \(Y\) be stratified spaces. A map \(f\colon X\to Y\) is a _stratified fibration_ provided given any space \(Z\) and any commuting diagram,
\[\begin{array}{ccc}Z\times\{0\}&\xrightarrow{\ g\ }&X\\ \big\downarrow&&\big\downarrow{\scriptstyle f}\\ Z\times I&\xrightarrow{\ F\ }&Y\end{array}\tag{1.1}\]
with \(F\) a stratum-preserving homotopy, there exists a stratum-preserving homotopy, \(\widetilde{F}\), such that \(f\circ\widetilde{F}=F\) and \(\widetilde{F}(z,0)=g(z)\) for each \(z\in Z\).
The mapping cylinder of a map \(f\colon X\to Y\) is endowed with the _teardrop topology_. As set, the mapping cylinder of \(f\) is the quotient \(\operatorname{cyl}f=(X\times[0,1])\sqcup Y/\sim\) for the relation \((x,1)\sim f(x)\) for \(x\in X\). The teardrop topology on \(\operatorname{cyl}f\) is defined as the minimal topology such that:
1. the inclusion \(X\times[0,1[\to\operatorname{cyl}f\) is an open embedding,
2. the map \(c\colon\operatorname{cyl}f\to Y\times[0,1]\), defined by \(c(x,t)=(f(x),t)\) if \((x,t)\in X\times[0,1[\) and \(c(y)=(y,1)\) if \(y\in Y\), is continuous.
If \(f\) is a proper map between locally compact Hausdorff spaces, the teardrop topology is the usual quotient topology. We send the reader to [17, 21], [20, Page 138-139] for basic properties of teardrop topology.
Let \(f\colon X\to Y\) be a map (not necessarily stratified) between stratified spaces. The _strata of the mapping cylinder_ are the strata of \(Y\) and the products \(S\times[0,1[\) where \(S\) is a stratum of \(X\). The _open mapping cylinder_ is the subspace \(\mathring{\operatorname{cyl}}f=\operatorname{cyl}f\backslash(X\times\{0\})\). Homotopy pushouts of two maps with the same domain being double mapping cylinders, they are also equipped with the teardrop topology.
Intersection homology of Goresky and MacPherson is defined from a parameter, called perversity.
**Definition 1.5**.: A _perversity on a filtered space, \(X\),_ is a map \(\overline{p}\colon\mathcal{S}_{X}\to\overline{\mathbb{Z}}=\mathbb{Z}\cup\{\pm\infty\}\) taking the value \(0\) on the regular strata. The pair \((X,\overline{p})\) is called a _perverse space_. The _top perversity_\(\overline{t}\) is defined by \(\overline{t}(S)=\operatorname{codim}S-2\), for a singular stratum \(S\). Given a perversity \(\overline{p}\) on \(X\), the _complementary perversity_ on \(X\), \(D\overline{p}\), is characterized by \(D\overline{p}(S)+\overline{p}(S)=\overline{t}(S)\), for any singular stratum \(S\).
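As a touchstone (classical examples of Goresky and MacPherson, recalled here rather than taken from the text at hand), the lower-middle and upper-middle perversities
\[\overline{m}(S)=\left\lfloor\frac{\operatorname{codim}S-2}{2}\right\rfloor\quad\text{and}\quad\overline{n}(S)=\left\lceil\frac{\operatorname{codim}S-2}{2}\right\rceil\]
are complementary, \(\overline{m}+\overline{n}=\overline{t}\), so \(D\overline{m}=\overline{n}\); likewise, the zero perversity \(\overline{0}\) and the top perversity \(\overline{t}\) satisfy \(D\overline{0}=\overline{t}\).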
**Definition 1.6**.: Let \(f\colon X\to Y\) be a stratified map and \(\overline{p}\) be a perversity on \(Y\). The _pullback perversity of \(\overline{p}\) by \(f\)_ is the perversity \(f^{*}\overline{p}\) on \(X\) defined on any singular stratum \(S\) of \(X\) by \(f^{*}\overline{p}(S)=\overline{p}(S^{f})\), where \(S^{f}\) is the stratum of \(Y\) containing \(f(S)\). In the case of the canonical injection of an open subset endowed with the induced filtration, \(\iota\colon U\to Y\), we still denote by \(\overline{p}\) the perversity \(\iota^{*}\overline{p}\) and call it the _induced perversity_.
A perversity \(\overline{p}\) allows a selection among the singular simplexes of a filtered space.
**Definition 1.7**.: Let \((X,\overline{p})\) be a perverse space. A simplex \(\sigma\colon\Delta\to X\) is _\(\overline{p}\)-allowable_ if, for each singular stratum \(S\), the set \(\sigma^{-1}S\) verifies
\[\dim\sigma^{-1}S\leq\dim\Delta-\operatorname{codim}S+\overline{p}(S)=\dim \Delta-2-D\overline{p}(S), \tag{1.2}\]
with the convention \(\dim\emptyset=-\infty\).
For this definition to have meaning, we need to specify the notion of dimension for the subspace \(\sigma^{-1}S\) of a Euclidean simplex. There is some flexibility and as King wrote in [22], any reasonable notion of dimension gives the original intersection homology of [14]. Here, we choose the following one, introduced by Gajer ([12]) and revisited in [6].
**Definition 1.8**.: A subspace \(A\subset\Delta\) of a Euclidean simplex is of _polyhedral dimension_ less than or equal to \(\ell\) if \(A\) is included in a polyhedron \(Q\) with \(\dim Q\leq\ell\).
As a face of a \(\overline{p}\)-allowable simplex is not necessarily \(\overline{p}\)-allowable, for having a simplicial set, we will strengthen the notion of \(\overline{p}\)-allowability in Definition 3.1.
## 2. Quinn's homotopically stratified spaces
In this section, we recall the singular spaces introduced by F. Quinn in [26, 27]. Allowing a study of stratified spaces with homotopical tools, there is an extensive literature on their properties and applications.
Let \(M\) be a differentiable manifold. In [24], Nash defines a topological space \(P_{\mathcal{N}}(M)\) as the set of continuous maps \(\alpha\colon[0,1]\to M\) such that \(\alpha(t)\neq\alpha(0)\) for all \(t>0\), endowed with the compact-open topology. Using a Riemannian metric, Nash proves that the tangent bundle of \(M\) is a fibre deformation retract of \(P_{\mathcal{N}}(M)\). This result gives an alternative proof of the following theorem of Thom ([30]): the fibre homotopy type of the tangent bundle of \(M\) depends only on the topology of \(M\), which implies the topological invariance of the Stiefel-Whitney classes.
In [8], Fadell extends Nash's definition to obtain a topological analogue of the normal bundle of a locally flat \(n\)-dimensional topological manifold \(S\) in a \((n+k)\)-dimensional topological manifold \(M^{n+k}\). Recall that \(S\) is locally flat in \(M\) if each point of \(S\) admits an open neighborhood \(U\) in \(S\) and there is an open embedding \(h\colon U\times\mathbb{R}^{k}\to M\) such that \(h(x,0)=x\) for all \(x\in U\). In analogy with \(P_{\mathcal{N}}(M)\), Fadell defines the space \(\mathcal{E}^{0}\) as the space of paths in \(M\) which start on \(S\) and never return in \(S\),
\[\mathcal{E}^{0}=\{\omega\colon[0,1]\to M\mid\omega(t)\in S\text{ if, and only if, }t=0\},\]
and the space \(\mathcal{E}\) which is the union of \(\mathcal{E}^{0}\) with the constant paths in \(S\). Fadell proves that the evaluation in \(t=0\), \(\mathtt{eval}_{0}\colon(\mathcal{E},\mathcal{E}^{0})\to S\), gives a pair of locally trivial fibre spaces, thus of Hurewicz fibrations since the base \(S\) is required to be paracompact in [8]. Moreover, the fibres of this pair are homotopy equivalent to \((\mathbb{R}^{k},\mathbb{R}^{k}\backslash 0)\). Fadell also shows that the Whitney sum of \((\mathcal{E},\mathcal{E}^{0})\) with the Nash bundle of \(S\) is the restriction to \(S\) of the Nash bundle of \(M\). This naturally leads to call the pair \((\mathcal{E},\mathcal{E}^{0})\) the normal fibre space for the inclusion \(S\subset M\).
In the case of Whitney spaces ([32]), for each stratum \(S\) there is a bundle over \(S\) whose fibre is the link. This bundle comes from the existence of a tubular neighborhood ([31]). In [27], Quinn develops a family of stratified spaces for which the total space can be recovered from a succession of "homotopical amalgamations of fibrations". Most of the upcoming recalls are in the work of Quinn ([26, 27]). The first point is the stratified version of the Fadell normal bundle.
**Definition 2.1**.: Let \(X\) be a stratified space and \(Y\subset X\). The _homotopy link_ (or _holink_) of \(Y\) in \(X\) is the space
\[\operatorname{holink}(X,Y)=\{\omega\colon[0,1]\to X\mid\omega(0)\in Y\text{ and }\omega(t)\in X\backslash Y\text{ for }t\in]0,1]\}\,.\]
The _stratified homotopy link_ is a subspace of \(\operatorname{holink}(X,Y)\) whose elements lie in a single stratum after leaving \(Y\), i.e.,
\[\operatorname{holink}_{\mathfrak{s}}(X,Y)=\left\{\omega\in\operatorname{holink}(X,Y)\mid\text{ for some }S_{i}\in\mathcal{S}_{X},\;\omega(]0,1])\subset S_{i}\right\}.\]
The evaluation at \(0\) defines maps \(\mathtt{eval}_{0}\colon\operatorname{holink}(X,Y)\to Y\) and \(\mathtt{eval}_{0}\colon\operatorname{holink}_{\mathfrak{s}}(X,Y)\to Y\). The stratified homotopy link is naturally filtered by
\[\operatorname{holink}_{\mathfrak{s}}(X,Y)^{j}=\left\{\omega\in\operatorname{ holink}_{\mathfrak{s}}(X,Y)\mid\omega(1)\in X^{j}\right\}.\]
Let \(S\in\mathcal{S}_{X}\). The _local holink_ of \(x_{0}\in S\), \(\operatorname{holink}_{\mathfrak{s}}(X,x_{0})\), is the fibre at \(x_{0}\) of the map \(\mathtt{eval}_{0}\colon\operatorname{holink}_{\mathfrak{s}}(X,S)\to S\).
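The motivating computation (standard, and stated here only as orientation): for the open cone \(X=\mathring{c}L\) on a compact space \(L\), with \(Y=\{\mathsf{v}\}\) the apex, evaluation at the endpoint of small radial paths yields a homotopy equivalence
\[\operatorname{holink}(\mathring{c}L,\mathsf{v})\simeq L,\]
so the homotopy link plays the role of the link of a singular point without requiring a bundle structure or a conical chart.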
**Definition 2.2**.: ([18, Definitions 3.1 and 3.4]) The subspace \(Y\) of a space \(X\) is _forward tame_ in \(X\) if there is a neighborhood \(N\) of \(Y\) in \(X\) and a homotopy, \(h\colon N\times I\to X\), such that \(h(-,0)\) is the inclusion \(N\hookrightarrow X\), the restriction \(h(-,t)\colon Y\to X\) is the inclusion \(Y\hookrightarrow X\) for each \(t\in I\), \(h(N,1)=Y\) and \(h((N\backslash Y)\times[0,1[)\subset X\backslash Y\).
The subspace \(Y\) of a stratified space \(X\) is _stratified forward tame_ in \(X\) if there is a neighborhood \(N\) of \(Y\) in \(X\) and a homotopy, \(h\colon N\times I\to X\), such that \(h(-,0)\) is the inclusion \(N\hookrightarrow X\), the restriction \(h(-,t)\colon Y\to X\) is the inclusion \(Y\hookrightarrow X\) for each \(t\in I\), \(h(N,1)=Y\), \(h((N\backslash Y)\times[0,1[)\subset X\backslash Y\), and the restriction of \(h\) to \((N\backslash Y)\times[0,1[\) is stratum-preserving along \([0,1[\). The map \(h\) is called a _nearly stratum-preserving strong deformation retraction_ of \(N\) to \(Y\).
**Definition 2.3**.: ([27]) A stratified metric space \(X\) is a _homotopically stratified space_ if the following two conditions are satisfied for every pair of strata with \(S\preceq S^{\prime}\),
* \(S\) is forward tame in \(S\cup S^{\prime}\),
* the evaluation map \(\mathtt{eval}_{0}\colon\operatorname{holink}(S\cup S^{\prime},S)\to S\) is a fibration.
If, moreover, each stratum is a manifold without boundary and is locally-closed in \(X\), we say that \(X\) is a _manifold homotopically stratified space_.
Whitney spaces are typical examples of homotopically stratified spaces. Quinn also proves that a filtered space, with locally contractible skeleta and with conical neighborhoods (up to a stratified homotopy equivalence), is a homotopically stratified space. Strata and holink spaces suffice to determine the stratified homotopy type of a homotopically stratified space ([27, Lemma 2.4]) and to detect stratified homotopy equivalences.
We list properties of homotopically stratified spaces, already present in the work of Quinn ([27]).
**Definition 2.4**.: A subspace \(Y\) of a stratified space is said to be _pure_ if it is closed and a union of strata.
**Proposition 2.5**.: ([18, Theorem 6.3 and Corollary 6.2]) _Let \(X\) be a homotopically stratified space with a finite number of strata and \(Y\subset X\) be a pure subspace. Then \(Y\) is stratified forward tame in \(X\) and the map \(\operatorname{\mathtt{eval}}_{0}\colon\operatorname{holink}_{\mathfrak{s}}(X,Y)\to Y\) is a stratified fibration._
If a nearly stratum-preserving deformation retraction of a neighborhood \(N\) of a pure subset \(Y\) exists as in the previous proposition, the neighborhood \(N\) can be replaced by the cylinder of an evaluation map. In the proof given in [27, Lemma 2.4], some evaluation maps turn out not to be continuous. Friedman has fixed it in [9, Appendix]. We quote the particular case that we need.
**Proposition 2.6**.: ([9, Proposition A.1]) _Let \(X\) be a manifold homotopically stratified space and \(S\) be a bottom stratum. Given a nearly stratum-preserving deformation retraction \(h\colon N\times I\to N\) of a neighborhood \(N\) of \(S\) to \(S\), the space \(N\) is stratum-preserving homotopy equivalent to the mapping cylinder of the map \(\operatorname{\mathtt{eval}}_{0}\colon\operatorname{holink}_{\mathfrak{s}}(N,S)\to S\)._
_Remark 2.7_.: ([11, Page 49]) With the notation of Proposition 2.6, denote \(M\) the mapping cylinder of the map \(\operatorname{\mathtt{eval}}_{0}\colon\operatorname{holink}_{\mathfrak{s}}(N,S)\to S\) and \(\mathcal{M}\) the mapping cylinder of \(\operatorname{\mathtt{eval}}_{0}\colon\operatorname{holink}_{\mathfrak{s}}(X,S)\to S\). Recall that \(\operatorname{holink}_{\mathfrak{s}}(X,S)\simeq_{s}\operatorname{holink}_{ \mathfrak{s}}(N,S)\) since they are stratified homotopy equivalent to the stratified holink of "small paths". There also exist stratum-preserving homotopy equivalences of pairs
\[(N,N\backslash S)\simeq_{s}(M,M\backslash S)\simeq_{s}(M,\operatorname{holink} _{\mathfrak{s}}(N,S))\simeq_{s}(\mathcal{M},\operatorname{holink}_{\mathfrak{ s}}(X,S))\simeq_{s}(\mathcal{M},\mathcal{M}\backslash S).\]
**Corollary 2.8**.: _Let \(X\) be a manifold homotopically stratified space and \(S\) be a bottom stratum. Then \(X\) is the homotopy pushout of_
\[S\xleftarrow{\ \mathtt{eval}_{0}\ }\operatorname{holink}_{\mathfrak{s}}(X,S)\xrightarrow{\ \mathtt{eval}_{1}\ }X\backslash S.\]
Proof.: This is a consequence of the existence of a nearly stratum-preserving deformation retract neighborhood, \(N\), of \(S\) in \(X\), ([19, Theorem 7.1]), of Proposition 2.6 and of a stratified homotopy equivalence \(\operatorname{holink}_{s}(N,S)\simeq_{s}\operatorname{holink}_{s}(X,S)\).
## 3. Gajer simplicial set
The notion of \(\overline{p}\)-allowable simplexes (Definition 1.7) is used by Gajer ([12]) for the definition of a simplicial set as follows.
**Definition 3.1**.: Let \((X,\overline{p})\) be a perverse space. A simplex \(\sigma\colon\Delta^{\ell}\to X\) is \(\overline{p}\)_-full_ if \(\sigma\) and all its faces are \(\overline{p}\)-allowable.
The set of \(\overline{p}\)-full simplexes is a simplicial set satisfying the Kan condition ([12, Page 946] or [7, Proposition 2.3]). We denote it by \(\mathscr{G}_{\overline{p}}X\) and call it the _Gajer simplicial set_ associated to \((X,\overline{p})\). In [7], we define the \(\overline{p}\)-intersection homotopy groups of a perverse space \((X,\overline{p})\) as the homotopy groups of \(\mathscr{G}_{\overline{p}}X\) and prove a Hurewicz theorem linking these groups to the \(\overline{p}\)-intersection homology groups. We also show that they satisfy a Van Kampen theorem relative to the open covers of \(X\) and prove their topological invariance if the regular parts of \(X\) and of its intrinsic stratification coincide. Let us recall and establish some other results on the spaces \(\mathscr{G}_{\overline{p}}X\).
Any stratified map \(f\colon(X,\overline{p})\to(Y,\overline{q})\) between perverse spaces such that \(f^{*}D\overline{q}\leq D\overline{p}\) induces a simplicial map \(\mathscr{G}_{\overline{p},\overline{q}}f\colon\mathscr{G}_{\overline{p}}X \to\mathscr{G}_{\overline{q}}Y\). Moreover, if \(\varphi\colon(X\times[0,1],\overline{p})\to(Y,\overline{q})\) is a stratified homotopy between two stratified maps \(f\), \(g\colon(X,\overline{p})\to(Y,\overline{q})\) with \(f^{*}D\overline{q}\leq D\overline{p}\), then we also have \(g^{*}D\overline{q}\leq D\overline{p}\) and the simplicial maps, \(\mathscr{G}_{\overline{p},\overline{q}}f\) and \(\mathscr{G}_{\overline{p},\overline{q}}g\) are homotopic ([7, Proposition 2.5]).
**Proposition 3.2**.: ([7, Corollary 2.6]) _Let \(f\colon(X,\overline{p})\to(Y,\overline{q})\) be a stratified homotopy equivalence between perverse stratified spaces such that \(f^{*}D\overline{q}=D\overline{p}\). Then, the assignment \(\sigma\mapsto f\circ\sigma\) induces a homotopy equivalence between \(\mathscr{G}_{\overline{p}}X\) and \(\mathscr{G}_{\overline{q}}Y\)._
**Proposition 3.3**.: _Let \(f\colon E\to B\) be a stratified fibration such that \(B\) has only one stratum and let \(\overline{p}\) be a perversity on \(E\). The fibre \(F\) of \(f\) is endowed with the induced filtration. Then, the map \(\mathscr{G}_{\overline{p}}f\colon\mathscr{G}_{\overline{p}}E\to\operatorname{ Sing}B\) is a Kan fibration of fibre \(\mathscr{G}_{\overline{p}}F\)._
Proof.: By definition of a Kan fibration, we have to solve the lifting problem
\[\begin{array}{ccc}\Lambda[\ell,k]&\longrightarrow&\mathscr{G}_{\overline{p}}E\\ \downarrow&&\downarrow{\scriptstyle\mathscr{G}_{\overline{p}}f}\\ \Delta[\ell]&\longrightarrow&\operatorname{Sing}B.\end{array}\]
By adjunction between \(\operatorname{Sing}\) and the realisation functor \(|-|\), as \(\mathscr{G}_{\overline{p}}E\subset\operatorname{Sing}E\), this is equivalent to the lifting problem
\[\begin{array}{ccc}|\Lambda[\ell,k]|&\xrightarrow{\ \tau\ }&E\\ \downarrow&&\downarrow{\scriptstyle f}\\ |\Delta[\ell]|&\xrightarrow{\ \sigma\ }&B,\end{array}\]
where the lift \(\widetilde{\sigma}\colon|\Delta[\ell]|\to E\) must be of full \(\overline{p}\)-intersection. The map \(|\Lambda[\ell,k]|\to|\Delta[\ell]|\) is homeomorphic to \(\Delta^{\ell-1}\to\Delta^{\ell-1}\times[0,1]\). Thus, by hypothesis and Definition 1.4, we have a stratum-preserving homotopy \(\widetilde{\sigma}\colon\Delta^{\ell-1}\times[0,1]\to E\). In this particular case, this means \(f\circ\widetilde{\sigma}=\sigma\), \(\widetilde{\sigma}(z,0)=\tau(z)\) and, for any \(z\in\Delta^{\ell-1}\), the image \(\widetilde{\sigma}(z\times[0,1])\) is included in one stratum of \(E\). Let \(S\subset E_{n-k}\backslash E_{n-k-1}\) be a stratum. The previous properties imply \(\widetilde{\sigma}^{-1}(S)\cong\tau^{-1}(S)\times[0,1]\) and \(\dim\widetilde{\sigma}^{-1}(S)=\dim\tau^{-1}(S)+1\leq\ell-k+\overline{p}(S)+1\). The same argument works for any face, thus \(\widetilde{\sigma}\in\mathscr{G}_{\overline{p}}E\).
By definition ([12, Page 947]) the map \(f\) is a filtered fibration in the sense of Gajer. From [12, Theorem 2.2], we get a long exact sequence in homotopy \(\cdots\to\pi_{*}(\mathscr{G}_{\overline{p}}F)\to\pi_{*}(\mathscr{G}_{ \overline{p}}E)\to\pi_{*}(\operatorname{Sing}B)\to\dots\). If \(K\) is the fibre of \(\mathscr{G}_{\overline{p}}E\to\operatorname{Sing}B\), there is a simplicial map \(\mathscr{G}_{\overline{p}}F\to K\). From the long exact sequence in homotopy of the fibration \(\mathscr{G}_{\overline{p}}f\) and the five lemma, we get isomorphisms \(\pi_{*}(\mathscr{G}_{\overline{p}}F)\cong\pi_{*}(K)\). As \(K\) and \(\mathscr{G}_{\overline{p}}F\) are Kan complexes, they are homotopy equivalent.
**Proposition 3.4**.: _Let \((X,\overline{p})\) be a perverse stratified space and \(U\), \(V\) be two open subsets of \(X\), endowed with the induced stratifications and perversities. We suppose that \(U\), \(V\), \(U\cap V\) are path-connected. Then the following diagram is a homotopy pushout,_
\[\begin{array}{ccc}\mathscr{G}_{\overline{p}}(U\cap V)&\longrightarrow&\mathscr{G}_{\overline{p}}U\\ \downarrow&&\downarrow\\ \mathscr{G}_{\overline{p}}V&\longrightarrow&\mathscr{G}_{\overline{p}}(U\cup V)\end{array} \tag{3.1}\]
Proof.: Denote by \(K\) the pushout in \(\mathbf{Sset}\) of \(\mathscr{G}_{\overline{p}}U\leftarrow\mathscr{G}_{\overline{p}}(U\cap V)\rightarrow\mathscr{G}_{\overline{p}}V\). The two maps being injective, this is a homotopy pushout. By universal property, there is a canonical map \(\varphi\colon K\to\mathscr{G}_{\overline{p}}(U\cup V)\). By ([7, Theorem 2.13]), the map \(\varphi\) induces an isomorphism in homology for any local coefficients and by the stratified Van-Kampen theorem ([7, Theorem 4.1]), it induces an isomorphism between the fundamental groups. Therefore, the map \(\varphi\) is a weak homotopy equivalence.
**Corollary 3.5**.: _Let \((X,\overline{p})\) be a homotopically stratified space with path-connected links and \(Y\subset X\) be a pure subset such that \(Y\), \(X\backslash Y\) and \(\operatorname{holink}_{s}(X,Y)\) are path-connected. If \(X\) is the homotopy pushout of_
\[Y\xleftarrow{\ \mathtt{eval}_{0}\ }\operatorname{holink}_{\mathfrak{s}}(X,Y)\xrightarrow{\ \mathtt{eval}_{1}\ }X\backslash Y \tag{3.2}\]
_then \(\mathscr{G}_{\overline{p}}X\) is the homotopy pushout of_
\[\mathscr{G}_{\overline{p}}Y\longleftarrow\mathscr{G}_{\overline{p}}\operatorname{holink}_{\mathfrak{s}}(X,Y)\longrightarrow\mathscr{G}_{\overline{p}}(X\backslash Y) \tag{3.3}\]
Proof.: The homotopy pushout of (3.2) is the space
\[M=X\backslash Y\sqcup\operatorname{holink}_{\mathfrak{s}}(X,Y)\times I\sqcup Y /\sim,\]
where the relation \(\sim\) is generated by \((\omega,0)\sim\omega(0)\) and \((\omega,1)\sim\omega(1)\). A basis of open subsets of the teardrop topology of \(M\) ([11]) consists of the open subsets of the product \(\operatorname{holink}_{\mathfrak{s}}(X,Y)\times]0,1[\) with the product topology and of the sets \((\operatorname{\mathsf{eval}}_{0}^{-1}(W)\times]0,\varepsilon[)\cup W\), \((\operatorname{\mathsf{eval}}_{1}^{-1}(W^{\prime})\times]1-\varepsilon,1[) \cup W^{\prime}\), where \(W\) is an open subset of \(Y\) and \(W^{\prime}\) an open subset of \(X\backslash Y\). We consider the two open subsets
\[\left\{\begin{array}{rcl}U&=&(\operatorname{holink}_{\mathfrak{s}}(X,Y) \times]0,3/4[)\cup Y,\\ V&=&(\operatorname{holink}_{\mathfrak{s}}(X,Y)\times]1/4,1[)\cup(X\backslash Y).\end{array}\right.\]
Their union is \(M\), their intersection is \(\operatorname{holink}_{\mathfrak{s}}(X,Y)\times]1/4,3/4[\) and there are homotopy equivalences \(U\simeq Y\), \(V\simeq X\backslash Y\), \(U\cap V\simeq\operatorname{holink}_{\mathfrak{s}}(X,Y)\). We thus have a pushout built from open subsets.
The result follows from Proposition 3.4.
## 4. Postnikov truncation of links
### Linkwise localization
We denote by \(\mathcal{T}\) one of the categories \(\mathbf{Top}\) or \(\mathbf{Sset}\), pointed or not. The following background concerns the localization along a map \(f\colon A\to B\) of \(\mathcal{T}\) between cofibrant spaces, see [16].
A fibrant space \(W\) is said to be \(f\)_-local_ if the induced map of simplicial sets, \(f^{*}\colon\operatorname{Map}(B,W)\to\operatorname{Map}(A,W)\) is a weak equivalence. A map \(g\colon X\to Y\) between cofibrant spaces is an \(f\)_-local equivalence_ if the induced map of simplicial sets, \(g^{*}\colon\operatorname{Map}(Y,W)\to\operatorname{Map}(X,W)\) is a weak equivalence for every \(f\)-local space \(W\). An \(f\)_-localization_ of a space \(X\) is an \(f\)-local space \(\overline{X}\) with an \(f\)-local equivalence \(j_{X}\colon X\to\overline{X}\). Let \(f\colon A\to B\) be an injection if \(f\in\mathbf{Sset}\) or an inclusion of cell complexes if \(f\in\mathbf{Top}\). Then for every space \(X\), there exists ([16, Theorem 1.3.11]) a natural \(f\)-localization \(j_{X}\colon X\to L_{f}X\) with \(j_{X}\) a cofibration.
Localization can also be done for maps as shown by the following proposition extracted from [16, Theorem 6.1.3].
**Proposition 4.1**.: _There is a functorial factorization of every map \(p\colon X\to Z\) of \(\mathcal{T}\) as \(X\xrightarrow{i}\overline{L}_{f}X\xrightarrow{q}Z\), called fibrewise \(f\)-localization of \(p\), such that the following properties are satisfied._
1. _The map_ \(q\) _is a fibration with_ \(f\)_-local fibres and the map_ \(i\) _is a cofibration and an_ \(f\)_-local equivalence. Moreover, for any_ \(z\in Z\)_, the map induced by_ \(i\) _between the homotopy fibres is an_ \(f\)_-localization._
2. _For any decomposition of_ \(p\) _as_ \(X\xrightarrow{j}W\xrightarrow{r}Z\)_, where_ \(r\) _a fibration with_ \(f\)_-local fibres, there exists_ \(k\colon\overline{L}_{f}X\to W\) _such that_ \(k\circ i=j\) _and_ \(r\circ k=q\)_. Moreover, if_ \(j\) _is another fibrewise_ \(f\)_-localization, then_ \(k\) _is a weak equivalence._
Recall that a functorial factorization in \(\mathcal{T}\) means that any map in \(\mathcal{T}\) factors into a composite of two maps, in a way that depends functorially on commutative squares ([25]).
Let \(X\) be a homotopically stratified space with a finite poset of strata \(\mathcal{S}\). If \(S\) is a bottom stratum, the stratified fibration, \(\operatorname{\mathtt{eval}}_{0}\colon\operatorname{holink}_{\mathfrak{s}}(X,S)\to S\), is a fibration. Its fibre in \(x_{0}\in S\) is the local holink \(L=\operatorname{holink}_{\mathfrak{s}}(X,x_{0})\) and we can apply to \(\operatorname{\mathtt{eval}}_{0}\) the fibrewise localization along any map \(f\colon A\to B\) of \(\operatorname{\mathbf{Top}}\). We then replace the space \(X\), obtained as the homotopy pushout of
\[S\xleftarrow{\ \mathtt{eval}_{0}\ }\operatorname{holink}_{\mathfrak{s}}(X,S)\xrightarrow{\ \mathtt{eval}_{1}\ }X\backslash S \tag{4.1}\]
by the space \(\mathcal{L}_{f}X\), defined as the homotopy pushout of
\[X\backslash S\xleftarrow{\ \mathtt{eval}_{1}\ }\operatorname{holink}_{\mathfrak{s}}(X,S)\xrightarrow{\ i\ }\overline{L}_{f}\operatorname{holink}_{\mathfrak{s}}(X,S) \tag{4.2}\]
For instance, in the particular case of a cone \(X=\hat{\mathtt{c}}Y\) on a space \(Y\), of apex \(\mathtt{v}\), stratified by \(\mathtt{v}\preceq Y\times]0,1[\), we have
\[\mathtt{v}\xleftarrow{\ \mathtt{eval}_{0}\ }\operatorname{holink}_{\mathfrak{s}}(\hat{\mathtt{c}}Y,\mathtt{v})\xrightarrow{\ \mathtt{eval}_{1}\ }Y\times]0,1[ \tag{4.3}\]
The map \(\operatorname{\mathtt{eval}}_{1}\) is a weak equivalence. The fibration \(\operatorname{\mathtt{eval}}_{0}\) can be trivialized and its fibre has the homotopy type of \(Y\). We therefore get \(\mathcal{L}_{f}(\hat{\mathtt{c}}Y)\simeq L_{f}Y\). In short, the link \(Y\) has been localized along \(f\). If the stratified space has more than two strata, we repeat the previous process.
We can also choose different localizing maps for each stratum: let \(\Phi\) be a correspondence that associates to each stratum \(S\) a map \(\Phi(S)\colon A_{S}\to B_{S}\) of \(\operatorname{\mathbf{Top}}\). The _linkwise \(\Phi\)-localization_, \(\mathcal{L}_{\Phi}X\), of \(X\) is obtained by induction on the number of strata by starting with \(\mathcal{L}_{\Phi}X=X\) if \(X\) has only regular strata. In the general case, let \(S\) be a bottom stratum of \(X\). The image by \(\mathcal{L}_{\Phi}\) of \(X\backslash S\) and \(\operatorname{holink}_{\mathfrak{s}}(X,S)\) is defined by induction since they have one stratum less than \(X\). We therefore have a map \(\mathcal{L}_{\Phi}\operatorname{holink}_{\mathfrak{s}}(X,S)\to\mathcal{L}_{ \Phi}(X\backslash S)\). Let us also consider the fibrewise \(\Phi(S)\)-localization of \(\mathcal{L}_{\Phi}(\operatorname{\mathtt{eval}}_{0})\colon\mathcal{L}_{\Phi} \operatorname{holink}_{\mathfrak{s}}(X,S)\to S\):
\[\mathcal{L}_{\Phi}\operatorname{holink}_{\mathfrak{s}}(X,S)\xrightarrow{i_{ S}}\overline{L}_{\Phi(S)}(\mathcal{L}_{\Phi}\operatorname{holink}_{ \mathfrak{s}}(X,S))\xrightarrow{q_{S}}S.\]
We define \(\mathcal{L}_{\Phi}X\) as the homotopy pushout of
\[\mathcal{L}_{\Phi}(X\backslash S)\longleftarrow\mathcal{L}_{\Phi}\operatorname{holink}_{\mathfrak{s}}(X,S)\xrightarrow{\ i_{S}\ }\overline{L}_{\Phi(S)}(\mathcal{L}_{\Phi}\operatorname{holink}_{\mathfrak{s}}(X,S)) \tag{4.4}\]
We study it in the case of the Postnikov localization.
### Postnikov truncation
Let \(\ell\geq 0\) and \(f_{\ell}\colon\mathbb{S}^{\ell+1}\to\mathbb{D}^{\ell+2}\) be the standard inclusion in \(\operatorname{\mathbf{Top}}\) of the \((\ell+1)\)-sphere in the \((\ell+2)\)-ball, see [16, Section 1.5]. A space \(X\) is \(f_{\ell}\)-local if, and only if, \(\pi_{i}X=0\) for \(i>\ell\) and every choice of basepoint. The Postnikov projection \(\rho_{\ell}\colon X\to P_{\ell}X\) is an \(f_{\ell}\)-localization map. We also call it the _Postnikov \(\ell\)-localization_ of \(X\). A map \(g\colon X\to Y\) is an \(f_{\ell}\)-local equivalence if, and only if, \(g\) induces isomorphisms \(g_{*}\colon\pi_{i}X\to\pi_{i}Y\) for \(i>\ell\) and every choice of basepoint.
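For instance (a standard computation, added here for illustration): \(P_{1}\mathbb{S}^{2}\simeq *\) since \(\pi_{1}\mathbb{S}^{2}=0\), while
\[P_{2}\mathbb{S}^{2}\simeq K(\mathbb{Z},2)\simeq\mathbb{CP}^{\infty},\]
the projection \(\rho_{2}\colon\mathbb{S}^{2}\to P_{2}\mathbb{S}^{2}\) being an isomorphism on \(\pi_{2}\) and killing all homotopy groups in degrees \(>2\).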
The main result of this section shows the Gajer simplicial set as a linkwise localization.
**Theorem 4.2**.: _Let \((X,\overline{p})\) be a perverse manifold homotopically stratified space with a finite number of strata and connected local holinks. Let \(\Phi\) be the correspondence associating to any stratum \(S\in\mathcal{S}\) the injection \(\mathbb{S}^{D\overline{p}(S)+1}\to\mathbb{D}^{D\overline{p}(S)+2}\). Then, the linkwise localization \(\mathcal{L}_{\Phi}X\) has the homotopy type of the realisation \(|\mathscr{G}_{\overline{p}}X|\) of the Gajer simplicial set associated to \((X,\overline{p})\)._
Proof of Theorem 4.2.: The result is obvious for spaces with only one stratum. We use an induction on the number of strata and assume the result true for the spaces having strictly less than \(k\) strata. Let \((X,\overline{p})\) be as in the statement with \(k\) strata. We choose a bottom stratum \(S\) of \(X\). We know from Proposition 2.5 that the evaluation map \(\mathtt{eval}_{0}\) is a stratified fibration and thus a fibration, since \(S\) consists of a single stratum. From Proposition 2.5, we also know that \(S\) admits a neighborhood \(N\) in \(X\) with a nearly stratum-preserving deformation retraction of \(N\) to \(S\). Moreover, from Proposition 2.6, we have the existence of a stratified homotopy equivalence between \(N\) and the mapping cylinder of \(\mathtt{eval}_{0}\colon\operatorname{holink}_{\mathtt{s}}(N,S)\to S\),
\[N\simeq_{s}\mathring{\operatorname{cyl}}(\mathtt{eval}_{0})=(\operatorname{ holink}_{\mathtt{s}}(N,S)\times[0,1[)\sqcup S/(\omega,0)\sim\omega(0).\]
This mapping cylinder has the teardrop topology and a filtration defined by \(S\) and the ends of the paths in \(\operatorname{holink}_{\mathtt{s}}(N,S)\). The map \(\overline{\mathtt{eval}}_{0}\colon\mathring{\operatorname{cyl}}(\mathtt{eval}_{0})\to S\) sends \(s\in S\) to itself and \((\omega,t)\) to \(\omega(0)\). This is a stratified fibration of fibre \(\mathring{\mathfrak{c}}L\), see [10, Proposition 3.3]. Recall that \(P_{\ell}\) and \(\widetilde{P}_{\ell}\) are, respectively, the Postnikov \(\ell\)-truncation and the fibrewise Postnikov \(\ell\)-truncation. We consider the following diagram in \(\mathbf{Sset}\):
\[\begin{array}{ccc}\mathscr{G}_{\overline{p}}\operatorname{holink}_{\mathtt{s}}(N,S)&=&\mathscr{G}_{\overline{p}}\operatorname{holink}_{\mathtt{s}}(N,S)\\ \downarrow{\scriptstyle i}&&\downarrow{\scriptstyle j}\\ \widetilde{P}_{D\overline{p}(S)}(\mathscr{G}_{\overline{p}}\operatorname{holink}_{\mathtt{s}}(N,S))&&\mathscr{G}_{\overline{p}}\mathring{\operatorname{cyl}}(\mathtt{eval}_{0})\\ \downarrow{\scriptstyle q}&&\downarrow{\scriptstyle\mathscr{G}_{\overline{p}}(\overline{\mathtt{eval}}_{0})}\\ \operatorname{Sing}S&=&\operatorname{Sing}S\end{array} \tag{4.5}\]
Here, \(L=\operatorname{holink}_{\mathtt{s}}(N,x_{0})\) is the local holink of \(x_{0}\in S\). The left column is the fibrewise Postnikov \(D\overline{p}(S)\)-localization of \(\mathscr{G}_{\overline{p}}(\mathtt{eval}_{0})\) and the right one is the image of the fibration \(\overline{\mathtt{eval}}_{0}\) by \(\mathscr{G}_{\overline{p}}\). As \(\mathscr{G}_{\overline{p}}\mathring{\mathfrak{c}}L\) is homotopy equivalent to the Postnikov section \(P_{D\overline{p}(S)}(\mathscr{G}_{\overline{p}}L)\) ([7, Corollary 3.7]), we deduce from point (2) of Proposition 4.1 the existence of a homotopy equivalence \(k\colon\widetilde{P}_{D\overline{p}(S)}(\mathscr{G}_{\overline{p}}\operatorname{holink}_{\mathtt{s}}(N,S))\to\mathscr{G}_{\overline{p}}\mathring{\operatorname{cyl}}(\mathtt{eval}_{0})\) such that \(\mathscr{G}_{\overline{p}}(\overline{\mathtt{eval}}_{0})\circ k=q\) and \(k\circ i=j\).
Finally Proposition 3.2 implies the existence of a homotopy equivalence between \(\mathscr{G}_{\overline{p}}\mathring{\operatorname{cyl}}(\mathtt{eval}_{0})\) and \(\mathscr{G}_{\overline{p}}N\) and we have obtained a homotopy equivalence \(\widetilde{P}_{D\overline{p}(S)}(\mathscr{G}_{\overline{p}}\operatorname{holink }_{\mathtt{s}}(N,S))\simeq\mathscr{G}_{\overline{p}}N\). In the following diagram of simplicial sets,
\[\begin{array}{ccc}\mathscr{G}_{\overline{p}}\operatorname{holink}_{\mathtt{s}}(N,S)&\xrightarrow{\ \simeq\ }&\mathscr{G}_{\overline{p}}(N\backslash S)\\ \downarrow&&\downarrow\\ \widetilde{P}_{D\overline{p}(S)}(\mathscr{G}_{\overline{p}}\operatorname{holink}_{\mathtt{s}}(N,S))&\xrightarrow{\ \simeq\ }&\mathscr{G}_{\overline{p}}N\end{array} \tag{4.6}\]
the horizontal maps are homotopy equivalences. For the top ones, this property comes from Remark 2.7 and for the lower ones this has been established above. We conclude that the diagram (4.6) is a homotopy pushout.
We consider the open covering \(X=N\cup(X\backslash S)\). As \(S\) and \(L\) are path-connected and \(\mathtt{eval}_{0}\) is a fibration, we deduce that \(\operatorname{holink}_{\mathtt{s}}(X,S)\) is path-connected and so is \(N\). We can apply
Proposition 3.4 and get a homotopy pushout
\[\begin{array}{ccc}\mathscr{G}_{\overline{p}}(N\backslash S)&\longrightarrow&\mathscr{G}_{\overline{p}}(X\backslash S)\\ \downarrow&&\downarrow\\ \mathscr{G}_{\overline{p}}N&\longrightarrow&\mathscr{G}_{\overline{p}}X\end{array} \tag{4.7}\]
With the existence of a stratified homotopy equivalence \(\operatorname{holink}_{\mathfrak{s}}(N,S)\simeq_{s}\operatorname{holink}_{ \mathfrak{s}}(X,S)\), the juxtaposition of (4.6) and (4.7) is a homotopy pushout:
\[\begin{array}{ccc}\mathscr{G}_{\overline{p}}\operatorname{holink}_{\mathtt{s}}(N,S)&\longrightarrow&\mathscr{G}_{\overline{p}}(X\backslash S)\\ \downarrow&&\downarrow\\ \widetilde{P}_{D\overline{p}(S)}(\mathscr{G}_{\overline{p}}\operatorname{holink}_{\mathtt{s}}(N,S))&\longrightarrow&\mathscr{G}_{\overline{p}}X\end{array} \tag{4.8}\]
By induction and Remark 2.7, we have a series of weak equivalences:
\(\mathcal{L}_{\Phi}\operatorname{holink}_{\mathfrak{s}}(X,S)\simeq|\mathscr{G}_{\overline{p}}\operatorname{holink}_{\mathfrak{s}}(X,S)|\simeq|\mathscr{G}_{\overline{p}}\operatorname{holink}_{\mathfrak{s}}(N,S)|\), \(|\mathscr{G}_{\overline{p}}(X\backslash S)|\simeq\mathcal{L}_{\Phi}(X\backslash S)\) and
\(|\widetilde{P}_{D\overline{p}(S)}(\mathscr{G}_{\overline{p}}\operatorname{holink}_{\mathfrak{s}}(N,S))|\simeq\widetilde{P}_{D\overline{p}(S)}(\mathcal{L}_{\Phi}\operatorname{holink}_{\mathfrak{s}}(X,S))\). Therefore, the diagram (4.8) gives a homotopy pushout,
\[\begin{array}{ccc}\mathcal{L}_{\Phi}\operatorname{holink}_{\mathfrak{s}}(X,S)&\longrightarrow&\mathcal{L}_{\Phi}(X\backslash S)\\ \downarrow&&\downarrow\\ \widetilde{P}_{D\overline{p}(S)}(\mathcal{L}_{\Phi}\operatorname{holink}_{\mathfrak{s}}(X,S))&\longrightarrow&|\mathscr{G}_{\overline{p}}X|\end{array} \tag{4.9}\]
and the result follows.
In the particular case of a space with two strata, Theorem 4.2 reduces to Theorem A.
**Example 4.3**.: Let \(F\to E\to B\) be a manifold bundle over a manifold base. With a fibrewise conification, we get a fibration
\[\hat{\mathtt{c}}F\longrightarrow X\longrightarrow B \tag{4.10}\]
This fibration admits a section, \(x\in B\mapsto\mathtt{v}_{x}\in X\), where \(\mathtt{v}_{x}\) is the apex of the fibre over \(x\). We thus get an identification of the base \(B\) as a closed subset of \(X\) and we filter \(X\) by \(\emptyset\subset X_{0}=B\subset X_{1}=X\). The singular subset is \(B\) and the link of \(x\in B\) in \(X\) is \(F\). The Quinn presentation expresses \(X\) as the homotopy pushout,
\[B\xleftarrow{\ \mathtt{eval}_{0}\ }\operatorname{holink}_{\mathfrak{s}}(X,B)\xrightarrow{\ \mathtt{eval}_{1}\ }X\backslash B \tag{4.11}\]
Thus the realisation of \(\mathscr{G}_{\overline{p}}X\) is the fibrewise \(D\overline{p}(B)\)-Postnikov localization of \(F\to E\to B\), and the \(\overline{p}\)-intersection homotopy groups of [7], \(\pi_{*}^{\overline{p}}X\), fit into the long exact sequence
\[\cdots\to\pi_{*}(P_{D\overline{p}(B)}F)\to\pi_{*}^{\overline{p}}X\to\pi_{*}(B)\to\pi_{*-1}(P_{D\overline{p}(B)}F)\to\cdots.\]
If we add hypotheses of nilpotency and finite type (as in [4, Section 6]), the fibration \(F\to E\to B\) admits a Sullivan model ([29]), \((\wedge Z,d)\to(\wedge Z\otimes\wedge W,D)\to(\wedge W,\overline{D})\). The index \(\ell\) of a fibrewise Postnikov \(\ell\)-localization corresponds to the degree of the graded vector space \(W\). Thus, \((\wedge Z\otimes\wedge W^{\leq\overline{p}(B)},D)\) is a Sullivan model of the Gajer space \(\mathscr{G}_{\overline{p}}X\). In a future work, we will connect this construction with that of the perverse minimal model introduced in [5]. |
2303.01497 | Teach a Robot to FISH: Versatile Imitation from One Minute of
Demonstrations | While imitation learning provides us with an efficient toolkit to train
robots, learning skills that are robust to environment variations remains a
significant challenge. Current approaches address this challenge by relying
either on large amounts of demonstrations that span environment variations or
on handcrafted reward functions that require state estimates. Both directions
are not scalable to fast imitation. In this work, we present Fast Imitation of
Skills from Humans (FISH), a new imitation learning approach that can learn
robust visual skills with less than a minute of human demonstrations. Given a
weak base-policy trained by offline imitation of demonstrations, FISH computes
rewards that correspond to the "match" between the robot's behavior and the
demonstrations. These rewards are then used to adaptively update a residual
policy that adds on to the base-policy. Across all tasks, FISH requires at most
twenty minutes of interactive learning to imitate demonstrations on object
configurations that were not seen in the demonstrations. Importantly, FISH is
constructed to be versatile, which allows it to be used across robot
morphologies (e.g. xArm, Allegro, Stretch) and camera configurations (e.g.
third-person, eye-in-hand). Our experimental evaluations on 9 different tasks
show that FISH achieves an average success rate of 93%, which is around 3.8x
higher than prior state-of-the-art methods. | Siddhant Haldar, Jyothish Pari, Anant Rai, Lerrel Pinto | 2023-03-02T18:57:38Z | http://arxiv.org/abs/2303.01497v1 | # Teach a Robot to FISH: Versatile Imitation from One Minute of Demonstrations
###### Abstract
While imitation learning provides us with an efficient toolkit to train robots, learning skills that are robust to environment variations remains a significant challenge. Current approaches address this challenge by relying either on large amounts of demonstrations that span environment variations or on handcrafted reward functions that require state estimates. Both directions are not scalable to fast imitation. In this work, we present Fast Imitation of Skills from Humans (FISH), a new imitation learning approach that can learn robust visual skills with less than a minute of human demonstrations. Given a weak base policy trained by offline imitation of demonstrations, FISH computes rewards that correspond to the "match" between the robot's behavior and the demonstrations. These rewards are then used to adaptively update a residual policy that adds on to the base policy. Across all tasks, FISH requires at most twenty minutes of interactive learning to imitate demonstrations on object configurations that were not seen in the demonstrations. Importantly, FISH is constructed to be versatile, which allows it to be used across robot morphologies (e.g. xArm, Allegro, Stretch) and camera configurations (e.g. third-person, eye-in-hand). Our experimental evaluations on 9 different tasks show that FISH achieves an average success rate of 93%, which is around 3.8\(\times\) higher than prior state-of-the-art methods.
## I Introduction
Imitation learning has proven to be among the most efficient tools to teach robots complex, dexterous and contact-rich skills. Its applications in robotics already span the fields of manipulation [29, 19], locomotion [44, 52], navigation [69, 30], and flying [18, 57]. Such imitation approaches are now gaining traction in directly learning from high-dimensional visual observations [35, 25, 30]. In broad strokes, visual imitation produces a policy that takes an image as input, and outputs actions that control the robot to perform desirable behaviors. Directly reasoning from images allows such methods to be generally applied as they circumvent the need for task-dependent estimation of state or design of features.
But, there is no free lunch. The generality of learning vision-based policies comes at the cost of needing a large number of demonstrations. MIME [59] uses 400 demonstrations per task, while robomimic [36] uses 200 demonstrations to train manipulation policies. This scale of data significantly hampers our ability to train multiple skills in reasonable amounts of time. Furthermore, collecting large amounts of demonstrations is physically and cognitively taxing on the human demonstrators due to the nature of available teleoperation frameworks [5]. Hence, getting imitation learning to work with few demonstrations is paramount for practical training of robotic skills.
To understand why imitation learning requires large amounts of data, let us take a look at one common paradigm - offline imitation. Methods in this class such as Behavior
Cloning (BC) [47] or Nearest Neighbor retrieval (NN) [43] use a supervised learning objective to maximize the likelihood of demonstrated actions given observations in the demonstration. To ensure that the resulting policy is generalizable to varying factors in deployment (e.g. object configurations), the demonstration set used in training will need to span these factors of variation. Without sufficient coverage, which is only possible with large amounts of demonstration data, trained policies often suffer from distribution shift during deployment [54].
To address the large data requirements of offline imitation and instead imitate with few examples, a promising direction is to adapt policies that were trained offline with online RL [39, 25, 51]. The hope is that while the offline policy, trained with few demonstrations, would fail in deployment, online RL will allow the policy to improve and adapt to deployment scenarios. But how does the RL algorithm get the rewards needed for adaptation? Constructing a task-specific reward function is one possibility [51, 48]. However, this strategy may not be applicable in real-world scenarios where states of objects are hard to estimate or reward functions are hard to create.
In this work, we present Fast Imitation of Skills from Humans (FISH), a new technique for robotic imitation, where given only a minute of demonstrations (between 1 to 3 trajectories), a robot can learn visual policies that both solve the task and adapt to new object configurations through subsequent online training. FISH operates in two phases. First, a weak base policy is learned by offline imitation on the few demonstrations. Second, a residual policy [61, 28, 76, 2] is trained to produce corrective offsets to the weak policy. During online trial and error training, only the residual policy is updated, while the weak policy is queried as a black box model. This allows the use of non-parametric weak policies that are shown to be superior and more robust than parametric ones in low-data settings [43, 6, 5].
An important consideration in online policy learning is obtaining relevant rewards for robot behavior. Since we do not have access to task-specific reward functions, the rewards will need to be inferred from visual data. This is done by matching the visual observations from robot rollouts with the trajectory demonstrated by the human. The matching function uses fast approximations to Optimal Transport (OT) [14] to generate a matching score, which is proportional to the rewards. This procedure does not require explicit estimation of the state or any other object-centric representation.
We evaluate FISH on three different robot platforms that cover different morphologies, weak base policies, camera placements, and gripper types. Through an extensive study across 9 tasks, we present the following key insights:
1. FISH improves upon prior state-of-the-art work in online imitation [25, 12, 32], reaching an average success rate of 93% given 20 minutes of online interactions (Section IV-D).
2. We find that FISH can generalize and adapt to a wide range of object configurations unseen in training (Section IV-D).
3. Ablations on different representation modules, adaptation strategies, and exploration strategies show that the design decisions in FISH are crucial for high performance (Section IV).
Open-sourced code and videos of FISH can be found at:
fast-imitation.github.io.
## II Background
Our work builds on several fundamental ideas in reinforcement learning, imitation learning and optimal transport. Here, we describe the most relevant background for FISH.
### _Reinforcement Learning (RL)_
We study RL as a discounted infinite-horizon Markov Decision Process (MDP) [8, 62]. For pixel observations, the agent's observation is approximated as a stack of consecutive RGB frames [37]. The MDP is of the form \((\mathcal{O},\mathcal{A},P,R,\gamma,d_{0})\) where \(\mathcal{O}\) is the observation space, \(\mathcal{A}\) is the action space, \(P:\mathcal{O}\times\mathcal{A}\rightarrow\Delta(\mathcal{O})\) is the transition function, \(R:\mathcal{O}\times\mathcal{A}\rightarrow\mathbb{R}\) is the reward function, \(\gamma\) is the discount factor and \(d_{0}\) is the initial state distribution. In this work, we use an actor critic based method to maximize the expected discounted sum of rewards. The rewards obtained through the OT computation can be used to optimize our policy through off-policy learning [32]. In this work, we use Deep Deterministic Policy Gradient (DDPG) [34] as our RL optimizer, which is an actor-critic algorithm that concurrently learns a deterministic policy \(\pi_{\phi}\) and a Q-function \(Q_{\theta}\). Instead of minimizing the one-step Bellman residual as in vanilla DDPG, we use the n-step variant proposed by Yarats et al. [72] which has been successful on visual control problems.
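To make the n-step variant concrete, here is a minimal sketch of the bootstrapped target it optimizes; the function and variable names are ours and the discount value is illustrative, not taken from the paper.

```python
def n_step_target(rewards, q_next, gamma=0.99):
    # Bootstrapped n-step TD target:
    #   sum_{i<n} gamma^i * r_{t+i}  +  gamma^n * Q(o_{t+n}, a_{t+n})
    n = len(rewards)
    ret = sum((gamma ** i) * r for i, r in enumerate(rewards))
    return ret + (gamma ** n) * q_next
```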
### _Imitation Learning (IL)_
In imitation learning, the goal is to learn a behavior policy \(\pi^{b}\) from either an expert policy \(\pi^{e}\) or trajectories derived from an expert policy \(\mathcal{T}^{e}\). In this work, we operate in a setting where the agent only has access to expert trajectories of observation-action pairs, i.e. \(\mathcal{T}^{e}\equiv\{(o_{t},a_{t})_{t=0}^{T}\}_{n=0}^{N}\). Here, \(N\) refers to the number of trajectory rollouts and \(T\) denotes the episode length. We opt for this specific setting since obtaining expert or near-expert demonstrations is feasible in real-world settings [75, 73] and is in line with recent works in the area [25, 16, 26, 32].
### _Inverse Reinforcement Learning (IRL)_
IRL [41, 1] reformulates the IL problem in the RL setting by inferring the reward function \(r^{e}\) from expert trajectories \(\mathcal{T}^{e}\). The inferred reward \(r^{e}\) is used to derive the behavior policy \(\pi^{b}\) using policy optimization. Prominent algorithms in IRL [32, 26] require alternating the inference of reward and optimization of policy in an iterative manner, which is practical for restricted model classes [1]. For compatibility with more expressive deep networks, techniques such as adversarial learning [26, 32] or optimal-transport [42, 16, 12] are needed. Adversarial IRL approaches infer a reward by learning a discriminator that minimizes the gap between expert trajectories \(\mathcal{T}^{e}\) and behavior trajectories \(\mathcal{T}^{b}\). Such a learning procedure
results in non-stationary rewards \(r^{e}\) for the optimization of \(\pi^{b}\) which is prone to unstable training.
### _Optimal Transport (OT) for imitation_
In order to alleviate the non-stationary reward issue with adversarial IRL frameworks, we resort to optimal transport (OT) based reward inference in this work [14]. A detailed description of optimal transport is provided in Appendix A.1. OT seeks to find a way to transform one distribution into another for a given cost function. The cost function represents the cost of transporting mass from one location to another. In our work, we use OT to compute a similarity between an expert trajectory \(\mathcal{T}^{e}=\{o_{1}^{e},...,o_{n}^{e}\}\) and a rollout trajectory \(\mathcal{T}^{b}=\{o_{1}^{b},...,o_{n}^{b}\}\) from our policy. Each visual observation \(o_{i}^{j}\) is passed through an encoder to obtain a lower dimensional representation \(z_{i}^{j}\). The cost function is computed as a cosine distance between the encoded representations of the observations from two trajectories, and the cost matrix \(C\) comprises the costs for different pairs of representations.
Optimal transport computes a transport plan \(\mu^{*}\) that finds the best matching between \(\mathcal{T}^{e}\) and \(\mathcal{T}^{b}\), where \(\mu^{*}_{i,j}\) represents the strength of the match between the \(i^{\text{th}}\) representation from the expert trajectory and the \(j^{\text{th}}\) representation from the rollout trajectory under some constraints which are described in Appendix A.1. We compute rewards from \(\mu^{*}\) by the following equation.

\[r^{\text{OT}}(\mathcal{T}^{b})=-\sum_{t,t^{\prime}=1}^{T}C_{t,t^{\prime}}\mu^{*}_{t,t^{\prime}} \tag{1}\]
Intuitively, maximizing this reward incentivizes the imitation agent to produce trajectories that are closer to the demonstrated trajectories.
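To make Equation 1 concrete, the sketch below computes per-step OT rewards for a rollout against a demonstration using plain Sinkhorn iterations over a cosine-distance cost. This is a minimal numpy illustration under our own assumptions: the function names, the regularization strength `eps`, and the iteration count are illustrative choices, not the authors' implementation.

```python
import numpy as np

def cosine_cost(Z_e, Z_b):
    # Pairwise cosine-distance cost between expert and behavior embeddings.
    Ze = Z_e / (np.linalg.norm(Z_e, axis=1, keepdims=True) + 1e-8)
    Zb = Z_b / (np.linalg.norm(Z_b, axis=1, keepdims=True) + 1e-8)
    return 1.0 - Ze @ Zb.T                      # shape (T_e, T_b)

def sinkhorn_plan(C, eps=0.05, n_iters=100):
    # Entropy-regularized OT between uniform marginals (Sinkhorn iterations).
    T_e, T_b = C.shape
    a, b = np.full(T_e, 1.0 / T_e), np.full(T_b, 1.0 / T_b)
    K = np.exp(-C / eps)
    u = np.ones(T_e)
    for _ in range(n_iters):
        v = b / (K.T @ u + 1e-30)
        u = a / (K @ v + 1e-30)
    return u[:, None] * K * v[None, :]          # transport plan mu*

def ot_rewards(Z_e, Z_b, eps=0.05):
    # Per-step reward for each behavior frame: negative transported cost (Eq. 1).
    C = cosine_cost(Z_e, Z_b)
    mu = sinkhorn_plan(C, eps)
    return -(C * mu).sum(axis=0)
```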
## III Approach
Given a few demonstrations for complex, contact-rich manipulation that covers a small subset of possible object configurations, we seek to learn a robot policy that can generalize to a larger set of configurations not seen during the demonstrations. To enable this, we propose Fast Imitation of Skills from Humans (FISH). FISH operates in two phases. In the first phase, a weak base policy is trained on the few demonstrations using supervised learning. This weak policy, while being poor in generalization, serves as a useful prior for subsequent adaptation. In the second phase, a residual policy is trained to adapt the base policy to new object configurations. This is done by RL on the robot with these configurations using visual trajectory matching scores as the reward signal.
### _Phase 1: Non-parametric base policy_
The expert demonstrations are first used to derive an imperfect base policy \(\pi^{b}\). In this work, we stick to non-parametric base policies owing to their proven robustness in the low-data regime [43, 6, 5] as compared to parametric alternatives such as Behavior Cloning (BC). We observe that different base policies perform differently across robots and thus, we employ two variants of non-parametric base policies in this work - an open-loop policy and closed-loop Visual Imitation through Nearest Neighbors (VINN) [43]. More details about these base policies have been provided in Section IV-G.
**Visual representation learning:** Since we operate in the visual domain, a BC policy is trained on the expert demonstrations and we use the encoder from the BC policy to encode the visual observations \(o\) into lower dimensional representations \(z\). The encoded representation \(z\) is provided as an input to both the base policy \(\pi^{b}\) and the residual policy \(\pi^{r}\). An ablation study comparing the use of such a BC encoder with other self-supervised learning techniques [24] as well as pretrained encoders [17, 71, 49, 40] is provided in Section IV.
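As an illustration of how the frozen BC encoder and a VINN-style base policy fit together, here is a minimal nearest-neighbor sketch; `k` and the exponential kernel are our own illustrative choices, and [43] should be consulted for the actual locally weighted regression.

```python
import numpy as np

def vinn_action(z, demo_embeddings, demo_actions, k=3):
    # Retrieve the k nearest demonstration frames in representation space and
    # blend their actions with distance-based weights (locally weighted regression).
    dists = np.linalg.norm(demo_embeddings - z, axis=1)
    nn = np.argsort(dists)[:k]
    w = np.exp(-dists[nn])          # closer frames receive larger weight
    w = w / w.sum()
    return (w[:, None] * demo_actions[nn]).sum(axis=0)
```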
### _Phase 2: Online offset learning with IRL_
Given the base policy \(\pi^{b}\), we then train a residual policy \(\pi^{r}\) on top of the base policy through environment rollouts. Since we are operating without explicit task rewards, we obtain rewards using OT-based trajectory matching, as described in Section II. A standard RL optimizer utilizes these OT-based rewards \(r^{OT}\) to optimize the residual policy \(\pi^{r}\) by maximizing the cumulative reward from the final policy \(\pi^{\text{FISH}}\). Similar to prior work [25, 12], we use n-step DDPG [34] as our RL optimizer, a deterministic actor-critic based method that provides high performance in continuous control [72].
**Residual learning:** In residual RL [61, 28, 76, 2], given a base policy \(\pi^{b}:\mathcal{Z}\rightarrow\mathcal{A}\) with encoded representations \(z\in\mathcal{Z}\) and action \(a\in\mathcal{A}\), we learn a residual policy \(\pi^{r}:\mathcal{Z}\times\mathcal{A}\rightarrow\mathcal{A}\) such that an action sampled from the final policy \(\pi\) is the sum of the base action \(a^{b}\sim\pi^{b}(z)\) and the residual offset \(a^{r}\sim\pi^{r}(z,a^{b})\). In prior work, the base policy \(\pi^{b}\) is either a hand-crafted controller [61, 28] or a learned policy [2]. In this work, we use the non-parametric base policy \(\pi^{b}\) and learn a residual policy \(\pi^{r}\) using OT rewards to refine the action output by \(\pi^{b}\).
Fig. 3: A schematic of FISH. The first phase obtains a base policy through offline imitation from demonstrations. The second phase learns a residual model from online interactions.

**OT-based reward maximization:** For the RL algorithm, the rewards corresponding to an agent trajectory are computed using the OT-based approach described in Equation 1. A visualization of OT rewards has been shown in Figure 4. These rewards are used to optimize the residual policy using the learning objective shown in Equation 2.
\[\pi^{r}=\operatorname*{argmax}_{\pi}\mathbb{E}_{(z,a^{b},a^{r})\sim\mathcal{D}_{ \beta}}[Q(z,a^{b},a^{r})] \tag{2}\]
Here, \(Q(z,a^{b},a^{r})\) represents the Q-value from the critic used in actor-critic policy optimization. For an encoded representation \(z\) corresponding to observation \(o\), \(a^{b}\sim\pi^{b}(z)\) is the action from the base policy, and \(a^{r}\sim\pi^{r}(z,a^{b})\) is the action from the residual policy. The executed action \(a\) is a sum of \(a^{b}\) and \(a^{r}\).
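A sketch of this action composition, assuming a torch residual actor; the `tanh` squashing and the `offset_scale` bound are our additions for illustration and are not specified in the text.

```python
import torch

def fish_act(obs, encoder, base_policy, residual_actor, offset_scale=0.1):
    # Final action a = a_base + a_res, with the residual conditioned on (z, a_base).
    with torch.no_grad():
        z = encoder(obs)                # frozen BC encoder
        a_base = base_policy(z)         # non-parametric policy, queried as a black box
    a_res = offset_scale * torch.tanh(residual_actor(torch.cat([z, a_base], dim=-1)))
    return a_base + a_res
```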
**Stabilizing OT with representation learning:** The OT rewards used for the IRL optimization are computed using the encoded representations. As a result, a changing encoder during training results in non-stationary rewards which makes the training prone to instabilities. In order to alleviate this issue, we fix the BC encoder obtained from the demonstrations and the OT rewards are computed using the representations from this fixed encoder. Section IV-I shows that a fixed encoder improves stability resulting in superior performance.
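In torch terms, keeping the reward representation stationary amounts to freezing the BC encoder before online training begins (a two-line sketch; `encoder` is a placeholder name):

```python
# Freeze the BC encoder so OT rewards are computed against fixed representations.
for p in encoder.parameters():
    p.requires_grad_(False)
encoder.eval()
```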
**Guided exploration for residual policy:** In contrast to fine-tuning a base policy [25], applying offsets through a residual policy allows us to guide the exploration during online learning by injecting domain knowledge into the framework. For instance, if there is only a subspace of the full action space that we need to explore, our framework allows learning only the offsets for this subspace while keeping the base action along the remainder of the action space unaltered. We have provided ablation studies in Section IV-E showing the advantage of such guided exploration. In addition to performance gains, constraining the offsets prevents the robot from going into undesirable positions and enables safer exploration during online learning (refer to Appendix B).
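Guided exploration can be implemented as a fixed mask on the residual offsets. The action layout and mask below are hypothetical, corresponding to the "guided" setting of Section IV-E where only the Z offset is learned.

```python
import numpy as np

# Assumed end-effector action layout: [x, y, z, roll, pitch, yaw].
GUIDED_Z_MASK = np.array([0., 0., 1., 0., 0., 0.], dtype=np.float32)

def guided_action(a_base, a_res, mask=GUIDED_Z_MASK):
    # The base action stays unaltered along masked-out dimensions.
    return a_base + mask * a_res
```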
## IV Experiments
Our experiments are designed to answer the following questions in detail: (1) How efficient is FISH for imitation learning? (2) How important is guided exploration for faster convergence? (3) How does the choice of base policy affect performance? (4) Are off-the-shelf pretrained encoders useful for online learning in a low-data regime? (5) How do additional implementation details affect FISH? (6) Does FISH generalize to new objects?
### _Experimental setup_
We demonstrate the versatility of our algorithm by evaluating our approach on a suite of 9 tasks of varying difficulty across three different robot morphologies. We collect 1 minute of demonstrations (between 1 and 3 trajectories) for each task and allow a maximum of 20 minutes of online learning. For all tasks, we operate purely in the visual domain.
### _Robot setup and task descriptions_
We evaluate our approach on 3 different robots - a Ufactory xArm 7 robot, an Allegro Hand, and a Hello Robot Stretch.
Fig. 4: An analysis of the values of OT rewards for different trajectories with respect to a given expert demonstration. The leftmost column depicts the visual demonstration, while the other columns each depict a trajectory rollout. Trajectories are sorted in increasing order of OT rewards from left to right. Raw OT scores can be visualized using the red-to-green color map.

1. **Ufactory xArm 7:** We use a xArm 7 robot with a two-fingered gripper for three tasks - key insertion, flipping a bagel, and peg in a cup. The observations are RGB images from a fixed external camera. For each task, the start position of the xArm is fixed and the object position is varied across trajectories. We use closed-loop VINN [43] as a base policy on the xArm. We provide one, two, and three expert demonstrations for the task of inserting a key, flipping a bagel, and inserting a peg in a cup respectively.
2. **Allegro Hand:** We use a 4-fingered robotic hand with a 16-dimensional joint space. We study 3 dexterous manipulation tasks on the hand - cube flipping, bottle cap spinning, and dollar bill picking. The tasks have been designed to exhibit the need for dexterity to accurately manipulate the objects. The observations are RGB images from a fixed external camera. For each task, the start position of the hand is fixed and the object position is varied across trajectories during online training. We use an open-loop policy as a base policy on the hand and the demonstrations are collected using a virtual reality (VR) framework [5]. We use one expert demonstration for all tasks on the Allegro hand.
3. **Hello Robot Stretch:** We use Hello Robot's Stretch to showcase our model's ability to interact with a realistic environment using a non-stationary robot. We perform three tasks using the Stretch robot - door opening, drawer opening, and light switching. The observations are RGB images from an egocentric camera attached to the robot gripper. Hence, the camera viewpoint changes as the robot moves. For each task, the robot is initialized at a random position in front of the object. The demonstration we collect has the robot centered with respect to the door handle, the drawer handle, or the light switch. We use an open-loop policy as a base policy and one expert demonstration for all tasks on the Stretch robot.
For each task, we vary the position of the object or the robot at the start of each episode of online learning. All the methods are evaluated on the same initial object or robot configurations, shown in Figure 6.
Fig. 5: A visualization of rollouts from FISH on a selected set of 8 tasks.
Fig. 6: Plot showing variation in object positions or robot initializations for selected tasks. The region of operation for each task is denoted by the blue box. \(\times\) on the images indicate positions where the demonstrations are collected. The green marks indicate positions where FISH succeeds and the red ones indicate failure modes. As shown, FISH succeeds with varied object positions and initial robot configurations.
### _Baseline algorithms_
We now describe the various imitation learning algorithms, both offline and online, used in this work.
1. **Open-loop:** In settings where we have one demonstration, an open-loop policy copies the actions performed by the expert at each step of the trajectory. Though this yields robust performance when the object and robot's positions match the demonstration, it performs poorly on any variations of the task.
2. **Behavior Cloning (BC):** This refers to the behavior-cloned policy [47] trained on expert demonstrations.
3. **Closed-loop VINN:** In closed-loop VINN [43], each visual observation in the demonstration is encoded into a representation. During rollouts, the \(k\)-Nearest Neighbors (\(k\)NN) algorithm is used to match to the \(k\) closest observations, and the action is computed using Locally Weighted Regression (LWR) [7] on the actions of the matched observations. In this work, we use a BC encoder for obtaining visual representations.
4. **ROT:** ROT [25] is an IRL algorithm that finetunes a BC pretrained policy through online learning in an environment by leveraging optimal transport for reward computation. ROT gets around the "forgetting problem" in such a finetuning setting [39, 67] by using a soft Q-filtering based approach to prevent the actor from incorrectly deviating from the expert demonstration.
5. **RDAC:** Discriminator Actor Critic (DAC) [32] is an adversarial imitation learning method [26, 64, 32]. DAC outperforms prior work such as GAIL [26] and AIRL [22]. RDAC is a DAC with a ROT-like regularization applied to it and has been observed to be a strong adversarial IRL baseline [25].
### _How efficient is FISH for imitation learning?_
Performance of FISH on a suite of 9 real-world tasks across 3 different robots is depicted in Table I. We observe that FISH outperforms prior work on all tasks. FISH significantly outperforms ROT [25], which is a method for finetuning a pretrained BC policy using online learning. This highlights the benefits of fixing a base policy as compared to modifying it during online finetuning. Further, aligned with results in Arunachalam et al. [6], we observe that BC performs poorly on the Allegro Hand owing to its high dimensional action space and limited demonstrations. This provides a case for using non-parametric base policies as opposed to parametric alternatives in such low-data regimes. Poor BC performance also hampers online learning, as shown by the poor performance of ROT and further examined in Section IV-G. We observe that while the learned BC policy is not robust enough to perform with high precision, the resulting representations are still sufficient for downstream fine-tuning (indicated by OT rewards shown in Figure 4). Empirically, we notice that BC is able to complete the coarse portions of the task such as reaching the object. However, the actions are often inaccurate, indicating that the BC policy learned on top of the encoded representations is not precise enough.
### _How important is guided exploration?_
As opposed to finetuning a parametric model where any update to the model can affect all dimensions of the action space, learning residuals over a fixed-based policy allows us to guide our exploration. For instance, owing to its high dimensional action space, exploring along all dimensions of the action space in the Allegro Hand renders online learning ineffective. So depending on the base policy performance, we only apply residuals along some dimensions while keeping the base policy unaltered along the remaining dimensions. Specifically, we divide our evaluations into three parts - guided, semi-guided, and unguided. For the bagel flipping task, we explore only along the Z-axis for the guided setting, along the XYZ axes for the semi-guided setting, and along both the XYZ axes and roll-pitch-yaw for the unguided setting. Figure 7 demonstrates the effectiveness of such guided exploration over the unconstrained alternative. Note that although guided exploration improves sample efficiency, unguided exploration with FISH still outperforms our strongest baselines in Table I.
### _Does regularizing the residuals help?_

We observe that untrained residual offsets can degrade performance at the start of online learning. This is primarily due to the untrained offsets driving the agent to an observation unseen in the expert demonstration, thus adversely affecting the base policy. Drawing inspiration from recent work that uses adaptive regularization to keep the online policy close to the base policy during the initial part of training [25], we adaptively regularize our residuals to stay close to zero using the same soft Q-filtering approach (see more details in Appendix A.2). However, as observed in Table II, this harms the performance of our model. Empirically, we observe that such regularization drives the residuals to values so close to zero that they become ineffective at producing significant performance gains over the base policy.
### _How does the choice of base policy affect performance?_
To understand the effect of using different base policies, we compare the performance of FISH on the variants shown in Table III. The finetuned ImageNet [17] encoder refers to a pretrained ImageNet encoder finetuned with BYOL [24] on the expert demonstrations. These experiments provide 3 key insights - \((a)\) OT-based IRL without pre-training does not work well with few environment interactions, \((b)\) self-supervised learning (SSL) methods such as BYOL do not work well in the low data regime, and \((c)\) with a decent BC policy as in the case of bagel flipping, FISH can produce significant improvements on the base policy. However, using a non-parametric base policy such as VINN obtains superior performance compared to parametric alternatives.
### _Are pretrained encoders useful for online learning?_
We compare the performance of FISH with the VINN base policy obtained from a variety of off-the-shelf encoders pretrained using self-supervised learning on large-scale datasets - ImageNet [17], MVP [71, 49] and R3M [40]. Table IV shows that even though these encoders are trained on large-scale datasets, they do not perform well in this setting. In many cases, the performance is worse than our base policies. This is perhaps because the representations learned on Internet data may not transfer well to our suite of tasks. Further, this indicates that representations trained on in-domain data, even in the low-data regime, may perform better than training on large amounts of out-of-domain data.
### _How do additional implementation details affect FISH?_
Table V provides additional insights regarding \((a)\) keeping a fixed encoder during online learning, and \((b)\) conditioning the residual policy on the base policy action. We observe that both of these techniques are necessary and dropping either of them adversely affects the performance of the algorithm.
### _Does FISH generalize to new objects?_
We demonstrate the ability of FISH to generalize to different objects with varied appearances and dynamics. In Figure 8, we show this generalization for a representative task on the xArm and the Allegro Hand. We observe that the performance drops proportionally with the increase in variation. For instance, the xArm completely fails at flipping a flatbread which is considerably softer than a bagel and requires a different strategy to flip. Similarly, the hand fails to pick up a wallet that is thicker and more uneven than a dollar bill. However, even though the model fails in extreme cases, it succeeds at performing the task with a significant variation in visual and dynamic properties of the object.
TABLE V: Ablation analysis on fixing encoders and conditioning on base actions during online learning.

| Fix Encoder | Condition on base action | Bagel Flipping | Dollar Bill Picking |
| --- | --- | --- | --- |
| ✓ | × | 0.6 | 0.1 |
| × | ✓ | **0.9** | 0.0 |
| ✓ | ✓ | **0.9** | **0.8** |
TABLE III: Comparison between success rates on 10 trials for our method with different base policies.

| Method | Bagel Flipping | Dollar Bill Picking |
| --- | --- | --- |
| IRL Scratch | 0.0 | 0.0 |
| Open-loop | 0.1 | **0.8** |
| BC | 0.7 | 0.0 |
| VINN (ImageNet) | 0.0 | 0.0 |
| VINN (BYOL) | 0.0 | 0.0 |
| VINN (BC Encoder) | **0.9** | 0.0 |
TABLE IV: Analysis of the performance of FISH using different pre-trained encoders.

| Encoder | Bagel Flipping | Dollar Bill Picking |
| --- | --- | --- |
| ImageNet | 0.0 | 0.0 |
| R3M | 0.0 | 0.1 |
| MVP | 0.3 | 0.0 |
| BC | **0.9** | **0.8** |
Fig. 7: Comparison between success rate for varied levels of guidance applied to the residual policy. On the left, we show the meaning of each level of guidance. In each scenario, the green axes denote the direction along which the offsets are learned.
### _Limitations of FISH_
To summarize our experiments, we showcase the effectiveness of our algorithm when operating in a low-data regime with a limited budget for environment interactions. We demonstrate a significant improvement in performance compared to prior state-of-the-art work and provide extensive ablations to justify our design choices. However, we recognize a few limitations in this work: \((a)\) Since the OT-based rewards used to train the residual policy align the agent with the demonstrations, the approach relies on the demonstrator being an 'expert'. \((b)\) We restrict ourselves to the visual domain, which makes it difficult to perform precise tasks where the visual signals are not very prominent. For example, it is difficult to infer a keyhole spanning a minuscule portion of an image. A potential improvement along this line might result from embracing other modalities such as tactile sensing. \((c)\) Our residual policy is randomly initialized. Pretraining the residual policy might help scale to more difficult tasks requiring more precise control.
## V Related Work
**Imitation Learning (IL)** IL [3, 27] has been shown to solve complex tasks in real-world environments. Approaches for IL include Behavior Cloning (BC) [47, 65] and Inverse Reinforcement Learning (IRL) [41, 1]. BC solely learns from offline demonstrations and has shown promising results in the presence of large diverse datasets [47, 63, 10, 56, 74]. Assistive tools and other teleoperation methods have allowed for more efficient data collection [73, 5]. BC has also been applied to tasks with a multimodal action distribution [21, 58, 13]. However, BC suffers on out-of-distribution samples [54], which renders it unsuitable for the low-data regime. Lately, there has been some work on utilizing non-parametric models to tackle such low-data regimes with offline imitation [43, 6, 5]. A significant drawback of offline IL methods is that they do not provide any means for correcting the behavior on unseen observations. IRL provides a solution to this problem by learning a robust reward function through online interactions but suffers from sample inefficiency [32]. There has been some work on improving the sample efficiency of IRL [32, 22, 70, 64], with some visual extensions to these IRL approaches [25, 9, 66, 50, 12]. Such IRL approaches have also been demonstrated performing complex tasks on real robots [1, 25, 33].
**Optimal Transport (OT)** OT [68, 46] provides a tool for comparing probability measures while accounting for the geometry of the space. In imitation learning, OT can be used to compute the alignment between a set of agent and expert observations using distance metrics such as Sinkhorn [14], Gromov-Wasserstein [45], GDTW [11], CO-OT [53] and SoftDTW [15]. Many of these distance metrics have an associated IL algorithm - SIL [42] uses Sinkhorn, PWIL [16] uses greedy Wasserstein, GDTW-IL [11] uses GDTW, and GWIL [20] uses Gromov-Wasserstein. Recent work by Cohen et al. [12] has demonstrated that the Sinkhorn distance [42] produces the most efficient learning among the discussed metrics and can be combined with offline pretraining to efficiently perform complex tasks in the real world [25]. OT has also seen use in the field of computer vision [55, 4] to show improvements for Generative Adversarial Networks (GANs) [23]. In this work, we adopt the Sinkhorn metric for online learning and combine it with non-parametric IL approaches to perform precise tasks across three robot morphologies.
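To make the Sinkhorn-based reward computation concrete, the sketch below computes an entropy-regularized transport plan between agent and expert observation embeddings and converts the transported cost into per-step rewards. This is a minimal illustration, not the exact implementation of any of the cited methods; the cosine ground cost, the regularization value, and the function names are our own assumptions.

```python
import numpy as np

def sinkhorn_plan(cost, eps=0.05, n_iters=100):
    """Entropy-regularized OT plan between two uniform empirical measures.

    cost: (T_a, T_e) pairwise ground-cost matrix between agent and expert
    observation features. Returns a transport plan of the same shape.
    """
    T_a, T_e = cost.shape
    mu = np.full(T_a, 1.0 / T_a)  # uniform mass on agent timesteps
    nu = np.full(T_e, 1.0 / T_e)  # uniform mass on expert timesteps
    K = np.exp(-cost / eps)       # Gibbs kernel
    u = np.ones(T_a)
    for _ in range(n_iters):      # alternating Sinkhorn scaling updates
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]

def ot_rewards(agent_feats, expert_feats):
    """Per-timestep imitation reward: negative cost transported from each step."""
    a = agent_feats / np.linalg.norm(agent_feats, axis=1, keepdims=True)
    e = expert_feats / np.linalg.norm(expert_feats, axis=1, keepdims=True)
    cost = 1.0 - a @ e.T          # cosine distance as an assumed ground cost
    plan = sinkhorn_plan(cost)
    return -(plan * cost).sum(axis=1)
```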
**Residual RL for robotics** Learning residuals through RL enables safe and robust online learning [61, 28, 76, 2]. Residual RL operates by applying offsets on top of a base policy. Prior works either use a hand-engineered controller [61, 28] or a policy learned from demonstrations [2] as the base policy. In this work, we resort to the latter and use non-parametric base policies obtained from one minute of expert demonstration. Prior works also assume the availability of task-specific rewards for learning the online policy. However, we differ from this and use OT matching to obtain rewards from the collected demonstration set.
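A minimal sketch (with illustrative names, not any cited system's API) of how a residual policy composes with a non-parametric base policy obtained from demonstrations:

```python
import numpy as np

def base_action(obs_feat, demo_feats, demo_actions):
    """Non-parametric base policy: replay the action of the nearest demo frame."""
    idx = np.argmin(np.linalg.norm(demo_feats - obs_feat, axis=1))
    return demo_actions[idx]

def act(obs_feat, demo_feats, demo_actions, residual_net, scale=0.1):
    """Residual control: base action plus a small learned offset.

    residual_net is any callable mapping the concatenated (observation, base
    action) vector to an offset; conditioning on the base action lets the
    offset correct it rather than replace it.
    """
    base = base_action(obs_feat, demo_feats, demo_actions)
    offset = residual_net(np.concatenate([obs_feat, base]))
    return base + scale * offset
```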
## VI Conclusion
In this work, we present a new algorithm for fast imitation learning, FISH, that demonstrates improved performance compared to prior state-of-the-art work on a variety of real robot tasks across three different robot morphologies. We demonstrate that combining an imperfect base policy with a learned residual policy can enable performing precise tasks with one minute of demonstration collection and limited environment interactions. Further, we ablate over various design decisions of FISH, which shows the importance of learning stable representations, choosing the right base policy, and performing guided exploration. While powerful, we recognize that FISH has limitations (see Section IV-K).
Fig. 8: Here we run FISH, showing one demonstration on the leftmost object and then training it on new objects. Success rates (S.R.) after 5 minutes of online learning are reported below the corresponding object.
## Acknowledgments
We thank Sridhar Pandian Arunachalam, David Brandfonbrener, Zichen Jeff Cui, Venkatesh Pattabiraman, Ilija Radosavovic, and Chris Paxton for valuable feedback and discussions. This work was supported by grants from Honda, Meta, Amazon, and ONR awards N00014-21-1-2758 and N00014-22-1-2773.
|
2307.13564 | Existence and uniqueness of solutions to some anisotropic elliptic
equations with a singular convection term | We prove the existence and uniqueness of weak solutions to a class of
anisotropic elliptic equations with coefficients of the convection term belonging
to some suitable Marcinkiewicz spaces. Some useful a priori estimates and
regularity results are also derived. | Giuseppina di Blasio, Filomena Feo, Gabriella Zecca | 2023-07-25T15:16:45Z | http://arxiv.org/abs/2307.13564v2 | Existence and uniqueness of solutions to some anisotropic elliptic equations with a singular convection term
###### Abstract
We prove the existence and uniqueness of weak solutions to a class of anisotropic elliptic equations with coefficients of the convection term belonging to some suitable Marcinkiewicz spaces. Some useful a priori estimates and regularity results are also derived.
## 1 Introduction
In this paper we obtain existence and uniqueness results for the weak solutions of the following class of Dirichlet problems
\[\left\{\begin{array}{ll}-\sum_{i=1}^{N}\partial_{x_{i}}\left[\mathcal{A}_{i }(x,\nabla u)+\mathcal{B}_{i}(x,u)\right]+\mathcal{G}(x,u)=\mathcal{F}&\mbox{ in }\Omega,\\ &\\ u=0&\mbox{ on }\partial\Omega,\end{array}\right. \tag{1.1}\]
where \(\Omega\) is a bounded domain of \(\mathbb{R}^{N}\) with Lipschitz boundary, \(N>2\), \(p_{i}>1\) for every \(i=1,...,N\) with \(\bar{p}<N\), denoting by \(\overline{p}\) the harmonic mean of \(\vec{p}=(p_{1},\cdots,p_{N})\), _i.e._
\[\frac{1}{\overline{p}}=\frac{1}{N}\sum_{i=1}^{N}\frac{1}{p_{i}}. \tag{1.2}\]
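For instance (an illustrative computation), taking \(N=3\) and \(\vec{p}=(2,2,4)\) one finds
\[\frac{1}{\overline{p}}=\frac{1}{3}\left(\frac{1}{2}+\frac{1}{2}+\frac{1}{4}\right)=\frac{5}{12},\qquad\overline{p}=\frac{12}{5}<3=N,\qquad\overline{p}^{*}=\frac{N\overline{p}}{N-\overline{p}}=12,\]
where \(\overline{p}^{*}\) is the anisotropic Sobolev exponent defined in (2.9) below; in this case \(p_{\max}=4<\overline{p}^{*}\), so the exponent \(p_{\infty}=\max\{\overline{p}^{*},p_{\max}\}\) appearing in \(({\cal H}3)\) below equals \(12\).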
Throughout this paper, we make the following assumptions for any \(i=1,...,N\),
\(({\cal H}1)\quad{\cal A}_{i}:\Omega\times{\mathbb{R}}^{N}\to{\mathbb{R}}\) is a Caratheodory function that satisfies
\[|{\cal A}_{i}(x,\xi)|\leqslant\beta_{i}|\xi_{i}|^{p_{i}-1} \tag{1.3}\] \[\alpha|\xi_{i}|^{p_{i}}\leqslant{\cal A}_{i}(x,\xi)\,\xi_{i}\] (1.4) \[0<\left({\cal A}_{i}(x,\xi)-{\cal A}_{i}(x,\eta)\right)\left(\xi _{i}-\eta_{i}\right)\quad\xi\neq\eta \tag{1.5}\]
for a.e. \(x\in\Omega\) and for any vector \(\xi,\eta\) in \({\mathbb{R}}^{N}\), where \(0<\alpha\leqslant\beta_{i}\) are constants.
\(({\cal H}2)\quad{\cal B}_{i}:\Omega\times{\mathbb{R}}\to{\mathbb{R}}\) is a Caratheodory function such that
\[|{\cal B}_{i}(x,s)|\leqslant b_{i}(x)|s|^{\frac{p}{p_{i}}}, \tag{1.6}\]
for a.e. \(x\in\Omega\) and for every \(s\in{\mathbb{R}}\), with \(b_{i}:\,\Omega\to[0,+\infty)\) measurable function such that
\[b_{i}\in L^{\frac{Np_{i}^{\prime}}{p},\infty}(\Omega). \tag{1.7}\]
\(({\cal H}3)\)\({\cal G}:\Omega\times{\mathbb{R}}\to{\mathbb{R}}\) is a Caratheodory function such that
\[|{\cal G}(x,s)|\leqslant\tilde{\mu}|s|^{\gamma} \tag{1.8}\]
with \(1\leq\gamma<p_{\infty}-1\) and
\[{\cal G}(x,s)\,s\geq 0 \tag{1.9}\]
for a.e. \(x\in\Omega\) and for every \(s\in{\mathbb{R}}\), where \(\tilde{\mu}\) is a non-negative constant and \(p_{\infty}=\max\{\overline{p}^{*},p_{\max}\}\), with \(p_{\max}=\max_{i}p_{i}\).
\(({\cal H}4)\)\({\cal F}\) belongs to the dual space \((W^{1,\vec{p}}_{0}(\Omega))^{*}\), where \(W^{1,\vec{p}}_{0}(\Omega)\) is the anisotropic Sobolev space defined in Section 2.
We observe that in our assumptions, the following definition of weak solution is well posed.
**Definition 1.1**: _For any \({\cal F}\in(W^{1,\vec{p}}_{0}(\Omega))^{*}\) we say that \(u\in W^{1,\vec{p}}_{0}(\Omega)\) is a weak solution to (1.1) provided_
\[\int_{\Omega}\left[\sum_{i=1}^{N}\left[{\cal A}_{i}(x,\nabla u)+{\cal B}_{i}( x,u)\right]\partial_{x_{i}}\varphi+{\cal G}(x,u)\varphi\right]\,dx=\langle{\cal F }\,,\varphi\rangle \tag{1.10}\]
\(\forall\varphi\in C^{\infty}_{0}(\Omega)\)_, where \(\langle\cdot,\cdot\rangle\) denotes the duality product of \((W^{1,\vec{p}}_{0}(\Omega))^{*}\) and \(W^{1,\vec{p}}_{0}(\Omega)\)._
In the anisotropic framework the natural space in which to look for weak solutions of the Dirichlet problem (1.1) is the anisotropic Sobolev space \(W^{1,\overrightarrow{p}}_{0}(\Omega)\) (see Section 2 for the definition). When the harmonic mean \(\bar{p}\), defined in (1.2), is less than the dimension \(N\), it is well-known (see Section 2) that \(W^{1,\overrightarrow{p}}_{0}(\Omega)\) is continuously embedded in the Lorentz space \(L^{\bar{p}^{*},\bar{p}}(\Omega)\). On the other hand, by the Poincare inequality (see (2.7) below) the space \(W^{1,\overrightarrow{p}}_{0}(\Omega)\) is embedded in the Lebesgue space \(L^{p_{\max}}(\Omega)\). This suggests linking, as in our assumption (1.8), the growth of the zero order term to \(p_{\infty}\), which keeps track of how the \(p_{i}\) are spread out.
The prototype equation of our class of problems (1.1) is
\[-\sum_{i=1}^{N}\partial_{x_{i}}\left[|\partial_{x_{i}}u|^{p_{i}-2}\partial_{x_{i}}u+\beta_{i}(x)|u|^{\frac{\bar{p}}{p_{i}^{\prime}}-1}u\right]+\tilde{\mu}|u|^{\gamma-1}u={\cal F}\qquad\mbox{ in }\Omega, \tag{1.11}\]
where \(p_{i}>1\), \(\bar{p}<N\), \(\tilde{\mu}\geq 0\), \(1\leq\gamma<p_{\infty}-1\), \(\beta_{i}\in L^{\frac{Np_{i}^{\prime}}{p},\infty}(\Omega)\) and \({\cal F}\) belongs to the dual space. We stress that when \(\bar{p}^{*}\geq p_{\max}\), using the Sobolev embedding in the Lorentz space \(L^{\bar{p}^{*},\bar{p}}(\Omega)\), the summability assumption on \(\beta_{i}\) is optimal to ensure \(|\beta_{i}|^{p_{i}^{\prime}}|u|^{\bar{p}}\in L^{1}(\Omega)\). Moreover we observe that when \(p_{i}=p\), \(\beta_{i}=0\) and \(\widetilde{\mu}=0\), the principal part in equation (1.11) becomes the so-called pseudo-Laplacian operator (see [27] pp. 106 and 155) or orthotropic \(p\)-Laplacian operator (see [11]), extensively studied in the literature.
Let us point out that the term anisotropy is used in various scientific disciplines and can have different meanings when related to equations as well. The interest in anisotropic problems has deeply increased in the last years owing to their many applications in the mathematical modelling of natural phenomena in biology and fluid mechanics. For example, they are related to the mathematical description of the dynamics of fluids in anisotropic media, when the conductivities of the media are different in different directions (see e.g. [3]), and they also appear in biology as a model for the propagation of epidemic diseases in heterogeneous domains (see e.g. [6]). On anisotropic problems many results in different directions have been obtained; here we quote a list of references that is obviously not exhaustive, and we refer the reader to the references therein to extend it: [1, 4, 5, 9, 11, 12, 13, 16, 19, 22, 17, 20, 23, 26, 31].
A goal of this paper is to analyze the existence of weak solutions for this class of problems (see [21] for the isotropic case). Since under our assumptions the coercivity of the operator involved in problem (1.1) is not guaranteed, we will proceed as usual by approximation. The existence of weak solutions can be expected when the datum and the coefficients are smooth enough. Thus we first consider the problem with \(b_{i}\in L^{\infty}(\Omega)\), and then we reduce to the general case \(b_{i}\in L^{\frac{Np_{i}^{\prime}}{p},\infty}(\Omega)\) assuming a control on a suitable distance of \(b_{i}\) from \(L^{\infty}(\Omega)\) (see assumption (3.2) in Theorem 5.1). This strategy allows us to overcome the facts that the norms in the Marcinkiewicz space \(L^{\frac{Np_{i}^{\prime}}{p},\infty}(\Omega)\) are not absolutely continuous and that \(L^{\infty}(\Omega)\) is not dense in Marcinkiewicz spaces. Finally, when we pass to the limit in the approximated problems we have to deal with an extra difficulty, due to the fact that under our assumptions the operator \(u\in W^{1,\overrightarrow{p}}_{0}(\Omega)\to b_{i}(x)|u|^{\frac{\bar{p}}{p_{i}^{\prime}}-1}u\in L^{p_{i}^{\prime}}(\Omega)\) is not compact in general. We emphasize that our assumption on the distance (3.2), first considered in [24] in the isotropic case (see also [25], [21]), is weaker than requiring smallness of the norms of \(b_{i}\), which is the standard approach to treating the presence of the first order term.
We also analyze the regularity of weak solutions of problem (1.1) and we obtain Stampacchia type regularity, extending [10, 31, 20], where the case \({\cal B}\equiv 0\) is studied. Regularity results for local solutions of problem (1.1) have been recently obtained in [18].
For what concerns the uniqueness (see Theorems 7.1, 7.3 and 7.5), we emphasize that our proofs strongly use the presence of the zero order term and its monotonicity assumption (7.1) below. In order to describe our result, let us reduce ourselves to the model case (1.11) for simplicity. The main difficulty is due to the presence of the lower order terms, which are only Holder continuous, but not Lipschitz continuous with respect to the solution when \(p_{i}<2\) for all \(i\). We recall [7] for equations in which the coefficients in the main part of the operator depend on the solution in a Holder continuous way.
If one wants to neglect the term \({\cal G}\), Corollary 6.2 gives a partial uniqueness result when the datum is zero (see [8] for the isotropic case). Otherwise, also in the isotropic case, the uniqueness is proved in [8] by requiring a control on the partial derivative of the first order term \(B(x,u)\) with respect to \(u\), which is not satisfied in the simple case \(B(x,u)={\bf b}(x)|u|^{p-2}u\) with \({\bf b}\in\left(L^{\frac{N}{p-1}}(\Omega)\right)^{N}\).
The paper is organized as follows. Section 2 contains some preliminaries, and in Section 3 we prove a useful a priori estimate. Since in our strategy we also need an \(L^{\infty}\) estimate of \(u\), Section 4 is devoted to the regularity of the solutions. The existence theorem is stated and proved in Section 5, Section 6 concerns the positivity of solutions, and the last section is devoted to the uniqueness results.
## 2 Preliminaries
In the present section we recall some known function spaces, useful in the sequel.
We start by recalling definitions of Lorentz spaces and their properties (see [29] for more details).
Here we assume that \(\Omega\subset\mathbb{R}^{N}\), \(N>2\) is an open set. Given \(1<p,q<+\infty\), the Lorentz space \(L^{p,q}(\Omega)\) consists of all measurable functions \(f\) defined on \(\Omega\) for which the quantity
\[\|f\|_{p,q}^{q}=p\int_{0}^{+\infty}|\Omega_{t}|^{\frac{q}{p}}t^{q-1}dt \tag{2.1}\]
is finite, where \(\Omega_{t}=\{x\in\Omega:|f(x)|>t\}\) and \(|\Omega_{t}|\) is the Lebesgue measure of \(\Omega_{t}\), that is, \(\mu_{f}(t)=|\Omega_{t}|\) is the distribution function of \(f\). Note that \(\|\cdot\|_{p,q}\) is equivalent to a norm and \(L^{p,q}(\Omega)\) becomes a Banach space when endowed with it. For \(p=q\), the Lorentz space \(L^{p,p}(\Omega)\) reduces to the Lebesgue space \(L^{p}(\Omega)\). For \(q=\infty\), the class \(L^{p,\infty}(\Omega)\) consists of all measurable functions \(f\) defined on \(\Omega\) such that
\[\|f\|_{p,\infty}^{p}=\sup_{t>0}t^{p}\mu_{f}(t)<+\infty \tag{2.2}\]
and it coincides with the Marcinkiewicz class or the so-called weak-\(L^{p}(\Omega)\).
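A standard example separating these classes (for illustration): on the unit ball \(B_{1}\subset\mathbb{R}^{N}\), the function \(f(x)=|x|^{-N/p}\) belongs to \(L^{p,\infty}(B_{1})\) but not to \(L^{p}(B_{1})\). Indeed, denoting by \(\omega_{N}\) the measure of the unit ball, for \(t\geq 1\) one has
\[\mu_{f}(t)=\left|\left\{x\in B_{1}:|x|^{-N/p}>t\right\}\right|=\omega_{N}\,t^{-p},\qquad\mbox{so that}\qquad\sup_{t>0}t^{p}\mu_{f}(t)=\omega_{N}<+\infty,\]
while \(\int_{B_{1}}|f|^{p}\,dx=\int_{B_{1}}|x|^{-N}\,dx=+\infty\).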
It is well-known that if \(\Omega\) is bounded, the following inclusions hold
\[L^{r}(\Omega)\subset L^{p,q}(\Omega)\subset L^{p,r}(\Omega)\subset L^{p, \infty}(\Omega)\subset L^{q}(\Omega), \tag{2.3}\]
whenever \(1\leqslant q<p<r\leqslant\infty.\) Moreover, for \(1<p<\infty\), \(1\leqslant q\leqslant\infty\) and \(\frac{1}{p}+\frac{1}{p^{\prime}}=1\), \(\frac{1}{q}+\frac{1}{q^{\prime}}=1\), if \(f\in L^{p,q}(\Omega)\), \(g\in L^{p^{\prime},q^{\prime}}(\Omega)\) we have the Holder-type inequality
\[\int_{\Omega}|f(x)g(x)|dx\leqslant\|f\|_{p,q}\|g\|_{p^{\prime},q^{\prime}}.\]
We remark that \(L^{\infty}(\Omega)\) is not dense in \(L^{p,\infty}(\Omega)\) for \(p\in\,]1,+\infty[\). Therefore it is possible to define the distance of a given function \(f\in L^{p,\infty}(\Omega)\) to \(L^{\infty}(\Omega)\) as
\[\mbox{dist}_{L^{p,\infty}(\Omega)}(f,L^{\infty}(\Omega))=\inf_{g\in L^{ \infty}(\Omega)}\|f-g\|_{L^{p,\infty}(\Omega)}. \tag{2.4}\]
Note that, since \(\|\cdot\|_{p,\infty}\) is not a norm, \(\mathrm{dist}_{L^{p,\infty}(\Omega)}\) is just equivalent to a metric. In [15] it is proved that
\[\mathrm{dist}_{L^{p,\infty}(\Omega)}(f,L^{\infty}(\Omega))=\lim_{M\to+\infty}\|f -T_{M}f\|_{L^{p,\infty}(\Omega)}, \tag{2.5}\]
where the truncation at level \(M>0\) is defined as
\[T_{M}\,y=\frac{y}{|y|}\min\{|y|,M\}. \tag{2.6}\]
Now, let \(\vec{p}=(p_{1},p_{2},...,p_{N})\) with \(p_{i}>1\) for \(i=1,...,N\), and let \(\Omega\) be a bounded open subset of \(\mathbb{R}^{N}\). As usual the anisotropic Sobolev space is the Banach space defined as
\[W^{1,\vec{p}}(\Omega)=\{u\in W^{1,1}(\Omega):\partial_{x_{i}}u\in L^{p_{i}}( \Omega),i=1,...,N\}\]
equipped with
\[\|u\|_{W^{1,\vec{p}}(\Omega)}=\|u\|_{L^{1}(\Omega)}+\sum_{i=1}^{N}\|\partial_ {x_{i}}u\|_{L^{p_{i}}(\Omega)}.\]
It is well-known that in the anisotropic setting a Poincare type inequality holds true (see [23]). Indeed for every \(u\in C_{0}^{\infty}(\Omega)\) with \(\Omega\) a bounded open set with Lipschitz boundary we have
\[\|u\|_{L^{p_{i}}(\Omega)}\leq C_{P}\|\partial_{x_{i}}u\|_{L^{p_{i}}(\Omega)}, i=1,...,N \tag{2.7}\]
for a constant \(C_{P}\) proportional to the width of \(\Omega\) in the direction of \(e_{i}\) and then to the diameter of \(\Omega\). Moreover, for \(u\in C_{0}^{\infty}(\mathbb{R}^{\mathbb{N}})\) the following anisotropic Sobolev inequality holds true (see [32])
\[\|u\|_{L^{p,q}(\mathbb{R}^{\mathbb{N}})}\leq S_{N}\prod_{i=1}^{N}\|\partial_{x _{i}}u\|_{L^{p_{i}}(\mathbb{R}^{\mathbb{N}})}^{\frac{1}{N}}, \tag{2.8}\]
where \(S_{N}\) is an universal constant and \(p=\bar{p}^{*}\) and \(q=\bar{p}\) whenever \(\bar{p}<N\), where \(\bar{p}\) is defined in (1.2) and
\[\bar{p}^{*}=\frac{N\bar{p}}{N-\bar{p}}. \tag{2.9}\]
Using the inequality between the geometric and arithmetic means we can replace the right-hand side of (2.8) with \(\sum_{i=1}^{N}\|\partial_{x_{i}}u\|_{L^{p_{i}}}\). In [18] a generalization of (2.8) to products of functions is proved. We recall a simpler form of it that we will need in the following
\[\|u\|_{L^{\bar{p}^{*},\bar{p}}(\Omega)}\leq\widetilde{S}_{N}\left\|\left(\prod_{i=1}^{N}|\partial_{x_{i}}u|\right)^{1/N}\right\|_{L^{\overline{p}}(\Omega)}, \tag{2.10}\]
for a suitable universal constant \(\widetilde{S}_{N}\).
When \(\overline{p}<N\) and \(\Omega\) is a bounded open set with Lipschitz boundary, the space \(W^{1,\overrightarrow{p}}_{0}(\Omega)=\overline{C_{0}^{\infty}(\Omega)}^{\sum_{i=1}^{N}\|\partial_{x_{i}}u\|_{L^{p_{i}}}}\) is continuously embedded into \(L^{q}(\Omega)\) for \(q\in[1,p_{\infty}]\), with \(p_{\infty}:=\max\{\overline{p}^{*},\max_{i}p_{i}\}\), as a consequence of (2.8) and (2.7).
We recall the following useful lemma.
**Lemma 2.1**: \((\)_see [28, page 43]\()\) Let \(X\) be a rearrangement invariant space and let \(0\leq\theta_{i}\leq 1\) for \(i=1,...,M,\) such that \(\sum_{i=1}^{M}\theta_{i}=1\), then_
\[\left\|\prod_{i=1}^{M}|f_{i}|^{\theta_{i}}\right\|_{X}\leq\prod_{i=1}^{M}\|f_{i }\|_{X}^{\theta_{i}}\quad\forall f_{i}\in X.\]
## 3 A useful a priori estimate
In this section we suppose that a weak solution \(u\) of the problem (1.1) exists and we prove the following a priori estimate under suitable assumptions on \(\mbox{\rm dist}_{L^{\frac{Np^{\prime}_{i}}{p},\infty}(\Omega)}(b_{i},L^{\infty }(\Omega))\), defined in (2.4), for \(i=1,\cdots,N\).
Since \({\cal F}\) belongs to \((W_{0}^{1,\vec{p}}(\Omega))^{*}\), in what follows we can write it as
\[{\cal F}=-\sum_{i=1}^{N}(f_{i})_{x_{i}},\mbox{ with }f_{i}\in L^{p^{\prime}_{i}}( \Omega)\ \forall i=1,...,N. \tag{3.1}\]
**Lemma 3.1**: _Let us assume that \(\Omega\) is a bounded Lipschitz domain, \(p_{i}>1\) for \(i=1,..,N\), \(\bar{p}<N\), \(({\cal H}1)-({\cal H}4)\) are in force and let \(u\in W_{0}^{1,\vec{p}}(\Omega)\) be a weak solution of (1.1). Then there exists a positive constant \(d=d(N,\alpha,\vec{p})\) such that whenever_
\[\max_{i}\left\{\mbox{\rm dist}_{L^{\frac{Np^{\prime}_{i}}{p},\infty}(\Omega)} (b_{i},L^{\infty}(\Omega))\right\}<d, \tag{3.2}\]
_the following uniform estimate holds_
\[\sum_{i=1}^{N}\int_{\Omega}|u_{x_{i}}|^{p_{i}}dx\leq C, \tag{3.3}\]
_where \(C=C(\alpha,N,\overrightarrow{p},d,\|{\cal F}\|_{(W_{0}^{1,\vec{p}}(\Omega))^{ *}})\)._
**Proof.** Using as test function \(T_{k}u\) in (1.10), by (1.4) and (1.6) we get
\[\alpha\sum_{i=1}^{N}\int_{\Omega_{k}}|u_{x_{i}}|^{p_{i}}dx+\int_{\Omega}{\cal G }(x,u)T_{k}u\,dx\leq\sum_{i=1}^{N}\int_{\Omega_{k}}|b_{i}(x)||u|^{\frac{\vec{p }}{\vec{r}_{i}}}|u_{x_{i}}|dx+\sum_{i=1}^{N}\int_{\Omega_{k}}|f_{i}||u_{x_{i}} |dx,\]
where \(\Omega_{k}=\{x\in\Omega:|u(x)|<k\}\). For all \(M>0\), using (1.9) and the Young and Holder inequalities, we have
\[\sum_{i=1}^{N}\int_{\Omega_{k}}|u_{x_{i}}|^{p_{i}}dx\leq C\left(\sum_{i=1}^{N}\int_{\Omega_{k}}|u|^{\bar{p}}dx+\sum_{i=1}^{N}\|b_{i}(x)-T_{M}b_{i}\|_{L^{\frac{Np^{\prime}_{i}}{p},\infty}(\Omega)}^{p^{\prime}_{i}}\|T_{k}u\|_{L^{\bar{p}^{*},\bar{p}}(\Omega)}^{\bar{p}}+\sum_{i=1}^{N}\int_{\Omega_{k}}|f_{i}|^{p^{\prime}_{i}}dx\right),\]
where \(C\) is a suitable positive constant which can vary from line to line.
Denoting \(B=\prod_{i=1}^{N}\|(T_{k}u)_{x_{i}}\|_{L^{p_{i}}(\Omega)}\), by the Sobolev inequality (2.8) it follows that
\[\begin{split}\sum_{i=1}^{N}\int_{\Omega_{k}}|u_{x_{i}}|^{p_{i}}dx\leq&\ C\left(\int_{\Omega_{k}}|u|^{\bar{p}}dx+\sum_{i=1}^{N}\|b_{i}(x)-T_{M}b_{i}\|_{L^{\frac{Np_{i}^{\prime}}{p},\infty}(\Omega)}^{p^{\prime}_{i}}B^{\frac{\bar{p}}{N}}\right.\\ &+\left.\sum_{i=1}^{N}\int_{\Omega_{k}}|f_{i}|^{p^{\prime}_{i}}dx\right).\end{split} \tag{3.4}\]
The previous inequality also provides an estimate of the \(j\)-th summand of the sum on the left-hand side of (3.4). Then, raising to the power \(\frac{1}{Np_{j}}\) and taking the product over \(j\) of the left and right sides of (3.4), we get
\[B^{\frac{1}{N}}\leq C\left[\left(\int_{\Omega_{k}}|u|^{\bar{p}}dx\right)^{\frac{1}{\bar{p}}}+\sum_{i=1}^{N}\|b_{i}(x)-T_{M}b_{i}\|_{L^{\frac{Np_{i}^{\prime}}{p},\infty}(\Omega)}^{\frac{p^{\prime}_{i}}{\bar{p}}}B^{\frac{1}{N}}+\left(\sum_{i=1}^{N}\int_{\Omega_{k}}|f_{i}|^{p^{\prime}_{i}}dx\right)^{\frac{1}{\bar{p}}}\right].\]
At this point, assuming \(d<\left(\frac{1}{CN}\right)^{\frac{\bar{p}}{p^{\prime}_{i}}}\) in (3.2) and using (2.5), we can now fix \(M\) large enough in order to have
\[\begin{split} B^{\frac{1}{N}}\leq& C\left[\left(\int_{ \Omega_{k}}|u|^{\bar{p}}dx\right)^{\frac{1}{\bar{p}}}+\left(\sum_{i=1}^{N}\int _{\Omega_{k}}|f_{i}|^{p^{\prime}_{i}}dx\right)^{\frac{1}{\bar{p}}}\right]. \end{split} \tag{3.5}\]
Combining (3.4) and (3.5) we obtain
\[\sum_{i=1}^{N}\int_{\Omega_{k}}|u_{x_{i}}|^{p_{i}}dx\leq C\left(\int_{\Omega_ {k}}|u|^{\bar{p}}dx+\sum_{i=1}^{N}\int_{\Omega_{k}}|f_{i}|^{p^{\prime}_{i}}dx \right), \tag{3.6}\]
for every \(k>0\) and with \(C\) a positive constant independent of \(u\) and \(k\). We now prove that the previous estimate gives (3.3). To this aim we follow the idea of [21, Lemma 2]. We argue by contradiction and assume that there exists a sequence of functions \(\{u_{n}\}_{n}\subseteq W_{0}^{1,\overrightarrow{p}}(\Omega)\) satisfying (3.6) and such that
\[\|u_{n}\|:=\|u_{n}\|_{W_{0}^{1,\overline{p}}(\Omega)}\to\infty\]
as \(n\to\infty\). For every \(n\in\mathbb{N}\) and \(\varepsilon>0\), we set \(k_{n}=\varepsilon\|u_{n}\|\) so that by (3.6), for \(i=1,...,N\),
\[\left(\int_{\Omega}|\partial_{x_{i}}(T_{k_{n}}u_{n})|^{p_{i}}\,\mathrm{d}x \right)^{\frac{1}{\bar{p}_{i}}}\leq C^{\frac{1}{\bar{p}_{i}}}\left(1+\int_{ \Omega}|u_{n}|^{\bar{p}}\chi_{\{|u_{n}|<k_{n}\}}\,\mathrm{d}x\right)^{\frac{1}{ \bar{p}_{i}}},\]
where \(T_{k_{n}}\) is defined in (2.6). Since \(\bar{p}<\bar{p}^{*}\), using Lemma 2.1 with \(X=L^{\bar{p}}(\Omega)\) and \(\theta_{i}=\frac{\bar{p}}{p_{i}N}\), we get from the previous inequality
\[\begin{split}\left(\int_{\Omega}\prod_{i=1}^{N}|\partial_{x_{i}}( T_{k_{n}}u_{n})|^{\frac{\bar{p}}{N}}\,\mathrm{d}x\right)^{\frac{1}{\bar{p}}}& \leq\prod_{i=1}^{N}\left(\int_{\Omega}|\partial_{x_{i}}(T_{k_{n}}u_{n})|^{p_{ i}}\,\mathrm{d}x\right)^{\frac{1}{Np_{i}}}\\ &\leq C^{\frac{1}{\bar{p}}}\left(1+\int_{\Omega}|u_{n}|^{\bar{p}} \chi_{\{|u_{n}|<k_{n}\}}\,\mathrm{d}x\right)^{\frac{1}{\bar{p}}}.\end{split} \tag{3.7}\]
We set
\[w_{n}=\frac{u_{n}}{\|u_{n}\|}\,.\]
so that, up to a subsequence not relabeled, there exists \(\bar{w}\in W^{1,\overrightarrow{p}}_{0}\,(\Omega)\) such that \(w_{n}\rightharpoonup\bar{w}\) weakly in \(W^{1,\overrightarrow{p}}_{0}\,(\Omega)\), \(w_{n}\to\bar{w}\) strongly in \(L^{\bar{p}}(\Omega)\) and \(w_{n}\to\bar{w}\) a.e. in \(\Omega\). Dividing both sides of (3.7) by \(\|u_{n}\|^{\bar{p}}\) we have
\[\int_{\Omega}\prod_{i=1}^{N}|\partial_{x_{i}}(T_{\varepsilon}w_{n})|^{\frac{ \bar{p}}{N}}\,\mathrm{d}x=\int_{\Omega}\frac{\prod_{i=1}^{N}|\partial_{x_{i}}( T_{k_{n}}u_{n})|^{\frac{\bar{p}}{N}}}{\|u_{n}\|^{\bar{p}}}\,\mathrm{d}x\leqslant C \left(\frac{1}{\|u_{n}\|^{\bar{p}}}+\int_{\Omega}|w_{n}|^{\bar{p}}\chi_{\{|w_{n }|<\varepsilon\}}\,\mathrm{d}x\right). \tag{3.8}\]
Assume now that
\[|\{x\in\Omega:|\bar{w}(x)|=\varepsilon\}|=0\,. \tag{3.9}\]
In this case we have \(\chi_{\{|w_{n}|<\varepsilon\}}\to\chi_{\{|\bar{w}|<\varepsilon\}}\) a.e. in \(\Omega\) and hence \(w_{n}\,\chi_{\{|w_{n}|<\varepsilon\}}\to\bar{w}\,\chi_{\{|\bar{w}|<\varepsilon\}}\) strongly in \(L^{\bar{p}}(\Omega)\). So, since \(T_{\varepsilon}w_{n}\rightharpoonup T_{\varepsilon}\bar{w}\) weakly in \(W^{1,\overrightarrow{p}}_{0}\,(\Omega)\) and \(T_{\varepsilon}w_{n}\to T_{\varepsilon}\bar{w}\) strongly in \(L^{\bar{p}}(\Omega)\), letting \(n\to+\infty\) in (3.8) and using the semicontinuity of the norm with respect to weak convergence in \(W^{1,\overrightarrow{p}}_{0}\,(\Omega)\), we arrive at the following estimate
\[\int_{\Omega}\prod_{i=1}^{N}|\partial_{x_{i}}(T_{\varepsilon}\bar{w})|^{\frac {\bar{p}}{N}}\,\mathrm{d}x\leqslant C\int_{\Omega}|\bar{w}|^{\bar{p}}\chi_{\{| \bar{w}|<\varepsilon\}}\,\mathrm{d}x. \tag{3.10}\]
Using Sobolev inequality (2.10) and Holder inequality by (3.10) we get
\[\varepsilon^{\bar{p}}\,|\{x\in\Omega:|\bar{w}|\geqslant\varepsilon\}|^{\frac{ \bar{p}}{\bar{p}^{*}}}\leqslant C\,\varepsilon^{\bar{p}}\,|\{x\in\Omega:0<| \bar{w}|<\varepsilon\}|\,.\]
Passing to the limit as \(\varepsilon\downarrow 0\), we deduce
\[|\{x\in\Omega:|\bar{w}|>0\}|=0\,,\]
that is, \(\bar{w}(x)=0\) a.e. Note that the previous equality has been obtained assuming (3.9). Nevertheless, the set of values \(\varepsilon>0\) for which (3.9) fails is at most countable. This means \(w_{n}\rightharpoonup 0\) weakly in \(W^{1,\overrightarrow{p}}_{0}\,(\Omega)\). On the other hand, at this point we can again use the same argument as above to obtain that \(w_{n}\to 0\) strongly in \(W^{1,\overrightarrow{p}}_{0}\,(\Omega)\), and this gives the contradiction, since by definition \(\|w_{n}\|=1\) for every \(n\in\mathbb{N}\).
## 4 Regularity results
In this section we study the regularity of weak solutions of problem (1.1). Here and in the following we shall assume that notation (3.1) is in force. When the datum satisfies \(f_{i}\in L^{p_{i}^{\prime}}(\Omega)\) for \(i=1,..,N\) with \(\bar{p}<N\), it follows from the anisotropic Sobolev embedding that a solution \(u\) belongs to \(L^{\bar{p}^{*}}(\Omega)\), where \(\bar{p}^{*}\) is defined in (2.9). Otherwise, if \(f_{i}\in L^{s_{i}}(\Omega)\) with \(s_{i}>p_{i}^{\prime}\) for \(i=1,..,N\), the summability of \(u\) improves. In order to study the higher summability of \(u\), the following minimum
\[\mu=\min_{i}\left\{\frac{s_{i}}{p_{i}^{\prime}}\right\} \tag{4.1}\]
first introduced in [10], plays a crucial role. In [18, Theorem 2.1 and Remark 5.3] the following Stampacchia type regularity result is proved (see [30] as classical reference).
**Theorem 4.1**: _Let \(\Omega\) be a bounded Lipschitz domain, \(p_{i}>1\) for \(i=1,..,N\), \(\bar{p}<N\) and let \(s_{1},\cdots,s_{N}\) be such that_
\[1<\mu<\frac{N}{\bar{p}},\]
_where \(\mu\) is defined in (4.1). Assume that \((\mathcal{H}1)-(\mathcal{H}3)\) are fulfilled and \(f_{i}\in L^{s_{i}}(\Omega)\) for \(i=1,..,N\). There exists a positive constant \(d=d(\vec{s},N,\alpha,\vec{p})\) such that if_
\[\max_{i}\left\{\mathrm{dist}_{L^{\frac{Np_{i}^{\prime}}{p},\infty}(\Omega)}( b_{i},L^{\infty}(\Omega))\right\}<d\]
_and \(u\in W^{1,\vec{p}}_{0}(\Omega)\) is a weak solution to (1.1), then_
\[u\in L^{s}(\Omega)\quad\text{with }s=\max\{(\mu\bar{p})^{*},\mu p_{\max}\}.\]
_where_
\[(\mu\bar{p})^{*}=\frac{N\mu\bar{p}}{N-\mu\bar{p}}.\]
In the absence of lower order terms, the authors of [10] proved that the boundedness of a weak solution of Dirichlet problems is guaranteed under the assumption \(\mu>\frac{N}{\bar{p}}\), where \(\mu\) is defined in (4.1). However, if \(\mathcal{B}_{i}\not\equiv 0\) for \(i=1,\cdots,N\), the boundedness is not assured by assuming that (3.2) is in force, as shown in Example 4.8 of [24] (when \(p_{i}=2\) for \(i=1,\cdots,N\)). Neither is the smallness of \(\|b_{i}\|_{L^{\frac{Np_{i}^{\prime}}{p},\infty}}\) for \(i=1,\cdots,N\) sufficient to get boundedness, as shown in Example 2.3 of [18].
In order to get the boundedness of solutions, we need to improve the summability of data and of the coefficients \(b_{i}\).
**Lemma 4.2**: _Let \(\Omega\) be a bounded Lipschitz domain, \(p_{i}>1\) for \(i=1,..,N\), \(\bar{p}<N\), and let \(s_{1},\cdots,s_{N}\) be such that_
\[\mu>\frac{N}{\bar{p}} \tag{4.2}\]
_where \(\mu\) is defined in (4.1). Assume that \((\mathcal{H}1)-(\mathcal{H}3)\) are fulfilled and \(b_{i},f_{i}\in L^{s_{i}}(\Omega)\) for \(i=1,..,N\). Then every weak solution \(u\) of problem (1.1) is bounded and there exists a positive constant \(C=C(\alpha,\Omega,N,\overrightarrow{p},\|b_{i}\|_{L^{s_{i}}(\Omega)},\|f_{i}\| _{L^{s_{i}}(\Omega)})\) such that_
\[\|u\|_{\infty}\leq C.\]
**Proof.** The proof is quite standard; for the convenience of the reader we give some details. Let \(u\) be a weak solution of problem (1.1). For \(k>0\), we use \(G_{k}u:=u-T_{k}u\) as a test function in (1.10) and by (1.4), (1.6) we obtain
\[\alpha\sum_{i=1}^{N}\int_{\Omega}|(G_{k}u)_{x_{i}}|^{p_{i}}dx+\int_{\Omega} \mathcal{G}(x,u)G_{k}udx\leq\sum_{i=1}^{N}\int_{\Omega}|b_{i}(x)||u|^{\frac{ \bar{p}}{p_{i}}}|(G_{k}u)_{x_{i}}|dx+\sum_{i=1}^{N}\int_{\Omega}|f_{i}||(G_{k }u)_{x_{i}}|dx.\]
Denoting \(A_{k}=\{x\in\Omega:|u(x)|>k\}\), we note that by Lemma 3.1 for every \(k>0\) we have:
\[|A_{k}|\leq\frac{\|u\|_{L^{1}(\Omega)}}{k}\leq\frac{C_{0}}{k}, \tag{4.3}\]
where \(C_{0}\) is a positive constant independent of \(u\). We stress that (3.2) is obviously satisfied when \(b_{i}\in L^{s_{i}}(\Omega)\).
Using (1.9) and Young inequality we have
\[\alpha\sum_{i=1}^{N}\int_{\Omega}|(G_{k}u)_{x_{i}}|^{p_{i}}dx\leq C \sum_{i=1}^{N}\int_{A_{k}}|b_{i}(x)|^{p^{\prime}_{i}}|G_{k}u|^{\bar{p}}dx+C\sum_ {i=1}^{N}\int_{A_{k}}|b_{i}(x)|^{p^{\prime}_{i}}k^{\bar{p}}dx\] \[+\varepsilon\sum_{i=1}^{N}\int_{\Omega}|(G_{k}u)_{x_{i}}|^{p_{i}} dx+C\sum_{i=1}^{N}\int_{A_{k}}|f_{i}|^{p^{\prime}_{i}}dx+\varepsilon\sum_{i=1}^{N} \int_{A_{k}}|(G_{k}u)_{x_{i}}|^{p_{i}}dx,\]
for every \(\varepsilon>0\) and a suitable \(C>0\). Choosing \(\varepsilon=\varepsilon(\alpha,\vec{p})\) small enough we have
\[\sum_{i=1}^{N}\int_{\Omega}|(G_{k}u)_{x_{i}}|^{p_{i}}dx \leq C\left(\sum_{i=1}^{N}\int_{A_{k}}|b_{i}|^{p^{\prime}_{i}}|G_ {k}u|^{\bar{p}}dx+\sum_{i=1}^{N}\int_{A_{k}}|b_{i}(x)|^{p^{\prime}_{i}}k^{\bar {p}}dx+\sum_{i=1}^{N}\int_{A_{k}}|f_{i}|^{p^{\prime}_{i}}dx\right)\] \[\leq C\left[\sum_{i=1}^{N}\|b_{i}\|^{p^{\prime}_{i}}_{L^{\frac{Np^{ \prime}_{i}}{p}}(A_{k})}\|G_{k}u\|^{\bar{p}}_{L^{\frac{p^{\prime}}{p}}(\Omega)} +\sum_{i=1}^{N}\int_{A_{k}}\left(|b_{i}(x)|^{p^{\prime}_{i}}k^{\bar{p}}+|f_{i} |^{p^{\prime}_{i}}\right)dx\right],\]
where, here and below, \(C\) is a suitable positive constant which can vary from line to line.
As before, denoting \(B=\prod_{i=1}^{N}\|(G_{k}u)_{x_{i}}\|_{L^{p_{i}}(A_{k})}\), by Sobolev inequality (2.8) it follows
\[\sum_{i=1}^{N}\int_{\Omega}|(G_{k}u)_{x_{i}}|^{p_{i}}dx\leq C\left[\sum_{i=1}^{N}\|b_{i}\|^{p^{\prime}_{i}}_{L^{\frac{Np^{ \prime}_{i}}{p}}(A_{k})}B^{\frac{\bar{p}}{N}}+\sum_{i=1}^{N}\int_{A_{k}}\left( |b_{i}(x)|^{p^{\prime}_{i}}k^{\bar{p}}+|f_{i}|^{p^{\prime}_{i}}\right)dx\right]. \tag{4.4}\]
The previous inequality also provides an estimate of the \(j\)-th summand of the sum on the left-hand side of (4.4). Then, raising to the power \(\frac{1}{Np_{j}}\) and taking the product over \(j\) of the left and right sides of (4.4), we get
\[B^{\frac{1}{N}}\leq C\left[\sum_{i=1}^{N}\|b_{i}(x)\|^{\frac{p^{\prime}_{i}}{\bar{p} }}_{L^{\frac{Np^{\prime}_{i}}{p}}(A_{k})}B^{\frac{1}{N}}+\left(\sum_{i=1}^{N} \int_{A_{k}}(|b_{i}(x)|^{p^{\prime}_{i}}k^{\bar{p}}+|f_{i}|^{p^{\prime}_{i}}) dx\right)^{\frac{1}{\bar{p}}}\right].\]
At this point, we can choose \(k=k(N,\vec{p},b_{i},\alpha,C)>1\) large enough in order to have \(|A_{k}|\) small, and so by the absolute continuity of the Lebesgue norm we have
\[B^{\frac{1}{N}}\leq C_{1}\left[\left(\sum_{i=1}^{N}\int_{A_{k}}(|b_{i}(x)|^{p^{ \prime}_{i}}k^{\bar{p}}+|f_{i}|^{p^{\prime}_{i}})dx\right)^{\frac{1}{\bar{p}}}\right] \tag{4.5}\]
for suitable \(C_{1}>0\). Combining (4.4) and (4.5) we obtain
\[\sum_{i=1}^{N}\int_{\Omega}|(G_{k}u)_{x_{i}}|^{p_{i}}dx\leq C_{2}\left(\sum_{i=1}^ {N}\int_{A_{k}}(|b_{i}(x)|^{p^{\prime}_{i}}k^{\bar{p}}+|f_{i}|^{p^{\prime}_{i}} )dx\right). \tag{4.6}\]
As before, we note that the previous inequality also provides an estimate of the \(j\)-th summand of the sum on the left-hand side of (4.6). Then, raising to the power \(\frac{1}{Np_{j}}\), taking the product over \(j\) of the left and right sides of (4.6) and using the Sobolev inequality, we get
\[\begin{split}\|G_{k}u\|_{L^{\bar{p}^{*}}(\Omega)}&\leq C_{3}\prod_{j=1}^{N}\left(\sum_{i=1}^{N}\int_{A_{k}}(|b_{i}(x)|^{p^{\prime}_{i}}k^{\bar{p}}+|f_{i}|^{p^{\prime}_{i}})dx\right)^{\frac{1}{Np_{j}}}\\ &\leq C_{3}\left(\sum_{i=1}^{N}\int_{A_{k}}(|b_{i}(x)|^{p^{\prime}_{i}}k^{\bar{p}}+|f_{i}|^{p^{\prime}_{i}})dx\right)^{\frac{1}{\bar{p}}}\\ &\leq C_{3}k\left(\sum_{i=1}^{N}(\|b_{i}\|_{s_{i}}^{p^{\prime}_{i}}+\|f_{i}\|_{s_{i}}^{p^{\prime}_{i}})\,|A_{k}|^{1-\frac{p^{\prime}_{i}}{s_{i}}}\right)^{\frac{1}{\bar{p}}}.\end{split}\]
We now fix \(k_{0}>1\) such that \(|A_{k_{0}}|<1\). Note that in view of (4.3) we can fix such a \(k_{0}\) independent of \(u\).
Applying the Holder inequality to the left-hand side of the previous inequality, for every \(k\geq k_{0}\) we get
\[\int_{\Omega}|G_{k}u|\,dx\leq C\,k|A_{k}|^{\frac{\eta}{\bar{p}}+1-\frac{1}{\bar{p}^{*}}}\qquad\mbox{ where }\quad\eta=\min_{i}\left\{1-\frac{p^{\prime}_{i}}{s_{i}}\right\}. \tag{4.7}\]
Observe now that the function
\[g(k):=\int_{\Omega}|G_{k}u|\,dx\]
is a non-negative and decreasing function such that \(g^{\prime}(k)=-|A_{k}|\). Hence we can rewrite (4.7) as
\[\left(\frac{g(k)}{k}\right)^{a}\leq-Cg^{\prime}(k)\qquad\mbox{ where }\quad a:=\frac{\bar{p}\bar{p}^{*}}{\eta\bar{p}^{*}+\bar{p}\bar{p}^{*}-\bar{p}}\]
and by (4.2), it holds \((1-a)>0\). Let us consider now every value of \(k>k_{0}\) such that \(g(k)\neq 0.\) By previous inequality we get
\[k^{-a}\leq-Cg^{\prime}(k)g(k)^{-a}=-C(g(k)^{(1-a)})^{\prime}.\]
Integrating previous inequality with respect to \(k\) from \(k_{0}\) to \(k\) we get
\[k^{1-a}-{k_{0}}^{1-a}\leq-C(g(k)^{1-a}-g(k_{0})^{(1-a)}).\]
This implies
\[g(k)^{1-a}\leq Cg(k_{0})^{(1-a)}-k^{1-a}+{k_{0}}^{1-a}.\]
It is obvious at this point that, since \(g(k_{0})\leq\|u\|_{L^{1}(\Omega)}\), there exists a value \(\bar{k}\) (independent of \(u\)) such that \(g(\bar{k})=0\), and the thesis follows.
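For the reader's convenience we record the elementary algebra behind the choice of the exponent \(a\) in the previous proof: one checks directly that
\[\frac{1}{a}=\frac{\eta\bar{p}^{*}+\bar{p}\bar{p}^{*}-\bar{p}}{\bar{p}\,\bar{p}^{*}}=\frac{\eta}{\bar{p}}+1-\frac{1}{\bar{p}^{*}},\qquad\mbox{ and }\qquad 1-a>0\iff\eta>\frac{\bar{p}}{\bar{p}^{*}}=1-\frac{\bar{p}}{N},\]
so that \(1/a\) is exactly the exponent of \(|A_{k}|\) in (4.7), while the last inequality holds because (4.2) forces \(\frac{p^{\prime}_{i}}{s_{i}}<\frac{\bar{p}}{N}\) for every \(i\), hence \(\eta>1-\frac{\bar{p}}{N}\).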
## 5 Existence result
In this section we analyze the existence of weak solutions of problem (1.1). It is well known that, if the operator in problem (1.1) is coercive, then a solution exists in \(W^{1,\vec{p}}_{0}(\Omega)\). Unfortunately, under our assumptions the coercivity of the involved operator is not guaranteed. Another difficulty is due to the singularity of the coefficients \(b_{i}\) in the lower order term. Indeed, in the Marcinkiewicz space \(L^{\frac{Np^{\prime}_{i}}{p},\infty}(\Omega)\), which is slightly larger than the Lebesgue space \(L^{\frac{Np^{\prime}_{i}}{p}}(\Omega)\), the bounded functions are not dense and the norm is not absolutely continuous (i.e. a function can have large norm even when restricted to a set of small measure).
**Theorem 5.1**: _Let us assume that \(\Omega\) is a bounded Lipschitz domain, \(p_{i}>1\) for \(i=1,..,N\), \(\bar{p}<N\) and (\(\mathcal{H}1\))-(\(\mathcal{H}4\)) are in force. There exists a positive constant \(d=d(N,\alpha,\vec{p})\) such that if (3.2) holds, then there exists at least a weak solution to problem (1.1)._
For example, in our prototype (1.11) we can consider as coefficient of the lower order term the function
\[\beta_{i}(x)=\frac{\kappa_{i}}{|x|^{\gamma_{i}}}+\beta_{i}^{0}(x)\]
with \(\kappa_{i}\) a suitably small constant, \(\beta_{i}^{0}\in L^{\infty}(\Omega)\) and \(\gamma_{i}=\frac{\bar{p}}{p^{\prime}_{i}}\) for \(i=1,\cdots,N\). Indeed, in this case, an easy computation shows that \(\mathrm{dist}_{L^{\frac{Np^{\prime}_{i}}{p},\infty}(\Omega)}(\beta_{i},L^{\infty}(\Omega))=|\kappa_{i}|\,\omega_{N}^{\frac{\gamma_{i}}{N}}\).
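To verify this computation (a sketch; the bounded perturbation \(\beta_{i}^{0}\) does not affect the limit in (2.5)): set \(q_{i}=\frac{Np^{\prime}_{i}}{\bar{p}}\), so that \(\gamma_{i}q_{i}=N\). Then \(|\beta_{i}-T_{M}\beta_{i}|=(|\kappa_{i}||x|^{-\gamma_{i}}-M)^{+}\) up to a bounded term, and
\[\mu_{\beta_{i}-T_{M}\beta_{i}}(t)=\mu_{|\kappa_{i}||x|^{-\gamma_{i}}}(t+M)=\omega_{N}\left(\frac{|\kappa_{i}|}{t+M}\right)^{q_{i}}\quad\mbox{ for }t+M\mbox{ large},\]
so that by (2.2)
\[\|\beta_{i}-T_{M}\beta_{i}\|_{L^{q_{i},\infty}(\Omega)}^{q_{i}}=\sup_{t>0}t^{q_{i}}\,\mu_{\beta_{i}-T_{M}\beta_{i}}(t)=\omega_{N}|\kappa_{i}|^{q_{i}},\]
independently of \(M\); formula (2.5) then gives \(\mathrm{dist}(\beta_{i},L^{\infty}(\Omega))=|\kappa_{i}|\,\omega_{N}^{1/q_{i}}=|\kappa_{i}|\,\omega_{N}^{\frac{\gamma_{i}}{N}}\), where \(\omega_{N}\) denotes the measure of the unit ball.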
Our proof is detailed in the next subsection.
### Proof of Theorem 5.1
We split the proof in three steps.
_Step 1: Existence when \(b_{i}\in L^{\infty}(\Omega)\) and \(f_{i}\in C^{\infty}_{c}(\Omega)\) for \(i=1,\cdots,N\)._
We stress that also in this case the operator \(\sum_{i=1}^{N}\left[-\partial_{x_{i}}(\mathcal{A}_{i}(x,\nabla u)+\mathcal{B}_{i}(x,u))\right]+\mathcal{G}(x,u)\) may fail to be coercive. To overcome this difficulty we use the boundedness result proved in Lemma 4.2, supposing first that a solution \(u\) exists. By Lemma 4.2, it follows that
\[\|u\|_{\infty}\leq M, \tag{5.1}\]
where \(M\) is a constant depending only on the data of the equation.
Now we define
\[\widetilde{\mathcal{B}}(x,s)=\left\{\begin{array}{ll}\mathcal{B}(x,s)&\text{ if }|s|\leq M,\\ \mathcal{B}(x,M)&\text{ if }|s|>M.\end{array}\right.\]
Problem (1.1) with \(\mathcal{B}(x,s)\) replaced by \(\widetilde{\mathcal{B}}(x,s)\), \(b_{i}\in L^{\infty}(\Omega)\) and smooth data has a solution \(u\), since the operator is coercive. Moreover this solution \(u\) verifies estimate (5.1), because \(\widetilde{\mathcal{B}}(x,s)\) verifies (1.6). It follows that \(\widetilde{\mathcal{B}}(x,u)=\mathcal{B}(x,u)\) and we have proved the existence of a solution to problem (1.1) when \(b_{i}\in L^{\infty}(\Omega)\) and the data are smooth.
_Step 2: Existence when \(b_{i}\in L^{\infty}(\Omega)\) and \(\mathcal{F}\in(W^{1,\overrightarrow{p}}_{0}(\Omega))^{*}\)._
We consider the following sequence of approximate problems
\[\left\{\begin{array}{ll}-\sum_{i=1}^{N}\partial_{x_{i}}\left[{\cal A}_{i}(x, \nabla u_{n})+{\cal B}_{i}(x,u_{n})\right]+{\cal G}(x,u_{n})=-\sum_{i=1}^{N}(f_{ i}^{n})_{x_{i}}&\mbox{ in }\Omega,\\ u_{n}=0&\mbox{ on }\partial\Omega,\end{array}\right. \tag{5.2}\]
where \(f_{i}^{n}\in C_{c}^{\infty}(\Omega)\) such that
\[f_{i}^{n}\to f_{i}\quad\mbox{ in }L^{p^{\prime}_{i}}(\Omega),\quad\|f_{i}^{n} \|_{L^{p^{\prime}_{i}}(\Omega)}\leq\|f_{i}\|_{L^{p^{\prime}_{i}}(\Omega)}\mbox { for }i=1,\cdots,N. \tag{5.3}\]
Step 1 assures the existence of a solution \(u_{n}\) of problem (5.2). We stress that estimate (3.3) holds for problem (5.2) when \(b_{i}\in L^{\infty}(\Omega)\) for \(i=1,\cdots,N\).
We get that the sequence \(\{u_{n}\}_{n}\) is bounded in \(W_{0}^{1,\overrightarrow{p}}(\Omega)\) by a constant independent of \(n\). Then there exists a function \(u\in W_{0}^{1,\overrightarrow{p}}(\Omega)\) such that (up to a subsequence denoted again by \(u_{n}\))
\[u_{n}\rightharpoonup u\mbox{ weakly in }W_{0}^{1,\overrightarrow{p}}(\Omega), \tag{5.4}\]
\[u_{n}\to u\mbox{ in }L^{q}(\Omega)\mbox{ for }q<p_{\infty} \tag{5.5}\]
and
\[u_{n}\to u\mbox{ a.e. }\Omega. \tag{5.6}\]
To take into account the terms \({\cal A}_{i}(x,\nabla u_{n})\) we need the following lemma, which is the anisotropic version of Lemma 2.2 of [27]. Note that if \({\cal A}_{i}\) is strongly monotone, the claim is obvious.
**Lemma 5.2**: _Assume \(({\cal H}1)\) be in force. Let us suppose \(u_{n},u\in W_{0}^{1,\overrightarrow{p}}(\Omega)\), \(u_{n}\rightharpoonup u\) weakly in \(W_{0}^{1,\overrightarrow{p}}(\Omega)\) and_
\[\int_{\Omega}\left({\cal A}_{i}(x,\nabla u_{n})-{\cal A}_{i}(x,\nabla u) \right)\left[\partial_{x_{i}}u_{n}-\partial_{x_{i}}u\right]\,dx\to 0 \tag{5.7}\]
_for \(i=1,\cdots,N\). Then for \(i=1,\cdots,N\)_
\[\partial_{x_{i}}u_{n}\to\partial_{x_{i}}u\quad\mbox{ a.e. in }\Omega. \tag{5.8}\]
**Proof.** The proof is standard and follows as in Lemma 2.2 of [27]. For the convenience of the reader we give some details. Let us denote for \(i=1,\cdots,N\)
\[Q_{n}^{i}(x):=\left({\cal A}_{i}(x,\nabla u_{n}(x))-{\cal A}_{i}(x,\nabla u(x) )\right)\left[\partial_{x_{i}}u_{n}(x)-\partial_{x_{i}}u(x)\right].\]
Since (1.5) and (5.7) we get (up to a subsequence) that
\[Q_{n}^{i}(x)\to 0\mbox{ for every }x\in\Omega\setminus\Omega_{0}\]
with \(|\Omega_{0}|=0\). Moreover, by our assumptions we get (up to a subsequence) \(u_{n}(x)\to u(x)\) for \(x\in\Omega\setminus\Omega_{0}\) (using that \(W_{0}^{1,\overrightarrow{p}}(\Omega)\) is compactly embedded in \(L^{q}(\Omega)\) for \(q<p_{\infty}\)). Let us fix \(x\in\Omega\setminus\Omega_{0}\), denote \(\partial_{x_{i}}u_{n}(x)=\xi_{n}^{i}\) and \(\partial_{x_{i}}u(x)=\xi^{i}\), and let \(\bar{\xi}^{i}\) be one of the limit points of \(\xi_{n}^{i}\) for \(i=1,\cdots,N\). By (1.4) and (1.3) we have
\[Q_{n}^{i}(x)\geq\alpha|\xi_{n}^{i}|^{p_{i}}-\beta_{i}|\xi_{n}^{i}|^{p_{i}-1}| \xi^{i}|-\beta_{i}|\xi_{n}^{i}||\xi^{i}|^{p_{i}-1}. \tag{5.9}\]
If we assume \(|\bar{\xi}^{i}|=+\infty\) then by (5.9) we obtain \(Q_{n}^{i}(x)\to+\infty\), which is a contradiction (see (5.7)). Then \(|\bar{\xi}^{i}|<+\infty\quad\forall i=1,\cdots,N\). Using the continuity of \({\cal A}_{i}\) with respect to \(\xi\) we conclude that
\[\left({\cal A}_{i}(x,\bar{\xi})-{\cal A}_{i}(x,\xi)\right)\left[\bar{\xi}^{i}- \xi^{i}\right]=0\quad\forall i=1,\cdots,N.\]
So assumption (1.5) then yields \(\bar{\xi}=\xi\), i.e. \(\partial_{x_{i}}u_{n}(x)\to\partial_{x_{i}}u(x)\) for all \(i=1,\cdots,N\) and for every \(x\in\Omega\setminus\Omega_{0}\).
Now we prove that (5.7) is in force. We use \(\varphi=u_{n}-u\) as test function in the variational formulation of the approximating problems (5.2):
\[\sum_{i=1}^{N}\int_{\Omega}\left[{\cal A}_{i}(x,\nabla u_{n})+{\cal B}_{i}(x, u_{n})\right]\partial_{x_{i}}(u_{n}-u)\,dx+\int_{\Omega}{\cal G}(x,u_{n})(u_{n}-u )\,dx=\sum_{i=1}^{N}\int_{\Omega}f_{i}^{n}\partial_{x_{i}}(u_{n}-u)\,dx.\]
Adding and subtracting in the previous equality the term \(\sum_{i=1}^{N}\int_{\Omega}{\cal A}_{i}(x,\nabla u)\partial_{x_{i}}(u_{n}-u)\), by (1.3), (5.4) and (5.3) we easily obtain that
\[\sum_{i=1}^{N}\int_{\Omega}{\cal A}_{i}(x,\nabla u)\partial_{x_{i}}(u_{n}-u) \to 0\quad\mbox{ and }\quad\sum_{i=1}^{N}\int_{\Omega}f_{i}^{n}\partial_{x_{i}}(u_{n}-u)\,dx \to 0.\]
Then recalling again (5.4) in order to prove (5.7) it is enough to show that
\[{\cal B}_{i}(x,u_{n})\to{\cal B}_{i}(x,u)\mbox{ in }L^{p^{\prime}_{i}}(\Omega) \quad\forall i=1,\cdots,N, \tag{5.10}\]
\[{\cal G}(x,u_{n})\to{\cal G}(x,u)\mbox{ in }L^{p^{\prime}_{\infty}}(\Omega). \tag{5.11}\]
To this aim we observe that by continuity of \({\cal B}_{i}\) with respect to \(s\) and by (5.6) we get \({\cal B}_{i}(x,u_{n})\to{\cal B}_{i}(x,u)\) a.e. in \(\Omega\). Now (1.6), the Holder inequality, the Sobolev inequality (2.8) and (3.3) yield that for every measurable set \(\Omega^{\prime}\subset\Omega\)
\[\int_{\Omega^{\prime}}|{\cal B}_{i}(x,u_{n})|^{p^{\prime}_{i}}\,dx \leq\|b_{i}\|_{\infty}^{p^{\prime}_{i}}\int_{\Omega^{\prime}}|u_{ n}|^{\bar{p}}\,dx\leq\|b_{i}\|_{\infty}^{p^{\prime}_{i}}\left(\int_{\Omega^{ \prime}}|u_{n}|^{\bar{p}^{*}}dx\right)^{\frac{\bar{p}}{p^{*}}}|\Omega^{\prime} |^{1-\frac{\bar{p}}{\bar{p}^{*}}}\] \[\leq C\|b_{i}\|_{\infty}^{p^{\prime}_{i}}|\Omega^{\prime}|^{1- \frac{\bar{p}}{\bar{p}^{*}}},\]
where \(C\) is independent of \(n\). By Vitali convergence Theorem we conclude that (5.10) holds.
On the other hand, by continuity of \({\cal G}\) with respect to \(s\) and by (5.6), we get \({\cal G}(x,u_{n})\to{\cal G}(x,u)\) a.e. in \(\Omega\). Now (1.8), the Holder inequality, the Sobolev inequality (2.8) and (3.3) yield that for every measurable set \(\Omega^{\prime}\subset\Omega\)
\[\int_{\Omega^{\prime}}|{\cal G}(x,u_{n})|^{p^{\prime}_{\infty}}\,dx\leq\widetilde {\mu}\int_{\Omega^{\prime}}|u_{n}|^{\gamma p^{\prime}_{\infty}}\,dx\leq\mu \left(\int_{\Omega^{\prime}}|u_{n}|^{p_{\infty}}\,dx\right)^{\frac{\gamma}{p_ {\infty}-1}}|\Omega^{\prime}|^{1-\frac{\gamma}{p_{\infty}-1}}\leq C|\Omega^{ \prime}|^{1-\frac{\gamma}{p_{\infty}-1}},\]
where \(C\) is independent of \(n\). Also in this case we conclude that (5.11) holds by Vitali convergence Theorem.
Then (5.7) is in force, we can apply Lemma 5.2 and convergence (5.8) follows for \(i=1,\cdots,N\). Then
\[{\cal A}_{i}(x,\nabla u_{n})\to{\cal A}_{i}(x,\nabla u)\quad\mbox{ a.e. }\Omega,\]
because of continuity of \({\cal A}_{i}\) with respect to \(\xi\). Moreover by (1.3) and (3.3) we get that \({\cal A}_{i}(x,\nabla u_{n})\) is bounded in \(L^{p^{\prime}_{i}}(\Omega)\) and then
\[{\cal A}_{i}(x,\nabla u_{n})\rightharpoonup{\cal A}_{i}(x,\nabla u)\hbox{ weakly in }L^{p^{\prime}_{i}}(\Omega). \tag{5.12}\]
Taking \(\phi\in W^{1,\overrightarrow{p}}_{0}(\Omega)\) as a test function in (5.2) and using (5.12), (5.10), (5.11) and (5.3), we can pass to the limit, obtaining that \(u\) is a solution of problem (1.1) when \(b_{i}\in L^{\infty}(\Omega)\) and \({\cal F}\in(W^{1,\overrightarrow{p}}_{0}(\Omega))^{*}\).
_Step 3: Existence dropping the assumption that \(b_{i}\in L^{\infty}(\Omega)\quad\forall i=1,\cdots,N\)._
In order to remove the assumptions on \(b_{i}\in L^{\infty}(\Omega)\), we set for each \(n\in\mathbb{N}\) and \(i=1,...,N\) for almost every \(x\in\Omega\),
\[\vartheta^{i}_{n}(x)=\left\{\begin{array}{ll}\frac{T_{n}b_{i}(x)}{b_{i}(x) }&\hbox{ if }b_{i}(x)\neq 0\\ 1&\hbox{ if }b_{i}(x)=0\end{array}\right.\]
and we consider the following sequence of approximating problems:
\[\left\{\begin{array}{ll}-\sum_{i=1}^{N}\partial_{x_{i}}\left[{\cal A}_{i}(x,\nabla u_{n})+\vartheta^{i}_{n}(x){\cal B}_{i}(x,u_{n})\right]+{\cal G}(x,u_{ n})={\cal F}&\hbox{ in }\Omega,\\ \\ u_{n}=0&\hbox{ on }\partial\Omega.\end{array}\right. \tag{5.13}\]
Applying the previous Step 2 with \(\vartheta^{i}_{n}(x)b_{i}(x)\in L^{\infty}(\Omega)\) in place of \(b_{i}\), for every \(n\in\mathbb{N}\) we find a solution \(u_{n}\in W^{1,\overrightarrow{p}}_{0}(\Omega)\) to problem (5.13). Moreover, by Lemma 3.1, using estimate (3.3) we get that the sequence \(\{u_{n}\}_{n}\) is bounded in \(W^{1,\overrightarrow{p}}_{0}(\Omega)\) by a constant independent of \(n\). Then, up to subsequences not relabeled, there exists \(u\in W^{1,\overrightarrow{p}}_{0}(\Omega)\) such that (5.4), (5.5) and (5.6) hold.
We shall conclude our proof by showing that such a function \(u\) solves problem (1.1). We emphasize that under our assumptions the compactness of the sequence \({\cal B}_{i}(x,u_{n})\) could fail (see Remark 5.4).
To this end we use an idea contained in [21]. In the rest of our proof we let for simplicity \(\eta(t):=\arctan t\). Obviously, \(\eta\in C^{1}(\mathbb{R})\), \(|\eta(t)|\leq|t|\) and \(0\leq\eta^{\prime}(t)\leq 1\) for all \(t\in\mathbb{R}\). In particular, \(\eta\) is Lipschitz continuous in the whole of \(\mathbb{R}\) and therefore
\[u_{n},u\in W^{1,\overrightarrow{p}}_{0}(\Omega)\quad\Longrightarrow\quad\eta (u_{n}-u)\in W^{1,\overrightarrow{p}}_{0}(\Omega)\,.\]
Moreover, since \(\eta(0)=0\) we have
\[\eta(u_{n}-u)\rightharpoonup 0\qquad\hbox{in }W^{1,\overrightarrow{p}}_{0}( \Omega)\hbox{ weakly}\,. \tag{5.14}\]
Testing equation in (5.13) by the function \(\varphi_{n}=\eta(u_{n}-u)\), we get
\[\sum_{i=1}^{N}\int_{\Omega}\left[{\cal A}_{i}(x,\nabla u_{n})+\vartheta^{i}_{n }(x){\cal B}_{i}(x,u_{n})\right]\partial_{x_{i}}\varphi_{n}\,dx+\int_{\Omega}{ \cal G}(x,u_{n})\varphi_{n}\,dx=\langle{\cal F}\,,\varphi_{n}\rangle \tag{5.15}\]
As before, we add and subtract the term \(\sum_{i=1}^{N}\int_{\Omega}{\cal A}_{i}(x,\nabla u)\partial_{x_{i}}\varphi_{n}\) in previous equality. In view of (1.3) and (1.8), by (5.14) we obviously have
\[\lim_{n\to\infty}\sum_{i=1}^{N}\int_{\Omega}{\cal A}_{i}(x,\nabla u)\partial_{x_{i}}\varphi_{n}\,dx=0,\qquad\quad\lim_{n\to\infty}\langle{\cal F}\,,\varphi_{n}\rangle=0.\]
Moreover also in this case we get (5.11) and so by (5.14) we obtain
\[\int_{\Omega}{\cal G}(x,u_{n})\varphi_{n}\,dx\to 0.\]
At this point it suffices to show that (up a subsequence)
\[\vartheta_{n}^{i}(x){\cal B}_{i}(x,u_{n})\to{\cal B}_{i}(x,u)\,\,\,\mbox{ strongly in}\,\,L^{p^{\prime}_{i}}(\Omega)\quad\forall i=1,\cdots,N. \tag{5.16}\]
We preliminarily observe that, combining (5.6) with the fact that \(\vartheta_{n}^{i}\to 1\) as \(n\to\infty\), we have
\[\vartheta_{n}^{i}(x){\cal B}_{i}(x,u_{n})\eta^{\prime}(u_{n}-u)=\frac{ \vartheta_{n}^{i}(x){\cal B}_{i}(x,u_{n})}{1+|u_{n}-u|^{2}}\to{\cal B}_{i}(x,u )\qquad\mbox{a.e.\ in}\,\,\Omega\,. \tag{5.17}\]
Moreover, using (1.6) we note that for every measurable set \(\Omega^{\prime}\subset\Omega\) it holds
\[\int_{\Omega^{\prime}}\left[\frac{\vartheta_{n}^{i}(x){\cal B}_{i}(x,u_{n})}{ 1+|u_{n}-u|^{2}}\right]^{p^{\prime}_{i}}dx\leq C\int_{\Omega^{\prime}}\left[b_ {i}(x)^{p^{\prime}_{i}}|u|^{\bar{p}}+\frac{b_{i}(x)^{p^{\prime}_{i}}|u_{n}-u| ^{\bar{p}}}{1+|u_{n}-u|^{2}}\right]dx, \tag{5.18}\]
for some positive constant \(C=C(\bar{p})\). Hence, if \(1<\bar{p}\leq 2\), (5.16) immediately follows by Vitali convergence Theorem combining (5.17) and (5.18). For \(\bar{p}>2\) we choose \(s\) satisfying
\[\frac{\bar{p}^{*}}{\bar{p}}<s<\frac{\bar{p}^{*}}{\bar{p}-2}\,,\]
so that \(s^{\prime}<\frac{N}{\bar{p}}\) and \(s(\bar{p}-2)<\bar{p}^{*}.\) Then, using Holder inequality and (5.5) it follows that
\[\int_{\Omega^{\prime}}\frac{b_{i}(x)^{p^{\prime}_{i}}|u_{n}-u|^{\bar{p}}}{1+|u_{n}-u|^{2}}\,dx\leq\int_{\Omega^{\prime}}b_{i}(x)^{p^{\prime}_{i}}|u_{n}-u|^{(\bar{p}-2)}dx\leq\|b_{i}\|_{L^{s^{\prime}p^{\prime}_{i}}(\Omega^{\prime})}^{p^{\prime}_{i}}\|u_{n}-u\|_{s(\bar{p}-2)}^{(\bar{p}-2)}\leq C\|b_{i}\|_{L^{s^{\prime}p^{\prime}_{i}}(\Omega^{\prime})}^{p^{\prime}_{i}},\]
where \(C\) is a constant independent of \(n\). Hence, also in this case (5.16) follows by the Vitali convergence theorem. We thus obtain that, for \(i=1,\cdots,N\),
\[\int_{\Omega}\left({\cal A}_{i}(x,\nabla u_{n})-{\cal A}_{i}(x,\nabla u) \right)\left[\partial_{x_{i}}\eta(u_{n}-u)\right]\,dx\to 0. \tag{5.19}\]
Now, since \(\eta^{\prime}(u_{n}-u)\to 1\) a.e. in \(\Omega\), arguing as in Lemma 5.2 we have that (5.19) implies (5.8) for \(i=1,\cdots,N\). The end of the proof runs as in the previous step, obtaining that \(u\) is a solution of problem (1.1).
**Remark 5.3**: _We observe that in the case \(p_{\max}:=\max_{i}p_{i}>\bar{p}^{*}\), the compactness of the sequence \({\cal B}_{i}(x,u_{n})\) in \(L^{p^{\prime}_{i}}(\Omega)\) holds, and then Step 3 in the proof of the previous Theorem 5.1 can be simplified. For completeness we give some details. In order to obtain that \(\partial_{x_{i}}u_{n}\to\partial_{x_{i}}u\) a.e. in \(\Omega\) for every \(i=1,...,N\), we can choose as a test function \(\varphi_{n}=(u_{n}-u)\) instead of \(\varphi_{n}=\eta(u_{n}-u)\) in (5.15) and prove that, up to a subsequence:_
\[\vartheta_{n}^{i}(x){\cal B}_{i}(x,u_{n})\to{\cal B}_{i}(x,u)\,\,\,\mbox{ strongly in}\,\,L^{p^{\prime}_{i}}(\Omega)\quad\forall i=1,\cdots,N\,. \tag{5.20}\]
_To prove (5.20), we note that_
\[\vartheta_{n}^{i}(x)\mathcal{B}_{i}(x,u_{n})\to\mathcal{B}_{i}(x,u)\qquad\mbox{ a.e. in }\Omega\,.\]
_and that, using (1.6) for every measurable set \(\Omega^{\prime}\subset\Omega\) it holds_
\[\int_{\Omega^{\prime}}\left[\vartheta_{n}^{i}(x)\mathcal{B}_{i}(x,u_{n}) \right]^{p^{\prime}_{i}}dx\leq C\int_{\Omega^{\prime}}\left[b_{i}(x)^{p^{ \prime}_{i}}|u|^{\bar{p}}+b_{i}(x)^{p^{\prime}_{i}}|u_{n}-u|^{\bar{p}}\right]dx\]
_for some positive constant \(C=C(\bar{p})\). Hence, since \(p_{\max}>\bar{p}^{*}\) and \(1<\bar{p}<N\), we can choose \(s\) satisfying_
\[\frac{N}{N-\bar{p}}<s\leq\frac{p_{\max}}{\bar{p}}\,,\]
_so that \(s^{\prime}<\frac{N}{\bar{p}}\) and \(s\bar{p}\leq p_{\max}.\) Then, using again Holder inequality, by Poincare inequality it follows that_
\[\int_{\Omega^{\prime}}b_{i}(x)^{p^{\prime}_{i}}|u_{n}-u|^{\bar{p}}dx\leq\|b_{i}\|_{L^{s^{\prime}p^{\prime}_{i}}(\Omega^{\prime})}^{p^{\prime}_{i}}\|u_{n}-u\|_{s\bar{p}}^{\bar{p}}\leq C\|b_{i}\|_{L^{s^{\prime}p^{\prime}_{i}}(\Omega^{\prime})}^{p^{\prime}_{i}},\]
_where \(C\) is a constant independent of \(n\). Hence, by Vitali convergence theorem (5.20) is proved._
**Remark 5.4**: _When \(p_{i}=p<N\) for all \(i\) and \(N\geq 2\), the compactness of operator_
\[T:u\in W^{1,\overrightarrow{p}}_{0}(\Omega)\to b|u|^{p-2}u\in L^{p^{\prime}}(\Omega)\]
_could fail. In this case the usual norm in \(W^{1,p}_{0}(\Omega)\) gives an equivalent norm on \(W^{1,\overrightarrow{p}}_{0}(\Omega)\); then Example 3 in [21] allows us to build a sequence of functions \(\{u_{n}\}_{n\in\mathbb{N}}\) in \(W^{1,\overrightarrow{p}}_{0}(\Omega)\) and a function \(b\in L^{\frac{N}{p-1},\infty}(\Omega)\) such that \(\{\nabla u_{n}\}_{n\in\mathbb{N}}\) is bounded in \(L^{p}(\Omega,\mathbb{R}^{N})\), but it is not possible to extract from \(\{b|u_{n}|^{p-1}\}_{n\in\mathbb{N}}\) any subsequence strongly converging in \(L^{p^{\prime}}(\Omega)\)._
## 6 Positivity of solutions
In the isotropic case it is well known (see [8]) that when the datum is non-negative and the coefficient \(b(x)\in L^{\frac{N}{p-1}}(\Omega)\), then the solution is non-negative. In this section we prove a similar result for the anisotropic problem (1.1). In order to give a precise statement we introduce the following notation: \(\mathcal{F}\geq 0\) means \(\langle\mathcal{F},\phi\rangle\geq 0\) for every \(\phi\geq 0\), when \(\mathcal{F}\in(W^{1,\overrightarrow{p}}_{0}(\Omega))^{*}\). We emphasize that the following result holds assuming \(\mathcal{G}\equiv 0\) as well.
**Proposition 6.1**: _Let \(u\in W^{1,\overrightarrow{p}}_{0}(\Omega)\) be a weak solution to problem (1.1) under assumptions (\(\mathcal{H}1\))-(\(\mathcal{H}4\)). If \(\mathcal{F}\geq 0\), then we have \(u\geq 0\)._
**Proof.** Let us denote \(u^{-}(x):=\min\{0,u(x)\}\). Taking \(T_{h}(u^{-})\) as test function with \(h>0\) in (1.10), by (1.4), (1.6), (1.9) and assumption on the data we get
\[\alpha\sum_{i=1}^{N}\int_{\{-h<u<0\}}|\partial_{x_{i}}T_{h}(u^{-})|^{p_{i}}dx\leq\sum_{i=1}^{N}\int_{\{-h<u<0\}}|b_{i}(x)||u|^{\frac{\bar{p}}{p^{\prime}_{i}}}|\partial_{x_{i}}T_{h}(u^{-})|dx.\]
By Young inequality it follows that
\[\sum_{i=1}^{N}\int_{\{-h<u<0\}}|\partial_{x_{i}}T_{h}(u^{-})|^{p_{i}}dx\leq C\sum_ {i=1}^{N}\int_{\{-h<u<0\}}|b_{i}(x)|^{p^{\prime}_{i}}|u|^{\bar{p}}dx, \tag{6.1}\]
where \(C\) is a suitable positive constant. Using the definition (2.2) in (6.1) we get for every \(j\)
\[\int_{\{-h<u<0\}}|\partial_{x_{j}}T_{h}(u^{-})|^{p_{j}} \leq\sum_{i=1}^{N}\int_{\{-h<u<0\}}|\partial_{x_{i}}T_{h}(u^{-})|^ {p_{i}}\] \[\leq Ch^{\bar{p}}\sum_{i=1}^{N}\|b_{i}\|_{L^{\frac{Np^{\prime}_{i} }{p},\infty}(\Omega)}^{p^{\prime}_{i}}\int_{0}^{|\{-h<u<0\}|}s^{-\bar{p}/N}\,ds.\]
By Sobolev inequality (2.8), the previous inequality becomes
\[\left(\int_{\Omega}|T_{h}(u^{-})|^{\bar{p}^{*}}\,dx\right)^{\frac {1}{\bar{p}^{*}}}\leq C\prod_{j=1}^{N}\left(\int_{\{-h<u<0\}}|\partial_{x_{j}} T_{h}(u^{-})|^{p_{j}}\,dx\right)^{\frac{1}{Np_{j}}} \tag{6.2}\] \[\leq h\left(\int_{0}^{|\{-h<u<0\}|}s^{-\bar{p}/N}\,ds\right)^{1/ \bar{p}}\prod_{j=1}^{N}\left(\sum_{i=1}^{N}\|b_{i}\|_{L^{\frac{Np^{\prime}_{i }}{p},\infty}(\Omega)}^{p^{\prime}_{i}}\right)^{\frac{1}{Np_{j}}}.\]
Now we observe that for \(\delta>h\),
\[h|\{u<-\delta\}|^{1/\bar{p}^{*}}=\left(\int_{\{u<-\delta\}}|T_{h}(u^{-})|^{ \bar{p}^{*}}\,dx\right)^{\frac{1}{\bar{p}^{*}}}. \tag{6.3}\]
Combining (6.2) and (6.3), it follows that
\[|\{u<-\delta\}|^{1/\bar{p}^{*}}\leq C\left(\int_{0}^{|\{-h<u<0\}|}s^{-\bar{p} /N}\,ds\right)^{1/\bar{p}}\prod_{j=1}^{N}\left(\sum_{i=1}^{N}\|b_{i}\|_{L^{ \frac{Np^{\prime}_{i}}{p},\infty}(\Omega)}^{p^{\prime}_{i}}\right)^{\frac{1}{ Np_{j}}}.\]
Observing that
\[\int_{0}^{|\{-h<u<0\}|}s^{-\bar{p}/N}\,ds\to 0\mbox{ as }h\to 0^{+},\]
we conclude that the measure of the set \(\{u<-\delta\}\) is zero for every \(\delta>0\).
As a consequence of Proposition 6.1 we get the following partial uniqueness result when \({\cal F}\equiv 0\).
**Corollary 6.2**: _Let \(u\in W^{1,\overrightarrow{p}}_{0}(\Omega)\) be a weak solution to problem (1.1) under assumptions (\({\cal H}1\))-(\({\cal H}4\)). If \({\cal F}\equiv 0\), then we have \(u\equiv 0\)._
**Proof.** The thesis follows by arguing as in the previous proposition, taking \(T_{h}(u^{-})\) and \(T_{h}(u^{+})\) as test functions.
We stress that a similar argument can be used to prove the uniqueness when \(p_{i}=2\) for all \(i\) and the coefficient \(b_{i}(x)=b(x)\in L^{N,\infty}(\Omega)\) (see [8] when \(b_{i}(x)=b(x)\in L^{N}(\Omega)\)).
## 7 Uniqueness results
First of all we observe that the prototype (1.11) verifies a strong monotonicity condition, but not the assumption of Lipschitz continuity in \(u\) when the \(p_{i}\) are not all equal and \(p_{i}\leq 2\) for \(i=1,\cdots,N\), as required in the classical uniqueness result. The novelty of our result thus consists in the possibility of dealing with cases when \(\mathcal{B}(x,u)\) is only Holder continuous with respect to \(u\) (see (7.3) below). In general the presence of a zero order term can help in obtaining uniqueness results, even in the case \(p_{i}\geq 2\) for \(i=1,\cdots,N\). In what follows we assume that
\[\mathcal{G}\left(x,s\right)\text{ is strictly monotone increasing function in }s. \tag{7.1}\]
Note that whenever \(\mathcal{G}\) satisfies (7.1) and \(\mathcal{G}(x,0)\equiv 0\) for all \(x\in\Omega\), then (1.9) obviously holds. For example \(\mathcal{G}\left(x,s\right)=\widetilde{\mu}|s|^{\gamma-1}s\) with \(\widetilde{\mu}>0,0<\gamma<p_{\infty}-1\) verifies (7.1) and (1.8) and (1.9). We observe that we can relax condition (7.1) assuming that
\[\left(\mathcal{G}\left(x,s\right)-\mathcal{G}\left(x,s^{\prime}\right)\right) \left(s-s^{\prime}\right)>0\text{ for }s>s^{\prime}.\]
As usual when dealing with uniqueness results for equations depending on the unknown \(u\), we consider three cases: first when \(p_{i}\leq 2\) for every \(i=1,\cdots,N\), then when \(p_{i}\geq 2\) for every \(i=1,\cdots,N\), and finally the mixed case.
### First case: \(p_{i}\leq 2\) for every \(i=1,\cdots,N\)
In this subsection we prove the uniqueness of weak solutions of problem (1.1) when all \(p_{i}\leq 2\). For example, in the model case (1.11) the main difficulty is due to the terms \(B_{i}(x,u)=\beta_{i}(x)|u|^{\frac{\bar{p}}{p_{i}^{\prime}}-1}u\) with \(\beta_{i}\in L^{\frac{Np_{i}^{\prime}}{\bar{p}},\infty}(\Omega)\), which are Hölder continuous but not Lipschitz continuous with respect to the solution \(u\). To prove uniqueness we will strongly use the presence of the zero order term. We assume that
\[\left(\mathcal{A}_{i}\left(x,\xi\right)-\mathcal{A}_{i}\left(x,\xi^{\prime} \right)\right)\left(\xi_{i}-\xi_{i}^{\prime}\right)\geq\widetilde{\alpha} \left(|\xi_{i}|+\left|\xi_{i}^{\prime}\right|\right)^{p_{i}-2}\left|\xi_{i}- \xi_{i}^{\prime}\right|^{2} \tag{7.2}\]
with \(\widetilde{\alpha}>0\) and
\[\left|\mathcal{B}_{i}\left(x,s\right)-\mathcal{B}_{i}\left(x,s^{\prime} \right)\right|\leq\widetilde{b}_{i}(x)\left|s-s^{\prime}\right|^{\frac{p}{p_{ i}^{\prime}}} \tag{7.3}\]
with \(\widetilde{b}_{i}:\)\(\Omega\rightarrow[0,+\infty)\) measurable function such that
\[\widetilde{b}_{i}\in L^{\frac{Np_{i}^{\prime}}{p},\infty}(\Omega), \tag{7.4}\]
for \(i=1,..,N\). We stress that \(\mathcal{A}_{i}\left(x,\xi\right)=\alpha|\xi_{i}|^{p_{i}-2}\xi_{i}\) verifies (7.2) and \(\mathcal{B}_{i}\left(x,s\right)=\beta_{i}(x)|s|^{\frac{p}{p_{i}^{\prime}}-1}s\) verifies (7.3), recalling that \(\frac{\bar{p}}{p_{i}^{\prime}}\leq 1\), because \(\bar{p}\leq 2\leq p_{i}^{\prime}\).
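Indeed, (7.3) for the model term can be checked via the elementary inequality, valid for \(0<\gamma\leq 1\),

\[\big||s|^{\gamma-1}s-|s^{\prime}|^{\gamma-1}s^{\prime}\big|\leq 2^{1-\gamma}|s-s^{\prime}|^{\gamma}\qquad\text{for all }s,s^{\prime}\in\mathbb{R},\]

applied with \(\gamma=\frac{\bar{p}}{p_{i}^{\prime}}\), which yields (7.3) with \(\widetilde{b}_{i}=2^{1-\gamma}|\beta_{i}|\in L^{\frac{Np_{i}^{\prime}}{\bar{p}},\infty}(\Omega)\).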
**Theorem 7.1**: _Let \(u\in W_{0}^{1,\overrightarrow{p}}(\Omega)\) be a weak solution to problem (1.1) under assumptions (\(\mathcal{H}1\))-(\(\mathcal{H}4\)) with \(p_{i}\leq 2\) for \(i=1,..,N\), \(\bar{p}<N\) and \(\mathcal{G}\not\equiv 0\). If \(2\min_{i}\{\frac{\bar{p}}{p_{i}^{\prime}}\}\geq 1\) and (7.1), (7.2) and (7.3) are in force, then \(u\) is the unique weak solution to problem (1.1)._
**Proof.** We generalize the ideas contained in [14] where a linear isotropic operator is considered.
Let \(u\) and \(v\) be two weak solutions to problem (1.1). Let us denote \(w=\left(u-v\right)^{+}\) and \(D\) = \(\left\{x\in\Omega:w>0\right\}\). Denoting \(r_{\min}=\min_{i}\{\frac{\bar{p}}{p_{i}^{\prime}}\}\) and recalling that \(2r_{\min}\geq 1\), for every \(\varepsilon>0\) there exists \(\delta(\varepsilon)\) such that
\[\int_{\delta(\varepsilon)}^{\varepsilon}\frac{1}{s^{2r_{\min}}}ds=1.\]
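For instance, when \(2r_{\min}>1\) this condition determines \(\delta(\varepsilon)\) explicitly:

\[\delta(\varepsilon)=\big(\varepsilon^{1-2r_{\min}}+2r_{\min}-1\big)^{\frac{1}{1-2r_{\min}}},\]

so that \(0<\delta(\varepsilon)<\varepsilon\) and \(\delta(\varepsilon)\to 0\) as \(\varepsilon\to 0^{+}\); when \(2r_{\min}=1\) one simply finds \(\delta(\varepsilon)=\varepsilon/e\).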
Let us define
\[\Psi_{\varepsilon}^{\sigma}(t)=\left\{\begin{array}{ll}0&\mbox{if }t\leq\delta( \varepsilon),\\ \int_{\delta(\varepsilon)}^{t}\frac{1}{s^{\sigma}}ds&\mbox{if }\delta( \varepsilon)<t<\varepsilon,\\ 1&\mbox{if }t\geq\varepsilon.\end{array}\right. \tag{7.5}\]
We take \(\Psi_{\varepsilon}(t)=\Psi_{\varepsilon}^{\sigma}\) with \(\sigma=2r_{\min}\). We stress that \(\Psi_{\varepsilon}(t)\) is Lipschitz continuous, \(\Psi_{\varepsilon}^{\prime}(t)\geq 0\) and \(\varphi\Psi_{\varepsilon}(w)\in W_{0}^{1,\overrightarrow{p}}(\Omega)\) taking \(\varphi\geq 0\) and \(\varphi\in W^{1,\overrightarrow{p}}(\Omega)\cap L^{\infty}(\Omega)\).
Supposing that \(D\) has positive measure, we use \(\varphi\Psi_{\varepsilon}(w)\) as test function in the difference of the equations. It follows
\[I_{\varepsilon}:= \underset{i=1}{\overset{N}{\sum}}\int_{\Omega}\left\{\left[ \mathcal{A}_{i}\left(x,\nabla u\right)-\mathcal{A}_{i}\left(x,\nabla v\right) \right]+\left[\mathcal{B}_{i}\left(x,u\right)-\mathcal{B}_{i}\left(x,v\right) \right]\right\}\partial_{x_{i}}\varphi\Psi_{\varepsilon}(w)\,dx\] \[+\int_{\Omega}\left[\mathcal{G}(x,u)-\mathcal{G}(x,v)\right] \varphi\Psi_{\varepsilon}(w)\,dx=\] \[-\underset{i=1}{\overset{N}{\sum}}\int_{\Omega}\left\{\left[ \mathcal{A}_{i}\left(x,\nabla u\right)-\mathcal{A}_{i}\left(x,\nabla v\right) \right]+\left[\mathcal{B}_{i}\left(x,u\right)-\mathcal{B}_{i}\left(x,v\right) \right]\right\}\partial_{x_{i}}w\Psi_{\varepsilon}^{\prime}(w)\varphi\,dx.\]
By (7.2) and (7.3) we get
\[-\underset{i=1}{\overset{N}{\sum}}\int_{\Omega}\left\{\left[ \mathcal{A}_{i}\left(x,\nabla u\right)-\mathcal{A}_{i}\left(x,\nabla v\right) \right]+\left[\mathcal{B}_{i}\left(x,u\right)-\mathcal{B}_{i}\left(x,v\right) \right]\right\}\partial_{x_{i}}w\Psi_{\varepsilon}^{\prime}(w)\varphi\,dx\] \[\leq-\underset{i=1}{\overset{N}{\sum}}\int_{\Omega}\widetilde{ \alpha}\frac{|\partial_{x_{i}}w|^{2}}{(|\partial_{x_{i}}u|+|\partial_{x_{i}}v|) ^{2-p_{i}}}\Psi_{\varepsilon}^{\prime}(w)\varphi\,dx+\underset{i=1}{\overset{N }{\sum}}\int_{\Omega}\widetilde{b}_{i}(x)w^{r_{i}}|\partial_{x_{i}}w|\Psi_{ \varepsilon}^{\prime}(w)\varphi\,dx,\]
where \(r_{i}=\frac{\bar{p}}{p_{i}^{\prime}}\). Using Young inequality with \(\theta<\widetilde{\alpha}\) we obtain
\[-\underset{i=1}{\overset{N}{\sum}}\int_{\Omega}\left\{\left[ \mathcal{A}_{i}\left(x,\nabla u\right)-\mathcal{A}_{i}\left(x,\nabla v\right) \right]+\left[\mathcal{B}_{i}\left(x,u\right)-\mathcal{B}_{i}\left(x,v\right) \right]\right\}\partial_{x_{i}}w\Psi_{\varepsilon}^{\prime}(w)\varphi\,dx\] \[\leq -\underset{i=1}{\overset{N}{\sum}}\int_{\Omega}(\widetilde{ \alpha}-\theta)\frac{|\partial_{x_{i}}w|^{2}}{(|\partial_{x_{i}}u|+|\partial_{x _{i}}v|)^{2-p_{i}}}\Psi_{\varepsilon}^{\prime}(w)\varphi\,dx+\] \[+C(\theta)\underset{i=1}{\overset{N}{\sum}}\int_{\Omega} \widetilde{b}_{i}^{2}(x)w^{2r_{i}}(|\partial_{x_{i}}u|+|\partial_{x_{i}}v|)^{ 2-p_{i}}\Psi_{\varepsilon}^{\prime}(w)\varphi\,dx\] \[\leq C(\theta)\|\varphi\|_{\infty}\underset{i=1}{\overset{N}{ \sum}}\varepsilon^{2(r_{i}-r_{\min})}\int_{\left\{\delta(\varepsilon)<w< \varepsilon\right\}}\widetilde{b}_{i}^{2}(x)(|\partial_{x_{i}}u|+|\partial_{x_ {i}}v|)^{2-p_{i}}\,dx=J_{\varepsilon}.\]
We underline that \(\widetilde{b}_{i}^{2}(x)(|\partial_{x_{i}}u|+|\partial_{x_{i}}v|)^{2-p_{i}}\in L^ {1}(\Omega)\). Indeed we have
\[\int_{\Omega}\widetilde{b}_{i}^{2}(x)(|\partial_{x_{i}}u|+| \partial_{x_{i}}v|)^{2-p_{i}}\,dx \leq\left(\int_{\Omega}\widetilde{b}_{i}^{p^{\prime}_{i}}(x)\,dx \right)^{\frac{2}{p^{\prime}_{i}}}\left(\int_{\Omega}(|\partial_{x_{i}}u|+| \partial_{x_{i}}v|)^{p_{i}}\,dx\right)^{\frac{2-p_{i}}{p_{i}}}\] \[\leq C(\Omega,p_{i})\|\widetilde{b}_{i}\|^{2}_{L^{\frac{Np^{\prime}_{i}}{\bar{p}},\infty}(\Omega)}\left(\int_{\Omega}(|\partial_{x_{i}}u|+| \partial_{x_{i}}v|)^{p_{i}}\,dx\right)^{\frac{2-p_{i}}{p_{i}}}.\]
Since \(r_{i}-r_{\min}\geq 0\), letting \(\varepsilon\to 0\) we get \(J_{\varepsilon}\to 0\). Moreover \(\Psi_{\varepsilon}(w)\to\chi_{\{u-v>0\}}\), where \(\chi_{\{u-v>0\}}\) denotes the characteristic function of the set \(\{u-v>0\}\). By the Lebesgue dominated convergence theorem we obtain
\[\sum_{i=1}^{N}\int_{\{u-v>0\}}\left\{\left[\mathcal{A}_{i}\left(x,\nabla u \right)-\mathcal{A}_{i}\left(x,\nabla v\right)\right]+\left[\mathcal{B}_{i} \left(x,u\right)-\mathcal{B}_{i}\left(x,v\right)\right]\right\}\partial_{x_{i} }\varphi+\left[\mathcal{G}(x,u)-\mathcal{G}(x,v)\right]\varphi\,dx\leq 0. \tag{7.6}\]
Taking \(\varphi=1\) in (7.6) it follows
\[\int_{\{u-v>0\}}\left[\mathcal{G}(x,u)-\mathcal{G}(x,v)\right]\,dx\leq 0.\]
Assumption (7.1) allows us to conclude that \(\{u-v>0\}\) has zero measure. Exchanging the roles of \(u\) and \(v\) we conclude.
**Remark 7.2**:
* _If_ \(3/2\leq p_{i}\leq 2\) _for all_ \(i\) _then_ \(2\min_{i}\{\frac{\bar{p}}{p^{\prime}_{i}}\}\geq 1\)_._
* _Obviously when all_ \(p_{i}=2\) _the uniqueness result holds even when_ \(\mathcal{G}\equiv 0\)_, because we have Lipschitz dependence on_ \(u\)_._
* _Theorem_ 7.1 _holds replacing the condition_ \(p_{i}\leq 2\) _for every_ \(i=1,\cdots,N\) _with_ \(\frac{\bar{p}}{p^{\prime}_{i}}\leq 1\) _for every_ \(i=1,\cdots,N\)_._
### Second case: \(p_{i}\geq 2\) for every \(i=1,\cdots,N\)
In this subsection we prove the uniqueness of weak solutions of problem (1.1) when all \(p_{i}\geq 2\). In the model case (1.11) the terms \(B_{i}(x,u)=\beta_{i}(x)|u|^{\frac{\bar{p}}{p^{\prime}_{i}}-1}u\) with \(\beta_{i}\in L^{\frac{Np^{\prime}_{i}}{\bar{p}},\infty}(\Omega)\) are locally Lipschitz continuous with respect to the solution \(u\), but it is well-known that even in the isotropic case with \(p>2\) uniqueness can fail (see for example [2] at the end of Section 2). Again, to prove uniqueness we will strongly use the presence of the zero order term. We assume
\[\left(\mathcal{A}_{i}\left(x,\xi\right)-\mathcal{A}_{i}\left(x,\xi^{\prime} \right)\right)\left(\xi_{i}-\xi^{\prime}_{i}\right)\geq\widehat{\alpha}\left| \xi_{i}-\xi^{\prime}_{i}\right|^{p_{i}} \tag{7.7}\]
with \(\widehat{\alpha}>0\) and
\[\left|\mathcal{B}_{i}\left(x,s\right)-\mathcal{B}_{i}\left(x,s^{\prime} \right)\right|\leq\widehat{b}_{i}(x)\left|s-s^{\prime}\right|\left(|s|+|s^{ \prime}|+\zeta\right)^{\frac{\bar{p}}{p^{\prime}_{i}}-1} \tag{7.8}\]
with \(\widehat{b}_{i}(x)\) defined as in (7.4) and \(\zeta\geq 0\). We stress that \(\frac{\bar{p}}{p^{\prime}_{i}}\geq 1\) under our assumptions.
Moreover \(\mathcal{A}_{i}\left(x,\xi\right)=\alpha|\xi_{i}|^{p_{i}-2}\xi_{i}\) verifies (7.7) and \(\mathcal{B}_{i}\left(x,s\right)=\beta_{i}(x)|s|^{\frac{\bar{p}}{p^{\prime}_{i}}-1}s\) verifies (7.8), recalling that \(p^{\prime}_{i}\leq 2\leq\bar{p}\).
**Theorem 7.3**: _Let \(u\in W_{0}^{1,\,\overline{p}}\left(\Omega\right)\) be a weak solution to problem (1.1) under assumptions (\(\mathcal{H}1\))-(\(\mathcal{H}4\)) with \(p_{i}\geq 2\) for \(i=1,..,N\), \(\bar{p}<N\) and \(\mathcal{G}\not\equiv 0\). If (7.1), (7.7) and (7.8) are in force, then \(u\) is the unique weak solution to problem (1.1)._
**Proof.** The idea of the proof follows the previous theorem. Let \(u\) and \(v\) be two weak solutions to problem (1.1). Let us denote \(w=\left(u-v\right)^{+}\) and \(D=\left\{x\in\Omega:w>0\right\}\).
Let \(\Psi_{\varepsilon}(t)=\Psi_{\varepsilon}^{\sigma}(t)\) with \(\sigma=\min_{i}p_{i}^{\prime}\) defined in (7.5). Supposing that \(D\) has positive measure, we use \(\varphi\Psi_{\varepsilon}(w)\) as test function in the difference of the equations. By (7.7) and (7.8) we get
\[-\sum_{i=1}^{N}\int_{\Omega}\left\{\left[\mathcal{A}_{i} \left(x,\nabla u\right)-\mathcal{A}_{i}\left(x,\nabla v\right)\right]+\left[ \mathcal{B}_{i}\left(x,u\right)-\mathcal{B}_{i}\left(x,v\right)\right]\right\} \partial_{x_{i}}w\Psi_{\varepsilon}^{\prime}(w)\varphi\,dx\] \[\qquad\qquad\leq-\sum_{i=1}^{N}\int_{\Omega}\widehat{\alpha}| \partial_{x_{i}}w|^{p_{i}}\Psi_{\varepsilon}^{\prime}(w)\varphi\,dx+\sum_{i=1 }^{N}\int_{\Omega}\widehat{b}_{i}w\left(|u|+|v|+\zeta\right)^{r_{i}-1}| \partial_{x_{i}}w|\Psi_{\varepsilon}^{\prime}(w)\varphi\,dx,\]
where we put \(r_{i}=\frac{\bar{p}}{p_{i}^{\prime}}\). Using Young inequality with \(\theta<\widehat{\alpha}\) we obtain
\[-\!\!\sum_{i=1}^{N}\int_{\Omega}\left\{\left[\mathcal{A}_{i} \left(x,\nabla u\right)-\mathcal{A}_{i}\left(x,\nabla v\right)\right]+\left[ \mathcal{B}_{i}\left(x,u\right)-\mathcal{B}_{i}\left(x,v\right)\right]\right\} \partial_{x_{i}}w\Psi_{\varepsilon}^{\prime}(w)\varphi\,dx\] \[\qquad\qquad\leq -\sum_{i=1}^{N}\int_{\Omega}(\widehat{\alpha}-\theta)|\partial_{x_{ i}}w|^{p_{i}}\Psi_{\varepsilon}^{\prime}(w)\varphi\,dx+\] \[\qquad\qquad+C(\theta)\!\!\sum_{i=1}^{N}\int_{\Omega}\widehat{b} _{i}^{p_{i}^{\prime}}w^{p_{i}^{\prime}}\left(|u|+|v|+\zeta\right)^{(r_{i}-1)p _{i}^{\prime}}\Psi_{\varepsilon}^{\prime}(w)\varphi\,dx\] \[\qquad\qquad\leq C(\theta)\|\varphi\|_{\infty}\!\sum_{i=1}^{N} \!\varepsilon^{(p_{i}^{\prime}-\min_{i}p_{i}^{\prime})}\int_{\left\{\delta \left(\varepsilon\right)<w<\varepsilon\right\}}\widehat{b}_{i}^{p_{i}^{\prime }}\left(|u|+|v|+\zeta\right)^{(\bar{p}-p_{i}^{\prime})}\,dx=J_{\varepsilon}.\]
We underline that \(\widehat{b}_{i}^{p_{i}^{\prime}}\left(|u|+|v|+\zeta\right)^{(\bar{p}-p_{i}^{ \prime})}\in L^{1}(\Omega)\). Indeed we have
\[\int_{\Omega}\widehat{b}_{i}^{p_{i}^{\prime}}\left(|u|+|v|+\zeta\right)^{(r_{i }-1)p_{i}^{\prime}}\,dx\leq C\left(\|\widehat{b}_{i}\|^{p_{i}^{\prime}}_{L^{ \frac{Np_{i}^{\prime}}{\bar{p}},\infty}(\Omega)}\||u|+|v|\|^{\bar{p}-p_{i}^{ \prime}}_{L^{\frac{N(\bar{p}-p_{i}^{\prime})}{N-\bar{p}},\bar{p}-p_{i}^{\prime }}(\Omega)}+\|\widehat{b}_{i}\|^{p_{i}^{\prime}}_{L^{p_{i}^{\prime}}(\Omega)} \right).\]
The right-hand side is finite, since \(u,v\in L^{\frac{N\left(\bar{p}-p_{i}^{\prime}\right)}{N-\bar{p}},\bar{p}-p_{i}^{ \prime}}(\Omega)\) and \(L^{\bar{p}^{*},\bar{p}}(\Omega)\subset L^{\frac{N\left(\bar{p}-p_{i}^{\prime} \right)}{N-\bar{p}},\bar{p}-p_{i}^{\prime}}(\Omega)\). We conclude as in Theorem 7.1 letting \(\varepsilon\to 0\).
**Remark 7.4**: _Theorem 7.3 holds replacing the condition \(p_{i}\geq 2\) for every \(i=1,\cdots,N\) with \(\frac{\bar{p}}{p_{i}^{\prime}}\geq 1\) for every \(i=1,\cdots,N\)._
### The mixed case: when \(\min_{i}p_{i}\leq 2\) and \(\max_{i}p_{i}>2\).
We end the section studying the mixed case when \(\min_{i}p_{i}\leq 2\) and \(\max_{i}p_{i}>2\). For simplicity of presentation we take into account the model problem (1.11) when \(\min_{i}p_{i}\leq 2\) and \(\max_{i}p_{i}>2\).
**Theorem 7.5**: _Let \(u\in W^{1,\overrightarrow{p}}_{0}(\Omega)\) be a weak solution to problem (1.11) with \(p_{i}>1\) for \(i=1,..,N\), \(\min_{i}p_{i}\leq 2\), \(\max_{i}p_{i}>2\), \(\bar{p}<N\), \(\widetilde{\mu}>0\) and \(\beta_{i}\in L^{\frac{Np_{i}^{\prime}}{\bar{p}},\infty}(\Omega)\). If \(2\min_{i}\{\frac{\bar{p}}{p_{i}^{\prime}}\}\geq 1\), then \(u\) is the unique weak solution to problem (1.11)._
We stress that problem (1.11) verifies assumptions (7.2) and (7.3) for \(p_{i}\leq 2\), while for \(p_{i}>2\) conditions (7.7) and (7.8) are fulfilled. Then we can blend the proofs of Theorem 7.1 and Theorem 7.3 to prove Theorem 7.5. In particular, \(\Psi_{\varepsilon}(t)=\Psi_{\varepsilon}^{\sigma}(t)\) with \(\sigma=\min_{i}\{p_{i}^{\prime},2\frac{\bar{p}}{p_{i}^{\prime}}\}\) defined in (7.5).
## Acknowledgments
The authors are partially supported by GNAMPA of the Italian INdAM (National Institute of High Mathematics).
|
2308.15948 | Exploring Cybercriminal Activities, Behaviors and Profiles | While modern society benefits from a range of technological advancements, it
also is exposed to an ever-increasing set of cybersecurity threats. These
affect all areas of life including business, government, and individuals. To
complement technology solutions to this problem, it is crucial to understand
more about cybercriminal perpetrators themselves, their use of technology,
psychological aspects, and profiles. This is a topic that has received little
socio-technical research emphasis in the technology community, has few concrete
research findings, and is thus a prime area for development. The aim of this
article is to explore cybercriminal activities and behavior from a psychology
and human aspects perspective, through a series of notable case studies. We
examine motivations, psychological and other interdisciplinary concepts as they
may impact/influence cybercriminal activities. We expect this paper to be of
value and particularly insightful for those studying technology, psychology,
and criminology, with a focus on cybersecurity and cybercrime. | Maria Bada, Jason R. C. Nurse | 2023-08-30T10:57:19Z | http://arxiv.org/abs/2308.15948v1 | # Exploring Cybercriminal Activities, Behaviors and Profiles
###### Abstract
While modern society benefits from a range of technological advancements, it also is exposed to an ever-increasing set of cybersecurity threats. These affect all areas of life including business, government and individuals. To complement technology solutions to this problem, it is crucial to understand more about cybercriminal perpetrators themselves, their use of technology, psychological aspects, and profiles. This is a topic that has received little socio-technical research emphasis in the technology community, has few concrete research findings, and is thus a prime area for development. The aim of this article is to explore cybercriminal activities and behavior from a psychology and human aspects perspective, through a series of notable case studies. We examine motivations, psychological and other interdisciplinary concepts as they may impact/influence cyber-criminal activities. We expect this paper to be of value and particularly insightful for those studying technology, psychology, and criminology, with a focus on cybersecurity and cybercrime.
Keywords:Cybersecurity, cyber psychology, cognition, human aspects, cybercrime, cybercriminal, online offender, behavior.
## 1 Introduction
Cybercrime has grown substantially in the last 18 months, and has impacted businesses, members of the public, and governments alike. While the trajectory of cyber
attacks has been on the rise for a number of years, the increased digitization that has emerged as a result of COVID-19 (SARS-CoV-2), the stress and uncertainty caused in the population by the pandemic, and the general challenges of securing remote workforces have led to significant issues of online crime [34; 21]. One study has reported that cybercrime increased 600% due to the COVID-19 pandemic [41], and in some countries (e.g., the UK) this rise has led to record numbers of attacks faced by society [43]. International and regional policing organizations (e.g., Interpol and Europol) have thus warned businesses and individuals about these attacks, and released guidance on staying safe and boosting threat response and cyber hygiene.
To understand the nature of cybercrime, it is imperative to examine the threat actors or offenders behind the crimes, what motivates them, their behaviors and profiles. This area of research has often been referred to as cybercriminal (or online offender) understanding or profiling, and tends to mirror the offline, and more traditional action of criminal profiling [51]. In this regard, and extending upon prior works [3; 15; 52], we consider cybercriminal profiling/understanding to generally be an educated attempt to define information about the person who committed a cybercrime, which considers their characteristics, patterns or other factors of uniqueness.
While there has been an increasing amount of research in the cybercriminal space, this topic has received little socio-technical research emphasis in the technology community, has few concrete research findings, and is thus a prime area for development. Bada & Nurse [3] summarized the outstanding challenges with specific mention of the need to explore the actions and personality traits apparent from certain online criminal behaviors; a factor also driven by the lack of studies drawing on actual data linked to behavioral profiles.
The aim of this article therefore is to investigate cybercriminal activities and behavior from a socio-technical (psychology and human aspects) perspective, through reflecting on the state of the art as well as a series of notable cybercriminal case studies. This work considers the motivations of online offenders, and psychological and other interdisciplinary concepts as they may impact/influence cybercriminal actions. The remainder of this contribution is as follows. Section 2 reflects on the threat of cybercrime more broadly, outlines the main types of attack and the traditional threat actors commonly discussed in research and practice. Section 3 examines several cybercriminal cases in detail drawing on cyberpsychology, cognition, human aspects and cybersecurity research, to identify characteristics and profiles of offenders. Finally, Section 4 concludes this article and highlights key aspects when exploring cybercriminal activities, behaviors and profiles.
## 2 The Threat of Cybercrime: Actions and Actors
Cybercrime is often used in the media and in research to refer to a range of crimes conducted in the online (or cyber) space. The reality, however, is that these crimes are extremely varied. The UK's Crown Prosecution Service (CPS) deconstructs cybercrimes in two primary types: Cyber-dependent crimes and Cyber-enabled crimes.
Cyber-dependent crimes are: "crimes that can be committed only through the use of Information and Communications Technology ('ICT') devices, where the devices are both the tool for committing the crime, and the target of the crime (e.g. developing and propagating malware for financial gain, hacking to steal, damage, distort or destroy data and/or network or activity)" [50]. These include hacking, causing disruption due to malware, and the use of botnets for service disruption. Alternately, Cyber-enabled crimes are: "traditional crimes which can be increased in scale or reach by the use of computers, computer networks or other forms of ICT" [50]. Examples of these crimes include online fraud, data theft, cyber harassment, and child sexual offences.
The characterization above has evolved since early work on cybercrime (e.g., [13]) but there are still various similarities, particularly the focus on technology versus other aspects (Gordon & Ford for instance, refer to a continuum with technology crime on one side and people crime on the other [13]). Other research overlooks high-level categorizations and concentrates on the specific actions/crimes. Relevant examples include Stabek et al. [46] who examine specific scam types, Nurse [33] that explores crimes against individuals, and Chiew et al. [6] who assess the nature (types, vectors, approaches) of phishing attacks.
Behind criminal actions (be they referred to as crimes or cyber-attacks) are perpetrators who are responsible for planning, orchestration or execution. Initial characterizations of these individuals centred on high-level groupings, such as script kiddies, hackers, fraudsters, insider threats, hacktivists, and nation states. In that context, script kiddies were typically viewed as the lowest skilled and resourced, while nation states were at the other end of the spectrum.
Today, online offenders and attack perpetrators share some similarities with the groupings above but their profiles are also often much more nuanced. For instance, research has examined the psyche of cybercriminals [4; 18; 42] and the theories behind why cybercrime occurs [38; 47], and other work has investigated attackers in depth--be it on the presence of the hacktivist group Anonymous online [16] or nation state Advanced Persistent Threats (APTs) [32]. Considering the psychology of perpetrators themselves, online criminal behavior has been related to psychopathy and other antisocial behaviors [44], persons high on Machiavellianism (one of the three Dark Triad personality traits) have been shown as more likely to engage in criminal behavior [45], and we have found relationships cited between cybercriminal actions and conditions such as autism [25]. These all point to the importance of exploring perpetrators as a part of understanding cybercrime.
Theories of crime developed by the field of cyberpsychology such as the online disinhibition effect [48] can also be considered relevant to understanding why an individual may engage in online criminal acts, however, its usefulness depends on the type of cybercrime considered. Neutralizations [49] from offenders offering explanations for crimes that they would normally consider to be morally unacceptable are common in different types of crime including cybercrime. Such excuses can include denying responsibility for their actions or denial of injury to the victim. In summary, the reality is that developing a better understanding of the persons behind cybercrimes is key for research and practice.
## 3 Cybercriminal Case Studies
### Overview and Method of Analysis
In this study our method of analysis drawns on the different factors and abilities described in models such as the Deductive Cybercriminal Profile Model [35] and the Theoretical Model of Profiling a Hacker [24]. These models guide the collection of information required in order to create a holistic profile. In general, they propose that in order to form a psychological profile of an offender, different factors need to be considered: a) biological factors and the external environment which influences an individual; b) intelligence; c) personality; d) social abilities; and e) technical abilities. The theoretical model of profiling a hacker [24] also includes factors such as: f) motivation for offending; g) the method of the attack; and h) the effectiveness of the attack.
Below we will present cases of persons identified in the literature (at one point or another) as real cyber offenders, and describe their characteristics, traits, motivations and behaviors. This approach will also allow for a reflection on the similarities and differences among the different cases. When analysing the cases, theories of the Dark Triad/Tetrad [9], the HEXACO model of personality [27] and theories of crime will be utilised as well. Readers should note that we intentionally do not directly name the persons that we present given the sensitivity of the topic. Moreover, we present point in time analyses based on literature and existing reports. This is worth noting because people change (e.g., some once-famous hackers are now well-respected security professionals), and secondly, we rely on reports for our reflection (thus, rely on the accuracy of the reports we draw on).
### Case 1
Case 1 was known as the first cybercriminal in the US, releasing the first 'worm' on the internet in 1988 whilst attending Cornell University [31]. Utilising the Unix Sendmail program, he reportedly altered it to replicate itself, and it caused computers to crash (with as many as 6,000 computers impacted).
_Skills_: Case 1 studied computer science and graduated from Harvard. At Harvard, Case 1 was reportedly known for his technological skills, but also his social skills [31]. After graduating, he continued his studies at Cornell; he later developed a malicious program which was released via a hacked MIT computer [31].
_Characteristics and Psychological Traits_: Case 1's father was an early innovator at a technology lab, so he grew up immersed in computers [31]. Case 1 reportedly was the type of student who found homework boring and therefore focused his energy on programming; he also preferred to work alone [22]. This rather agrees with findings indicating that personality traits such as introversion are associated with online criminal behavior [44].
_Motivation_: According to reports, Case 1 claimed that his actions did not have malicious intent but rather his aim was to point out the safety issues and vulnerabilities of systems [36]. The worm did not damage or destroy any files, but it slowed down University functions causing substantial economic losses [31]. The network community tried several techniques in order to understand the worm and to remove it from their systems. Some of the affected institutions disconnected their computers while others had to reset their systems. Case 1, however, was not imprisoned but he was sentenced to probation for three years and also community service [36].
### Case 2
Case 2 was a teen hacker, a computer programmer and the founder of a non-profit organisation that publishes leaks. As covered by [23], during his studies in Australia he lived in a student house where he spent much of his time dreaming of setting up a new way to disseminate classified information. By 1991 Case 2 was reportedly one of the most accomplished hackers in Australia [23].
_Skills_: He was characterised by high, analytical intelligence. In 1991, he reportedly formed a hacking group called the International Subversives [8]. During this time, he hacked into Military Institutions, such as MILNET, the US military's secret defence data network, and Universities [23]. According to reports, his father had a highly logical intellect which Case 2 is said to have inherited from him [11].
_Characteristics and Psychological Traits_: Articles (e.g., [23]) note that, as a student, Case 2 was not much interested in the school system. In terms of his personality, resources state that he lacked social skills, had a dry sense of humour, and at times forgot basic hygiene behaviors [23].
Case 2 reportedly disregarded those he disapproved of, could easily get angry, and had instant mood changes [23]. Eysenck's Theory of Crime proposes that personality traits such as Psychoticism (being anti-social, aggressive and uncaring), Extraversion (seeking sensation) and Neuroticism (being unstable in behavioral patterns) indicate a personality susceptible to criminal behavior [14]. However, in Case 2 we may also see a similar pattern as in Case 1: a sense of superiority seen in narcissistic personalities [9].
_Motivation_: In terms of the motive behind Case 2, according to the prosecution during his trial at the Victoria County Court in Melbourne, it was "simply an arrogance and a desire to show of his computer skills" [23]. Case 2 pleaded guilty to 24 counts of hacking [23].
### Case 3
Case 3 was a known ex-member of the group Anonymous. This group is referred to as hacktivists, who sometimes utilise criminal acts to pursue particular motives. Case 3 was found to be a member when he gave his identity willingly to the police during an attempt to pursue justice in a rape case [1]. The back story involves a football team in the US that was accused of raping a 16-year-old girl, but whose members were not prosecuted, despite evidence (see [20]). This led to Anonymous' hacking of the football website and the email of someone affiliated with the team, revealing indecent images of young women. Case 3 was noted to be one of the main activists behind these events [20].
_Skills_: While Case 3 reportedly dropped out of school, he showed a keen interest in computers/technology, teaching himself how to code for instance [20].
_Characteristics and Psychological Traits_: Case 3 was reported to be shy and a frequent target of bullying at school, and to have experienced violent episodes during adulthood [20]. Research [54] has suggested that bullying is linked to altered cognitive responses to stressful and threatening situations. Further, [37] noted that the presence of school problems during adolescence may contribute to criminal behavior. Case 3 was reportedly unstable during his teenage years: he formed a gang to bully the bullies, had drinking issues and spent some time homeless [10]. These behaviors could potentially indicate personality traits such as neuroticism and psychoticism, as defined by Eysenck's theory [14]. In addition, as the Five Factor Model [7] and the HEXACO Model [27] describe, an individual low in agreeableness may tend to be critical, hostile and aggressive. In this case these traits may be portrayed by being critical of others and speaking of injustice.
_Motivation_: Case 3 claimed his motives were for justice, defending the victims being targeted. He spoke of a few cases of hacking he conducted under the signature Anonymous mask [20]. He claimed he would target bullies; those who also used technology to harm others. Reflecting on this case, there is again a possible implication that he was better suited than law enforcement to manage such a situation. This self-justification of labelled criminal acts potentially suggests narcissistic personality traits [9].
It is likely that this individual found a sense of power through hacking, something he may have not had as a child when he himself was the victim. Reports [10] note that Case 3 optimised the overall Anonymous group persona, hiding his face, creating a false name, and posting videos online with distortions to protect his identity. It is such a persona that can facilitate such behavior online [48].
### Case 4
Case 4 was a hacktivist reported to be responsible for hacking into a large number of government computer systems, such as the FBI, stealing large amounts of data [39].
_Skills_: Activism and hacking were a noteworthy theme in Case 4's life. According to resources, by the age of 8 he had enough skill to rewrite the computer code of applications [12]. Case 4 and his sibling enjoyed playing video games, and this led them to find techniques to cheat the technology so that they would always win [28]. Early on during his education he appears to have become bored, and in
lower school he was assigned a dedicated tutor because, as he stated, "there was nothing left to teach me on the curriculum" [12]. Case 4 studied computer science at A-level and at university. As he stated, "One of the things that attracted me to computers is that they are consistent and make sense. If it doesn't do what you think it should do, you can eventually figure out why and it's perfectly rational and reasonable" [12].
_Characteristics and Psychological Traits_: His professional development has been impacted by his symptoms of depression which, from reports, appears to have played some part in him leaving university twice [39]. When he was 29 he was diagnosed with Asperger's syndrome [39]. As he stated, "It's a bit morbid to count the number of times you've had suicidal thoughts, but it was getting to be six to 12 times a day at a peak last winter" [17]. Reportedly, for him hacking was a form of problem-solving exercise which could have an impact and effect change, just like activism. Research has posited that "increased risk of committing cyber-dependent crime is associated with higher autistic-like traits"; however, a diagnosis of autism is not necessarily associated with an increased risk of committing such crime [40].
_Motivation_: Regarding motivation, it is useful to consider some of the key quotes related to this case. In [39] for instance, Case 4 is reported as saying that a hacktivist's ideology "began to shape his philosophy deeply". It continued, "I started to see the power of the internet to make good things happen in the world". Once again we see a potential sense of a push to use skills for a purpose. In addition, in a sense one may note a tendency for neutralisation in terms of the potential consequences of his actions [49].
### Case 5
Case 5 was a systems administrator and hacker. He was reportedly accused in 2002 of hacking into a large number of military and NASA computers during a period of 13 months [53]. He became famous in the UK after a protracted attempt by the USA government to have him extradited ultimately ended in failure [30].
_Skills_: Case 5 got a computer and practised his technical skills from the age of 14 [30]. After he finished school he went on to become a hairdresser. However, reports [30] note that his friends later persuaded him to study computers. Following this advice, he completed a computing course and subsequently started work as a contractor in computing. He continued his training in programming, and it was these programming skills that he is assumed to have later utilised to hack into government computer systems [30].
_Characteristics and Psychological Traits_: Case 5 was diagnosed with Asperger's syndrome during his trial [30]. This diagnosis lends some explanation to his personality. Reports suggest that Case 5 was introverted and hated leaving his flat [19; 26]. Like many people with Asperger's, there is often a development of highly focused interests. His mother described him as "obsessive, naive, intelligent,... highly introverted, prone to obsessions and meltdowns and fearful of confrontation" according to one article [26]. As covered by [19], his diagnosis may explain his behavior, which seemed unreasonable to others.
Case 5 did not see himself as a hacker and was acting alone. Reports note that, obsessed with UFOs since childhood, he was convinced that the US was suppressing alien technology and evidence of UFOs [19]. As he said, "I'd stopped washing at one point. I wasn't looking after myself. I wasn't eating properly. I was sitting around the house in my dressing gown, doing this all night" and continued, "I almost wanted to be caught, because it was ruining me. I had this classic thing of wanting to be caught so there would be an end to it" [30].
Overall, once again there may be a push or entitlement to use skills for an important purpose as seen in other Cases above (with entitlement linked to other psychological factors [9]). Personality traits such as neuroticism, as defined by Eysenck's theory [14] are associated with traits such as, depression, anxiety, low-self-esteem, shyness, moodiness, and emotionality. Personality traits such as introversion and neuroticism have also been associated with online criminal behavior [44].
_Motivation_: In terms of the motive, Case 5 may have committed his acts due to his tendency to form obsessions. He was noted to be obsessed with space and UFOs, and as said above became convinced that the American government was hiding their existence. Allegedly therefore, he hacked into USA military and NASA systems ultimately to prove to himself that UFOs existed [30]. He admitted hacking into US computers but says he had been on a "moral crusade" to find classified documents about UFOs [30]. Noting his comments: "I found out that the US military use Windows and having realised this, I assumed it would probably be an easy hack if they hadn't secured it properly" [30].
## 4 Discussion and Conclusion
In exploring how cybercrime occurs, a key component is understanding the nature of attacks and the individuals/actors who have conducted them. This chapter advanced the discussion on cybercriminals (online offenders) with reflection on pertinent literature and an analysis of five prominent cases. From this work, we identified a number of key technology skills that individuals attained throughout their lifetimes, especially in younger years (e.g., Cases 3 and 4). This is by no means definitive but does pose some interesting questions regarding pathways to cybercrime; some of which have been explored before [2; 29].
There were a range of characteristics and psychological traits covered in the cases including boredom and challenges at school, lower social skills, instability in teenage years, and conditions such as Asperger's syndrome. Some research (e.g., [5; 40]) has sought to investigate the links between these factors and online offenders, but clearly more is needed to understand the area given the increasing number and variety of online attacks. To consider the motivation of the cases, in a number of situations there is a push or feeling of entitlement present. This is notable for numerous reasons, but one of the most intriguing is the desire to find the truth or to prevent injustice. These
motivations - as studied here - are quite different to those of several cybercriminal gangs (e.g., those involved in ransomware or fraud) for instance, who are more motivated by finances.
There are various avenues in the area of cybercriminal profiling and understanding where more research is needed. One of the most important of these is a natural extension of this research and involves a critical examination of a larger set of offender cases. In this work, we concentrated on a number of notorious cases to demonstrate what can be done with openly available reports and data. However, other work could engage with individuals firsthand to understand their profiles and experiences. This may be more representative and not limited (or unduly biased) by cases that feature in the media. Embedding cognitive science and technology into these analyses would provide value for researchers from both fields, and contribute significantly to a more nuanced understanding of cybercrime and its prevention.
## Biography
**Maria Bada** is a Lecturer in Psychology at Queen Mary University in London and a RISCS Fellow in cybercrime. Her research focuses on the human aspects of cybercrime and cybersecurity, such as profiling online offenders, studying their psychologies and pathways towards online deviance as well as the ways to combat cybercrime through tools and capacity building. She is a member of the National Risk Assessment (NRA) Behavioural Science Expert Group in the UK, working on the social and psychological impact of cyber-attacks on members of the public. She has a background in cyberpsychology, and she is a member of the British Psychological Society and the National Counselling Society.
**Jason R.C. Nurse** is an Associate Professor in Cyber Security in the School of Computing at the University of Kent, UK and the Institute of Cyber Security for Society (iCSS), UK. He also holds the roles of Visiting Academic at the University of Oxford, UK and Associate Fellow at the Royal United Services Institute for Defence and Security Studies (RUSI). His research interests include security risk management, corporate communications and cyber security, secure and trustworthy Internet of Things, insider threat and cybercrime. He has published over 100 peer-reviewed articles in internationally recognized security journals and conferences.
|
2306.14122 | Chain-of-Thought Prompt Distillation for Multimodal Named Entity
Recognition and Multimodal Relation Extraction | Multimodal Named Entity Recognition (MNER) and Multimodal Relation Extraction
(MRE) necessitate the fundamental reasoning capacity for intricate linguistic
and multimodal comprehension. In this study, we explore distilling the
reasoning ability of large language models (LLMs) into a more compact student
model by generating a \textit{chain of thought} (CoT) -- a sequence of
intermediate reasoning steps. Specifically, we commence by exemplifying the
elicitation of such reasoning ability from LLMs through CoT prompts covering
multi-grain (noun, sentence, multimodality) and data-augmentation (style,
entity, image) dimensions. Subsequently, we present a novel conditional prompt
distillation method to assimilate the commonsense reasoning ability from LLMs,
thereby enhancing the utility of the student model in addressing text-only
inputs without the requisite addition of image and CoT knowledge. Extensive
experiments reveal that our approach attains state-of-the-art accuracy and
manifests a plethora of advantages concerning interpretability, data
efficiency, and cross-domain generalization on MNER and MRE datasets. | Feng Chen, Yujian Feng | 2023-06-25T04:33:56Z | http://arxiv.org/abs/2306.14122v3 | Chain-of-Thought Prompt Distillation for Multimodal Named Entity Recognition and Multimodal Relation Extraction
###### Abstract
Multimodal Named Entity Recognition (MNER) and Multimodal Relation Extraction (MRE) necessitate the fundamental reasoning capacity for intricate linguistic and multimodal comprehension. In this study, we explore distilling the reasoning ability of large language models (LLMs) into a more compact student model by generating a _chain of thought_ (CoT) - a sequence of intermediate reasoning steps. Specifically, we commence by exemplifying the elicitation of such reasoning ability from LLMs through CoT prompts covering multi-grain (noun, sentence, multimodality) and data-augmentation (style, entity, image) dimensions. Subsequently, we present a novel conditional prompt distillation method to assimilate the commonsense reasoning ability from LLMs, thereby enhancing the utility of the student model in addressing text-only inputs without the requisite addition of image and CoT knowledge. Extensive experiments reveal that our approach attains state-of-the-art accuracy and manifests a plethora of advantages concerning interpretability, data efficiency, and cross-domain generalization on MNER and MRE datasets.
## 1 Introduction
Multimodal named entity recognition (MNER) [23, 14] and multimodal relation extraction (MRE) [3, 15] aim to use auxiliary visual clues from images to improve the recognition of multisense or out-of-vocabulary words/relations. However, text-image pairs, amassed from diverse real-world domains such as social media, film critiques, and news articles, pose challenges in comprehending linguistic context and multimodal relationships. A majority of prior methodologies [23, 24, 25] employ retrieval augmentation from Wikipedia or sample databases to seek pertinent knowledge concerning images and text, thereby facilitating model reasoning. For instance, KB-NER [23] builds a multilingual knowledge base based on Wikipedia to provide related context to the NER model. MoRe [23] utilizes KNN with ViT-B/32 from CLIP [1] to retrieve the top-k related images from Wikipedia. Consequently, the knowledge procured from these methods exhibits inconsistencies with the current domain [11]. For example, the definition of 'Harry Potter' in Figure 1 (a) is misleading, and the retrieved images in (b) tend to be irrelevant for understanding the query sample. Additionally, these approaches are indirect for interpreting image-text pairs, given that the retrieved knowledge might exhibit semantic discrepancies from the query in the task at hand. As shown in Figure 1 (b), the retrieved text from the sample database is hard for understanding the original example. Finally, few of the existing methods provide auxiliary multimodal knowledge that jointly explains the text and image, which is important for MNER and MRE.
Figure 1: Illustration of retrieving knowledge from Wikipedia, sample database and LLMs.
Recently, large-scale models have demonstrated remarkable performance on intricate tasks that simulate the reasoning process one might employ when addressing a problem Yao et al. (2023). Thus, this study investigates the distillation of the reasoning capabilities of large language models (LLMs) into a small model through Chain-of-Thought Prompt Distillation (CoTPD). Previous studies Brown et al. (2020); Kojima et al. (2022); Wei et al. (2022) reveal that the _chain of thought_ (CoT) approach favorably elicits multi-step reasoning abilities from LLMs. By generating intermediate natural language rationales that culminate in the final response via CoT prompting, these _chain of thought_ demonstrations enable models to delineate a reasoning pathway that deconstructs intricate reasoning into several simpler steps. In this paper, our objective is to harness CoT prompts to synthesize explicit and direct CoT knowledge for understanding each sample, and subsequently distill the reasoning capabilities of LLMs into the student model using such CoT knowledge.
**How is CoT knowledge synthesized?** We propose prompting LLMs to interpret each sample in multi-grain, and data-augmentation perspectives. We term the combination of demonstrations with respect to different CoT prompts as CoT knowledge. Multi-grain CoT knowledge includes noun, sentence, and multimodality perspectives. The noun perspective inquires about the definitions of potential entities and specialized vocabulary in the text. The sentence perspective elucidates ambiguous semantics in the original text and supplies necessary background information. The multimodality perspective enables LLMs to interpret the correlation between images and text, explicitly and jointly determining whether an image is helpful for understanding the text.
In addition to multi-grain reasoning, our method inherits the zero-shot reasoning abilities of LLMs through data augmentation. Existing text-based NER methods Chen et al. (2021, 2020) typically employ rule-based data augmentation to address low-resource and cross-domain challenges. However, the augmented sample is usually homogeneous and unrealistic. In this study, we extend this strategy to the multimodal case through fact-based entity, style, and image augmentation. LLMs are encouraged to perform augmentation according to common sense, enabling the generation of augmented samples that are both diverse and realistic. To the best of our knowledge, our research is the first attempt to utilize multimodal data augmentation in MNER and MRE.
**How to distill reasoning ability using CoT knowledge?** We believe that, by combining a text-image pair with CoT knowledge as knowledge-enhanced input, i.e., [TXT] + [IMG] + [Knowledge], a small model can more easily recognize the entity and relation. As shown in Figure 1 (c), CoT knowledge can effectively infer the vague relation between 'Harry Potter' and the boy. However, such reasoning ability of the LLM is not inherited by the small model, since it simply summarizes the extensive analysis into the final answer, rather than figuring everything out by itself. To solve this issue, we propose a conditional prompt distillation. Specifically, we leverage prompt learning to hint the student model by combining text and a learnable conditional prompt as input, i.e., [TXT] + [Prompt]. By aligning the output distributions predicted from the knowledge-enhanced input view and the prompt-enhanced input view, the contextual CoT knowledge is expected to be distilled into the parameterized prompt. This allows our method to be more practical in dealing with text-only inputs.
The contributions of our method can be summarized in three aspects:
1 We propose a simple but effective method to distill the reasoning ability of LLMs into the student model, which largely improves performance at trivial additional cost.
2 Additionally, we propose a fact-based multimodal data augmentation strategy, which exhibits effectiveness in low-resource and cross-domain settings.
3 Our method substantially surpasses existing state-of-the-art approaches with significant improvement in MNER and MRE.
## 2 Related Work
**Chain-of-Thought:** Recently, _chain of thought_ (CoT) has been widely used to elicit the multi-step reasoning ability of LLMs, encouraging the LLMs to generate intermediate reasoning chains when solving a problem. Zero-shot CoT Kojima et al. (2022) prompts LLMs with 'Let's think step by step' to facilitate arithmetic, commonsense, and symbolic reasoning. Tree-of-Thoughts Yao et al. (2023) explores performing deliberate decision-making by considering multiple different reasoning paths and self-evaluating choices. In this paper, we propose to use CoT to interpret each sample, where the response from the LLM is used as reliable knowledge to help the model understand.
**Knowledge-based NER and RE** Existing NER and RE methods tend to exploit additional knowledge to assist model reasoning. For example, [22] obtains external contexts of a sentence by retrieving and selecting a set of semantically relevant texts through a search engine. Later, Wang et al. [23] extend it to the visual case: MoRe utilizes KNN with ViT-B/32 from CLIP to retrieve the top-k related images from Wikipedia. However, the retrieved knowledge is still vague or misleading for a model to understand. Thus, we propose to use LLMs as knowledge providers, which can accurately explain every detail of each sample.
## 3 Our Method
### Motivation
Large language models (LLMs), with their capacity to interpret and process human language in a manner akin to human cognition, facilitate the relation understanding of complex tasks. However, the enormous size of LLMs renders them impractical for industrial use. In the present study, we endeavor to explore an efficient methodology to distill the reasoning prowess of LLMs into a compact model that retains superior performance while ensuring minimal inference latency.
However, traditional knowledge distillation methods [1] are ill-suited to this goal. The reasons are mainly twofold. (1) Prevailing LLMs are predominantly generative models that either produce unstructured output logits [15] or are solely accessible through an official API 1. (2) Existing LLMs exhibit suboptimal performance on NER and RE tasks [11, 12], attributable to their lack of training under task-specific supervision. To address these issues, we introduce Chain-of-Thought Prompt Distillation (CoTPD), a novel approach designed to distill explicit CoT knowledge into the student model. This method seeks to facilitate knowledge transfer from the parameterized LLM \(\overset{prompt}{\longrightarrow}\) contextual demonstration \(\overset{distillation}{\longrightarrow}\) parameterized student model. Moreover, by training on NER and RE tasks, the student model can obtain promising performance with the help of CoT knowledge.
Footnote 1: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt)
In addition, existing methods [23, 24, 25] typically demonstrate their intuition and superiority through case studies or Class Activation Mapping (CAM) visualization. However, these strategies still fall short in providing comprehensive interpretability about how models interpret the multimodal context to reach the final prediction [23]. In this investigation, we demonstrate that CoT knowledge, which is akin to how humans understand each sample, benefits the model performance in various domains and tasks.
Figure 2: Overview of extracting multi-grain CoT knowledge and data augmentation CoT knowledge via querying LLMs with noun, sentence, multimodality, style, entity, and image CoT prompts.
### Overview of Our Method
Given a text and image pair \((\mathbf{x},\mathbf{I})\) where \(\mathbf{x}=\{x_{1},x_{2},..,x_{n}\}\) has \(n\) words, our method aims to predict the outputs \(\mathbf{y}\), which could represent the label sequence or the relations for the NER and RE tasks, respectively. As shown in Figure 2, we initially employ image captioning technology, i.e., BLIP2 Li et al. (2023), to convert the image \(\mathbf{I}\) into an image caption. Subsequently, we utilize langchain2 with predefined CoT prompts to query the LLM, thereby acquiring CoT knowledge \(\mathbf{c}\) pertaining to each sample. Ultimately, the student language model amalgamates the text-image pair with CoT knowledge as the knowledge-enhanced view, and concatenates the text with a learnable conditional prompt to create a prompt-enhanced view. The objective is to minimize the Kullback-Leibler divergence over the probability distributions of the two views, thereby distilling the contextual CoT knowledge into the parameterized prompt.
Footnote 2: [https://github.com/hwchase17/langchain](https://github.com/hwchase17/langchain)
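For the captioning step, a minimal sketch with the public BLIP-2 checkpoint on Hugging Face is shown below; the specific checkpoint (Salesforce/blip2-opt-2.7b) and generation settings are illustrative assumptions rather than the exact configuration of our experiments:

```python
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Load the public BLIP-2 checkpoint (checkpoint choice is illustrative).
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
).to("cuda")

def caption_image(path: str) -> str:
    """Converts the image I into a caption that stands in for raw pixels."""
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)
    generated_ids = model.generate(**inputs, max_new_tokens=30)
    return processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
```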
### CoT Prompts for LLMs
**Multi-grain CoT Prompts.** To fully understand the text-image pair, we design CoT prompts in noun, sentence, and multimodality perspectives to query LLMs3, as shown in Figure 2.
Footnote 3: Weaker LLMs may need more precise descriptions in the prompts. We take GPT4 on the Twitter dataset as an example.
_Noun:_ We query LLMs with 'Help me explain the meaning of special words for understanding. + \(\mathbf{x}\)', which can explain potential entities, slang, and terminology.
_Sentence:_ For each sample, we query LLMs with 'Explain the sentence to me with necessary background. + \(\mathbf{x}\)'. It can explain the sentiment, cause, and subject of users.
_Multimodality:_ Similarly, we obtain multimodal relation by prompt LLMs with 'What is the relation between the text and the attached image? + \(\mathbf{x}+\mathbf{I}\)'. In this way, LLM can illustrate the vague relation between entity and object and explicitly determine whether the visual information is noise.
**Data Augmentation CoT Prompts.** We also leverage LLMs to do fact-based multimodal data augmentation, involving linguistic style, entity, and image perspectives.
_Style:_ We query LLMs with 'Transform the sentence in Twitter style without changing the meaning. + \(\mathbf{x}\)' It diversifies the textual expression with consistent entity and meaning.
_Entity:_ Initially, we adhere to Wu et al. (2022) to substitute the entity in \(\mathbf{x}\) with an entity of the same type sourced from the MNER dataset. For instance, we replace the entity 'Tobey Maguire [PER]' with 'Barack Obama [PER]'. Then, we assess the factual accuracy of the augmented sample by prompting the LLM with the query 'Whether the sentence is possible in fact, answer yes or no. + \(\mathbf{x}\)'. We retain only the samples that receive a positive response indicating factual plausibility.
_Image:_ We generate plausible visual information for the text by querying LLMs with 'What is a possible image with the text in a tweet? + \(\mathbf{x}\)'.
We unify multi-grain CoT knowledge with the original image and text pair. For data augmentation CoT knowledge, we reorganize it as a new sample. Notably, the total knowledge may exceed the max token limit of the student model; therefore, we first summarize the over-length knowledge.
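To make the querying procedure concrete, the sketch below collects the prompt templates from this section; `query_llm` is an assumed wrapper around the chat API (e.g., gpt-3.5-turbo via langchain), and the assembly and summarization heuristics (such as the 400-word proxy for the 512-token budget) are illustrative choices rather than the exact released pipeline:

```python
# CoT prompt templates, verbatim from this section; query_llm is an assumed
# helper that sends a prompt string to the LLM and returns its answer.
COT_PROMPTS = {
    "noun": "Help me explain the meaning of special words for understanding.",
    "sentence": "Explain the sentence to me with necessary background.",
    "multimodality": "What is the relation between the text and the attached image?",
    "style": "Transform the sentence in Twitter style without changing the meaning.",
    "entity_check": "Whether the sentence is possible in fact, answer yes or no.",
    "image": "What is a possible image with the text in a tweet?",
}

def build_cot_knowledge(text: str, caption: str, query_llm) -> str:
    """Collects the multi-grain CoT knowledge c for one text-image pair."""
    pieces = [
        query_llm(f"{COT_PROMPTS['noun']} {text}"),
        query_llm(f"{COT_PROMPTS['sentence']} {text}"),
        query_llm(f"{COT_PROMPTS['multimodality']} {text} [Image] {caption}"),
    ]
    knowledge = " ".join(pieces)
    # Summarize if the combined knowledge would exceed the student's token limit.
    if len(knowledge.split()) > 400:  # rough proxy for the 512-token budget
        knowledge = query_llm(f"Summarize the following text: {knowledge}")
    return knowledge
```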
### Conditional Prompt Distillation
To distill CoT knowledge \(\mathbf{c}\) into a compact model, we first try multi-view alignment Wang et al. (2021) to match the text-only output distribution and the text-image-CoT output distribution. However, we find it results in suboptimal performance (Sec 4.3). We believe the reasons are twofold: (1) CoT knowledge is higher-level context than previously retrieved knowledge. (2) The knowledge is sample-specific, which requires extra parameters to distill. Therefore, inspired by prompt learning Li and Liang (2021); Zhou et al. (2022), we propose a conditional prompt distillation conditioned on the text to hint the student model adjustably.
We introduce our conditional prompt generator, which is composed of a transformer decoder with learnable queries \(\mathbf{q}\in\mathbb{R}^{N\times d}\). The conditional prompt \(\mathbf{p}\) is generated by:
\[\mathbf{p}=Softmax(\frac{(\mathbf{q}W_{q})(\mathbf{x}W_{k})^{T}}{\sqrt{d}})\cdot\mathbf{x}W_{ v}, \tag{1}\]
where \(W_{q},W_{k}\), and \(W_{v}\) are learnable projectors for query, key, and value. We concatenate text, image caption, and CoT knowledge as the knowledge-enhanced view \([\mathbf{x};\mathbf{I};\mathbf{c}]\). Similarly, the prompt-enhanced view is composed by concatenating text and conditional prompt as \([\mathbf{x};\mathbf{p}]\). We feed these two views to the same text encoder \(E\) separately:
\[\begin{split} H_{k}=\{h_{1},..,h_{n},..,h_{n+l}\}=E([\mathbf{x};\mathbf{I };\mathbf{c}]),\\ H_{p}=\{h_{1},..,h_{n},..,h_{n+l}\}=E([\mathbf{x};\mathbf{p}]),\end{split} \tag{2}\]
where \(H=\{h\}_{1}^{n+l}\) denotes the output of the encoder with max token limit \(n+l\). The subscripts \(k\) and \(p\) represent the knowledge-enhanced and prompt-enhanced views respectively. The first \(n\) tokens correspond to the embedding of the text part in the two views, and are used for the subsequent NER or RE prediction.
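The following is a minimal single-layer PyTorch sketch of the generator in Eq. (1); the actual module is a transformer decoder, so the depth, normalization, and initialization used here are our assumptions.

```python
import torch
import torch.nn as nn

class ConditionalPromptGenerator(nn.Module):
    """Cross-attention from N learnable queries to the text embeddings, Eq. (1)."""
    def __init__(self, num_queries: int, dim: int):
        super().__init__()
        self.q = nn.Parameter(torch.randn(num_queries, dim))  # learnable queries q
        self.W_q = nn.Linear(dim, dim, bias=False)
        self.W_k = nn.Linear(dim, dim, bias=False)
        self.W_v = nn.Linear(dim, dim, bias=False)
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n, dim) text embeddings; returns the prompt p: (batch, N, dim)
        q = self.W_q(self.q).unsqueeze(0)                 # (1, N, dim)
        k, v = self.W_k(x), self.W_v(x)                   # (batch, n, dim)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v
```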
Then, we reduce the gap between these two views over the output distributions, so that the CoT knowledge and image are distilled into the student model under the specific task. We denote this as the conditional prompt distillation loss:
\[\begin{split}\mathcal{L}_{CPD}&=KL\left(p_{\theta}(\boldsymbol{y}\mid\boldsymbol{x},\boldsymbol{I},\boldsymbol{c})\,\|\,p_{\theta}(\boldsymbol{y}\mid\boldsymbol{x},\boldsymbol{p})\right)\\ &=KL(H_{k}[1:n]\,\|\,H_{p}[1:n])\end{split} \tag{3}\]
Besides, for MNER and MRE, we use the negative log-likelihood (NLL) as the training loss for the knowledge-enhanced view with gold labels \(\boldsymbol{y}^{*}\):
\[\mathcal{L}_{\mathrm{NLL}}=-\log p_{\theta}\left(\boldsymbol{y}^{*}\mid \boldsymbol{x},\boldsymbol{I},\boldsymbol{c}\right) \tag{4}\]
Thus, the total loss \(\mathcal{L}_{total}\) is the combination of \(\mathcal{L}_{CPD}\) and \(\mathcal{L}_{NLL}\) with coefficient factor \(\alpha\):
\[\mathcal{L}_{total}=\mathcal{L}_{NLL}+\alpha\cdot\mathcal{L}_{\mathrm{CPD}} \tag{5}\]
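A schematic sketch of the combined objective is given below; the restriction of both views to the first \(n\) text positions and the stop-gradient on the knowledge-enhanced view in the KL term are our assumptions.

```python
import torch.nn.functional as F

def total_loss(logits_k, logits_p, labels, alpha: float):
    """L_total = L_NLL + alpha * L_CPD (Eqs. 3-5).
    logits_k / logits_p: (batch, n, classes) logits over the first n text tokens
    of the knowledge-enhanced and prompt-enhanced views respectively."""
    nll = F.cross_entropy(logits_k.transpose(1, 2), labels)   # Eq. (4), gold labels y*
    teacher = F.softmax(logits_k, dim=-1).detach()            # assumed stop-gradient
    student = F.log_softmax(logits_p, dim=-1)
    cpd = F.kl_div(student, teacher, reduction="batchmean")   # Eq. (3)
    return nll + alpha * cpd                                  # Eq. (5)
```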
## 4 Experiments
### Settings
**Datasets.** For MNER evaluation, we verify the effectiveness of our method on Twitter2015 Zhang et al. (2018), Twitter2017 Yu et al. (2020), SNAP Lu et al. (2018) and WikiDiverse Wang et al. (2022). These datasets contain 4,000/1,000/3,357, 3,373/723/723, 4,290/1,432/1,459 and 6,312/755/757 sentences in the train/development/test splits respectively. The former three datasets are collected from the social media domain, while WikiDiverse is reorganized by Wang et al. (2022) from Wikinews as a multimodal NER dataset. For MRE, we use the MNRE dataset Zheng et al. (2021) constructed from user tweets, which contains 12,247/1,624/1,614 sentences in the train/development/test split.
**Model Configuration.** To compare fairly with most recent works, we adopt BERT-base-uncased as our text encoder. Besides, we follow Wang et al. (2021, 2022) in adopting XLM-RoBERTa-large (XLMR) to achieve state-of-the-art performance. For LLM prompting, we use langchain to query each sample in a dialogue manner and record the answer of the LLM as CoT knowledge. The version of ChatGPT used in our experiments is gpt-3.5-turbo and the version of GPT4 is gpt-4. All LLMs used in our method use a sampling temperature of 0 for stable output.
**Training Configuration.** Our approach is implemented in the PyTorch framework on an NVIDIA A100 GPU. We adopt the AdamW optimizer to minimize the loss function. Besides, we follow Wang et al. (2022) to grid-search the learning rate of our models within [\(1\times 10^{-6}\), \(5\times 10^{-5}\)]. The max length of the sentence input is set to 512 to cover more CoT knowledge. The model is trained for a fixed 15 epochs with a mini-batch size of 32.
**Baseline Model and Variants.** We refer to a model trained and tested with [TXT]+[IMG] input as our baseline model. For the knowledge variants, we take retrieved knowledge from MoRe Wang et al. (2022) as Retri, knowledge queried from Wikipedia Wang et al. (2021) as Wiki, and our CoT knowledge as CoT. For Retri, MoRe provides text-retrieved knowledge Retri\({}_{txt}\) and image-retrieved knowledge Retri\({}_{img}\). For the distillation variants, we refer to multi-view alignment from ITA Wang et al. (2021) as MV. Furthermore, considering different kinds of learnable prompts, we compare the prefix Li and Liang (2021) and the unconditional prompt Zhou et al. (2022), denoted as PrefixD and UPD respectively, to verify the effectiveness of the proposed conditional prompt distillation (CPD).
### Main Result
We compare our approach with other BERT-based Wang et al. (2022); Lu et al. (2022); Zhao et al. (2022) and XLMR-based Wang et al. (2021, 2022a,c) state-of-the-art methods on MNER and MRE. To fully show the effectiveness of our approach, we evaluate ChatGPT and GPT4 on Twitter2015, Twitter2017 and MNRE. For our approach, we also show the model performance using different LLM teachers and student models. As shown in Table 1, we observe that LLMs are inferior to existing state-of-the-art methods, but the CoT knowledge significantly boosts our method. We suspect that the training data of LLMs includes these datasets, but the LLMs were not trained with NER or RE supervision. Therefore, LLMs can generally understand each sample but still underperform on the specific tasks. Besides, compared with previous SOTA methods, our approach surpasses them by a large margin. Compared to MoRe, our method achieves 0.82% and 5.96% improvement on Twitter2015 and MNRE respectively. We attribute such significant gains to the direct and comprehensive CoT knowledge from the LLM, which lets the student model easily understand the multimodal sample instead of learning by analogy from related samples. We also observe that the ability of the LLM and the student model significantly influences the final performance. For example, with the same XLMR student model, GPT4 generally provides a further 0.4%-1.2% improvement over ChatGPT. Similarly, a more powerful student model also brings large performance boosts.
with minimal decrease. Moreover, we test two other kinds of prevailing learnable prompts for distillation, i.e., the prefix and the unconditional prompt. The results show that the proposed conditional prompt outperforms both. We believe the personalized/conditional prompt with respect to the text gives the student model an easier hint.
### Detailed Analysis
**Cross-domain Generalization.** The difference in type distribution and data characteristics often brings significant performance gaps in practice. Thus, we examine the cross-domain generalization of our method in Table 4. First, without data augmentation, our method achieves results comparable to the previous best model: it surpasses CAT-MNER by 0.68% F1 in one cross-domain setting but trails it by 0.28% in the other.
Second, when the entity augmentation uses entities from the source dataset (in-domain aug), our method achieves 1.59% and 0.73% F1 gains over CAT-MNER. This demonstrates that the data augmentation strategy can substantially boost generalization ability. Besides, in this work, we aim to let the student model inherit the zero-shot ability of the LLM. Therefore, we first query LLMs to provide a list of 400 Twitter entities, covering four types but without overlapping the entities in the source set, and then apply the novel entities for entity replacement. We refer to this implementation as zero-shot augmentation. As shown in Table 4, augmenting novel entities can further provide a 0.3%-0.6% F1 improvement. This demonstrates the effectiveness of utilizing data-augmentation CoT knowledge for achieving cross-domain generalization.
**Case Study.** We showcase how CoT knowledge can help improve the predictive performance of the model. As illustrated in Figure 4, the LLM first provides a detailed explanation for 'special' words in the text, largely enriching the linguistic semantics. Especially for abbreviations, slang, and hashtags, LLMs directly return crucial definitions that ease understanding of the whole context, while KB-NER and MoRe only provide related or vague context needing further consideration. For multimodal CoT knowledge, our method can distinguish the relatedness of the text-image pair. For example, in (b) and (d), the images do not have a corresponding entity in the text. However, LLMs can still interpret the long-range image-text relation.
Furthermore, compared with the baseline model and MoRe, our method can easily recognize hard entities like 'North and South Korea' in (b) and 'fikile_nhleko' in (c). Besides, the correct prediction with and without CoT knowledge indicates that the student model inherits the reasoning ability of LLMs through conditional prompt distillation.
**Limitations and Broader Impacts.** We notice that the quality of the generated CoT knowledge heavily depends on a well-posed LLM that understands the prompt and example comprehensively. We believe next-generation multimodal large models, such as GPT5, will make the CoT knowledge more reliable.
Our method is simple yet effective for distilling general task-agnostic knowledge from LLMs into a small model. It can easily be extended to other tasks that need sophisticated reasoning.
## 5 Conclusion
In this paper, we propose a novel CoT knowledge covering multi-grain understanding of examples and data-augmentation generalization from LLMs. Furthermore, we introduce conditional prompt distillation to distill the reasoning ability of LLMs to a compact student model. Extensive experiments on five datasets show that our method outperforms all existing state-of-the-art methods and manifests a plethora of advantages concerning interpretability, data efficiency, and cross-domain generalization.
Figure 4: Case studies of how CoT knowledge can help model predictions. The subscripts \(know\) and \(CPD\) denote inference with and without CoT knowledge respectively. |
2307.03777 | Unsupervised 3D out-of-distribution detection with latent diffusion
models | Methods for out-of-distribution (OOD) detection that scale to 3D data are
crucial components of any real-world clinical deep learning system. Classic
denoising diffusion probabilistic models (DDPMs) have been recently proposed as
a robust way to perform reconstruction-based OOD detection on 2D datasets, but
do not trivially scale to 3D data. In this work, we propose to use Latent
Diffusion Models (LDMs), which enable the scaling of DDPMs to high-resolution
3D medical data. We validate the proposed approach on near- and far-OOD
datasets and compare it to a recently proposed, 3D-enabled approach using
Latent Transformer Models (LTMs). Not only does the proposed LDM-based approach
achieve statistically significant better performance, it also shows less
sensitivity to the underlying latent representation, more favourable memory
scaling, and produces better spatial anomaly maps. Code is available at
https://github.com/marksgraham/ddpm-ood | Mark S. Graham, Walter Hugo Lopez Pinaya, Paul Wright, Petru-Daniel Tudosiu, Yee H. Mah, James T. Teo, H. Rolf Jäger, David Werring, Parashkev Nachev, Sebastien Ourselin, M. Jorge Cardoso | 2023-07-07T18:00:38Z | http://arxiv.org/abs/2307.03777v1 | # Unsupervised 3D out-of-distribution detection with latent diffusion models
###### Abstract
Methods for out-of-distribution (OOD) detection that scale to 3D data are crucial components of any real-world clinical deep learning system. Classic denoising diffusion probabilistic models (DDPMs) have been recently proposed as a robust way to perform reconstruction-based OOD detection on 2D datasets, but do not trivially scale to 3D data. In this work, we propose to use Latent Diffusion Models (LDMs), which enable the scaling of DDPMs to high-resolution 3D medical data. We validate the proposed approach on near- and far-OOD datasets and compare it to a recently proposed, 3D-enabled approach using Latent Transformer Models (LTMs). Not only does the proposed LDM-based approach achieve statistically significant better performance, it also shows less sensitivity to the underlying latent representation, more favourable memory scaling, and produces better spatial anomaly maps. Code is available at [https://github.com/marksgraham/ddpm-ood](https://github.com/marksgraham/ddpm-ood).
Keywords:Latent diffusion models Out-of-distribution detection
## 1 Introduction
Methods for out-of-distribution (OOD) detection are a crucial component of any machine learning pipeline that is deployed in the real world. They are particularly necessary for pipelines that employ neural networks, which perform well on data drawn from the distribution they were trained on but can produce unexpected results when given OOD data. For medical applications, methods for OOD detection must be able to detect both far-OOD data, such as images of a different organ or modality to the in-distribution data, and near-OOD data, such as in-distribution data corrupted by imaging artefacts. It is also necessary that these methods can operate on high-resolution 3D data. In this work, we focus on methods trained in a fully unsupervised way; without any labels or access to OOD data at train time.
Recently, Latent Transformer Models (LTMs) [9] have proven themselves to be effective for anomaly detection and synthesis in medical data [23, 21, 27]. These two-stage models first use a VQ-VAE [20] or VQ-GAN [9] to provide a compressed, discrete representation of the imaging data. An autoregressive Transformer [29] can then be trained on a flattened sequence of this representation. LTMs are particularly valuable in medical data, where the high input size makes training a Transformer on raw pixels infeasible. Recently, these models have been shown to be effective for 3D OOD detection by using the Transformer's likelihood of the compressed sequence to identify both far- and near-OOD samples [11]. These models can also provide spatial anomaly maps that highlight the regions of the image considered to be OOD, particularly valuable for highlighting localised artefacts in near-OOD data.
However, LTMs have some disadvantages. Firstly, likelihood models have well-documented weaknesses when used for OOD detection [19, 3, 13], caused by focusing on low-level image features [12, 26]. It can help to measure likelihood in a more abstract representation space, such as that provided by a VQ-VAE [8], but how to train models that provide optimal representations for assessing likelihood is still an open research problem. For example, [11] showed in an ablation study that LTMs fail at OOD detection when lower levels of VQ-VAE compression are used. Secondly, the memory requirements of Transformers mean that even with high compression rates, the technique cannot scale to very high-resolution medical data, such as a whole-body CT with an image dimension of \(512^{3}\). Finally, the spatial anomaly maps produced by LTMs are low resolution, being in the space of the latent representation rather than that of the image itself.
A promising avenue for OOD detection is denoising diffusion probabilistic model (DDPM)-based OOD detection [10]. This approach works by taking the input images and noising them multiple times to different noise levels. The model is used to denoise each of these noised images, which are compared to the input; the key idea is that the model will only successfully denoise in-distribution (ID) data. The method has shown promising results on 2D data [10] but cannot be trivially extended to 3D; as even extending DDPMs to work on high-resolution 2D data is an area of active research. We propose to scale it to 3D volumetric data through the use of Latent Diffusion Models (LDMs). These models, analogous to LTMs, use a first-stage VQ-GAN to compress the input. The DDPM then learns to denoise these compressed representations, which are then decoded and their similarity to the input image is measured directly in the original image space.
The proposed LDM-based OOD detection offers the potential to address the three disadvantages of an LTM-based approach. Firstly, as the method is not likelihood based, it is not necessary that the VQ-GAN provides an ill-defined 'good representation'. Rather, the only requirement is that it reconstructs the inputs well, something easy to quantify using reconstruction quality metrics. Secondly, DDPMs have more favourable memory scaling behaviour than Transformers, allowing them to be trained on higher-dimensional representations. Finally, as the comparisons are performed at the native resolution, LDMs can produce high-resolution spatial anomaly maps. We evaluate both the LTM and the proposed
LDM on several far- and near-OOD detection tasks and show that LDMs overcome the three main failings of LTMs: that their performance is less reliant on the quality of the first-stage model, that they can be trained on higher-dimensional inputs, and that they produce higher-resolution anomaly maps.
## 2 Methods
We begin with a brief overview of LDMs and relevant notation before describing how they are used for OOD detection and to estimate spatial anomaly maps.
### Latent Diffusion Models
LDMs are trained in two stages. A first stage model, here a VQ-GAN, is trained to compress the input image into a latent representation. A DDPM [14] is trained to learn to sample from the distribution of these latent representations through iterative denoising.
**VQ-GAN**: The VQ-GAN operates on a 3D input of size \(\mathbf{x}\in\mathbb{R}^{H\times W\times D}\) and consists of an encoder \(E\) that compresses to a latent space \(\mathbf{z}\in\mathbb{R}^{h\times w\times d\times n}\), where \(n\) is the dimension of the latent embedding vector. This representation is quantised by looking up the nearest value of each representation in a codebook containing \(K\) elements and replacing the embedding vector of length \(n\) with the codebook index, \(k\), producing \(\mathbf{z_{q}}\in\mathbb{R}^{h\times w\times d}\). A decoder \(G\) operates on this quantised representation to produce a reconstruction, \(\mathbf{\hat{x}}\in\mathbb{R}^{H\times W\times D}\).
In a VQ-VAE [20], \(E\), \(G\) and the codebook are jointly learnt with an \(L_{2}\) loss on the reconstructions and a codebook loss. The VQ-GAN [9] aims to produce higher quality reconstructions by employing a discriminator \(D\) and training adversarially, and by including a perceptual loss component [32] in addition to the \(L_{2}\) reconstruction loss. Following [28], we also add a spectral loss component to the reconstruction losses [7].
The encoder and decoder are convolutional networks of \(l\) levels. There is a simple relationship between the spatial dimension of the latent space, the input, and number of levels: \(h,w,d=\frac{H}{2^{l}},\frac{W}{2^{l}},\frac{D}{2^{l}}\), so the latent space is \(2^{3l}\) times smaller spatially than the input image, with a \(4\times 2^{3l}\) reduction in memory size when accounting for the conversion from a float to integer representation. In practice, most works use \(l=3\) (\(512\times\) spatial compression) or \(l=4\) (\(4096\times\) spatial compression); it is challenging to train a VQ-GAN at higher compression rates.
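For concreteness, a small sketch of this size bookkeeping, using the head-CT grid of Sec. 3:

```python
def latent_shape(H: int, W: int, D: int, l: int):
    """Spatial size of the VQ-GAN latent for an l-level encoder: (H, W, D) / 2^l."""
    return H // 2**l, W // 2**l, D // 2**l

# e.g. the 176 x 208 x 176 head-CT grid with l = 4 gives an 11 x 13 x 11 latent
print(latent_shape(176, 208, 176, 4))  # -> (11, 13, 11)
```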
**DDPM:** A DDPM is then trained on the latent embedding \(\mathbf{z}\) (the dequantised latent). During training, noise is added to \(\mathbf{z}\) according to a timestep \(t\) and a fixed Gaussian noise schedule defined by \(\beta_{t}\) to produce noised samples \(\mathbf{z}_{t}\), such that
\[q(\mathbf{z}_{t}|\mathbf{z}_{0})=\mathcal{N}\left(\mathbf{z}_{t}|\sqrt{\bar{\alpha}_{t}}\mathbf{z}_{0},(1-\bar{\alpha}_{t})\mathbf{I}\right) \tag{1}\]
where we use \(\mathbf{z}_{0}\) to refer to the noise-free latent \(\mathbf{z}\), we have \(0\leq t\leq T\), and \(\alpha_{t}:=1-\beta_{t}\) and \(\bar{\alpha}_{t}:=\prod_{s=1}^{t}\alpha_{s}\). We design \(\beta_{t}\) to increase with \(t\) such that the
latent \(\mathbf{z}_{T}\) is close to an isotropic Gaussian. We seek to train a network that can perform the reverse or denoising process, which can also be written as a Gaussian transition:
\[p_{\theta}(\mathbf{z}_{t-1}|\mathbf{z}_{t})=\mathcal{N}\left(\mathbf{z}_{t-1}|\boldsymbol{\mu}_{\theta}(\mathbf{z}_{t},t),\mathbf{\Sigma}_{\theta}(\mathbf{z}_{t},t)\right) \tag{2}\]
In practice, following [14], we can train a network \(\boldsymbol{\epsilon}_{\theta}(\mathbf{z}_{t},t)\) to directly predict the noise used in the forward noising process, \(\boldsymbol{\epsilon}\). We can train with a simplified loss \(L_{\text{simple}}(\theta)=\mathbb{E}_{t,\mathbf{z}_{0},\boldsymbol{\epsilon}}\left[\|\boldsymbol{\epsilon}-\boldsymbol{\epsilon}_{\theta}\left(\mathbf{z}_{t},t\right)\|^{2}\right]\), and denoise according to
\[\mathbf{z}_{t-1}=\frac{1}{\sqrt{\alpha_{t}}}\left(\mathbf{z}_{t}-\frac{\beta _{t}}{\sqrt{1-\bar{\alpha}_{t}}}\boldsymbol{\epsilon}_{\theta}\left(\mathbf{z }_{t},t\right)\right)+\sigma_{t}\mathbf{n} \tag{3}\]
where \(\mathbf{n}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\).
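A minimal sketch of the training objective and the ancestral update of Eq. (3) is given below, assuming precomputed 1D schedule tensors `alpha`, `alpha_bar`, `beta`, `sigma` and a noise-prediction network `model(z_t, t)`; note that the reconstructions in this work instead use the faster PLMS sampler.

```python
import torch

def q_sample(z0, t, alpha_bar):
    """Forward noising of Eq. (1): z_t = sqrt(abar_t) z_0 + sqrt(1 - abar_t) eps."""
    eps = torch.randn_like(z0)
    ab = alpha_bar[t].view(-1, 1, 1, 1, 1)   # per-sample abar_t, broadcast over the 3D latent
    return ab.sqrt() * z0 + (1 - ab).sqrt() * eps, eps

def loss_simple(model, z0, alpha_bar, T):
    """L_simple: MSE between the true and predicted noise."""
    t = torch.randint(0, T, (z0.shape[0],), device=z0.device)
    zt, eps = q_sample(z0, t, alpha_bar)
    return ((eps - model(zt, t)) ** 2).mean()

@torch.no_grad()
def denoise_step(model, zt, t, alpha, alpha_bar, beta, sigma):
    """One ancestral denoising step, Eq. (3); sigma[0] should be zero."""
    eps_hat = model(zt, t)
    mean = (zt - beta[t] / (1 - alpha_bar[t]).sqrt() * eps_hat) / alpha[t].sqrt()
    return mean + sigma[t] * torch.randn_like(zt)
```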
While in most applications an isotropic Gaussian is drawn and iteratively denoised to draw samples from the model, in this work we take a latent input \(\mathbf{z}_{0}\), noise it to \(\mathbf{z}_{t}\) for a range of values of \(t<T\), and obtain the reconstructions \(\hat{\mathbf{z}}_{0,t}=p_{\theta}(\mathbf{z}_{0}|\mathbf{z}_{t})\).
### OOD detection with LDMs
In [10], an input image \(\mathbf{x}\) that has been noised to a range of \(t\)-values spanning the range \(0<t<T\) is then denoised to obtain \(\hat{\mathbf{x}}_{0,t}\), and we measure the similarity of each reconstruction, \(\mathbf{S}(\hat{\mathbf{x}}_{0,t},\mathbf{x})\). These multiple similarity measures are then combined to produce a single score per input, with a high similarity score suggesting the input is more in-distribution. Typically, reconstruction methods work by reconstructing through some information bottleneck - for an autoencoder, this might be the dimension of the latent space; for a denoising model, this is the amount of noise applied - with the principle that ID images will be successfully reconstructed through the bottleneck, yielding high similarity with the input, and OOD images will not. Prior works have shown the performance becomes dependent on the choice of the bottleneck - too small and even ID inputs are poorly reconstructed, too large and OOD inputs are well reconstructed [18, 22, 6, 33]. Reconstructing from multiple \(t\)-values addresses this problem by considering reconstructions from multiple bottlenecks per image, outperforming prior reconstruction-based methods [10].
In order to scale to 3D data, we reconstruct an input \(\mathbf{x}\) in the latent space of the VQ-GAN, \(\mathbf{z}=E\left(\mathbf{x}\right)\). Reconstructions are performed using the PLMS sampler [17], which allows for high-quality reconstructions with significantly fewer reconstruction steps. The similarity is measured in the original image space by decoding the reconstructed latents, \(\mathbf{S}\left(G\left(\hat{\mathbf{z}}_{0,t}\right),\mathbf{x}\right)\). As recommended by [10], we measure both the mean-squared error (MSE) and the perceptual similarity [32] for each reconstruction, yielding a total of \(2N\) similarity measures for the \(N\) reconstructions performed. As the perceptual loss operates on 2D images, we measure it on all slices in the coronal, axial, and sagittal planes and average these values to produce a single value per 3D volume. Each similarity metric is converted into a \(z\)-score using mean and standard deviation parameters calculated on the validation set; the \(z\)-scores are then averaged to produce a single score.
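A minimal sketch of the score aggregation described above (the \(2N\) similarity values per volume and the per-metric validation statistics are assumed precomputed):

```python
import numpy as np

def ood_score(similarities, val_mean, val_std):
    """Combine the 2N per-volume similarity measures (MSE + slice-averaged
    perceptual similarity, one pair per t-value) into a single score using
    per-metric mean/std computed on the in-distribution validation set."""
    z = (np.asarray(similarities) - val_mean) / val_std
    return z.mean()
```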
### Spatial anomaly maps
To highlight spatial anomalies, we aggregate a set of reconstruction error maps. We select reconstructions from \(t\)-values = \([100,200,300,400]\), calculate the pixel-wise mean absolute error (MAE), \(z\)-score these MAE maps using the pixel-wise mean and standard deviation from the validation set, and then average to produce a single spatial map per input image.
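A corresponding sketch of the map aggregation (reconstructions and pixel-wise validation statistics assumed precomputed as NumPy arrays):

```python
def anomaly_map(recons, x, mu_map, sd_map):
    """Average z-scored pixel-wise MAE maps over the reconstructions
    at t in [100, 200, 300, 400]."""
    z_maps = [(abs(r - x) - mu_map) / sd_map for r in recons]
    return sum(z_maps) / len(z_maps)
```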
## 3 Experiments
### Data
We use three datasets to test the ability of our method to flag OOD values in both the near- and far-OOD cases. The **CROMIS** dataset [30, 31] consists of 683 head CT scans and was used as the train and validation set for all models, with a 614/69 split. The **KCH** dataset consists of 47 head CTs acquired independently from CROMIS, and was used as the in-distribution test set. To produce near-OOD data, a number of corruptions were applied to this dataset, designed to represent a range of acquisition/data-preprocessing errors. These were: addition of Gaussian noise to the images at three levels (\(\sigma=0.01,0.1,0.2\)), setting the background to values different to the 0 used during training (0.3, 0.6, 1), inverting the image through either of the three imaging planes, removing a chunk of adjacent slices from either the top or centre of the volume, skull-stripping (the models were trained on unstripped images), and setting all pixel values to either 1% or 10% of their true values (imitating an error in intensity scaling during preprocessing). Applying each corruption to each ID image yielded a total of 705 near-OOD images. The **Decathlon** dataset [1] comprises a range of 3D imaging volumes that are not head CTs and was used to represent far-OOD data. We selected 22 images from each of the ten classes. All CT head images were affinely registered to MNI space, resampled to 1mm isotropic, and cropped to a \(176\times 208\times 176\) grid. For the images in the Decathlon dataset, all were resampled to be 1mm isotropic and either cropped or zero-padded depending on size to produce a \(176\times 216\times 176\) grid. All CT images had their intensities clamped between \([-15,100]\) and then rescaled to lie in the range \([0,1]\). All non-CT images were rescaled based on their minimum and maximum values to lie in the \([0,1]\) range.
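As a minimal sketch of the intensity normalization for the CT volumes (registration and cropping omitted):

```python
import numpy as np

def preprocess_ct(volume: np.ndarray) -> np.ndarray:
    """Clamp CT intensities to [-15, 100] and rescale to [0, 1]."""
    v = np.clip(volume, -15, 100)
    return (v + 15) / 115.0
```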
### Implementation details
All models were implemented in PyTorch v1.13.1 using the MONAI framework v1.1.0 [2]. Code is available at [https://github.com/marksgraham/ddpm-ood](https://github.com/marksgraham/ddpm-ood). LTM model code can be found at [https://github.com/marksgraham/transformer-ood](https://github.com/marksgraham/transformer-ood).
**LDMs:** VQ-GANs were trained with \(l=2\), 3, or 4 levels, with 1 convolutional layer and 3 residual blocks per level, each with 128 channels. Training with \(l=3/4\) represents standard practice; training with \(l=2\) (\(64\times\) spatial compression) was done to simulate a situation with higher-resolution input data. All
VQ-GANs had an embedding dim of 64, and the 2, 3, 4 level models have a codebook size of 64, 256, 1024, respectively. Models were trained with a perceptual loss weight of 0.001, an adversarial weight loss of 0.01, and all other losses unweighted. Models were trained with a batch size of 64 for 500 epochs on an A100, using the Adam optimizer [16] with a learning rate of \(3\times 10^{-4}\) and early stopping if the validation loss did not decrease over 15 epochs. The LDM used a time-conditioned UNet architecture as in [25], with three levels with (128, 256, 256) channels, 1 residual block per level, and attention in the deepest level only. The noise schedule had \(T=1000\) steps with a scaled linear noise schedule with \(\beta_{0}=0.0015\) and \(\beta_{T}=0.0195\). Models were trained with a batch size of 112 on an A100 with the Adam optimizer, learning rate \(2.5\times 10^{-5}\) for 12,000 epochs, with early stopping. During reconstruction, the PLMS scheduler was used with 100 timesteps. Reconstructions were performed from 50 \(t\) values spaced evenly over the interval \([0,1000]\).
**LTM:** The Latent Transformer Models were trained on the same VQ-GAN bases using the procedure described in [11], using a 22-layer Transformer with dimension 256 in the attention layers and 8 attention heads. The authors in [11] used the Performer architecture [4], which uses a linear approximation to the attention matrix to reduce memory costs and enable training on larger sequence lengths. Instead, we use the recently introduced memory efficient attention mechanism [24] to calculate exact attention with reduced memory costs. This enables us to train a full Transformer on a 3-level VQ-GAN embedding, with a sequence length of \(22\times 27\times 22=13,068\). Neither the Performer nor the memory-efficient Transformer was able to train on the 2-level embedding, with a sequence length of \(44\times 52\times 44=100,672\). Models were trained on an A100 with a batch size of 128 using Adam with a learning rate of \(10^{-4}\).
## 4 Results & Discussion
Results and associated statistical tests are shown in Table 1 as AUC scores, with tests for differences in AUC performed using Delong's method [5]. At 4-levels, the LDM and LTM both perform well, albeit with the proposed LDM performing better on certain OOD datasets. LTM performance degrades when trained on a 3-level model, but LDM performance remains high. The 3-level LTM result is in agreement with the findings in [11]. This is likely caused by the previously discussed tendency for likelihood-based models, such as Transformers, to be sensitive to the quality of the underlying representation. For instance, [12] showed that likelihood-based models can fail unless forced to focus on high-level image features. We posit that at the high compression rates of a 4-level VQ-GAN the representation encodes higher-level features, but at 3-levels the representation can encode lower-level features, making it harder for likelihood-based models to perform well. By contrast, the LDM-based method only requires that the VQ-GAN produces reasonable reconstructions. While memory constraints prevented training a 2-level LTM, the more modest requirements on the UNet-based LDM meant it was possible to train. This result has implications for the application of
very high-resolution medical data: for instance, a whole-body CT with an image dimension \(512^{3}\) would have a latent dimension \(32^{3}\) even with 4-level compression, too large to train an LTM on but comfortably within the reach of a LDM. The 2-level LDM had reduced performance on two classes that have many pixels with an intensity close to 0 (Hippocampal MR, and Scaling 1%). Recent research shows that at higher resolutions, the effective SNR increases if the noise schedule is kept constant [15]. It seems this effect made it possible for the 2-level LDM to reconstruct these two OOD classes with low error for many values of \(t\). In future work we will look into scaling the noise schedule with LDM input size.
Anomaly maps are shown in Figure 1 for near-OOD cases with a spatially localised anomaly. The LDM-based maps are high-resolution, as they are generated
| **Dataset** | 2-level LTM | 2-level LDM | 3-level LTM | 3-level LDM | 4-level LTM | 4-level LDM |
| --- | --- | --- | --- | --- | --- | --- |
| _Far-OOD (Decathlon)_ | | | | | | |
| Head MR | N/A | 72 | 0 | **100** | 100 | 100 |
| Colon CT | N/A | 100 | 100 | 100 | 100 | 100 |
| Hepatic CT | N/A | 100 | 100 | 100 | 99.9 | 100 |
| Hippocampal MR | N/A | 3.51 | 0 | **100** | 100 | 100 |
| Liver CT | N/A | 100 | 100 | 100 | 99.8 | 100 |
| Lung CT | N/A | 100 | 89 | 100 | 100 | 100 |
| Pancreas CT | N/A | 100 | 100 | 100 | 99.3 | 100 |
| Prostate MR | N/A | 99.9 | 0 | **100** | 100 | 100 |
| Spleen CT | N/A | 100 | 100 | 100 | 99.6 | 100 |
| Cardiac MR | N/A | 100 | 90 | 100 | 100 | 100 |
| _Near-OOD (corrupted KCH)_ | | | | | | |
| Noise \(\sigma=0.01\) | N/A | 59.7 | 48.1 | 59.3 | 50.7 | 54.5 |
| Noise \(\sigma=0.1\) | N/A | 100 | 57.5 | **100** | 44.7 | **100** |
| Noise \(\sigma=0.2\) | N/A | 100 | 88.3 | **100** | 45.6 | **100** |
| BG value=0.3 | N/A | 100 | 100 | 100 | 100 | 100 |
| BG value=0.6 | N/A | 100 | 100 | 100 | 100 | 100 |
| BG value=1.0 | N/A | 100 | 100 | 100 | 100 | 100 |
| Flip L-R | N/A | 53.5 | 49.4 | 61.2 | 51.1 | 58.6 |
| Flip A-P | N/A | 100 | 65.6 | **100** | 90.7 | 100 |
| Flip I-S | N/A | 100 | 69.7 | **100** | 90.5 | 100 |
| Chunk top | N/A | 46.1 | 28.6 | **94.6** | 97.6 | 99.8 |
| Chunk middle | N/A | 94.4 | 22 | **100** | 96.2 | 100 |
| Skull stripped | N/A | 98.1 | 0 | **100** | 100 | 100 |
| Scaling 1% | N/A | 0.317 | 0 | **100** | 100 | 100 |
| Scaling 10% | N/A | 100 | 0 | **100** | 100 | 100 |

Table 1: AUC scores for identifying OOD data, with the KCH dataset used as the in-distribution test set. Results are split according to the number of levels in the VQ-GAN. Tests for difference in AUC compare the LTM and LDM models with the same VQ-GAN base; **bold values** are differences significant with \(p<0.001\) and underlined values significant with \(p<0.05\). Results are shown as N/A for the 2-level LTM as it was not possible to train a Transformer on such a long sequence.
in image space, and localise the relevant anomalies. The LTM maps are lower resolution, as they are generated in latent space, but more significantly they often fail to localise the relevant anomalies. This is most obvious for anomalies that cause missing signal, such as missing chunks, skulls, or image scaling, which are flagged as low-anomaly regions. This is caused by the tendency of likelihood-based models to view regions with low complexity, such as blank areas, as high-likelihood [26]. The anomaly is sometimes picked up but not well localised, notably in the 'chunk top' example at 4 levels. Here, the transition between brain tissue and the missing chunk is flagged as anomalous rather than the chunk itself.
Memory and time requirements for all models are tabulated in Supplementary A. These confirm the LDM's reduced memory use compared to the LTM. All models run in \(<30s\), making them feasible in a clinical setting.
Figure 1: Example anomaly maps for models based on 3- and 4-level VQ-GANs. Maps for each model are shown on the same colour scale, but the scales vary between models to obtain the best display for each. Brighter regions are more anomalous.
## 5 Conclusion
We have introduced Latent Diffusion Models for 3D out-of-distribution detection. Our method outperforms the recently proposed Latent Transformer Model when assessed on both near- and far-OOD data. Moreover, we show LDMs address three key weaknesses of LTMs: their performance is less sensitive to the quality of the latent representation they are trained on, they have more favourable memory scaling that allows them to be trained on higher resolution inputs, and they provide higher resolution and more accurate spatial anomaly maps. Overall, LDMs show tremendous potential as a general-purpose tool for OOD detection on high-resolution 3D medical imaging data.
#### 5.0.1 Acknowledgements
MSG, WHLP, RG, PW, PN, SO, and MJC are supported by the Wellcome Trust (WT213038/Z/18/Z). MJC and SO are also supported by the Wellcome/EPSRC Centre for Medical Engineering (WT203148/Z/16/Z), and the InnovateUK-funded London AI centre for Value-based Healthcare. PTD is supported by the EPSRC (EP/R513064/1). YM is supported by an MRC Clinical Academic Research Partnership grant (MR/T005351/1). PN is also supported by the UCLH NIHR Biomedical Research Centre. Datasets CROMIS and KCH were used with ethics 20/ES/0005.
|
2301.04396 | Matters Arising: Entanglement-enhanced matter-wave interferometry in a
high-finesse cavity | In their paper "Entanglement-enhanced matter-wave interferometry in a
high-finesse cavity" Nature (2022), Greve et. al. claim to use entanglement in
a matter-wave interferometer to achieve a sensitivity beyond that achievable
with the same number of independent particles -- a limit known as the standard
quantum limit (SQL). In particular, using squeezed momentum states of 700
atoms, the authors claim to directly observe a sensitivity 3.4 dB (a factor of
1.5) below the SQL. This claim is incorrect. The authors do not measure
anything beyond the SQL, nor do they achieve a sensitivity beyond what one
could obtain with a single atom. The achieved sensitivity is at least a factor
of 39 worse than the claimed value. | Liam P. McGuinness | 2023-01-11T10:42:19Z | http://arxiv.org/abs/2301.04396v2 | # Matters Arising: Entanglement-enhanced matter-wave interferometry in a high-finesse cavity
###### Abstract
In their paper "Entanglement-enhanced matter-wave interferometry in a high-finesse cavity" Nature (2022) [1], Greve et. al. claim to use entanglement in a matter-wave interferometer to achieve a sensitivity beyond that achievable with the same number of independent particles - a limit known as the standard quantum limit (SQL). In particular, using squeezed momentum states of 700 atoms, the authors claim to directly observe a sensitivity \(3.4\,\mathrm{dB}\) (a factor of 1.5) below the SQL. This claim is incorrect. The authors do not measure anything beyond the SQL, nor do they achieve a sensitivity beyond what one could obtain with a single atom. The achieved sensitivity is at least a factor of 39 worse than the claimed value.
In "Entanglement-enhanced matter-wave interferometry in a high-finesse cavity" Nature (2022) [1], Greve et. al. describe measuring the phase \(\phi\) between two quantum states \(\left|a\right\rangle,\left|b\right\rangle\), given by \(\frac{1}{\sqrt{2}}\left(\left|a\right\rangle+e^{i\phi}\left|b\right\rangle\right)\). With no prior information on \(\phi\), i.e. \(0<\phi\leq 2\pi\), one cannot estimate this phase with a single measurement to an angular uncertainty better than \(\Delta\phi=1\,\mathrm{radians}\). Allowing for \(N\) trials, which can be implemented by encoding \(\phi\) into the state of \(N\) independent atoms, the uncertainty must be greater than \(\Delta\phi_{\mathrm{SQL}}=1/\sqrt{N}\,\mathrm{rad}\), a limit known as the standard quantum limit (SQL). As Greve et. al. note, this is a fundamental limit which cannot be improved upon, even with entanglement. In fact, if one entangles \(N\) atoms to obtain the state \(\frac{1}{\sqrt{2}}\left(\left|a_{N}\right\rangle+e^{i\phi}\left|b_{N}\right\rangle\right)\), where \(\left|a_{N}\right\rangle,\left|b_{N}\right\rangle\) are states in an \(N\)-dimensonal Hilbert space, measurement of this phase has an uncertainty is restricted to \(\Delta\phi>1\,\mathrm{rad}\), much worse than the SQL. The reason being that we have lost the ability to perform \(N\) independent trials and are thus limited to the uncertainty of a single measurement. This is clear to see, since the entangled state is identical to the single atom state up to a relabelling.
While the above is well known for estimating an unknown quantum phase, it is widely accepted that entanglement can nevertheless lead to improved estimation of some other phase \(\theta\). Why is this? The argument is that entangled states accumulate a bigger quantum phase in response to some physical Hamiltonian to be measured (\(N\)-fold greater than a single atom). With particular reference to Mach-Zehnder interferometry, discussed by Greve et al., this is the phase that particles in one arm of the interferometer accumulate with respect to particles in the other arm. If we also allow, somewhat sneakily, that we now have more prior information on \(\phi\) - it is known to within a much narrower range - then it is expected that, for the same measurement time, entangled states achieve greater sensitivity to small phase shifts than unentangled states [2]. As a result, although the uncertainty in estimating \(\phi\) is \(\sqrt{N}\) worse with entanglement than with independent atoms, the value of \(\phi\) in an entangled state is \(N\)-fold greater than in any of the independent atomic states. Again, it is important to be clear here. With entanglement we have not improved the uncertainty in estimating \(\phi\); it has gotten worse. Whenever measurement of a quantum phase is performed as described above, one can immediately rule out surpassing the SQL; it is only when the quantum phase is used to infer some other parameter \(\theta\) that the uncertainty in \(\theta\) can be reduced.
Let's assume an element with phase \(\theta\) is in one arm of the interferometer, and passing a single atom through the interferometer performs a one-to-one mapping of \(\theta\) to the atomic phase \(\phi\). Then with no entanglement we have \(\phi=\theta\), whereas with entanglement \(\phi=N\theta\). In the former case (assuming no additional errors) \(\Delta\theta=\Delta\phi\) and estimation of \(\theta\) with \(N\) atoms is limited by the SQL to: \(\Delta\phi>1/\sqrt{N}\Longrightarrow\Delta\theta>1/\sqrt{N}\). With entanglement \(\Delta\theta=\Delta\left(\phi/N\right)\) again assuming no additional errors, so we obtain \(\Delta\phi>1\Longrightarrow\Delta\theta>1/N\). If the second inequality can be saturated with no overheads, then entanglement outperforms unentangled sensors. Contrary to what is often stated, this superiority of entanglement in sensing is not proven and only holds under strong experimental assumptions. For that reason, experimental evidence with correct analysis and complete details is critical in validating the theory [3].
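The two error chains of this paragraph can be collected in one line (in radians, assuming no additional errors):

\[\text{unentangled: }\phi=\theta\ \Rightarrow\ \Delta\theta=\Delta\phi\geq\frac{1}{\sqrt{N}},\qquad\text{entangled: }\phi=N\theta\ \Rightarrow\ \Delta\theta=\frac{\Delta\phi}{N}\geq\frac{1}{N}.\]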
So what is the phase shown in Fig. 1b that Greve et al. measure with a precision beyond the SQL? Described in the section **Entangled matter-wave interferometry** as "A relative phase accumulates between the wave packets during a free evolution time \(T_{\text{evol}}\)", at first reading one might be surprised to find that the cause of the relative phase shift is not explicitly defined in the main text. One reason for this oversight could be that Fig. 1b and the description of the matter-wave interferometer are somewhat misleading. Greve et al. do
not really perform Mach-Zehnder interferometry because they do not measure a phase shift between different arms of the interferometer. So what are the authors measuring? Greve et al. allow the atoms to fall through a gravitational potential and measure the energy shift of the atoms - manifesting as a Doppler shift of the atomic resonance with respect to the Raman laser (see Methods - Raman transitions and velocity selection). Notably this shift is the same for both arms of the interferometer and can be measured without a Mach-Zehnder interferometer. Most importantly, entangled atoms do not experience a greater phase shift (see Fig. 4c), since their velocity is the same as that of unentangled atoms.
Put simply, the authors measure the difference between the atomic phase \(\phi\) and the laser phase \(\theta\) at the end of the experiment. As entanglement produces no enhanced phase shift, the uncertainty in measuring either phase when using entanglement is strictly worse than the SQL, and we have shown that Greve et al.'s claim of beating the SQL is incorrect. In fact, assuming the ensemble is fully entangled, the obtained sensitivity must be worse than the single-atom precision limit. Even assuming the ensemble is not fully entangled, there are many experimental imperfections that prevent Greve et al. from reaching the single-atom limit. With \(N=660\) atoms, this means that the achieved sensitivity is at least a factor of 31 worse than the claimed 1.7 dB enhancement (a factor of \(\sqrt{660}\approx 26\) from the single-atom limit, multiplied by the claimed \(1.7\,\mathrm{dB}\approx 1.2\) enhancement). Similar analysis shows that all other claims made by the authors are similarly incorrect. So how can Greve et al. claim to have done otherwise? Maybe it is better to reframe this question with a focus on the audience. These are the questions one should ask anybody claiming to beat the SQL.
1. Precisely what parameter do you measure?
2. What sensitivity/uncertainty for this parameter do you explicitly achieve in your experiment? This should have the correct units, including the total measurement time and the number of particles used.
3. Is it _really_ impossible to measure this parameter with better sensitivity using the same measurement time and the same number of independent particles? How about just a single particle. Is it impossible to measure this parameter with better sensitivity using the same measurement time and a single particle?
If 1) and 2) are properly defined, then the answer for all experiments to date, including the work by Greve et al., is a resounding 'No!'.
It is important that the scientific community is made aware of the current state of the art in quantum metrology. I am sure that many people would be extremely surprised to learn that entanglement has never been used to improve any experiment beyond what one could achieve without entanglement. Even more surprising is that fully entangled ensembles have never demonstrated a precision beyond the single-particle limit. The community should be made aware of this for a variety of reasons. First, if the experimental data conflict with the message being presented, then we should demand better scientific rigour in published papers. In quantum metrology broadly, the standards have a long way to improve. Secondly, misrepresentation of data is hindering progress, since people are currently unaware of a massive discrepancy between quantum mechanics as commonly interpreted and experiment; this even prevents plausible explanations from being investigated. Thirdly, there is currently huge investment in technologies dependent on quantum entanglement (in both quantum sensing and quantum computing); if entangled ensembles cannot provide fundamentally more information than a single atom, then these technologies will never reach their goals.
|